hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (sequence) | max_stars_count (int64 1-191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24 ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (sequence) | max_issues_count (int64 1-67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24 ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (sequence) | max_forks_count (int64 1-105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24 ⌀) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d04da2efc7999f7be71dab4bd35b49941a681a78 | 52,631 | ipynb | Jupyter Notebook | docs-src/notebooks/nb-gfe-example1.ipynb | tlamadon/pygfe | d12ee279c02c9e32f8ca0ceb0d2132d832a8d819 | [
"MIT"
] | 1 | 2021-01-20T02:38:46.000Z | 2021-01-20T02:38:46.000Z | docs-src/notebooks/nb-gfe-example1.ipynb | tlamadon/pygrpfe | d12ee279c02c9e32f8ca0ceb0d2132d832a8d819 | [
"MIT"
] | null | null | null | docs-src/notebooks/nb-gfe-example1.ipynb | tlamadon/pygrpfe | d12ee279c02c9e32f8ca0ceb0d2132d832a8d819 | [
"MIT"
] | null | null | null | 83.014196 | 22,744 | 0.809713 | [
[
[
"import torch\nimport numpy as np\nimport pandas as pd\nfrom sklearn.cluster import KMeans\nfrom statsmodels.discrete.discrete_model import Probit\nimport patsy\nimport matplotlib.pylab as plt\nimport tqdm\nimport itertools\n\nax = np.newaxis",
"_____no_output_____"
]
],
[
[
"Make sure you have installed the pygfe package. You can simply call `pip install pygrpfe` in the terminal or call the magic command `!pip install pygrpfe` from within the notebook. If you are using the binder link, then `pygrpfe` is already installed. You can import the package directly.",
"_____no_output_____"
]
],
[
[
"import pygrpfe as gfe",
"_____no_output_____"
]
],
[
[
"# A simple model of wage and participation\n\n\\begin{align*}\nY^*_{it} & = \\alpha_i + \\epsilon_{it} \\\\\nD_{it} &= 1\\big[ u(\\alpha_i) \\geq c(D_{it-1}) + V_{it} \\big] \\\\\nY_{it} &= D_{it} Y^*_{it} \\\\\n\\end{align*}\n\nwhere we use \n\n$$u(\\alpha) = \\frac{e^{(1-\\gamma) \\alpha } -1}{1-\\gamma}$$\n\nand use as initial conditions $D_{i1} = 1\\big[ u(\\alpha_i) \\geq c(1) + V_{i1} \\big]$.",
"_____no_output_____"
]
],
[
[
"def dgp_simulate(ni,nt,gamma=2.0,eps_sd=1.0):\n \"\"\" simulates according to the model \"\"\"\n alpha = np.random.normal(size=(ni))\n eps = np.random.normal(size=(ni,nt))\n v = np.random.normal(size=(ni,nt))\n \n # non-censored outcome\n W = alpha[:,ax] + eps*eps_sd\n \n # utility\n U = (np.exp( alpha * (1-gamma)) - 1)/(1-gamma)\n U = U - U.mean()\n \n # costs\n C1 = -1; C0=0;\n \n # binary decision\n Y = np.ones((ni,nt))\n Y[:,0] = U.squeeze() > C1 + v[:,0]\n for t in range(1,nt): \n Y[:,t] = U > C1*Y[:,t-1] + C0*(1-Y[:,t-1]) + v[:,t]\n W = W * Y\n \n return(W,Y)",
"_____no_output_____"
]
],
[
[
"# Estimating the model\n\nWe show the steps to estimating the model. Later on, we will run a Monte-Carlo Simulation.\n\nWe simulate from the DGP we have defined.",
"_____no_output_____"
]
],
[
[
"ni = 1000\nnt = 50\nY,D = dgp_simulate(ni,nt,2.0)",
"_____no_output_____"
]
],
[
[
"## Step 1: grouping observations\n\nWe group individuals based on their outcomes. We consider as moments the average value of $Y$ and the average value of $D$. We give our gfe function the $t$ sepcific values so that it can compute the within individual variation. This is a measure used to pick the nubmer of groups.\n\nThe `group` function chooses the number of groups based on the rule described in the paper. ",
"_____no_output_____"
]
],
[
[
"# we create the moments\n# this has dimension ni x nt x nm \nM_itm = np.stack([Y,D],axis=2)\n\n# we use our sugar function to get the groups\nG_i,_ = gfe.group(M_itm)\n\nprint(\"Number of groups = {:d}\".format(G_i.max()))",
"Number of groups = 11\n"
]
],
[
[
"We can plot the grouping:",
"_____no_output_____"
]
],
[
[
"dd = pd.DataFrame({'Y':Y.mean(1),'G':G_i,'D':D.mean(1)})\nplt.scatter(dd.Y,dd.D,c=dd.G*1.0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Step 2: Estimate the likelihood model with group specific parameters\n\nIn the model we proposed, this second step is a probit. We can then directly use python probit routine with group dummies.",
"_____no_output_____"
]
],
[
[
"ni,nt = D.shape\n\n# next we minimize using groups as FE\ndd = pd.DataFrame({\n 'd': D[:,range(1,nt)].flatten(), \n 'dl':D[:,range(nt-1)].flatten(), \n 'gi':np.broadcast_to(G_i[:,ax], (ni,nt-1)).flatten()})\n\nyv,Xv = patsy.dmatrices(\"d ~ 0 + dl + C(gi)\", dd, return_type='matrix')\nmod = Probit(dd['d'], Xv)\nres = mod.fit(maxiter=2000,method='bfgs') \n\nprint(\"Estimated cost parameters = {:.3f}\".format(res.params[-1]))",
"Optimization terminated successfully.\n Current function value: 0.228267\n Iterations: 87\n Function evaluations: 88\n Gradient evaluations: 88\nEstimated cost parameters = 0.985\n"
]
],
[
[
"## Step 2 (alternative implementation): Pytorch and auto-diff\n\nWe next write down a likelihood that we want to optimize. Instead of using the Python routine for the Probit, we make use of automatic differentiation from PyTorch. This makes it easy to modify the estimating model to accomodate for less standard likelihoods! \n\nWe create a class which initializes the parameters in the `__init__` method and computes the loss in the `loss` method. We will see later how we can use this to define a fixed effect estimator. ",
"_____no_output_____"
]
],
[
[
"class GrpProbit:\n\n # initialize parameters and data\n def __init__(self,D,G_i):\n # define parameters and tell PyTorch to keep track of gradients\n self.alpha = torch.tensor( np.ones(G_i.max()+1), requires_grad=True)\n self.cost = torch.tensor( np.random.normal(1), requires_grad=True)\n self.params = [self.alpha,self.cost]\n \n # predefine some components\n ni,nt = D.shape\n self.ni = ni\n self.G_i = G_i\n self.Dlag = torch.tensor(D[:,range(0,nt-1)])\n self.Dout = torch.tensor(D[:,range(1,nt)])\n self.N = torch.distributions.normal.Normal(0,1)\n \n # define our loss function\n def loss(self):\n Id = self.alpha[self.G_i].reshape(self.ni,1) + self.cost * self.Dlag\n lik_it = self.Dout * torch.log( torch.clamp( self.N.cdf( Id ), min=1e-7)) + \\\n (1-self.Dout)*torch.log( torch.clamp( self.N.cdf( -Id ), min=1e-7) )\n return(- lik_it.mean())\n",
"_____no_output_____"
],
[
"# initialize the model with groups and estimate it\nmodel = GrpProbit(D,G_i)\ngfe.train(model)\n\nprint(\"Estimated cost parameters = {:.3f}\".format(model.params[1]))",
"Estimated cost parameters = 0.985\n"
]
],
[
[
"## Use PyTorch to estimate Fixed Effect version\n\nSince Pytorch makes use of efficient automatic differentiation, we can use it with many variables. This allows us to give each individual their own group, effectivily estimating a fixed-effect model.",
"_____no_output_____"
]
],
[
[
"model_fe = GrpProbit(D,np.arange(ni))\ngfe.train(model_fe)\n\nprint(\"Estimated cost parameters FE = {:.3f}\".format(model_fe.params[1]))",
"Estimated cost parameters FE = 0.901\n"
]
],
[
[
"# Monte-Carlo\n\nWe finish with running a short Monte-Carlo exercise.",
"_____no_output_____"
]
],
[
[
"all = []\nimport itertools\n\nll = list(itertools.product(range(50), [10,20,30,40]))\nfor r, nt in tqdm.tqdm(ll):\n ni = 1000\n gamma =2.0\n \n Y,D = dgp_simulate(ni,nt,gamma)\n \n M_itm = np.stack([Y,D],axis=2)\n G_i,_ = blm2.group(M_itm,scale=True)\n\n model_fe = GrpProbit(D,np.arange(ni))\n gfe.train(model_fe)\n \n model_gfe = GrpProbit(D,G_i)\n gfe.train(model_gfe)\n \n all.append({\n 'c_fe' : model_fe.params[1].item(), \n 'c_gfe': model_gfe.params[1].item(), \n 'ni':ni,\n 'nt':nt,\n 'gamma':gamma, \n 'ng':G_i.max()+1})\n\n ",
"100%|██████████| 200/200 [19:18<00:00, 5.79s/it]\n"
],
[
"df = pd.DataFrame(all)\ndf2 = df.groupby(['ni','nt','gamma']).mean().reset_index()\nplt.plot(df2['nt'],df2['c_gfe'],label=\"gfe\",color=\"orange\")\nplt.plot(df2['nt'],df2['c_fe'],label=\"fe\",color=\"red\")\nplt.axhline(1.0,label=\"true\",color=\"black\",linestyle=\":\")\nplt.xlabel(\"T\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"df.groupby(['ni','nt','gamma']).mean()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d04dc0e00bfa37f79731ecec820fb9a025df6638 | 6,399 | ipynb | Jupyter Notebook | Code/Phase1BowlerCluster.ipynb | 18shrijeet/Big-Data | 7ff7eec868aa7d80ec13b27f0665ba505445ee64 | [
"MIT"
] | null | null | null | Code/Phase1BowlerCluster.ipynb | 18shrijeet/Big-Data | 7ff7eec868aa7d80ec13b27f0665ba505445ee64 | [
"MIT"
] | null | null | null | Code/Phase1BowlerCluster.ipynb | 18shrijeet/Big-Data | 7ff7eec868aa7d80ec13b27f0665ba505445ee64 | [
"MIT"
] | null | null | null | 35.159341 | 523 | 0.528208 | [
[
[
"from numpy import array\nfrom math import sqrt\nimport numpy as np\n\nfrom pyspark.mllib.clustering import KMeans, KMeansModel\n\n# Load and parse the data\n\nparsedData = sc.textFile(\"hdfs://localhost:54310/project/bowler_stat.csv\") \\\n .map(lambda line: line.split(\",\")) \\\n .filter(lambda line: len(line)>1 and line[1]!=\"Name\") \\\n .map(lambda line: array([float(line[2]),float(line[3]),float(line[4]),float(line[5]),\\\n float(line[6]),float(line[7]),float(line[8])]))\n\n# Build the model (cluster the data)\nclusters = KMeans.train(parsedData, 5, maxIterations=100000, initializationMode=\"random\")\n\n# Evaluate clustering by computing Within Set Sum of Squared Errors\ndef error(point):\n center = clusters.centers[clusters.predict(point)]\n return sqrt(sum([x**2 for x in (point - center)]))\n\nWSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)\nprint(\"Within Set Sum of Squared Error = \" + str(WSSSE))\n\n# Save and load model\nclusters.save(sc, \"hdfs://localhost:54310/project/bowl_result\")\nsameModel = KMeansModel.load(sc, \"hdfs://localhost:54310/project/bowl_result\")\n\nprint(\"A Mishra class :\")\nprint(sameModel.predict(array([284,14,13,332,23.7142857143,7.014084507,0.9285714286])))\n\nprint(\"Bumrah class :\")\nprint(sameModel.predict(array([322,14,15,396,28.2857142857,7.3788819876,1.0714285714])))\n\nprint(\"Virat Kohli class :\")\nprint(sameModel.predict(array([8,1,0,11,11,8.25,0])))\n\n",
"Within Set Sum of Squared Error = 3158.079092522345\nA Mishra class :\n4\nBumrah class :\n1\nVirat Kohli class :\n0\n"
],
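The number of clusters above is hardcoded at 5. A minimal elbow-style check is sketched below; it reuses `parsedData` and the same WSSSE computation as the first cell, and the 2-10 range is illustrative:

```python
# Sketch: compare WSSSE across candidate cluster counts, reusing
# parsedData from the first cell. Look for the "elbow" where the
# error stops dropping sharply.
from math import sqrt
from pyspark.mllib.clustering import KMeans

for k in range(2, 11):
    model = KMeans.train(parsedData, k, maxIterations=100, initializationMode="random")
    wssse = parsedData.map(
        lambda point: sqrt(sum([x**2 for x in (point - model.centers[model.predict(point)])]))
    ).reduce(lambda a, b: a + b)
    print("k = {}, WSSSE = {:.2f}".format(k, wssse))
```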
[
"names = sc.textFile(\"hdfs://localhost:54310/project/bowler_stat.csv\") \\\n .map(lambda line: line.split(\",\")).filter(lambda line: len(line)>1 and line[1]!=\"Name\").map(lambda line: array([line[1]]))\nnames = names.map(lambda x:x[0])\n#print(names.collect())\n#print(len(names.collect()))\n\nplayer_class = parsedData.map(lambda x: (x,sameModel.predict(x)))\n#print(player_class.collect())\n#print(len(player_class.collect()))\n\nname_class = names.zip(player_class)\n#print(name_class.collect())\n\n\ndef players_of_class(k):\n print(\"Players of class \",k)\n k_class_players = name_class.map(lambda x:(x[0],x[1][1])).filter(lambda x:x[1]==k)\n a = k_class_players.collect()\n l = [i[0] for i in a]\n print(\"Number of players under class \"+str(k)+\" = \",len(a))\n print(\"\\n\\n\")\n print(l)\n print(\"\\n\\n\")\n #for i in a:\n # print(i)\n\n \nfor i in range(5):\n players_of_class(i)\n",
"Players of class 0\nNumber of players under class 0 = 38\n\n\n\n['A Ashish Reddy', 'AF Milne', 'Ankit Sharma', 'Anureet Singh', 'BCJ Cutting', 'C Munro', 'CH Gayle', 'D Wiese', 'DL Chahar', 'DW Steyn', 'GJ Maxwell', 'Gurkeerat Singh', 'IK Pathan', 'J Suchith', 'JA Morkel', 'JD Unadkat', 'JP Duminy', 'JW Hastings', 'KA Pollard', 'KS Williamson', 'M Vijay', 'MR Marsh', 'N Rana', 'P Negi', 'PJ Sangwan', 'R Dhawan', 'R Sathish', 'R Vinay Kumar', 'S Gopal', 'S Ladda', 'SK Raina', 'SM Boland', 'Sachin Baby', 'Swapnil Singh', 'TA Boult', 'V Kohli', 'YK Pathan', 'Yuvraj Singh']\n\n\n\nPlayers of class 1\nNumber of players under class 1 = 15\n\n\n\n['AR Patel', 'B Kumar', 'BB Sran', 'DJ Bravo', 'DS Kulkarni', 'Harbhajan Singh', 'JJ Bumrah', 'MC Henriques', 'MJ McClenaghan', 'MM Sharma', 'Mustafizur Rahman', 'P Kumar', 'SR Watson', 'Sandeep Sharma', 'YS Chahal']\n\n\n\nPlayers of class 2\nNumber of players under class 2 = 28\n\n\n\n['A Zampa', 'AS Rajpoot', 'Bipul Sharma', 'DJ Hooda', 'DR Smith', 'GB Hogg', 'HH Pandya', 'HV Patel', 'I Sharma', 'Imran Tahir', 'J Yadav', 'JO Holder', 'JP Faulkner', 'KC Cariappa', 'KJ Abbott', 'KV Sharma', 'KW Richardson', 'Kuldeep Yadav', 'MG Johnson', 'NM Coulter-Nile', 'P Sahu', 'PV Tambe', 'Parvez Rasool', 'RP Singh', 'S Nadeem', 'SB Jakati', 'STR Binny', 'T Shamsi']\n\n\n\nPlayers of class 3\nNumber of players under class 3 = 15\n\n\n\n['A Nehra', 'AB Dinda', 'CJ Jordan', 'CR Brathwaite', 'Iqbal Abdulla', 'KH Pandya', 'M Ashwin', 'MP Stoinis', 'Mohammed Shami', 'R Bhatia', 'S Aravind', 'S Kaushik', 'Shakib Al Hasan', 'UT Yadav', 'VR Aaron']\n\n\n\nPlayers of class 4\nNumber of players under class 4 = 11\n\n\n\n['A Mishra', 'AD Russell', 'CH Morris', 'M Morkel', 'NLTC Perera', 'PP Chawla', 'R Ashwin', 'RA Jadeja', 'SP Narine', 'TG Southee', 'Z Khan']\n\n\n\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d04dc10fe202044b1d327d98b56caae0ea9171b0 | 46,136 | ipynb | Jupyter Notebook | Ugwu Lilian WT-21-138/2018_LE_GDP.ipynb | ruthwaiharo/Week-5-Assessment | f320a9e553c9b723fff996128fcdca45bbe0f2b0 | [
"MIT"
] | 1 | 2021-06-18T22:08:40.000Z | 2021-06-18T22:08:40.000Z | Ugwu Lilian WT-21-138/2018_LE_GDP.ipynb | ruthwaiharo/Week-5-Assessment | f320a9e553c9b723fff996128fcdca45bbe0f2b0 | [
"MIT"
] | 4 | 2021-06-19T00:36:02.000Z | 2021-07-05T08:48:08.000Z | Ugwu Lilian WT-21-138/2018_LE_GDP.ipynb | ruthwaiharo/Week-5-Assessment | f320a9e553c9b723fff996128fcdca45bbe0f2b0 | [
"MIT"
] | 68 | 2021-06-12T09:24:30.000Z | 2021-08-31T12:14:36.000Z | 55.054893 | 19,376 | 0.653784 | [
[
[
"# GDP and life expectancy\n\nRicher countries can afford to invest more on healthcare, on work and road safety, and other measures that reduce mortality. On the other hand, richer countries may have less healthy lifestyles. Is there any relation between the wealth of a country and the life expectancy of its inhabitants?\n\nThe following analysis checks whether there is any correlation between the total gross domestic product (GDP) of a country in 2013 and the life expectancy of people born in that country in 2013.",
"_____no_output_____"
],
[
"Getting the data\nTwo datasets of the World Bank are considered. One dataset, available at http://data.worldbank.org/indicator/NY.GDP.MKTP.CD, lists the GDP of the world's countries in current US dollars, for various years. The use of a common currency allows us to compare GDP values across countries. The other dataset, available at http://data.worldbank.org/indicator/SP.DYN.LE00.IN, lists the life expectancy of the world's countries. The datasets were downloaded as CSV files in March 2016.",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.simplefilter('ignore', FutureWarning)\n\nimport pandas as pd\n\nYEAR = 2018\nGDP_INDICATOR = 'NY.GDP.MKTP.CD'\ngdpReset = pd.read_csv('WB 2018 GDP.csv')\n\n\nLIFE_INDICATOR = 'SP.DYN.LE00.IN_'\nlifeReset = pd.read_csv('WB 2018 LE.csv')\nlifeReset.head()",
"_____no_output_____"
]
],
[
[
"## Cleaning the data\n\nInspecting the data with `head()` and `tail()` shows that:\n\n1. the first 34 rows are aggregated data, for the Arab World, the Caribbean small states, and other country groups used by the World Bank;\n- GDP and life expectancy values are missing for some countries.\n\nThe data is therefore cleaned by:\n1. removing the first 34 rows;\n- removing rows with unavailable values.",
"_____no_output_____"
]
],
[
[
"gdpCountries = gdpReset.dropna()\nlifeCountries = lifeReset.dropna()",
"_____no_output_____"
]
],
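The cell above implements only step 2 (dropping rows with unavailable values). A minimal sketch of step 1 follows, assuming the 34 aggregate rows appear first as in the original World Bank download; the local `WB 2018 *.csv` files may already exclude them:

```python
# Sketch: drop the first 34 aggregated rows (Arab World, Caribbean
# small states, ...) before removing rows with missing values.
# Assumes the aggregates come first, as in the raw World Bank CSVs.
gdpCountries = gdpReset[34:].dropna()
lifeCountries = lifeReset[34:].dropna()
```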
[
[
"## Transforming the data\n\nThe World Bank reports GDP in US dollars and cents. To make the data easier to read, the GDP is converted to millions of British pounds (the author's local currency) with the following auxiliary functions, using the average 2013 dollar-to-pound conversion rate provided by <http://www.ukforex.co.uk/forex-tools/historical-rate-tools/yearly-average-rates>. ",
"_____no_output_____"
]
],
[
[
"def roundToMillions (value):\n return round(value / 1000000)\n\ndef usdToGBP (usd):\n return usd / 1.334801\n\nGDP = 'GDP (£m)'\ngdpCountries[GDP] = gdpCountries[GDP_INDICATOR].apply(usdToGBP).apply(roundToMillions)\ngdpCountries.head()",
"<ipython-input-33-a8f6c23dc95c>:8: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n gdpCountries[GDP] = gdpCountries[GDP_INDICATOR].apply(usdToGBP).apply(roundToMillions)\n"
],
[
"COUNTRY = 'Country Name'\nheadings = [COUNTRY, GDP]\ngdpClean = gdpCountries[headings]\ngdpClean.head()",
"_____no_output_____"
],
[
"LIFE = 'Life expectancy (years)'\nlifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round)\nheadings = [COUNTRY, LIFE]\nlifeClean = lifeCountries[headings]\nlifeClean.head()",
"<ipython-input-35-62070c039e83>:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n lifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round)\n"
],
[
"gdpVsLife = pd.merge(gdpClean, lifeClean, on=COUNTRY, how='inner')\ngdpVsLife.head()",
"_____no_output_____"
]
],
[
[
"## Calculating the correlation\nTo measure if the life expectancy and the GDP grow together, the Spearman rank correlation coefficient is used. It is a number from -1 (perfect inverse rank correlation: if one indicator increases, the other decreases) to 1 (perfect direct rank correlation: if one indicator increases, so does the other), with 0 meaning there is no rank correlation. A perfect correlation doesn't imply any cause-effect relation between the two indicators. A p-value below 0.05 means the correlation is statistically significant.",
"_____no_output_____"
]
],
[
[
"from scipy.stats import spearmanr\n\ngdpColumn = gdpVsLife[GDP]\nlifeColumn = gdpVsLife[LIFE]\n(correlation, pValue) = spearmanr(gdpColumn, lifeColumn)\nprint('The correlation is', correlation)\nif pValue < 0.05:\n print('It is statistically significant.')\nelse:\n print('It is not statistically significant.')",
"The correlation is -0.01111757436417062\nIt is not statistically significant.\n"
]
],
[
[
"The value shows a direct correlation, i.e. richer countries tend to have longer life expectancy.",
"_____no_output_____"
],
[
"## Showing the data\n\nMeasures of correlation can be misleading, so it is best to see the overall picture with a scatterplot. The GDP axis uses a logarithmic scale to better display the vast range of GDP values, from a few million to several billion (million of million) pounds.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\ngdpVsLife.plot(x=GDP, y=LIFE, kind='scatter', grid=True, logx=True, figsize=(10, 4))",
"_____no_output_____"
]
],
[
[
"The plot shows there is no clear correlation: there are rich countries with low life expectancy, poor countries with high expectancy, and countries with around 10 thousand (104) million pounds GDP have almost the full range of values, from below 50 to over 80 years. Towards the lower and higher end of GDP, the variation diminishes. Above 40 thousand million pounds of GDP (3rd tick mark to the right of 104), most countries have an expectancy of 70 years or more, whilst below that threshold most countries' life expectancy is below 70 years.\n\nComparing the 10 poorest countries and the 10 countries with the lowest life expectancy shows that total GDP is a rather crude measure. The population size should be taken into account for a more precise definiton of what 'poor' and 'rich' means. Furthermore, looking at the countries below, droughts and internal conflicts may also play a role in life expectancy.",
"_____no_output_____"
]
],
[
[
"# the 10 countries with lowest GDP\ngdpVsLife.sort_values(GDP).head(10)",
"_____no_output_____"
],
[
"# the 10 countries with lowest life expectancy\ngdpVsLife.sort_values(LIFE).head(10)",
"_____no_output_____"
]
],
[
[
"## Conclusions\nTo sum up, there is no strong correlation between a country's wealth and the life expectancy of its inhabitants: there is often a wide variation of life expectancy for countries with similar GDP, countries with the lowest life expectancy are not the poorest countries, and countries with the highest expectancy are not the richest countries. Nevertheless there is some relationship, because the vast majority of countries with a life expectancy below 70 years is on the left half of the scatterplot.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d04dc7722d7500850d249067517697ede0daa6fb | 188,840 | ipynb | Jupyter Notebook | disease/neanderthal_gwas.ipynb | kshiyao/neanderthal_introgression | 985dc2f4943ebd0df3d77c3a8182ddc3663dcf59 | [
"MIT"
] | null | null | null | disease/neanderthal_gwas.ipynb | kshiyao/neanderthal_introgression | 985dc2f4943ebd0df3d77c3a8182ddc3663dcf59 | [
"MIT"
] | null | null | null | disease/neanderthal_gwas.ipynb | kshiyao/neanderthal_introgression | 985dc2f4943ebd0df3d77c3a8182ddc3663dcf59 | [
"MIT"
] | null | null | null | 38.265451 | 305 | 0.377627 | [
[
[
"# Immune disease associations of Neanderthal-introgressed SNPs\n\nThis code investigates if Neanderthal-introgressed SNPs (present in Chen introgressed sequences) have been associated with any immune-related diseases, including infectious diseases, allergic diseases, autoimmune diseases and autoinflammatory diseases, using data from the NHGRI-EBI GWAS Catalog.\n\nNeanderthal-introgressed SNPs from:\n1. Dannemann M, Prufer K & Kelso J. Functional implications of Neandertal introgression in modern humans. *Genome Biol* 2017 **18**:61.\n2. Simonti CN *et al.* The phenotypic legacy of admixture between modern humans and Neandertals. *Science* 2016 **351**:737-41. \n\nNeanderthal-introgressed sequences by Chen *et al.* from:\n* Chen L *et al.* Identifying and interpreting apparent Neanderthal ancestry in African individuals. *Cell* 2020 **180**:677-687. \n\nGWAS summary statistics from:\n* [GWAS Catalog](https://www.ebi.ac.uk/gwas/docs/file-downloads)",
"_____no_output_____"
]
],
[
[
"# Import modules\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"## Get Neanderthal SNPs present in GWAS Catalog",
"_____no_output_____"
]
],
[
[
"# Load Chen Neanderthal-introgressed SNPs\nchen = pd.read_excel('../chen/Additional File 1.xlsx', 'Sheet1', usecols=['Chromosome', 'Position', 'Source', 'ID', 'Chen'])\nneanderthal = chen.loc[chen.Chen == 'Yes'].copy()\nneanderthal.drop('Chen', axis=1)",
"_____no_output_____"
],
[
"# Load GWAS catalog\ncatalog = pd.read_csv('GWAS_Catalog.tsv', sep=\"\\t\", header=0,\n usecols=['DISEASE/TRAIT', 'CHR_ID', 'CHR_POS', 'REPORTED GENE(S)', 'MAPPED_GENE',\n 'STRONGEST SNP-RISK ALLELE', 'SNPS', 'RISK ALLELE FREQUENCY', 'P-VALUE', 'OR or BETA',\n '95% CI (TEXT)', 'MAPPED_TRAIT', 'STUDY ACCESSION'], low_memory=False)\ncatalog = catalog.loc[catalog.CHR_ID != 'X'].copy()\ncatalog = catalog.loc[catalog.CHR_ID != 'Y'].copy()\ncatalog.rename(columns={'CHR_ID': 'Chromosome', 'CHR_POS': 'Position', 'SNPS': 'ID'}, inplace=True)",
"_____no_output_____"
],
[
"# Neanderthal SNPs present in GWAS catalog\nnean_catalog = neanderthal.merge(catalog.drop(columns=['Chromosome', 'Position']), how='inner', on='ID')\nnean_catalog",
"_____no_output_____"
]
],
[
[
"## Immune-related diseases associated with Neanderthal SNPs",
"_____no_output_____"
],
[
"### Infections",
"_____no_output_____"
]
],
[
[
"nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('influenza')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('wart')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('HIV')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('Malaria')]",
"_____no_output_____"
]
],
[
[
"### Allergic diseases",
"_____no_output_____"
]
],
[
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('allerg')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('asthma')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Eczema')]",
"_____no_output_____"
]
],
[
[
"### Autoimmune/autoinflammatory diseases",
"_____no_output_____"
]
],
[
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('lupus')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('rheumatoid')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('scleroderma')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Sjogren')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Grave')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('glomerulonephritis')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('colitis')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Crohn')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('bowel')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('psoriasis')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('celiac')]",
"_____no_output_____"
],
[
"nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('multiple sclerosis')]",
"_____no_output_____"
]
],
[
[
"## Do immune disease-associated Neanderthal SNPs show eQTL?",
"_____no_output_____"
]
],
[
[
"# Load eQTL data\nfairfax_ori = pd.read_csv(\"../fairfax/tab2_a_cis_eSNPs.txt\", sep=\"\\t\", usecols=[\"SNP\", \"Gene\", \"Min.dataset\", \"LPS2.FDR\", \"LPS24.FDR\", \"IFN.FDR\", \"Naive.FDR\"])\n\nfairfax_re = pd.read_csv('overlap_filtered_fairfax.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])\nfairfax_re.sort_values('pvalue', inplace=True)\nfairfax_re.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)\n\nnedelec_re = pd.read_csv('overlap_filtered_nedelec.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])\nnedelec_re.sort_values('pvalue', inplace=True)\nnedelec_re.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)\n\nquach = pd.read_csv('overlap_filtered_quach.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])\nquach.sort_values('pvalue', inplace=True)\nquach.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)\n\nalasoo = pd.read_csv('overlap_filtered_alasoo.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])\nalasoo.sort_values('pvalue', inplace=True)\nalasoo.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)",
"_____no_output_____"
],
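The four recomputed-eQTL blocks above repeat the same load/sort/deduplicate pattern. A small helper (a sketch equivalent to that code) would keep them in sync:

```python
def load_recomputed_eqtl(path):
    """Sketch: shared loader for the recomputed eQTL tables above.
    Keeps, per (rsid, gene_id, Condition), the association with the
    smallest p-value, exactly as in the repeated blocks."""
    df = pd.read_csv(path, usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
    df.sort_values('pvalue', inplace=True)
    df.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)
    return df

# e.g. quach = load_recomputed_eqtl('overlap_filtered_quach.csv')
```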
[
"# Selected Neanderthal SNPs with immune disease associations\ngwas = open('overlapped_SNPs.txt', 'r').read().splitlines()\ngwas",
"_____no_output_____"
],
[
"# Overlap with original Fairfax eQTLs\nls = set(list(fairfax_ori.SNP)).intersection(gwas)\nfairfax_ori.loc[fairfax_ori.SNP.isin(ls)]",
"_____no_output_____"
],
[
"# Overlap with recomputed Fairfax eQTLs\nls = set(list(fairfax_re.rsid)).intersection(gwas)\nfairfax_re.loc[fairfax_re.rsid.isin(ls)]",
"_____no_output_____"
],
[
"# Overlap with recomputed Nedelec eQTLs\nls = set(list(nedelec_re.rsid)).intersection(gwas)\nnedelec_re.loc[nedelec_re.rsid.isin(ls)]",
"_____no_output_____"
],
[
"# Overlap with recomputed Quach eQTLs\nls = set(list(quach.rsid)).intersection(gwas)\nquach.loc[quach.rsid.isin(ls)]",
"_____no_output_____"
],
[
"# Overlap with recomputed Alasoo eQTLs\nls = set(list(alasoo.rsid)).intersection(gwas)\nalasoo.loc[alasoo.rsid.isin(ls)]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04de5814488acdb696702cf7b4a52feea7212d0 | 6,591 | ipynb | Jupyter Notebook | mgnify/src/notebooks/American_Gut_filter_based_in_location.ipynb | ProteinsWebTeam/ebi-metagenomics-examples | 44d3ae57fdd5ba35a243659e582a8c513dbcd85f | [
"Apache-2.0"
] | 8 | 2018-10-30T14:04:36.000Z | 2021-10-02T13:07:08.000Z | mgnify/src/notebooks/American_Gut_filter_based_in_location.ipynb | ProteinsWebTeam/ebi-metagenomics-examples | 44d3ae57fdd5ba35a243659e582a8c513dbcd85f | [
"Apache-2.0"
] | 2 | 2021-03-01T22:30:37.000Z | 2021-11-09T10:18:32.000Z | mgnify/src/notebooks/American_Gut_filter_based_in_location.ipynb | ProteinsWebTeam/ebi-metagenomics-examples | 44d3ae57fdd5ba35a243659e582a8c513dbcd85f | [
"Apache-2.0"
] | 8 | 2018-12-11T20:43:33.000Z | 2021-01-06T03:42:58.000Z | 34.689474 | 767 | 0.497497 | [
[
[
"# American Gut Project example\n\nThis notebook was created from a question we recieved from a user of MGnify.\n\nThe question was:\n\n```\nI am attempting to retrieve some of the MGnify results from samples that are part of the American Gut Project based on sample location. \nHowever latitude and longitude do not appear to be searchable fields. \nIs it possible to query these fields myself or to work with someone to retrieve a list of samples from a specific geographic range? I am interested in samples from people in Hawaii, so 20.5 - 20.7 and -154.0 - -161.2.\n```\n\nLet's decompose the question:\n- project \"American Gut Project\"\n- Metadata filtration using the geographic location of a sample. \n- Get samples for Hawai: 20.5 - 20.7 ; -154.0 - -161.2\n\nEach sample if MGnify it's obtained from [ENA](https://www.ebi.ac.uk/ena).\n\n## Get samples\n\nThe first step is to obtain the samples using [ENA advanced search API](https://www.ebi.ac.uk/ena/browser/advanced-search).\n\n",
"_____no_output_____"
]
],
[
[
"from pandas import DataFrame\nimport requests\n\nbase_url = 'https://www.ebi.ac.uk/ena/portal/api/search' \n\n# parameters\nparams = {\n 'result': 'sample',\n 'query': ' AND '.join([\n 'geo_box1(16.9175,-158.4687,21.6593,-152.7969)',\n 'description=\"*American Gut Project*\"'\n ]),\n 'fields': ','.join(['secondary_sample_accession', 'lat', 'lon']),\n 'format': 'json',\n}\n\nresponse = requests.post(base_url, data=params)\n\nagp_samples = response.json()\n\ndf = DataFrame(columns=('secondary_sample_accession', 'lat', 'lon'))\ndf.index.name = 'accession'\n\nfor s in agp_samples:\n df.loc[s.get('accession')] = [\n s.get('secondary_sample_accession'),\n s.get('lat'),\n s.get('lon')\n ]\n\ndf\n",
"secondary_sample_accession lat lon\naccession \nSAMEA104163502 ERS1822520 19.6 -155.0\nSAMEA104163503 ERS1822521 19.6 -155.0\nSAMEA104163504 ERS1822522 19.6 -155.0\nSAMEA104163505 ERS1822523 19.6 -155.0\nSAMEA104163506 ERS1822524 19.6 -155.0\n... ... ... ...\nSAMEA4588733 ERS2409455 21.5 -157.8\nSAMEA4588734 ERS2409456 21.5 -157.8\nSAMEA4786501 ERS2606437 21.4 -157.7\nSAMEA92368918 ERS1561273 19.4 -155.0\nSAMEA92936668 ERS1562030 21.3 -157.7\n\n[121 rows x 3 columns]\n"
]
],
[
[
"Now we can use EMG API to get the information.\n",
"_____no_output_____"
]
],
[
[
"#!/bin/usr/env python\n\nimport requests\nimport sys\n\n\ndef get_links(data):\n return data[\"links\"][\"related\"]\n\n\nif __name__ == \"__main__\":\n samples_url = \"https://www.ebi.ac.uk/metagenomics/api/v1/samples/\"\n \n tsv = sys.argv[1] if len(sys.argv) == 2 else None\n if not tsv:\n print(\"The first arg is the tsv file\")\n exit(1)\n\n tsv_fh = open(tsv, \"r\")\n\n # header\n next(tsv_fh)\n\n for record in tsv_fh:\n # get the runs first\n\n # mgnify references the secondary accession\n _, sec_acc, *_ = record.split(\"\\t\")\n samples_res = requests.get(samples_url + sec_acc)\n\n if samples_res.status_code == 404:\n print(sec_acc + \" not found in MGnify\")\n continue\n\n # then the analysis for that run\n runs_url = get_links(samples_res.json()[\"data\"][\"relationships\"][\"runs\"])\n\n if not runs_url:\n print(\"No runs for sample \" + sec_acc)\n continue\n\n print(\"Getting the runs: \" + runs_url)\n\n run_res = requests.get(runs_url)\n\n if run_res.status_code != 200:\n print(run_url + \" failed\", file=sys.stderr)\n continue\n\n # iterate over the sample runs\n run_data = run_res.json()\n\n # this script doesn't consider pagination, it's just an example\n # there could be more that one page of runs\n # use links -> next to get the next page\n for run in run_data[\"data\"]:\n analyses_url = get_links(run[\"relationships\"][\"analyses\"])\n\n if not analyses_url:\n print(\"No analyses for run \" + run)\n continue\n\n analyses_res = requests.get(analyses_url)\n\n if analyses_res.status_code != 200:\n print(analyses_url + \" failed\", file=sys.stderr)\n continue\n\n # dump\n print(\"Raw analyses data\")\n print(analyses_res.json())\n print(\"=\" * 30)\n\n tsv_fh.close()",
"_____no_output_____"
]
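The script's comments note that it ignores pagination. Below is a minimal sketch of following the `links -> next` pages that the comment refers to; the exact pagination fields are an assumption based on that comment and the JSON:API shape of the responses above:

```python
import requests

def iter_all_pages(url):
    # Sketch: yield items from every page of an MGnify list endpoint,
    # following links -> next until it is exhausted. The field names
    # are assumed from the comment in the script above.
    while url:
        res = requests.get(url)
        res.raise_for_status()
        payload = res.json()
        for item in payload["data"]:
            yield item
        url = payload.get("links", {}).get("next")

# e.g. for run in iter_all_pages(runs_url): ...
```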
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d04dea3c442a050b599e2d17ab63c9c1b859d12e | 225,810 | ipynb | Jupyter Notebook | employee_attrition/attrition-tf.ipynb | mlarionov/machine_learning_POC | 52cdece108f285a4e67212fd289f6dbf9035dca0 | [
"Apache-2.0"
] | 24 | 2018-12-07T13:16:43.000Z | 2022-03-24T11:22:29.000Z | employee_attrition/attrition-tf.ipynb | mlarionov/machine_learning_POC | 52cdece108f285a4e67212fd289f6dbf9035dca0 | [
"Apache-2.0"
] | 1 | 2021-02-25T07:07:17.000Z | 2021-02-25T07:07:17.000Z | employee_attrition/attrition-tf.ipynb | mlarionov/machine_learning_POC | 52cdece108f285a4e67212fd289f6dbf9035dca0 | [
"Apache-2.0"
] | 13 | 2019-06-03T17:29:49.000Z | 2022-01-05T01:41:13.000Z | 47.379354 | 29,068 | 0.329321 | [
[
[
"# Employee Attrition Prediction\nThere is a class of problems that predict that some event happens after N years. Examples are employee attrition, hard drive failure, life expectancy, etc. \n\nUsually these kind of problems are considered simple problems and are the models have vairous degree of performance. Usually it is treated as a classification problem, predicting if after exactly N years the event happens. The problem with this approach is that people care not so much about the likelihood that the event happens exactly after N years, but the probability that the event happens today. While you can infer this using Bayes theorem, doing it during prediction will not give you good accuracy because the Bayesian inference will be based on one piece of data. It is better to do this kind of inference during training time, and learn the probability than the likelihood function.\n\nThus, the problem is learning a conditional probability of the person quitting, given he has not quit yet, and is similar to the Hazard function in survival analysis problem",
"_____no_output_____"
]
],
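To make the framing concrete, the standard survival-analysis identities below relate the per-period (hazard) probability to the "exactly after N years" quantities a plain classifier would learn. This is just a restatement, in conventional notation, of the derivation carried out later in the notebook, with $h(t)$ playing the role of $p(x, t)$:

```latex
% Likelihood of the event at exactly t, and of surviving past t,
% given a per-period hazard h(l):
f(t) = h(t)\prod_{l=0}^{t-1}\bigl(1 - h(l)\bigr), \qquad
S(t) = \prod_{l=0}^{t}\bigl(1 - h(l)\bigr)
% Bayes' theorem recovers the per-period probability from f and S:
h(t) = \frac{f(t)}{S(t-1)}
```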
[
[
"#Import\nimport numpy as np\nimport pandas as pd\nimport numpy.random\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport tensorflow as tf\nfrom sklearn.preprocessing import MinMaxScaler\nimport math\n%matplotlib inline\nnumpy.random.seed(1239)",
"C:\\Users\\michael\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n"
],
[
"# Read the data\n# Source: https://www.ibm.com/communities/analytics/watson-analytics-blog/hr-employee-attrition/\nraw_data = pd.read_csv('data/WA_Fn-UseC_-HR-Employee-Attrition.csv')",
"_____no_output_____"
],
[
"#Check if any is nan. If no nans, we don't need to worry about dealing with them\nraw_data.isna().sum().sum()",
"_____no_output_____"
],
[
"def prepare_data(raw_data):\n '''\n Prepare the data\n 1. Set EmployeeNumber as the index\n 2. Drop redundant columns\n 3. Reorder columns to make YearsAtCompany first\n 4. Change OverTime to the boolean type\n 5. Do 1-hot encoding\n '''\n labels = raw_data.Attrition == 'Yes'\n employee_data = raw_data.set_index('EmployeeNumber').drop(columns=['Attrition', 'EmployeeCount', 'Over18'])\n employee_data.loc[:, 'OverTime'] = (employee_data.OverTime == 'Yes').astype('float')\n employee_data = pd.get_dummies(employee_data)\n employee_data = pd.concat([employee_data.YearsAtCompany, employee_data.drop(columns='YearsAtCompany')], axis=1)\n return employee_data, labels\n",
"_____no_output_____"
],
[
"#Split to features and labels\nemployee_data, labels = prepare_data(raw_data)",
"_____no_output_____"
]
],
[
[
"First we will work on the synthetic set of data, for this reason we will not split the dataset to train/test yet",
"_____no_output_____"
]
],
[
[
"#Now scale the entire dataset, but not the first column (YearsAtCompany). Instead scale the dataset to be similar range\n#to the first column\nmax_year = employee_data.YearsAtCompany.max()\nscaler = MinMaxScaler(feature_range=(0, max_year))\nscaled_data = pd.DataFrame(scaler.fit_transform(employee_data.values.astype('float')),\n columns=employee_data.columns,\n index=employee_data.index)\n",
"_____no_output_____"
]
],
[
[
"Based on the chart it seems like a realistic data set.\nNow we need to construct our loss function. It will have an additional parameter: number of years\n\nWe define probability $p(x, t)$ that the person quits this very day, where t is the number of years and x is the remaining features. Then the likelihood that the person has quit after the year $t$ is \n$$P(x,t) = (\\prod_{l=0}^{t-1} (1-p(x,l))) p(x,t) $$ whereas the likelihood that the person will remain after the year $t$ is \n$$P(x,t) = \\prod_{l=0}^{t} (1-p(x,l)) $$\nStrictly speaking x is also dependent on t, but we don't have the historical data for this, so we assume that x is independent of t.\n\nUsing the principle of maximum likelihood, we derive the loss function taking negative log of the likelihood function:\n$$\\mathscr{L}(y,p) = -\\sum_{l=0}^{t-1} \\log(1-p(x,l)) - y \\log{p} - (1-y) \\log(1-p) $$\nWhere y is an indicator if the person has quit after working exactly t years or not.\nNotice that the last two terms is the cross-entropy loss function, and the first term is a hitorical term. ",
"_____no_output_____"
],
[
"We will use a modified Cox Hazard function mechanism and model the conditional probability $p(x,l)$ a sigmoid function (for simplicity we include bias in the list of weights, and so the weight for the t parameter): $$p=\\frac{1}{1 + e^{-\\bf{w}\\bf{x}}}$$\n\n\n\n",
"_____no_output_____"
],
[
"To create a synthetic set we assume that p does not depend on anything. Then the maximum likelihood gives us this simple formula: $$Pos=M p \\bar{t}$$ \nHere Pos is the number of positive example (people who quit) and M is the total number of examples and $\\bar{t}$ is the mean time (number of years)\n",
"_____no_output_____"
]
],
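Before the TensorFlow implementation further down, a minimal NumPy transcription of $\mathscr{L}$ for a single example can serve as a sanity check. This is a sketch: `w`, `x` and `y` are illustrative placeholders, with the time variable stored in `x[0]` as in the code below:

```python
import numpy as np

def single_example_loss(w, x, y):
    # Sketch: negative log-likelihood of one person, following the
    # derivation above. x[0] holds the number of years t; p(x, l) is
    # evaluated by substituting l for the time component.
    t = int(x[0])

    def p(l):
        x_l = x.copy()
        x_l[0] = l
        return 1.0 / (1.0 + np.exp(-w @ x_l))

    historical = -sum(np.log(1.0 - p(l)) for l in range(t))
    p_t = p(t)
    return historical - y * np.log(p_t) - (1 - y) * np.log(1.0 - p_t)
```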
[
[
"#pick a p\np = 0.01\n#Get the maximum years. We need it to make sure that the product of p YearsAtCompany never exceeds 1.\n#In reality that is not a problem, but we will use it to correctly create synthetic labels\nscaled_data.YearsAtCompany.max()",
"_____no_output_____"
],
[
"#Create the synthetic labels. \nsynthetic_labels = numpy.random.rand(employee_data.shape[0]) < p * employee_data.YearsAtCompany\n#Plot the data with the synthetic labels\nsns.swarmplot(y='years', x='quit', data=pd.DataFrame({\"quit\":synthetic_labels, 'years':employee_data.YearsAtCompany}));",
"_____no_output_____"
],
[
"#We expect the probability based on the synthesized data (but we are getting something else....) to be close to p\nsynthetic_labels.sum()/len(synthetic_labels)/employee_data.YearsAtCompany.mean()",
"_____no_output_____"
]
],
[
[
"Indeed pretty close to the value of p we set beforehand",
"_____no_output_____"
],
[
"## Logistic Regression with the synthetic labels\n\nIn this version of the POC we will use TensorFlow",
"_____no_output_____"
],
[
"We need to add ones to the dataframe.\nBut since we scaled everything to be between `0` and `40`, the convergence will be faster if we add `40.0` instead of `1`",
"_____no_output_____"
]
],
[
[
"#Add 1 to the employee data.\n#But to make convergence fa\nscaled_data['Ones'] = 40.0",
"_____no_output_____"
],
[
"scaled_data",
"_____no_output_____"
],
[
"def reset_graph(seed=1239):\n tf.reset_default_graph()\n tf.set_random_seed(seed)\n np.random.seed(seed)",
"_____no_output_____"
],
[
"def create_year_column(X, w, year):\n year_term = tf.reshape(X[:,0]-year, (-1,1)) * w[0]\n year_column = tf.reshape(X @ w - year_term,(-1,))\n return year_column * tf.cast(tf.greater(X[:,0],year), dtype=tf.float32)",
"_____no_output_____"
],
[
"def logit(X, w):\n '''\n \n IMPORTANT: This assumes that the weight for the temporal variable is w[0]\n TODO: Remove this assumption and allow to specify the index of the temporal variable\n '''\n max_year_tf = tf.reduce_max(X[:,0])\n tensors = tf.map_fn(lambda year: create_year_column(X, w, year), tf.range(max_year_tf))\n return tf.transpose(tensors)",
"_____no_output_____"
],
[
"logit_result = logit(X,weights)\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n result = logit_result.eval()\nresult[1]",
"_____no_output_____"
],
[
"def get_loss(X, y, w):\n '''\n The loss function\n '''\n #The first term\n logit_ = logit(X, w)\n temp_tensor = tf.sigmoid(logit_) * tf.cast(tf.greater(logit_, 0), tf.float32)\n sum_loss = tf.reduce_sum(tf.log(1-temp_tensor),1)\n sum_loss = tf.reshape(sum_loss, (-1,1))\n logistic_prob = tf.sigmoid(X @ w)\n return -sum_loss - y * tf.log(logistic_prob) - (1-y) * tf.log(1-logistic_prob)\n",
"_____no_output_____"
],
[
"loss_result = get_loss(X, y, weights/100)\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n result = loss_result.eval()\nresult",
"_____no_output_____"
],
[
"reset_graph()\n\nlearning_rate = 0.0005\nl2 = 2.0\n\n\nX = tf.constant(scaled_data.values, dtype=tf.float32, name=\"X\")\ny = tf.constant(synthetic_labels.values.reshape(-1, 1), dtype=tf.float32, name=\"y\")\nweights = tf.Variable(tf.random_uniform([scaled_data.values.shape[1], 1], -0.01, 0.01, seed=1239), name=\"weights\")\nloss = get_loss(X, y, weights)\n\nl2_regularizer = tf.nn.l2_loss(weights) - 0.5 * weights[-1] ** 2\n\ncost = tf.reduce_mean(loss) + l2 * l2_regularizer\n\noptimizer = tf.train.GradientDescentOptimizer(learning_rate)\ntraining_op = optimizer.minimize(cost)\n ",
"_____no_output_____"
],
[
"init = tf.global_variables_initializer()\nn_epochs = 20000\n\n\nwith tf.Session() as sess:\n sess.run(init)\n\n for epoch in range(n_epochs):\n if epoch % 1000 == 0:\n print(\"Epoch\", epoch, \"Cost =\", cost.eval())\n print(f'w: {weights[-1].eval()}')\n sess.run(training_op)\n \n best_theta = weights.eval()",
"Epoch 0 Cost = [0.4480857]\nw: [-0.00260041]\nEpoch 1000 Cost = [0.25044656]\nw: [-0.04913734]\nEpoch 2000 Cost = [0.24958777]\nw: [-0.06650413]\nEpoch 3000 Cost = [0.24919516]\nw: [-0.07856989]\nEpoch 4000 Cost = [0.2489799]\nw: [-0.08747929]\nEpoch 5000 Cost = [0.24980566]\nw: [-0.09409016]\nEpoch 6000 Cost = [0.24926803]\nw: [-0.09901612]\nEpoch 7000 Cost = [0.24923217]\nw: [-0.10267571]\nEpoch 8000 Cost = [0.24968402]\nw: [-0.10539492]\nEpoch 9000 Cost = [0.24967311]\nw: [-0.10741644]\nEpoch 10000 Cost = [0.2496681]\nw: [-0.10891172]\nEpoch 11000 Cost = [0.24966364]\nw: [-0.1100379]\nEpoch 12000 Cost = [0.24966182]\nw: [-0.11086603]\nEpoch 13000 Cost = [0.24966045]\nw: [-0.11149137]\nEpoch 14000 Cost = [0.24966016]\nw: [-0.11194912]\nEpoch 15000 Cost = [0.24965991]\nw: [-0.11229044]\nEpoch 16000 Cost = [0.24965975]\nw: [-0.1125449]\nEpoch 17000 Cost = [0.24965967]\nw: [-0.11273381]\nEpoch 18000 Cost = [0.24966054]\nw: [-0.1128688]\nEpoch 19000 Cost = [0.2496596]\nw: [-0.11298056]\n"
]
],
[
[
"The cost will never go down to zero, because of the additional term in the loss function.",
"_____no_output_____"
]
],
[
[
"#We will print the learned weights.\nlearned_weights = [(column_name,float(best_theta[column_num])) \\\n for column_num, column_name in enumerate(scaled_data.columns)]",
"_____no_output_____"
],
[
"#We print the weights sorted by the absolute value of the value\nsorted(learned_weights, key=lambda x: abs(x[1]), reverse=True)",
"_____no_output_____"
]
],
[
[
"To compare with the other result we need to multiplty the last weight by 40",
"_____no_output_____"
]
],
[
[
"print(f'The predicted probability is: {float(1/(1+np.exp(-best_theta[-1]*40)))}')",
"The predicted probability is: 0.010747312568128109\n"
]
],
[
[
"This is very close indeed to the value `0.01` we created for the synthetic dataset of ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d04e035365bd3d8344ee958aca752da4bf606286 | 12,802 | ipynb | Jupyter Notebook | caffe2/python/tutorials/Loading_Pretrained_Models.ipynb | ZhaoJ9014/caffe2 | 40a1ae36afd2d1b6126f171209c83a4a5b95737c | [
"MIT"
] | 1 | 2017-04-04T07:41:40.000Z | 2017-04-04T07:41:40.000Z | caffe2/python/tutorials/Loading_Pretrained_Models.ipynb | ZhaoJ9014/caffe2 | 40a1ae36afd2d1b6126f171209c83a4a5b95737c | [
"MIT"
] | null | null | null | caffe2/python/tutorials/Loading_Pretrained_Models.ipynb | ZhaoJ9014/caffe2 | 40a1ae36afd2d1b6126f171209c83a4a5b95737c | [
"MIT"
] | null | null | null | 48.862595 | 1,344 | 0.612482 | [
[
[
"# Configuration --- Change to your setup and preferences!\nCAFFE_ROOT = \"~/caffe2\"\n\n# What image do you want to test? Can be local or URL.\n# IMAGE_LOCATION = \"images/cat.jpg\"\n# IMAGE_LOCATION = \"https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Whole-Lemon.jpg/1235px-Whole-Lemon.jpg\"\n# IMAGE_LOCATION = \"https://upload.wikimedia.org/wikipedia/commons/7/7b/Orange-Whole-%26-Split.jpg\"\n# IMAGE_LOCATION = \"https://upload.wikimedia.org/wikipedia/commons/7/7c/Zucchini-Whole.jpg\"\n# IMAGE_LOCATION = \"https://upload.wikimedia.org/wikipedia/commons/a/ac/Pretzel.jpg\"\nIMAGE_LOCATION = \"https://cdn.pixabay.com/photo/2015/02/10/21/28/flower-631765_1280.jpg\"\n\n# What model are we using? You should have already converted or downloaded one.\n# format below is the model's: \n# folder, init_net, predict_net, mean, input image size\n# you can switch the comments on MODEL to try out different model conversions\nMODEL = 'squeezenet', 'init_net.pb', 'run_net.pb', 'ilsvrc_2012_mean.npy', 227\n\n# googlenet will fail with \"enforce fail at fully_connected_op.h:25\"\n# MODEL = 'bvlc_googlenet', 'init_net.pb', 'predict_net.pb', 'ilsvrc_2012_mean.npy', 224\n\n# these will run out of memory and fail... waiting for C++ version of predictor\n# MODEL = 'bvlc_alexnet', 'init_net.pb', 'predict_net.pb', 'ilsvrc_2012_mean.npy', 224\n# MODEL = 'finetune_flickr_style', 'init_net.pb', 'predict_net.pb', 'ilsvrc_2012_mean.npy', 224\n\n# The list of output codes for the AlexNet models (squeezenet)\ncodes = \"https://gist.githubusercontent.com/maraoz/388eddec39d60c6d52d4/raw/791d5b370e4e31a4e9058d49005be4888ca98472/gistfile1.txt\"\nprint \"Config set!\"",
"Config set!\n"
],
[
"%matplotlib inline\nfrom caffe2.proto import caffe2_pb2\nimport numpy as np\nimport skimage.io\nimport skimage.transform\nfrom matplotlib import pyplot\nimport os\nfrom caffe2.python import core, workspace\nimport urllib2\nprint(\"Required modules imported.\")\ndef crop_center(img,cropx,cropy):\n y,x,c = img.shape\n startx = x//2-(cropx//2)\n starty = y//2-(cropy//2) \n return img[starty:starty+cropy,startx:startx+cropx]\n\ndef rescale(img, input_height, input_width):\n print(\"Original image shape:\" + str(img.shape) + \" and remember it should be in H, W, C!\")\n print(\"Model's input shape is %dx%d\") % (input_height, input_width)\n aspect = img.shape[1]/float(img.shape[0])\n print(\"Orginal aspect ratio: \" + str(aspect))\n if(aspect>1):\n # landscape orientation - wide image\n res = int(aspect * input_height)\n imgScaled = skimage.transform.resize(img, (input_width, res))\n if(aspect<1):\n # portrait orientation - tall image\n res = int(input_width/aspect)\n imgScaled = skimage.transform.resize(img, (res, input_height))\n if(aspect == 1):\n imgScaled = skimage.transform.resize(img, (input_width, input_height))\n pyplot.figure()\n pyplot.imshow(imgScaled)\n pyplot.axis('on')\n pyplot.title('Rescaled image')\n print(\"New image shape:\" + str(imgScaled.shape) + \" in HWC\")\n return imgScaled\nprint \"Functions set.\"\n\n# set paths and variables from model choice\nCAFFE_ROOT = os.path.expanduser(CAFFE_ROOT)\nCAFFE_MODELS = os.path.join(CAFFE_ROOT, 'models')\nMEAN_FILE = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[3])\n\nif not os.path.exists(MEAN_FILE):\n mean = 128\nelse:\n mean = np.load(MEAN_FILE).mean(1).mean(1)\n mean = mean[:, np.newaxis, np.newaxis]\n\nprint \"mean was set to: \", mean\nINPUT_IMAGE_SIZE = MODEL[4]\nif not os.path.exists(CAFFE_ROOT):\n print(\"Houston, you may have a problem.\") \nINIT_NET = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[1])\nPREDICT_NET = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[2])\nif not os.path.exists(INIT_NET):\n print(INIT_NET + \" not found!\")\nelse:\n print \"Found \", INIT_NET, \"...Now looking for\", PREDICT_NET\n if not os.path.exists(PREDICT_NET):\n print \"Caffe model file, \" + PREDICT_NET + \" was not found!\"\n else:\n print \"All needed files found! Loading the model in the next block.\"",
"Required modules imported.\nFunctions set.\nmean was set to: 128\nFound /home/aaron/caffe2/models/finetune_flickr_style/init_net.pb ...Now looking for /home/aaron/caffe2/models/finetune_flickr_style/predict_net.pb\nAll needed files found! Loading the model in the next block.\n"
],
[
"# initialize the neural net\np = workspace.Predictor(INIT_NET, PREDICT_NET)\n\n# load and transform image\nimg = skimage.img_as_float(skimage.io.imread(IMAGE_LOCATION)).astype(np.float32)\nimg = rescale(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)\nimg = crop_center(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)\nprint \"After crop: \" , img.shape\npyplot.figure()\npyplot.imshow(img)\npyplot.axis('on')\npyplot.title('Cropped')\n\n# switch to CHW\nimg = img.swapaxes(1, 2).swapaxes(0, 1)\npyplot.figure()\nfor i in range(3):\n # For some reason, pyplot subplot follows Matlab's indexing\n # convention (starting with 1). Well, we'll just follow it...\n pyplot.subplot(1, 3, i+1)\n pyplot.imshow(img[i])\n pyplot.axis('off')\n pyplot.title('RGB channel %d' % (i+1))\n\n# switch to BGR\nimg = img[(2, 1, 0), :, :]\n\n# remove mean for better results\nimg = img * 255 - mean\n\n# add batch size\nimg = img[np.newaxis, :, :, :].astype(np.float32)\nprint \"NCHW: \", img.shape\n\n# run the net and return prediction\nresults = p.run([img])\nresults = np.asarray(results)\nresults = np.delete(results, 1)\nindex = 0\nhighest = 0\narr = np.empty((0,2), dtype=object)\narr[:,0] = int(10)\narr[:,1:] = float(10)\nfor i, r in enumerate(results):\n # imagenet index begins with 1!# imagenet index begins with 1!\n i=i+1\n arr = np.append(arr, np.array([[i,r]]), axis=0)\n if (r > highest):\n highest = r\n index = i \n\nprint index, \" :: \", highest\n\n# top 3\n# sorted(arr, key=lambda x: x[1], reverse=True)[:3]\n\nresponse = urllib2.urlopen(codes)\n\nfor line in response:\n code, result = line.partition(\":\")[::2]\n if (code.strip() == str(index)):\n print result.strip()[1:-2]",
"_____no_output_____"
]
],
[
[
"Check [this list](https://gist.github.com/maraoz/388eddec39d60c6d52d4) to verify the results.",
"_____no_output_____"
]
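For convenience, a sketch that prints the top five predictions with their labels; it reuses `results` and the `codes` URL from the cells above, and the quote-stripping mirrors the parsing already done there:

```python
# Sketch: map the five highest-scoring indices to their labels.
# Remember that the label file's indices begin at 1.
top5 = sorted(enumerate(results, start=1), key=lambda x: x[1], reverse=True)[:5]
labels = {}
for line in urllib2.urlopen(codes):
    code, _, label = line.partition(":")
    labels[code.strip()] = label.strip()[1:-2]
for index, score in top5:
    print "%d (%.4f): %s" % (index, score, labels.get(str(index), "?"))
```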
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
d04e17a8027630a58da3a3c3dc108c8c58031bce | 43,696 | ipynb | Jupyter Notebook | BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb | michellab/bgflow | 46c1f6035a7baabcbaee015603d08b8ce63d9717 | [
"MIT"
] | null | null | null | BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb | michellab/bgflow | 46c1f6035a7baabcbaee015603d08b8ce63d9717 | [
"MIT"
] | null | null | null | BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb | michellab/bgflow | 46c1f6035a7baabcbaee015603d08b8ce63d9717 | [
"MIT"
] | null | null | null | 51.407059 | 18,966 | 0.738672 | [
[
[
"# Training a Boltzmann Generator for Alanine Dipeptide\n\nThis notebook introduces basic concepts behind `bgflow`. \n\nIt shows how to build an train a Boltzmann generator for a small peptide. The most important aspects it will cover are\n\n- retrieval of molecular training data\n- defining a internal coordinate transform\n- defining normalizing flow classes\n- combining different normalizing flows\n- training a Boltzmann generator via NLL and KLL\n\nThe main purpose of this tutorial is to introduce the implementation. The network design is optimized for educational purposes rather than good performance. In the conlusions, we will discuss some aspects of the generator that are not ideal and outline improvements.\n\n## Some Preliminaries\n\nWe instruct jupyter to reload any imports automatically and define the device and datatype, on which we want to perform the computations.",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload",
"_____no_output_____"
],
[
"%autoreload 2",
"_____no_output_____"
],
[
"import torch\n\ndevice = \"cuda:3\" if torch.cuda.is_available() else \"cpu\"\ndtype = torch.float32\n# a context tensor to send data to the right device and dtype via '.to(ctx)'\nctx = torch.zeros([], device=device, dtype=dtype)",
"_____no_output_____"
]
],
[
[
"\n\n## Load the Data and the Molecular System\n\nMolecular trajectories and their corresponding potential energy functions are available from the `bgmol` repository.",
"_____no_output_____"
]
],
[
[
"# import os\n# from bgmol.datasets import Ala2TSF300\n\n# target_energy = Ala2TSF300().get_energy_model(n_workers=1)\n",
"_____no_output_____"
],
[
"import os\nimport mdtraj\n#dataset = mdtraj.load('output.dcd', top='ala2_fromURL.pdb')\ndataset = mdtraj.load('TSFtraj.dcd', top='ala2_fromURL.pdb')\n#fname = \"obc_xmlsystem_savedmodel\"\n#coordinates = dataset.xyz\n#target_energy = Ala2TSF300().get_energy_model(n_workers=1)\nprint(dataset)",
"<mdtraj.Trajectory with 1000000 frames, 22 atoms, 3 residues, without unitcells>\n"
],
[
"import numpy as np\nrigid_block = np.array([6, 8, 9, 10, 14])\nz_matrix = np.array([\n [0, 1, 4, 6],\n [1, 4, 6, 8],\n [2, 1, 4, 0],\n [3, 1, 4, 0],\n [4, 6, 8, 14],\n [5, 4, 6, 8],\n [7, 6, 8, 4],\n [11, 10, 8, 6],\n [12, 10, 8, 11],\n [13, 10, 8, 11],\n [15, 14, 8, 16],\n [16, 14, 8, 6],\n [17, 16, 14, 15],\n [18, 16, 14, 8],\n [19, 18, 16, 14],\n [20, 18, 16, 19],\n [21, 18, 16, 19]\n])",
"_____no_output_____"
],
[
"\ndef dimensions(dataset):\n return np.prod(dataset.xyz[0].shape)\ndim = dimensions(dataset)\nprint(dim)\n",
"66\n"
],
[
"from simtk import openmm\nwith open('ala2_xml_system.txt') as f:\n xml = f.read()\nsystem = openmm.XmlSerializer.deserialize(xml)\n\nfrom bgflow.distribution.energy.openmm import OpenMMBridge, OpenMMEnergy\nfrom openmmtools import integrators\nfrom simtk import unit\ntemperature = 300.0 * unit.kelvin\ncollision_rate = 1.0 / unit.picosecond\ntimestep = 4.0 * unit.femtosecond\nintegrator = integrators.LangevinIntegrator(temperature=temperature,collision_rate=collision_rate,timestep=timestep)\n\nenergy_bridge = OpenMMBridge(system, integrator, n_workers=1)\ntarget_energy = OpenMMEnergy(int(dim), energy_bridge)",
"_____no_output_____"
]
],
[
[
"The energy model is a `bgflow.Energy` that wraps around OpenMM. The `n_workers` argument determines the number of openmm contexts that are used for energy evaluations. In notebooks, we set `n_workers=1` to avoid hickups. In production, we can omit this argument so that `n_workers` is automatically set to the number of CPU cores.",
"_____no_output_____"
],
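[
"The energy of a batch of configurations can be evaluated by calling the energy model directly. A quick added sketch (the slice of 5 frames is an arbitrary choice):\n\n```python\n# flatten (n, 22, 3) coordinates to (n, 66) and evaluate the reduced energies\nframes = torch.tensor(dataset.xyz[:5].reshape(5, -1)).to(ctx)\nprint(target_energy.energy(frames))  # one energy per frame, in units of kT\n```",
"_____no_output_____"
],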
[
"### Visualize Data: Ramachandran Plot for the Backbone Angles",
"_____no_output_____"
]
],
[
[
"# def compute_phi_psi(trajectory):\n# phi_atoms = [4, 6, 8, 14]\n# phi = md.compute_dihedrals(trajectory, indices=[phi_atoms])[:, 0]\n# psi_atoms = [6, 8, 14, 16]\n# psi = md.compute_dihedrals(trajectory, indices=[psi_atoms])[:, 0]\n# return phi, psi",
"_____no_output_____"
],
[
"import numpy as np\nimport mdtraj as md \nfrom matplotlib import pyplot as plt\nfrom matplotlib.colors import LogNorm\n\n# def plot_phi_psi(ax, trajectory):\n# if not isinstance(trajectory, md.Trajectory):\n# trajectory = md.Trajectory(\n# xyz=trajectory.cpu().detach().numpy().reshape(-1, 22, 3), \n# topology=md.load('ala2_fromURL.pdb').topology\n# )\n# phi, psi = compute_phi_psi(trajectory)\n \n# ax.hist2d(phi, psi, 50, norm=LogNorm())\n# ax.set_xlim(-np.pi, np.pi)\n# ax.set_ylim(-np.pi, np.pi)\n# ax.set_xlabel(\"$\\phi$\")\n# _ = ax.set_ylabel(\"$\\psi$\")\n \n# return trajectory",
"_____no_output_____"
],
[
"import numpy as np\nn_train = len(dataset)//2\nn_test = len(dataset) - n_train\npermutation = np.random.permutation(n_train)\n\nall_data = dataset.xyz.reshape(-1, dimensions(dataset))\ntraining_data = torch.tensor(all_data[permutation]).to(ctx)\ntest_data = torch.tensor(all_data[permutation + n_train]).to(ctx)",
"_____no_output_____"
],
[
"#print(training_data.shape)",
"torch.Size([143147, 66])\n"
]
],
[
[
"## Define the Internal Coordinate Transform\n\nRather than generating all-Cartesian coordinates, we use a mixed internal coordinate transform.\nThe five central alanine atoms will serve as a Cartesian \"anchor\", from which all other atoms are placed with respect to internal coordinates (IC) defined through a z-matrix. We have deposited a valid `z_matrix` and the corresponding `rigid_block` in the `dataset.system` from `bgmol`.",
"_____no_output_____"
]
],
[
[
"import bgflow as bg",
"_____no_output_____"
],
[
"# throw away 6 degrees of freedom (rotation and translation)\ndim_cartesian = len(rigid_block) * 3 - 6\nprint(dim_cartesian)\n#dim_cartesian = len(system.rigid_block) * 3\ndim_bonds = len(z_matrix)\nprint(dim_bonds)\ndim_angles = dim_bonds\ndim_torsions = dim_bonds",
"9\n17\n"
],
[
"coordinate_transform = bg.MixedCoordinateTransformation(\n data=training_data, \n z_matrix=z_matrix,\n fixed_atoms=rigid_block,\n #keepdims=None,\n keepdims=dim_cartesian, \n normalize_angles=True,\n).to(ctx)",
"_____no_output_____"
]
],
[
[
"For demonstration, we transform the first 3 samples from the training data set into internal coordinates as follows:",
"_____no_output_____"
]
],
[
[
"# bonds, angles, torsions, cartesian, dlogp = coordinate_transform.forward(training_data[:3])\n# bonds.shape, angles.shape, torsions.shape, cartesian.shape, dlogp.shape\n# #print(bonds)",
"_____no_output_____"
]
],
[
[
"## Prior Distribution\n\nThe next step is to define a prior distribution that we can easily sample from. The normalizing flow will be trained to transform such latent samples into molecular coordinates. Here, we just take a normal distribution, which is a rather naive choice for reasons that will be discussed in other notebooks.",
"_____no_output_____"
]
],
[
[
"dim_ics = dim_bonds + dim_angles + dim_torsions + dim_cartesian\nmean = torch.zeros(dim_ics).to(ctx) \n# passing the mean explicitly to create samples on the correct device\nprior = bg.NormalDistribution(dim_ics, mean=mean)",
"_____no_output_____"
]
],
[
[
"## Normalizing Flow\n\nNext, we set up the normalizing flow by stacking together different neural networks. For now, we will do this in a rather naive way, not distinguishing between bonds, angles, and torsions. Therefore, we will first define a flow that splits the output from the prior into the different IC terms.\n\n### Split Layer",
"_____no_output_____"
]
],
[
[
"split_into_ics_flow = bg.SplitFlow(dim_bonds, dim_angles, dim_torsions, dim_cartesian)",
"_____no_output_____"
],
[
"# test\n#print(prior.sample(3))\n# ics = split_into_ics_flow(prior.sample(1))\n# #print(_ics)\n# coordinate_transform.forward(*ics, inverse=True)[0].shape",
"_____no_output_____"
]
],
[
[
"### Coupling Layers\n\nNext, we will set up so-called RealNVP coupling layers, which split the input into two channels and then learn affine transformations of channel 1 conditioned on channel 2. Here we will do the split naively between the first and second half of the degrees of freedom.",
"_____no_output_____"
]
],
[
[
"class RealNVP(bg.SequentialFlow):\n \n def __init__(self, dim, hidden):\n self.dim = dim\n self.hidden = hidden\n super().__init__(self._create_layers())\n \n def _create_layers(self):\n dim_channel1 = self.dim//2\n dim_channel2 = self.dim - dim_channel1\n split_into_2 = bg.SplitFlow(dim_channel1, dim_channel2)\n \n layers = [\n # -- split\n split_into_2,\n # --transform\n self._coupling_block(dim_channel1, dim_channel2),\n bg.SwapFlow(),\n self._coupling_block(dim_channel2, dim_channel1),\n # -- merge\n bg.InverseFlow(split_into_2)\n ]\n return layers\n \n def _dense_net(self, dim1, dim2):\n return bg.DenseNet(\n [dim1, *self.hidden, dim2],\n activation=torch.nn.ReLU()\n )\n \n def _coupling_block(self, dim1, dim2):\n return bg.CouplingFlow(bg.AffineTransformer(\n shift_transformation=self._dense_net(dim1, dim2),\n scale_transformation=self._dense_net(dim1, dim2)\n ))\n ",
"_____no_output_____"
],
[
"#RealNVP(dim_ics, hidden=[128]).to(ctx).forward(prior.sample(3))[0].shape",
"_____no_output_____"
]
],
[
[
"### Boltzmann Generator\n\nFinally, we define the Boltzmann generator.\nIt will sample molecular conformations by \n\n1. sampling in latent space from the normal prior distribution,\n2. transforming the samples into a more complication distribution through a number of RealNVP blocks (the parameters of these blocks will be subject to optimization),\n3. splitting the output of the network into blocks that define the internal coordinates, and\n4. transforming the internal coordinates into Cartesian coordinates through the inverse IC transform.",
"_____no_output_____"
]
],
[
[
"n_realnvp_blocks = 5\nlayers = []\n\nfor i in range(n_realnvp_blocks):\n layers.append(RealNVP(dim_ics, hidden=[128, 128, 128]))\nlayers.append(split_into_ics_flow)\nlayers.append(bg.InverseFlow(coordinate_transform))\n\nflow = bg.SequentialFlow(layers).to(ctx)",
"_____no_output_____"
],
[
"# test\n#flow.forward(prior.sample(3))[0].shape",
"_____no_output_____"
],
[
"flow.load_state_dict(torch.load('modelTSFtraj_xmlsystem_20000KLL.pt'))",
"_____no_output_____"
],
[
"# print number of trainable parameters\n\"#Parameters:\", np.sum([np.prod(p.size()) for p in flow.parameters()])",
"_____no_output_____"
],
[
"generator = bg.BoltzmannGenerator(\n flow=flow,\n prior=prior,\n target=target_energy\n)",
"_____no_output_____"
],
[
"def plot_energies(ax, samples, target_energy, test_data):\n sample_energies = target_energy.energy(samples).cpu().detach().numpy()\n md_energies = target_energy.energy(test_data[:len(samples)]).cpu().detach().numpy()\n cut = max(np.percentile(sample_energies, 80), 20)\n \n ax.set_xlabel(\"Energy [$k_B T$]\")\n # y-axis on the right\n ax2 = plt.twinx(ax)\n ax.get_yaxis().set_visible(False)\n \n ax2.hist(sample_energies, range=(-50, cut), bins=40, density=False, label=\"BG\")\n ax2.hist(md_energies, range=(-50, cut), bins=40, density=False, label=\"MD\")\n ax2.set_ylabel(f\"Count [#Samples / {len(samples)}]\")\n ax2.legend()",
"_____no_output_____"
],
[
"def plot_energy_onlyMD(ax, target_energy, test_data):\n md_energies = target_energy.energy(test_data[:1000]).cpu().detach().numpy()\n \n ax.set_xlabel(\"Energy [$k_B T$]\")\n # y-axis on the right\n ax2 = plt.twinx(ax)\n ax.get_yaxis().set_visible(False)\n \n #ax2.hist(sample_energies, range=(-50, cut), bins=40, density=False, label=\"BG\")\n ax2.hist(md_energies, bins=40, density=False, label=\"MD\")\n ax2.set_ylabel(f\"Count [#Samples / 1000]\")\n ax2.legend()",
"_____no_output_____"
],
[
"n_samples = 10000\nsamples = generator.sample(n_samples)\nprint(samples.shape)\n\nfig, axes = plt.subplots(1, 2, figsize=(6,3))\nfig.tight_layout()\n\nsamplestrajectory = plot_phi_psi(axes[0], samples)\nplot_energies(axes[1], samples, target_energy, test_data)\n#plt.savefig(f\"varysnapshots/{fname}.png\", bbox_inches = 'tight')\n\n#samplestrajectory.save(\"mytraj_full_samples.dcd\")\n\n#del samples",
"torch.Size([10000, 66])\n"
]
],
[
[
"bonds, angles, torsions, cartesian, dlogp = coordinate_transform.forward(samples)\nprint(bonds.shape)\nprint('1:', bonds[0])\n\nCHbond_indices = [0, 2, 3 ,7 ,8, 9 ,14 ,15 ,16]\nbonds_new = bonds.clone().detach()\nbonds_new[:,CHbond_indices] = 0.109\n\nprint('2:', bonds_new[0:3])\n\nsamples_corrected = coordinate_transform.forward(bonds_new,angles,torsions,cartesian,inverse=True)\nprint(samples_corrected[0].shape)",
"_____no_output_____"
]
],
[
[
"samplestrajectory = mdtraj.Trajectory(\n xyz=samples[0].cpu().detach().numpy().reshape(-1, 22, 3), \n topology=mdtraj.load('ala2_fromURL.pdb').topology\n )",
"_____no_output_____"
],
[
"#samplestrajectory.save('mysamples_traj_correctedonce.dcd')",
"_____no_output_____"
],
[
"import nglview as nv\n\n\n#samplestrajectory.save(\"Samplestraj.pdb\")\n#md.save(samplestrajectory, \"obcstride10Samplestraj.dcd\")\n\nwidget = nv.show_mdtraj(samplestrajectory)\n\nwidget",
"_____no_output_____"
]
],
[
[
"## Conclusions\n\nThis tutorial has introduced the most basic concepts and implementations underlying Boltzmann generators and `bgflow`. That said, the trained networks did not do a particularly good job in reproducing the molecular Boltzmann distribution. Specifically, they only modeled the major modes of the $\\phi$ angle and still produced many samples with unreasonably large energies. Let's look at a few shortcomings of the present architecture:\n\n### 1) Unconstrained Internal Coordinates\nBonds, angles, and torsions must not take arbitrary values in principle. Bond lengths need to be positive, angles live in $[0,\\pi],$ and torsions are periodic in $[-\\pi, \\pi].$ Neither those bounds nor the periodicity of torsions distributions have been taken into account by the present Boltzmann generator. The layers of the normalizing flow should be build in a way that preserves these constraints on the ICs.\n\n### 2) Arbitrary Coupling\nThe input for the coupling layers was split into two channels rather arbitrarily (first vs. second half). A partial remedy is to define the conditioning in a physically informed manner. Another solution is to augment the base space by momenta, which can be done with augmented normalizing flows (see for instance the notebook on temperature-steering flows).\n\n### 3) RealNVP Layers\nAffine coupling layers are well-known to perform poorly in separating modes. This explains that the metastable region around $\\phi \\approx \\pi/2$ was not captured by the generator. Other architectures such as augmented flows or neural spline flows do a better job for complicated, multimodal distributions.\n\n### 4) Training\nThe generators were only trained for relatively few iterations and performance may improve with longer training and better choices of the learning rate and hyperparameters.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
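"markdown",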
"markdown"
]
] |
d04e2626bd5b4db76ebda456b6ee3d560c2c9b4e | 2,490 | ipynb | Jupyter Notebook | Tubles In Python.ipynb | mhmdawnallah/Python_Projects | be04ee0153dfb577f1ece04ec83d20e51b5d62d7 | [
"Apache-2.0"
] | 1 | 2022-01-07T03:01:02.000Z | 2022-01-07T03:01:02.000Z | Tubles In Python.ipynb | mhmdawnallah/Python_Projects | be04ee0153dfb577f1ece04ec83d20e51b5d62d7 | [
"Apache-2.0"
] | null | null | null | Tubles In Python.ipynb | mhmdawnallah/Python_Projects | be04ee0153dfb577f1ece04ec83d20e51b5d62d7 | [
"Apache-2.0"
] | null | null | null | 21.101695 | 334 | 0.493173 | [
[
[
"#Tuble === Immutable List",
"_____no_output_____"
],
[
"t1 = (345, 674, 934)",
"_____no_output_____"
],
[
"t1[0]",
"_____no_output_____"
],
[
"t1[1]",
"_____no_output_____"
],
[
"t1[1] = 45",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d04e2913f6755f0318ec6491bbfdf47f08064e28 | 157,565 | ipynb | Jupyter Notebook | Regression/Linear Models/LassoLars_RobustScaler.ipynb | shreepad-nade/ds-seed | 93ddd3b73541f436b6832b94ca09f50872dfaf10 | [
"Apache-2.0"
] | 53 | 2021-08-28T07:41:49.000Z | 2022-03-09T02:20:17.000Z | Regression/Linear Models/LassoLars_RobustScaler.ipynb | shreepad-nade/ds-seed | 93ddd3b73541f436b6832b94ca09f50872dfaf10 | [
"Apache-2.0"
] | 142 | 2021-07-27T07:23:10.000Z | 2021-08-25T14:57:24.000Z | Regression/Linear Models/LassoLars_RobustScaler.ipynb | shreepad-nade/ds-seed | 93ddd3b73541f436b6832b94ca09f50872dfaf10 | [
"Apache-2.0"
] | 38 | 2021-07-27T04:54:08.000Z | 2021-08-23T02:27:20.000Z | 207.59552 | 73,504 | 0.878939 | [
[
[
"# LassoLars Regression with Robust Scaler",
"_____no_output_____"
],
[
"This Code template is for the regression analysis using a simple LassoLars Regression. It is a lasso model implemented using the LARS algorithm and feature scaling using Robust Scaler in a Pipeline",
"_____no_output_____"
],
[
"### Required Packages",
"_____no_output_____"
]
],
[
[
"import warnings\r\nimport numpy as np \r\nimport pandas as pd \r\nimport seaborn as se \r\nimport matplotlib.pyplot as plt \r\nfrom sklearn.model_selection import train_test_split \r\nfrom sklearn.pipeline import make_pipeline\r\nfrom sklearn.preprocessing import RobustScaler\r\nfrom sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error \r\nfrom sklearn.linear_model import LassoLars\r\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"### Initialization\n\nFilepath of CSV file",
"_____no_output_____"
]
],
[
[
"#filepath\r\nfile_path= \"\"",
"_____no_output_____"
]
],
[
[
"List of features which are required for model training .",
"_____no_output_____"
]
],
[
[
"#x_values\r\nfeatures=[]",
"_____no_output_____"
]
],
[
[
"Target feature for prediction.",
"_____no_output_____"
]
],
[
[
"#y_value\ntarget=''",
"_____no_output_____"
]
],
[
[
"### Data Fetching\n\nPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.\n\nWe will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.",
"_____no_output_____"
]
],
[
[
"df=pd.read_csv(file_path)\ndf.head()",
"_____no_output_____"
]
],
[
[
"### Feature Selections\n\nIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.\n\nWe will assign all the required input features to X and target/outcome to Y.",
"_____no_output_____"
]
],
[
[
"X=df[features]\nY=df[target]",
"_____no_output_____"
]
],
[
[
"### Data Preprocessing\n\nSince the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes.\n",
"_____no_output_____"
]
],
[
[
"def NullClearner(df):\n if(isinstance(df, pd.Series) and (df.dtype in [\"float64\",\"int64\"])):\n df.fillna(df.mean(),inplace=True)\n return df\n elif(isinstance(df, pd.Series)):\n df.fillna(df.mode()[0],inplace=True)\n return df\n else:return df\ndef EncodeX(df):\n return pd.get_dummies(df)",
"_____no_output_____"
]
],
[
[
"Calling preprocessing functions on the feature and target set.\n",
"_____no_output_____"
]
],
[
[
"x=X.columns.to_list()\nfor i in x:\n X[i]=NullClearner(X[i])\nX=EncodeX(X)\nY=NullClearner(Y)\nX.head()",
"_____no_output_____"
]
],
[
[
"#### Correlation Map\n\nIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.",
"_____no_output_____"
]
],
[
[
"f,ax = plt.subplots(figsize=(18, 18))\nmatrix = np.triu(X.corr())\nse.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Data Splitting\n\nThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.",
"_____no_output_____"
]
],
[
[
"x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)",
"_____no_output_____"
]
],
[
[
"### Model\n\nLassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.\n\n### Tuning parameters\n\n> **fit_intercept** -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations\n\n> **alpha** -> Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.\n\n> **eps** -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.\n\n> **max_iter** -> Maximum number of iterations to perform.\n\n> **positive** -> Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.\n\n> **precompute** -> Whether to use a precomputed Gram matrix to speed up calculations. \n\n### Feature Scaling\nRobust Scaler scale features using statistics that are robust to outliers.\n\nThis Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).<br>\nFor more information... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html)",
"_____no_output_____"
]
],
[
[
"model=make_pipeline(RobustScaler(),LassoLars())\nmodel.fit(x_train,y_train)",
"_____no_output_____"
]
],
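[
[
"# A sketch of the same pipeline with explicit, non-default hyperparameters from\n# the list above; alpha=0.1 and max_iter=500 are illustrative values, not tuned\nmodel_tuned = make_pipeline(RobustScaler(), LassoLars(alpha=0.1, max_iter=500))\nmodel_tuned.fit(x_train, y_train)",
"_____no_output_____"
]
],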
[
[
"#### Model Accuracy\n\nWe will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.\n\nscore: The score function returns the coefficient of determination R2 of the prediction.\n",
"_____no_output_____"
]
],
[
[
"print(\"Accuracy score {:.2f} %\\n\".format(model.score(x_test,y_test)*100))",
"Accuracy score 79.97 %\n\n"
]
],
[
[
"> **r2_score**: The **r2_score** function computes the percentage variablility explained by our model, either the fraction or the count of correct predictions. \n\n> **mae**: The **mean abosolute error** function calculates the amount of total error(absolute average distance between the real data and the predicted data) by our model. \n\n> **mse**: The **mean squared error** function squares the error(penalizes the model for large errors) by our model. ",
"_____no_output_____"
]
],
[
[
"y_pred=model.predict(x_test)\nprint(\"R2 Score: {:.2f} %\".format(r2_score(y_test,y_pred)*100))\nprint(\"Mean Absolute Error {:.2f}\".format(mean_absolute_error(y_test,y_pred)))\nprint(\"Mean Squared Error {:.2f}\".format(mean_squared_error(y_test,y_pred)))",
"R2 Score: 79.97 %\nMean Absolute Error 4016.94\nMean Squared Error 30625388.66\n"
]
],
[
[
"#### Prediction Plot\n\nFirst, we make use of a plot to plot the actual observations, with x_train on the x-axis and y_train on the y-axis.\nFor the regression line, we will use x_train on the x-axis and then the predictions of the x_train observations on the y-axis.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14,10))\nplt.plot(range(20),y_test[0:20], color = \"green\")\nplt.plot(range(20),model.predict(x_test[0:20]), color = \"red\")\nplt.legend([\"Actual\",\"prediction\"]) \nplt.title(\"Predicted vs True Value\")\nplt.xlabel(\"Record number\")\nplt.ylabel(target)\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Creator: Anu Rithiga , Github: [Profile](https://github.com/iamgrootsh7)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d04e2dccce6b79f9dee262b461b9784f10b22581 | 27,612 | ipynb | Jupyter Notebook | matrix_two/matrix2_day5_hyperopt.ipynb | AardJan/dw_matrix | dba6fb819e02a3976d6f379469b37077590cfe31 | [
"MIT"
] | null | null | null | matrix_two/matrix2_day5_hyperopt.ipynb | AardJan/dw_matrix | dba6fb819e02a3976d6f379469b37077590cfe31 | [
"MIT"
] | null | null | null | matrix_two/matrix2_day5_hyperopt.ipynb | AardJan/dw_matrix | dba6fb819e02a3976d6f379469b37077590cfe31 | [
"MIT"
] | null | null | null | 41.647059 | 224 | 0.50373 | [
[
[
"import pandas as pd\nimport numpy as np\n\nimport xgboost as xgb\n\nfrom sklearn.metrics import mean_absolute_error as mae\nfrom sklearn.model_selection import cross_val_score\n\nfrom hyperopt import hp, fmin, tpe, STATUS_OK\n\nimport eli5\nfrom eli5.sklearn import PermutationImportance",
"_____no_output_____"
]
],
[
[
"## Wczytanie danych",
"_____no_output_____"
]
],
[
[
"df = pd.read_hdf(\"../data/car.h5\")\ndf.sample()",
"/home/adrian/miniconda3/envs/dataworkshop/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject\n return f(*args, **kwds)\n"
],
[
"SUFFIX_CAT = '__cat'\nfor feat in df.columns:\n if isinstance(df[feat][0], list):\n continue\n \n factorized_values = df[feat].factorize()[0]\n if SUFFIX_CAT in feat:\n df[feat] = factorized_values\n else:\n df[feat+ SUFFIX_CAT] = factorized_values\n \ncat_feats = [x for x in df.columns if SUFFIX_CAT in x]\ncat_feats = [x for x in cat_feats if 'price' not in x]\ndf['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))\ndf['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) =='None' else int(x.split(' ')[0]))\ndf['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) =='None' else int(x.split('cm')[0].replace(' ', '')))\n",
"_____no_output_____"
],
[
"feats = [\n 'param_rok-produkcji',\n 'param_stan__cat',\n 'param_napęd__cat',\n 'param_skrzynia-biegów__cat',\n 'param_moc',\n 'param_faktura-vat__cat',\n 'param_marka-pojazdu__cat',\n 'param_typ__cat', \n 'feature_kamera-cofania__cat',\n 'param_wersja__cat',\n 'param_model-pojazdu__cat',\n 'param_pojemność-skokowa',\n 'param_kod-silnika__cat',\n 'seller_name__cat',\n 'feature_wspomaganie-kierownicy__cat',\n 'feature_czujniki-parkowania-przednie__cat',\n 'param_uszkodzony__cat',\n 'feature_system-start-stop__cat',\n 'feature_regulowane-zawieszenie__cat',\n 'feature_asystent-pasa-ruchu__cat',\n]",
"_____no_output_____"
],
[
"def run_model(model, feats):\n X = df[feats].values\n y = df['price_value'].values\n\n scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')\n return np.mean(scores), np.std(scores)",
"_____no_output_____"
]
],
[
[
"## XGBoost",
"_____no_output_____"
]
],
[
[
"xgb_params = {\n 'max_depth':5,\n 'n_estimatords':50,\n 'learning_rate':0.1,\n 'seed':0,\n 'nthread': 3 \n}",
"_____no_output_____"
],
[
"model = xgb.XGBRegressor(**xgb_params)\nrun_model(model, feats)",
"[18:17:40] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[18:17:44] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[18:17:48] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n"
],
[
"def obj_func(params):\n print(\"Traniang with params: \")\n print(params)\n \n mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)\n \n return {\"loss\": np.abs(mean_mae), \"status\": STATUS_OK}\n\n\nxgb_reg_params = {\n 'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),\n 'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),\n 'subsample': hp.quniform('subsample', 0.5, 1, 0.05),\n 'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),\n 'objective': 'reg:squarederror',\n 'n_estimatords': 100,\n 'seed':0,\n 'nthread': 4 \n}\n\nbest = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=30)\n\nbest",
"Traniang with params: \n{'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.3, 'max_depth': 5, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}\nTraniang with params: \n{'colsample_bytree': 0.5, 'learning_rate': 0.2, 'max_depth': 9, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}\nTraniang with params: \n{'colsample_bytree': 0.8, 'learning_rate': 0.25, 'max_depth': 7, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.9, 'learning_rate': 0.15000000000000002, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 9, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}\nTraniang with params: \n{'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 5, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}\nTraniang with params: \n{'colsample_bytree': 0.55, 'learning_rate': 0.25, 'max_depth': 7, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}\nTraniang with params: \n{'colsample_bytree': 0.5, 'learning_rate': 0.2, 'max_depth': 12, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55}\nTraniang with params: \n{'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}\nTraniang with params: \n{'colsample_bytree': 0.8, 'learning_rate': 0.05, 'max_depth': 7, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.6000000000000001}\nTraniang with params: \n{'colsample_bytree': 0.9, 'learning_rate': 0.25, 'max_depth': 8, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.55, 'learning_rate': 0.3, 'max_depth': 14, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}\nTraniang with params: \n{'colsample_bytree': 0.9, 'learning_rate': 0.3, 'max_depth': 5, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}\nTraniang with params: \n{'colsample_bytree': 0.8, 'learning_rate': 0.25, 'max_depth': 9, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}\nTraniang with params: \n{'colsample_bytree': 0.5, 'learning_rate': 0.05, 'max_depth': 9, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55}\nTraniang with params: \n{'colsample_bytree': 0.9, 'learning_rate': 0.3, 'max_depth': 12, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}\nTraniang with params: \n{'colsample_bytree': 0.8, 'learning_rate': 0.05, 'max_depth': 12, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.55, 'learning_rate': 0.25, 'max_depth': 5, 'n_estimatords': 100, 'nthread': 4, 'objective': 
'reg:squarederror', 'seed': 0, 'subsample': 0.65}\nTraniang with params: \n{'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.25, 'max_depth': 12, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}\nTraniang with params: \n{'colsample_bytree': 0.55, 'learning_rate': 0.15000000000000002, 'max_depth': 13, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}\nTraniang with params: \n{'colsample_bytree': 0.65, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}\nTraniang with params: \n{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}\nTraniang with params: \n{'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 10, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}\nTraniang with params: \n{'colsample_bytree': 0.65, 'learning_rate': 0.1, 'max_depth': 6, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 11, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}\nTraniang with params: \n{'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.1, 'max_depth': 11, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraniang with params: \n{'colsample_bytree': 0.75, 'learning_rate': 0.2, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}\n100%|██████████| 30/30 [19:10<00:00, 55.79s/it, best loss: 6987.881796093094]\n"
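],
[
"# Added note: with `hp.choice`, `fmin` returns the *indices* of the chosen options,\n# not the values themselves. `space_eval` recovers the actual parameter values,\n# which could then be used to fit a final model, e.g.\n# run_model(xgb.XGBRegressor(**best_params), feats)\nfrom hyperopt import space_eval\n\nbest_params = space_eval(xgb_reg_params, best)\nbest_params",
"_____no_output_____"
]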
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d04e3530bc0963336dfe99769f1f3135094eb069 | 4,473 | ipynb | Jupyter Notebook | notebooks/SLAM/Introduction.ipynb | GimpelZhang/git_test | 78dddbdc71209c3cfba58d831cfde1588989f8ab | [
"MIT"
] | 1 | 2020-11-30T03:23:22.000Z | 2020-11-30T03:23:22.000Z | notebooks/SLAM/Introduction.ipynb | GimpelZhang/git_test | 78dddbdc71209c3cfba58d831cfde1588989f8ab | [
"MIT"
] | null | null | null | notebooks/SLAM/Introduction.ipynb | GimpelZhang/git_test | 78dddbdc71209c3cfba58d831cfde1588989f8ab | [
"MIT"
] | null | null | null | 30.636986 | 283 | 0.647887 | [
[
[
"# SLAM算法介绍",
"_____no_output_____"
],
[
"## 1. 名词解释:\n\n### 1.1 什么是SLAM?\n\nSLAM,即Simultaneous localization and mapping,中文可译作“同时定位与地图构建”。它描述的是这样一类过程:机器人在陌生环境中运动,通过处理各类传感器收集的机器人自身及环境信息,精确地获取对机器人自身位置的估计(即“定位”),再通过机器人自身位置确定周围环境特征的位置(即“建图”)\n\n在SLAM过程中,机器人不断地在收集各类传感器信息,如激光雷达的点云、相机的图像、imu的信息、里程计的信息等,通过对这些不断变化的传感器的一系列分析计算,机器人会实时地得出自身行进的轨迹(比如一系列时刻的位姿),但得到的轨迹往往包含很大误差,因此需要进行修正优化,修正的过程很可能不再是实时进行的。实时得出自身行进轨迹的过程一般称作“前端”,修正优化的过程一般称作“后端”。\n\n实现后端优化的处理方法可以分为滤波和优化两类。\n\n### 1.2 什么是滤波?\n\n滤波在一般工程领域指的是根据一定规则对信号进行筛选,保留需要的内容,如各种高通滤波、低通滤波、带通滤波等。但在SLAM算法的语境下,滤波指的是“贝叶斯滤波”概念下的一系列“滤波器”,它们通过概率分析,使用传感器读数、传感器参数、机器人上一时刻位姿等信息,对机器人的下一时刻位姿作出修正:机器人不够准确的粗略轨迹经过”过滤“,变得更准确了。\n\nSLAM中常见滤波有:EKF扩展卡尔曼滤波、UKF无迹卡尔曼滤波、particle filter粒子滤波等。\n\n### 1.3 什么是优化问题?什么是非线性最小二乘优化问题?\n\n各种滤波手段在SLAM问题中曾经占据主导地位,但随着地图规模的扩大(如机器人行进的面积范围增大、引入视觉算法后地图更“精细”),滤波方法所需要的计算量会不断增大。因此现阶段各种优化算法成为了SLAM问题后端处理方法的主流。\n\n什么是优化问题呢?假设有一个函数f,以x为输入,以y为输出,那么一个优化问题就是通过某种手段找到一个x,使y的值最大/最小。而一个SLAM问题的优化中,x通常指的是各种待确定的状态量,比如机器人在各个时刻的位姿、地图中特征点的空间位置等,y通常指的是各种误差,比如传感器测量的量与状态量的差。SLAM问题待优化的函数f通常是非线性的,而且是以二次方项加和的形式存在的,因此属于非线性最小二乘优化问题。\n\n解决非线性优化的开源库如google的Ceres,应用于cartographer、VINS等算法中。\n\n### 1.4 什么是图优化?\n\n图优化指的是把一个优化问题以一个“图”(graph)的形式表示出来(注:这里的”图“可以看做是指一种特殊的数据结构),可以用到图论相关的性质和算法,本质上还是一个优化问题。可以简单理解:待优化的状态量,即机器人在各个时刻的位姿、地图中特征点的空间位置,可以表示为graph的各个顶点,相关的顶点间以边连接,各个边代表的就是误差项,所以图优化问题就是通过优化各个顶点的”位置“,使所有的边加起来的和最小。\n\n解决图优化的开源库如g2o,应用于ORB SLAM等算法中。\n\n### 1.5 什么是约束?\n\n在图优化问题中,顶点与顶点间连接的边就称为一个“约束”(constraint),这个约束可以表示如激光测量量与位置状态量之间的差值、imu测量量与位置状态量之间的差值等。\n\n### 1.6 什么是回环检测\n\n回环检测,也可以称为闭环检测等。简单理解就是,机器人“看到”了看到过的场景,就叫做回环检测成功。回环检测在SLAM问题中,对后端优化具有重要作用。\n\n### 1.7 一个最简单的例子:\n\n[graph slam tutorial : 从推导到应用1](https://heyijia.blog.csdn.net/article/details/47686523)",
"_____no_output_____"
],
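[
"To make the idea that \"graph optimization = nonlinear least squares\" concrete, here is a minimal, self-contained sketch (added for illustration; the measurement values are made up). Three 1-D robot positions x0, x1, x2 are the vertices of the graph; two odometry edges, one loop-closure edge, and a prior anchoring x0 are the constraints:\n\n```python\nimport numpy as np\n\n# each row of A and entry of b encodes one edge (error term) of the graph\nA = np.array([\n    [ 1.0,  0.0, 0.0],   # prior:        x0      = 0.0\n    [-1.0,  1.0, 0.0],   # odometry:     x1 - x0 = 1.0\n    [ 0.0, -1.0, 1.0],   # odometry:     x2 - x1 = 1.0\n    [-1.0,  0.0, 1.0],   # loop closure: x2 - x0 = 2.1\n])\nb = np.array([0.0, 1.0, 1.0, 2.1])\n\n# minimize ||Ax - b||^2 over the vertex positions\nx, *_ = np.linalg.lstsq(A, b, rcond=None)\nprint(x)  # ~ [0.0, 1.033, 2.067]: the loop-closure error is spread over the poses\n```\n\nThis toy problem is linear, so a single least-squares solve suffices; in real SLAM the error terms are nonlinear in the poses, and solvers such as Ceres or g2o repeatedly linearize and solve exactly this kind of system (Gauss-Newton / Levenberg-Marquardt).",
"_____no_output_____"
],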
[
"## 2. 举例分析\n\n主武器与辅助武器:\n\n对于一辆坦克来说,炮塔中央的主炮显然就是主武器,其他辅助武器可以有:机枪、反坦克导弹等。\n\n相似地,对于激光slam算法,激光雷达是主武器,imu、里程计等属于辅助武器;对于视觉slam算法,相机就是主武器,imu、里程计等属于辅助武器。\n",
"_____no_output_____"
],
[
"### 2.1 激光slam举例:\n\ncartographer\n\n\n\n在SLAM问题的工程实践中,所谓的非线性优化,其实不止出现在后端的全局优化阶段。以google的cartographer为例:\n\n算法前端接收一帧接一帧的激光扫描数据scans,插入到一个小范围的子图(submap)中(比如规定90帧scans组成一个子图),通过调用非线性优化解算库Ceres解决scan在submap中的插入位置问题,在这个优化过程中,imu和里程计负责提供初始值;后端负责进行“回环检测”,寻找新建立的子图submap和之前的scan间的约束,调用非线性优化解算库Ceres计算这个约束,使用一种叫”分支定界“的方法提供这类优化的初始值;最终,后端还要根据约束对所有已有的scan和submap进行全局优化,再次调用非线性优化解算库Ceres解决这个问题。\n\n所以可以粗略地认为,在cartographer中有三处都运用了非线性优化。",
"_____no_output_____"
],
[
"### 2.2 视觉slam举例:\n\nVINS-mono\n\n\n\n港科大的VINS是视觉融合imu信息处理SLAM问题的典范。以单目视觉算法为主的VINS-mono为例:\n\n首先进行”初始化“步骤,在此步骤中,视觉图像和imu信息互相辅助,imu解决了单目图像无法测量深度的问题,并提供了重力方向,视觉图像标定了imu的某些内部参数;\n\n通过”滑窗“方法,使用图像、imu信息建立非线性优化问题,解算每帧图像的优化后位姿,以上内容组成了VIO,即所谓”视觉imu里程计“,可以算是前端的内容,但实际上这个前端也是在使用非线性优化在一直优化每帧的位姿的。\n\n如果回环检测成功检测到了闭环,那么通过非线性优化进行”重定位“,调整滑窗内的位姿;最终通过全局优化,使用非线性优化方法修正所有帧的位姿。\n\n以下是论文中对于重定位及全局优化的配图:\n\n\n\n为便于理解,总结一下imu在不同slam算法中的作用:\n\n1. imu在cartographer中的主要作用:通过scan match插入一帧激光建立submap前,预估机器人新位姿,给非线性优化提供初始值。\n\n2. imu在VINS中的主要作用:在“初始化”阶段,获取图像深度尺度等参数;参与VIO优化约束建立。",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d04e47c4bed7da7681942ac20f0111621c921cab | 34,839 | ipynb | Jupyter Notebook | FuzzyKNN/Esperimenti su FKNN.ipynb | ritafolisi/Tirocinio | c9a14ac33ab20c3c6524d32de4634f93ece001fb | [
"CC-BY-4.0"
] | null | null | null | FuzzyKNN/Esperimenti su FKNN.ipynb | ritafolisi/Tirocinio | c9a14ac33ab20c3c6524d32de4634f93ece001fb | [
"CC-BY-4.0"
] | null | null | null | FuzzyKNN/Esperimenti su FKNN.ipynb | ritafolisi/Tirocinio | c9a14ac33ab20c3c6524d32de4634f93ece001fb | [
"CC-BY-4.0"
] | null | null | null | 37.745395 | 174 | 0.520509 | [
[
[
"from fknn import *\nimport numpy as np",
"_____no_output_____"
],
[
"import pandas as pd\ndataset = pd.read_csv(\"iris-virginica.csv\")",
"_____no_output_____"
],
[
"dataset = dataset.sample(frac=1)\ndataset",
"_____no_output_____"
],
[
"X = dataset.iloc[:, 1:3].values\nY = dataset.iloc[:,0].values",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nxTrain, xTest, yTrain, yTest = train_test_split(X,Y)",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_score\nfrom sklearn.metrics import accuracy_score, mean_squared_error",
"_____no_output_____"
],
[
"model = FuzzyKNN()",
"_____no_output_____"
],
[
"model.fit(xTrain, yTrain)",
"_____no_output_____"
],
[
"model.score(xTest, yTest)",
"_____no_output_____"
],
[
"model.mean_squared_error(xTest, yTest)",
"_____no_output_____"
],
[
"model.predict(xTrain[3])",
"_____no_output_____"
]
],
[
[
"# Cross Validation",
"_____no_output_____"
]
],
[
[
"value_array = []\nerror_array = []\nfrom sklearn.model_selection import StratifiedKFold\nskf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)\nfor train_index, test_index in skf.split(X, Y):\n print(\"TRAIN:\", train_index, \"TEST:\", test_index)\n xTrain, xTest = X[train_index], X[test_index]\n yTrain, yTest = Y[train_index], Y[test_index]\n model.fit(xTrain, yTrain)\n value = model.score(xTest, yTest)\n error = model.mean_squared_error(xTest, yTest)\n value_array.append(value)\n error_array.append(error)",
"TRAIN: [ 0 1 2 3 4 5 6 7 8 9 11 12 13 14 16 18 19 20\n 21 22 23 24 25 26 27 29 30 31 32 33 34 35 36 37 38 39\n 40 42 43 44 45 46 47 48 50 52 54 55 56 57 58 60 61 63\n 64 65 66 68 69 70 71 72 73 74 75 76 77 80 81 82 83 85\n 86 87 88 89 90 91 92 93 94 95 97 98 100 101 103 104 106 107\n 108 110 111 112 113 114 115 119 121 122 123 128 130 131 132 133 134 135\n 136 137 138 140 142 143 144 145 146 147 148 149] TEST: [ 10 15 17 28 41 49 51 53 59 62 67 78 79 84 96 99 102 105\n 109 116 117 118 120 124 125 126 127 129 139 141]\n"
],
[
"np.mean(value_array)",
"_____no_output_____"
],
[
"np.mean(error_array)",
"_____no_output_____"
]
],
[
[
"# Model Selection & Cross Validation",
"_____no_output_____"
]
],
[
[
"a = np.arange (1, 21, 2)\nparameters = {\"k\" : a}\nparameters[\"k\"]",
"_____no_output_____"
],
[
"from sklearn.model_selection import GridSearchCV\nclf = GridSearchCV(model, parameters, cv = 5)",
"_____no_output_____"
],
[
"clf.fit(xTrain, yTrain)",
"C:\\Users\\rita folisi\\Desktop\\Tirocinio\\Codice\\knn\\Funzionanti\\Tirocinio\\FuzzyKNN\\fknn.py:58: RuntimeWarning: divide by zero encountered in double_scalars\n den += 1 / (dist ** (2 / (m-1))) # sommatoria nel denominatore\nC:\\Users\\rita folisi\\Desktop\\Tirocinio\\Codice\\knn\\Funzionanti\\Tirocinio\\FuzzyKNN\\fknn.py:63: RuntimeWarning: divide by zero encountered in double_scalars\n num = (neighbors.iloc[n].membership[c]) / (dist ** (2 / (m-1))) # sommatoria nel numeratore\nC:\\Users\\rita folisi\\Desktop\\Tirocinio\\Codice\\knn\\Funzionanti\\Tirocinio\\FuzzyKNN\\fknn.py:65: RuntimeWarning: invalid value encountered in double_scalars\n vote = num/den # calcolo grado membership del vettore al fuzzy set considerato\nC:\\Users\\rita folisi\\Desktop\\Tirocinio\\Codice\\knn\\Funzionanti\\Tirocinio\\FuzzyKNN\\fknn.py:63: RuntimeWarning: invalid value encountered in double_scalars\n num = (neighbors.iloc[n].membership[c]) / (dist ** (2 / (m-1))) # sommatoria nel numeratore\n"
],
[
"clf.score(xTest, yTest) ",
"_____no_output_____"
],
[
"best_params = clf.best_params_\nbest_params",
"_____no_output_____"
],
[
"model = clf.best_estimator_",
"_____no_output_____"
],
[
"\tdef MSE_membership(self, X, y):\n\t\tmemb, _ = self.predict(X)\n\t\tres = []\n\t\tfor t in memb:\n\t\t\tres.append(t[1])\n\t\treturn mean_squared_error(y, res) ",
"_____no_output_____"
],
[
"model.RMSE_membership(xTest, yTest)",
"_____no_output_____"
],
[
"from sklearn.model_selection import GridSearchCV, StratifiedKFold\nfrom sklearn.metrics import classification_report, mean_squared_error\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.utils import shuffle\n\ndf = pd.read_csv('iris-setosa.csv')\n\n\nX = df.iloc[:, 1:3].values\ny = df.iloc[:,0].values\n\nseed = 10\nX, y = shuffle(X, y, random_state=seed)\n\na = np.arange (1, 21, 2)\nparameters = {\"k\" : a}\nN_SPLIT = 5\nerr = []\nacc = []\n\n\nskf = StratifiedKFold(n_splits=N_SPLIT, shuffle=False, random_state=5)\nfor train_index, validation_index in skf.split(X, y):\n print(train_index)\n X_train, X_validation = X[train_index], X[validation_index]\n y_train, y_validation = y[train_index], y[validation_index]\n \n model = FuzzyKNN()\n clf = GridSearchCV(model, parameters, cv=5)\n clf.fit(X_train, y_train)\n best_model = clf.best_estimator_\n best_model.fit(X_train, y_train)\n acc.append(best_model.score(X_validation, y_validation))\n val = best_model.RMSE_membership(X_validation, y_validation)\n err.append(val)",
"[ 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47\n 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65\n 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83\n 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101\n 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119\n 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137\n 138 139 140 141 142 143 144 145 146 147 148 149]\n"
],
[
"acc",
"_____no_output_____"
],
[
"err",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04e4ab2ae9d994a83faea908ea93021a4ee34ed | 16,838 | ipynb | Jupyter Notebook | labwork/lab2/tensorflow/linearRegression.ipynb | patilsahana/ml_lab_ecsc_306 | 51d47d623f1f9309ee79aae6f86fa89df6b75aba | [
"Apache-2.0"
] | null | null | null | labwork/lab2/tensorflow/linearRegression.ipynb | patilsahana/ml_lab_ecsc_306 | 51d47d623f1f9309ee79aae6f86fa89df6b75aba | [
"Apache-2.0"
] | null | null | null | labwork/lab2/tensorflow/linearRegression.ipynb | patilsahana/ml_lab_ecsc_306 | 51d47d623f1f9309ee79aae6f86fa89df6b75aba | [
"Apache-2.0"
] | null | null | null | 67.352 | 9,034 | 0.746704 | [
[
[
"import tensorflow as tf\nimport numpy as np\nrng = np.random\n\nimport matplotlib.pyplot as plt\nlearning_rate = 0.0001\ntraining_epochs = 1000\ndisplay_step = 50",
"_____no_output_____"
],
[
"with tf.name_scope(\"Creation_of_array\"):\n x_array=np.asarray([2.0,9.4,3.32,0.88,-2.23,1.11,0.57,-2.25,-3.31,6.45])\n y_array=np.asarray([1.22,0.34,-0.08,2.25,4.41,3.09,-6.66,-9.77,0.001,2.25])\n x = tf.constant(x_array,dtype = tf.float32,name = \"x_array\")\n y = tf.constant(y_array,dtype = tf.float32, name= \"y_array\")\nwith tf.name_scope(\"Calculating_y_mean\"):\n mean_y = tf.reduce_mean(y, name = \"mean_y\")\n with tf.Session() as sess:\n result_y = sess.run(mean_y)\n print(result_y)",
"-0.2949\n"
],
[
"with tf.name_scope(\"Calculating_x_mean_and_x_variance\"):\n mean_x, variance = tf.nn.moments(x, [0], name = \"mean_x_and_variance_x\")\n with tf.Session() as sess:\n m, v = sess.run([mean_x, variance])\n print(m)\n print(v)",
"1.594\n14.2899\n"
],
[
"with tf.name_scope(\"Calculating_covariance\"):\n def tensorflow_covariance(x_array,y_array,x_mean,y_mean):\n cov = 0.0\n for i in range(0,10):\n x_val = tf.subtract(x_array[i],x_mean, name=\"Finding_difference_of_xval_and_mean\")\n y_val = tf.subtract(y_array[i],y_mean, name=\"Finding_difference_of_yval_and_mean\")\n total_val = tf.multiply(x_val,y_val, name=\"Multiplying_found_values\")\n cov = tf.add(cov,total_val, name=\"Recursive_addition\")\n return cov/10.0\n with tf.Session() as sess:\n covar = sess.run(tensorflow_covariance(x,y,m,result_y))\n print(covar)",
"3.83422\n"
],
[
"with tf.name_scope(\"Calculating_slope_m_and_c\"):\n slope = tf.div(covar,v,name=\"Finding_slope\")\n intm = tf.multiply(slope,m,name = \"Intermediate_step\")\n c_intm = tf.subtract(result_y,intm,name = \"Finding_c\")\n\n with tf.Session() as sess:\n m_slope = sess.run(slope)\n c = sess.run(c_intm)\n print(m_slope)\n print(c)",
"0.268316\n-0.722596\n"
],
[
"with tf.name_scope(\"Plotting\"):\n n_samples = x_array.shape[0]\n X = tf.placeholder(\"float\")\n Y = tf.placeholder(\"float\")\n\n # Set model weights\n W = tf.Variable(rng.randn(), name=\"weight\")\n b = tf.Variable(rng.randn(), name=\"bias\")\n\n # Construct a linear model\n pred = tf.add(tf.multiply(X, W), b)\n\n\n # Mean squared error\n cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)\n # Gradient descent\n optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n\n # Initializing the variables\n init = tf.global_variables_initializer()\n\n # Launch the graph\n with tf.Session() as sess:\n sess.run(init)\n\n # Fit all training data\n for epoch in range(training_epochs):\n for (p, r) in zip(x_array, y_array):\n sess.run(optimizer, feed_dict={X: p, Y: r})\n\n # Display logs per epoch step\n if (epoch+1) % display_step == 0:\n c = sess.run(cost, feed_dict={X: x_array, Y:y_array})\n print(\"Epoch:\", '%04d' % (epoch+1), \"cost=\", \"{:.9f}\".format(c), \\\n \"W=\", sess.run(W), \"b=\", sess.run(b))\n\n print(\"Optimization Finished!\")\n training_cost = sess.run(cost, feed_dict={X: x_array, Y: y_array})\n print(\"Training cost=\", training_cost, \"W=\", sess.run(W), \"b=\", sess.run(b), '\\n')\n\n # Graphic display\n plt.plot(x_array, y_array, 'ro', label='Original data')\n plt.plot(x_array, sess.run(W) * x_array + sess.run(b), label='Fitted line')\n plt.legend()\n plt.show()",
"Epoch: 0050 cost= 12.125116348 W= -0.446558 b= 0.321133\nEpoch: 0100 cost= 11.630267143 W= -0.396823 b= 0.321412\nEpoch: 0150 cost= 11.212119102 W= -0.351104 b= 0.32131\nEpoch: 0200 cost= 10.858688354 W= -0.309074 b= 0.32086\nEpoch: 0250 cost= 10.559865952 W= -0.270432 b= 0.320092\nEpoch: 0300 cost= 10.307125092 W= -0.234903 b= 0.319033\nEpoch: 0350 cost= 10.093267441 W= -0.202233 b= 0.317708\nEpoch: 0400 cost= 9.912219048 W= -0.17219 b= 0.31614\nEpoch: 0450 cost= 9.758859634 W= -0.144559 b= 0.314351\nEpoch: 0500 cost= 9.628865242 W= -0.119145 b= 0.31236\nEpoch: 0550 cost= 9.518587112 W= -0.0957664 b= 0.310185\nEpoch: 0600 cost= 9.424952507 W= -0.074258 b= 0.307843\nEpoch: 0650 cost= 9.345363617 W= -0.0544675 b= 0.305348\nEpoch: 0700 cost= 9.277628899 W= -0.0362551 b= 0.302715\nEpoch: 0750 cost= 9.219902039 W= -0.0194925 b= 0.299955\nEpoch: 0800 cost= 9.170622826 W= -0.00406161 b= 0.297082\nEpoch: 0850 cost= 9.128476143 W= 0.0101459 b= 0.294105\nEpoch: 0900 cost= 9.092351913 W= 0.0232295 b= 0.291034\nEpoch: 0950 cost= 9.061313629 W= 0.0352806 b= 0.287879\nEpoch: 1000 cost= 9.034570694 W= 0.0463832 b= 0.284648\nOptimization Finished!\nTraining cost= 9.03457 W= 0.0463832 b= 0.284648 \n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04e4b818be359388e2d0246da4f094437946c71 | 175,033 | ipynb | Jupyter Notebook | CatBoost(Features).ipynb | edwinytleung/AdTracking-Fraud-Detection | 8eb7b7d7497fe087fe3864625b42bf775871cb81 | [
"MIT"
] | 1 | 2021-07-24T11:11:34.000Z | 2021-07-24T11:11:34.000Z | CatBoost(Features).ipynb | edwinytleung/AdTracking-Fraud-Detection | 8eb7b7d7497fe087fe3864625b42bf775871cb81 | [
"MIT"
] | null | null | null | CatBoost(Features).ipynb | edwinytleung/AdTracking-Fraud-Detection | 8eb7b7d7497fe087fe3864625b42bf775871cb81 | [
"MIT"
] | 1 | 2021-07-24T11:11:39.000Z | 2021-07-24T11:11:39.000Z | 41.684449 | 127 | 0.455286 | [
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport gc\n\nplt.style.use('ggplot')\n\ndtypes = {\n 'ip' : 'uint32',\n 'app' : 'uint16',\n 'device' : 'uint16',\n 'os' : 'uint16',\n 'channel' : 'uint16',\n 'is_attributed' : 'uint8',\n }\n \nrandom = pd.read_csv('train_random_10_percent.csv', dtype=dtypes)\ndf = random.sample(3000000)\n# prepare test data\ntest = pd.read_csv(\"test.csv\", dtype=dtypes)\n\n",
"_____no_output_____"
],
[
"df = df.sort_values(['ip','click_time'])\ntest = test.sort_values(['ip','click_time'])",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"gc.collect()",
"_____no_output_____"
],
[
"df['click_time'] = pd.to_datetime(df.click_time)\ndf['attributed_time'] = pd.to_datetime(df.attributed_time)\ntest['click_time'] = pd.to_datetime(test.click_time)\n",
"_____no_output_____"
],
[
"did_download = df[df.is_attributed==1].ip.values\ndid_download",
"_____no_output_____"
],
[
"df[df.is_attributed==1]",
"_____no_output_____"
],
[
"#ip of people that downloaded an application at some point\ndid_download = df[df.ip.apply(lambda x: x in did_download)]\ndid_download\ndid_download.shape",
"_____no_output_____"
],
[
"ip_ad_exposure = did_download.ip.value_counts()\nip_ad_exposure",
"_____no_output_____"
],
[
"app_or_channel = did_download[did_download.is_attributed == 1]\napp_or_channel.shape",
"_____no_output_____"
],
[
"downloaded = did_download.dropna() ",
"_____no_output_____"
],
[
"#lets explore more just the adds that led to download\n\ntime_of_exposure = did_download.attributed_time.dropna().groupby(did_download.attributed_time.dt.hour).count()\ntime_of_exposure",
"_____no_output_____"
],
[
"t = downloaded.attributed_time - downloaded.click_time\n\nchannel_success = did_download.groupby(['channel']).is_attributed.mean()",
"_____no_output_____"
],
[
"channel_success.head(10)",
"_____no_output_____"
],
[
"app_success = did_download.groupby(['app']).is_attributed.mean()\nchannel_success = channel_success.to_dict()\napp_success = app_success.to_dict()",
"_____no_output_____"
],
[
"df['channel_success'] = df.channel.map(channel_success)\ndf['app_success'] = df.channel.map(app_success)\n\ndf.channel_success.fillna(0,inplace=True)\ndf.app_success.fillna(0,inplace=True)\n",
"_____no_output_____"
],
[
"df.head(10)",
"_____no_output_____"
],
[
"s = df.groupby(['ip']).os.value_counts().to_frame().rename(columns={'os':'ip_os_count'}).reset_index()\nu = test.groupby(['ip']).os.value_counts().to_frame().rename(columns={'os':'ip_os_count'}).reset_index()\n\n",
"_____no_output_____"
],
[
"s.head(10)",
"_____no_output_____"
],
[
"gc.collect()",
"_____no_output_____"
],
[
"df = pd.merge(df,s,on=['ip','os'])\ndf['ip_os_count'] = df.ip_os_count.astype('float')\ntest = pd.merge(test,u,on=['ip','os'])\ntest['ip_os_count'] = test.ip_os_count.astype('float')",
"_____no_output_____"
],
[
"df.head(10)",
"_____no_output_____"
],
[
"n_chans = df.groupby(['ip','app']).channel.count().reset_index().rename(columns={'channel':'ip_app_count'})\ndf = df.merge(n_chans,on=['ip','app'],how='left')\nx_chans = test.groupby(['ip','app']).channel.count().reset_index().rename(columns={'channel':'ip_app_count'})\ntest = test.merge(x_chans,on=['ip','app'],how='left')",
"_____no_output_____"
],
[
"test.head(10)",
"_____no_output_____"
],
[
"df['clicked'] = np.ones(df.shape[0],dtype= np.float64)\ndf['app_exposure'] = df.groupby(['ip','app',]).clicked.cumsum()\ndf['channel_exposure'] = df.groupby(['ip','channel',]).clicked.cumsum()\ntest['clicked'] = np.ones(test.shape[0],dtype= np.float64)\ntest['app_exposure'] = test.groupby(['ip','app',]).clicked.cumsum()\ntest['channel_exposure'] = test.groupby(['ip','channel',]).clicked.cumsum()",
"_____no_output_____"
],
[
"df.head(10)",
"_____no_output_____"
],
[
"\ndf['daily_usage'] = df.groupby(['ip',df.click_time.dt.day]).clicked.cumsum()",
"_____no_output_____"
],
[
"df.head(10)",
"_____no_output_____"
],
[
"df['hour'] = df.click_time.dt.hour\ndf['hour_cumative_clicks'] = df.groupby(['ip',df.click_time.dt.hour]).clicked.cumsum()",
"_____no_output_____"
],
[
"df.head(10)",
"_____no_output_____"
],
[
"gc.collect()",
"_____no_output_____"
],
[
"test['daily_usage'] = test.groupby(['ip', test.click_time.dt.day]).clicked.cumsum()\ntest['hour'] = test.click_time.dt.hour\ntest['hour_cumative_clicks'] = test.groupby(['ip', test.click_time.dt.hour]).clicked.cumsum()",
"_____no_output_____"
],
[
"gc.collect()",
"_____no_output_____"
],
[
"\nfrom sklearn.model_selection import train_test_split\nX = df[['app','device','os','channel','app_exposure','daily_usage','hour','hour_cumative_clicks','ip_os_count']]\ny = df.is_attributed\nX_test = test[['app','device','os','channel','app_exposure','daily_usage','hour','hour_cumative_clicks','ip_os_count']]\n",
"_____no_output_____"
],
[
"gc.collect()",
"_____no_output_____"
],
[
"\nfrom catboost import CatBoostClassifier\ncategorical_features_indices = np.where(X.dtypes != np.float)[0]\ncategorical_features_indices = np.where(X_test.dtypes != np.float)[0]\ncat = CatBoostClassifier()\n\nmodel = cat.fit(X, y,cat_features=categorical_features_indices,plot=False,verbose=True)",
"0:\tlearn: 0.5936537\ttotal: 3.12s\tremaining: 51m 56s\n1:\tlearn: 0.5078537\ttotal: 6.03s\tremaining: 50m 6s\n2:\tlearn: 0.4337026\ttotal: 9.44s\tremaining: 52m 18s\n3:\tlearn: 0.3707005\ttotal: 12.7s\tremaining: 52m 34s\n4:\tlearn: 0.3173991\ttotal: 15s\tremaining: 49m 47s\n5:\tlearn: 0.2724359\ttotal: 18s\tremaining: 49m 49s\n6:\tlearn: 0.2345700\ttotal: 21.1s\tremaining: 49m 50s\n7:\tlearn: 0.2027005\ttotal: 23.3s\tremaining: 48m 11s\n8:\tlearn: 0.1758700\ttotal: 25.6s\tremaining: 47m 2s\n9:\tlearn: 0.1532583\ttotal: 28.7s\tremaining: 47m 21s\n10:\tlearn: 0.1341709\ttotal: 31s\tremaining: 46m 26s\n11:\tlearn: 0.1180247\ttotal: 33.3s\tremaining: 45m 40s\n12:\tlearn: 0.1043332\ttotal: 36.2s\tremaining: 45m 48s\n13:\tlearn: 0.0926916\ttotal: 39.1s\tremaining: 45m 53s\n14:\tlearn: 0.0827645\ttotal: 42.1s\tremaining: 46m 2s\n15:\tlearn: 0.0730997\ttotal: 44.4s\tremaining: 45m 32s\n16:\tlearn: 0.0649297\ttotal: 47.1s\tremaining: 45m 25s\n17:\tlearn: 0.0575609\ttotal: 50.7s\tremaining: 46m 4s\n18:\tlearn: 0.0511883\ttotal: 53.5s\tremaining: 46m 2s\n19:\tlearn: 0.0457417\ttotal: 56.2s\tremaining: 45m 54s\n20:\tlearn: 0.0410926\ttotal: 58.7s\tremaining: 45m 35s\n21:\tlearn: 0.0372796\ttotal: 1m 1s\tremaining: 45m 32s\n22:\tlearn: 0.0338200\ttotal: 1m 4s\tremaining: 45m 35s\n23:\tlearn: 0.0308331\ttotal: 1m 7s\tremaining: 45m 34s\n24:\tlearn: 0.0282501\ttotal: 1m 9s\tremaining: 45m 20s\n25:\tlearn: 0.0261573\ttotal: 1m 12s\tremaining: 45m 9s\n26:\tlearn: 0.0242775\ttotal: 1m 15s\tremaining: 45m 15s\n27:\tlearn: 0.0225187\ttotal: 1m 18s\tremaining: 45m 27s\n28:\tlearn: 0.0210116\ttotal: 1m 21s\tremaining: 45m 33s\n29:\tlearn: 0.0196863\ttotal: 1m 24s\tremaining: 45m 36s\n30:\tlearn: 0.0184917\ttotal: 1m 27s\tremaining: 45m 34s\n31:\tlearn: 0.0174391\ttotal: 1m 30s\tremaining: 45m 27s\n32:\tlearn: 0.0165988\ttotal: 1m 32s\tremaining: 45m 9s\n33:\tlearn: 0.0157632\ttotal: 1m 35s\tremaining: 45m 25s\n34:\tlearn: 0.0150391\ttotal: 1m 39s\tremaining: 45m 30s\n35:\tlearn: 0.0143841\ttotal: 1m 42s\tremaining: 45m 49s\n36:\tlearn: 0.0138583\ttotal: 1m 45s\tremaining: 45m 55s\n37:\tlearn: 0.0133231\ttotal: 1m 48s\tremaining: 45m 43s\n38:\tlearn: 0.0128522\ttotal: 1m 51s\tremaining: 45m 41s\n39:\tlearn: 0.0124228\ttotal: 1m 54s\tremaining: 45m 49s\n40:\tlearn: 0.0120444\ttotal: 1m 57s\tremaining: 45m 56s\n41:\tlearn: 0.0117132\ttotal: 2m\tremaining: 45m 48s\n42:\tlearn: 0.0113905\ttotal: 2m 3s\tremaining: 45m 47s\n43:\tlearn: 0.0111060\ttotal: 2m 6s\tremaining: 45m 45s\n44:\tlearn: 0.0108378\ttotal: 2m 8s\tremaining: 45m 31s\n45:\tlearn: 0.0105957\ttotal: 2m 11s\tremaining: 45m 32s\n46:\tlearn: 0.0104258\ttotal: 2m 15s\tremaining: 45m 45s\n47:\tlearn: 0.0102807\ttotal: 2m 18s\tremaining: 45m 40s\n48:\tlearn: 0.0101318\ttotal: 2m 22s\tremaining: 46m 2s\n49:\tlearn: 0.0099788\ttotal: 2m 25s\tremaining: 46m 13s\n50:\tlearn: 0.0098402\ttotal: 2m 30s\tremaining: 46m 32s\n51:\tlearn: 0.0097271\ttotal: 2m 33s\tremaining: 46m 39s\n52:\tlearn: 0.0095871\ttotal: 2m 36s\tremaining: 46m 40s\n53:\tlearn: 0.0094607\ttotal: 2m 40s\tremaining: 46m 49s\n54:\tlearn: 0.0093495\ttotal: 2m 43s\tremaining: 46m 50s\n55:\tlearn: 0.0092702\ttotal: 2m 47s\tremaining: 46m 58s\n56:\tlearn: 0.0091670\ttotal: 2m 50s\tremaining: 47m\n57:\tlearn: 0.0090771\ttotal: 2m 53s\tremaining: 47m 1s\n58:\tlearn: 0.0089880\ttotal: 2m 57s\tremaining: 47m 5s\n59:\tlearn: 0.0088982\ttotal: 3m\tremaining: 47m 6s\n60:\tlearn: 0.0088353\ttotal: 3m 2s\tremaining: 46m 56s\n61:\tlearn: 0.0087647\ttotal: 3m 6s\tremaining: 47m 1s\n62:\tlearn: 
0.0086990\ttotal: 3m 9s\tremaining: 47m 2s\n63:\tlearn: 0.0086362\ttotal: 3m 12s\tremaining: 46m 59s\n64:\tlearn: 0.0085852\ttotal: 3m 15s\tremaining: 46m 57s\n65:\tlearn: 0.0085294\ttotal: 3m 18s\tremaining: 46m 45s\n66:\tlearn: 0.0084839\ttotal: 3m 21s\tremaining: 46m 47s\n67:\tlearn: 0.0084362\ttotal: 3m 25s\tremaining: 46m 57s\n68:\tlearn: 0.0084023\ttotal: 3m 28s\tremaining: 46m 58s\n69:\tlearn: 0.0083703\ttotal: 3m 32s\tremaining: 47m\n70:\tlearn: 0.0083370\ttotal: 3m 35s\tremaining: 46m 56s\n71:\tlearn: 0.0082991\ttotal: 3m 38s\tremaining: 46m 52s\n72:\tlearn: 0.0082638\ttotal: 3m 41s\tremaining: 46m 51s\n73:\tlearn: 0.0082335\ttotal: 3m 44s\tremaining: 46m 48s\n74:\tlearn: 0.0082029\ttotal: 3m 47s\tremaining: 46m 49s\n75:\tlearn: 0.0081869\ttotal: 3m 51s\tremaining: 46m 53s\n76:\tlearn: 0.0081580\ttotal: 3m 54s\tremaining: 46m 52s\n77:\tlearn: 0.0081423\ttotal: 3m 57s\tremaining: 46m 53s\n78:\tlearn: 0.0081173\ttotal: 4m 1s\tremaining: 46m 54s\n79:\tlearn: 0.0080964\ttotal: 4m 4s\tremaining: 46m 52s\n80:\tlearn: 0.0080746\ttotal: 4m 7s\tremaining: 46m 45s\n81:\tlearn: 0.0080594\ttotal: 4m 10s\tremaining: 46m 44s\n82:\tlearn: 0.0080383\ttotal: 4m 14s\tremaining: 46m 47s\n83:\tlearn: 0.0080213\ttotal: 4m 16s\tremaining: 46m 41s\n84:\tlearn: 0.0080032\ttotal: 4m 20s\tremaining: 46m 40s\n85:\tlearn: 0.0079885\ttotal: 4m 24s\tremaining: 46m 45s\n86:\tlearn: 0.0079693\ttotal: 4m 27s\tremaining: 46m 50s\n87:\tlearn: 0.0079516\ttotal: 4m 30s\tremaining: 46m 48s\n88:\tlearn: 0.0079356\ttotal: 4m 34s\tremaining: 46m 46s\n89:\tlearn: 0.0079215\ttotal: 4m 37s\tremaining: 46m 49s\n90:\tlearn: 0.0079047\ttotal: 4m 41s\tremaining: 46m 48s\n91:\tlearn: 0.0078890\ttotal: 4m 43s\tremaining: 46m 42s\n92:\tlearn: 0.0078778\ttotal: 4m 47s\tremaining: 46m 43s\n93:\tlearn: 0.0078642\ttotal: 4m 50s\tremaining: 46m 35s\n94:\tlearn: 0.0078512\ttotal: 4m 52s\tremaining: 46m 24s\n95:\tlearn: 0.0078388\ttotal: 4m 55s\tremaining: 46m 20s\n96:\tlearn: 0.0078286\ttotal: 4m 58s\tremaining: 46m 15s\n97:\tlearn: 0.0078202\ttotal: 5m 1s\tremaining: 46m 15s\n98:\tlearn: 0.0078110\ttotal: 5m 4s\tremaining: 46m 14s\n99:\tlearn: 0.0078035\ttotal: 5m 7s\tremaining: 46m 10s\n100:\tlearn: 0.0077991\ttotal: 5m 10s\tremaining: 46m 6s\n101:\tlearn: 0.0077934\ttotal: 5m 14s\tremaining: 46m 5s\n102:\tlearn: 0.0077903\ttotal: 5m 17s\tremaining: 46m 6s\n103:\tlearn: 0.0077860\ttotal: 5m 21s\tremaining: 46m 6s\n104:\tlearn: 0.0077830\ttotal: 5m 24s\tremaining: 46m 5s\n105:\tlearn: 0.0077751\ttotal: 5m 28s\tremaining: 46m 8s\n106:\tlearn: 0.0077676\ttotal: 5m 31s\tremaining: 46m 5s\n107:\tlearn: 0.0077651\ttotal: 5m 34s\tremaining: 46m 3s\n108:\tlearn: 0.0077608\ttotal: 5m 37s\tremaining: 46m 2s\n109:\tlearn: 0.0077565\ttotal: 5m 41s\tremaining: 46m 1s\n110:\tlearn: 0.0077507\ttotal: 5m 44s\tremaining: 46m 1s\n111:\tlearn: 0.0077431\ttotal: 5m 48s\tremaining: 45m 59s\n112:\tlearn: 0.0077405\ttotal: 5m 51s\tremaining: 45m 56s\n113:\tlearn: 0.0077358\ttotal: 5m 54s\tremaining: 45m 56s\n114:\tlearn: 0.0077314\ttotal: 5m 56s\tremaining: 45m 44s\n115:\tlearn: 0.0077269\ttotal: 5m 58s\tremaining: 45m 34s\n116:\tlearn: 0.0077223\ttotal: 6m 1s\tremaining: 45m 24s\n117:\tlearn: 0.0077170\ttotal: 6m 3s\tremaining: 45m 13s\n118:\tlearn: 0.0077110\ttotal: 6m 5s\tremaining: 45m 5s\n119:\tlearn: 0.0077050\ttotal: 6m 7s\tremaining: 44m 56s\n120:\tlearn: 0.0077001\ttotal: 6m 9s\tremaining: 44m 45s\n121:\tlearn: 0.0076960\ttotal: 6m 11s\tremaining: 44m 34s\n122:\tlearn: 0.0076899\ttotal: 6m 13s\tremaining: 44m 25s\n123:\tlearn: 0.0076888\ttotal: 6m 
15s\tremaining: 44m 14s\n124:\tlearn: 0.0076854\ttotal: 6m 18s\tremaining: 44m 7s\n125:\tlearn: 0.0076812\ttotal: 6m 19s\tremaining: 43m 55s\n126:\tlearn: 0.0076750\ttotal: 6m 22s\tremaining: 43m 46s\n127:\tlearn: 0.0076706\ttotal: 6m 24s\tremaining: 43m 37s\n128:\tlearn: 0.0076693\ttotal: 6m 26s\tremaining: 43m 27s\n129:\tlearn: 0.0076654\ttotal: 6m 27s\tremaining: 43m 16s\n130:\tlearn: 0.0076618\ttotal: 6m 29s\tremaining: 43m 6s\n131:\tlearn: 0.0076605\ttotal: 6m 31s\tremaining: 42m 55s\n132:\tlearn: 0.0076597\ttotal: 6m 33s\tremaining: 42m 48s\n133:\tlearn: 0.0076560\ttotal: 6m 36s\tremaining: 42m 40s\n134:\tlearn: 0.0076536\ttotal: 6m 38s\tremaining: 42m 33s\n135:\tlearn: 0.0076504\ttotal: 6m 40s\tremaining: 42m 25s\n136:\tlearn: 0.0076465\ttotal: 6m 42s\tremaining: 42m 15s\n137:\tlearn: 0.0076433\ttotal: 6m 44s\tremaining: 42m 5s\n138:\tlearn: 0.0076404\ttotal: 6m 46s\tremaining: 41m 56s\n139:\tlearn: 0.0076395\ttotal: 6m 48s\tremaining: 41m 49s\n140:\tlearn: 0.0076385\ttotal: 6m 50s\tremaining: 41m 42s\n141:\tlearn: 0.0076369\ttotal: 6m 53s\tremaining: 41m 37s\n142:\tlearn: 0.0076345\ttotal: 6m 56s\tremaining: 41m 34s\n143:\tlearn: 0.0076324\ttotal: 6m 58s\tremaining: 41m 28s\n144:\tlearn: 0.0076310\ttotal: 7m\tremaining: 41m 20s\n145:\tlearn: 0.0076293\ttotal: 7m 2s\tremaining: 41m 12s\n146:\tlearn: 0.0076289\ttotal: 7m 4s\tremaining: 41m 1s\n147:\tlearn: 0.0076268\ttotal: 7m 6s\tremaining: 40m 54s\n148:\tlearn: 0.0076254\ttotal: 7m 8s\tremaining: 40m 49s\n149:\tlearn: 0.0076246\ttotal: 7m 10s\tremaining: 40m 41s\n150:\tlearn: 0.0076211\ttotal: 7m 13s\tremaining: 40m 35s\n151:\tlearn: 0.0076209\ttotal: 7m 15s\tremaining: 40m 31s\n152:\tlearn: 0.0076188\ttotal: 7m 18s\tremaining: 40m 26s\n153:\tlearn: 0.0076152\ttotal: 7m 20s\tremaining: 40m 19s\n154:\tlearn: 0.0076130\ttotal: 7m 23s\tremaining: 40m 16s\n155:\tlearn: 0.0076117\ttotal: 7m 25s\tremaining: 40m 11s\n156:\tlearn: 0.0076099\ttotal: 7m 27s\tremaining: 40m 3s\n157:\tlearn: 0.0076081\ttotal: 7m 30s\tremaining: 39m 58s\n158:\tlearn: 0.0076061\ttotal: 7m 32s\tremaining: 39m 51s\n159:\tlearn: 0.0076036\ttotal: 7m 34s\tremaining: 39m 44s\n160:\tlearn: 0.0076033\ttotal: 7m 36s\tremaining: 39m 38s\n161:\tlearn: 0.0076010\ttotal: 7m 38s\tremaining: 39m 30s\n162:\tlearn: 0.0076004\ttotal: 7m 40s\tremaining: 39m 22s\n163:\tlearn: 0.0075985\ttotal: 7m 42s\tremaining: 39m 17s\n164:\tlearn: 0.0075974\ttotal: 7m 44s\tremaining: 39m 10s\n165:\tlearn: 0.0075961\ttotal: 7m 47s\tremaining: 39m 6s\n166:\tlearn: 0.0075930\ttotal: 7m 49s\tremaining: 39m 1s\n167:\tlearn: 0.0075923\ttotal: 7m 51s\tremaining: 38m 53s\n168:\tlearn: 0.0075914\ttotal: 7m 53s\tremaining: 38m 49s\n169:\tlearn: 0.0075890\ttotal: 7m 56s\tremaining: 38m 46s\n170:\tlearn: 0.0075881\ttotal: 7m 58s\tremaining: 38m 40s\n171:\tlearn: 0.0075866\ttotal: 8m\tremaining: 38m 35s\n172:\tlearn: 0.0075816\ttotal: 8m 3s\tremaining: 38m 29s\n173:\tlearn: 0.0075808\ttotal: 8m 5s\tremaining: 38m 24s\n174:\tlearn: 0.0075788\ttotal: 8m 8s\tremaining: 38m 22s\n175:\tlearn: 0.0075780\ttotal: 8m 10s\tremaining: 38m 16s\n176:\tlearn: 0.0075770\ttotal: 8m 12s\tremaining: 38m 11s\n177:\tlearn: 0.0075752\ttotal: 8m 14s\tremaining: 38m 4s\n178:\tlearn: 0.0075749\ttotal: 8m 16s\tremaining: 37m 59s\n179:\tlearn: 0.0075743\ttotal: 8m 18s\tremaining: 37m 52s\n180:\tlearn: 0.0075739\ttotal: 8m 20s\tremaining: 37m 46s\n181:\tlearn: 0.0075733\ttotal: 8m 23s\tremaining: 37m 42s\n182:\tlearn: 0.0075732\ttotal: 8m 25s\tremaining: 37m 36s\n183:\tlearn: 0.0075727\ttotal: 8m 27s\tremaining: 37m 32s\n184:\tlearn: 
0.0075713\ttotal: 8m 30s\tremaining: 37m 29s\n185:\tlearn: 0.0075693\ttotal: 8m 33s\tremaining: 37m 26s\n186:\tlearn: 0.0075668\ttotal: 8m 35s\tremaining: 37m 20s\n187:\tlearn: 0.0075660\ttotal: 8m 37s\tremaining: 37m 15s\n188:\tlearn: 0.0075653\ttotal: 8m 39s\tremaining: 37m 8s\n189:\tlearn: 0.0075644\ttotal: 8m 41s\tremaining: 37m 1s\n190:\tlearn: 0.0075638\ttotal: 8m 42s\tremaining: 36m 54s\n191:\tlearn: 0.0075631\ttotal: 8m 44s\tremaining: 36m 47s\n192:\tlearn: 0.0075617\ttotal: 8m 46s\tremaining: 36m 42s\n193:\tlearn: 0.0075591\ttotal: 8m 48s\tremaining: 36m 36s\n194:\tlearn: 0.0075572\ttotal: 8m 51s\tremaining: 36m 32s\n195:\tlearn: 0.0075552\ttotal: 8m 52s\tremaining: 36m 25s\n196:\tlearn: 0.0075535\ttotal: 8m 54s\tremaining: 36m 19s\n197:\tlearn: 0.0075518\ttotal: 8m 56s\tremaining: 36m 13s\n198:\tlearn: 0.0075513\ttotal: 8m 58s\tremaining: 36m 7s\n199:\tlearn: 0.0075502\ttotal: 9m\tremaining: 36m\n200:\tlearn: 0.0075492\ttotal: 9m 2s\tremaining: 35m 54s\n201:\tlearn: 0.0075473\ttotal: 9m 3s\tremaining: 35m 47s\n202:\tlearn: 0.0075458\ttotal: 9m 6s\tremaining: 35m 43s\n203:\tlearn: 0.0075455\ttotal: 9m 7s\tremaining: 35m 37s\n204:\tlearn: 0.0075440\ttotal: 9m 9s\tremaining: 35m 32s\n205:\tlearn: 0.0075431\ttotal: 9m 12s\tremaining: 35m 27s\n206:\tlearn: 0.0075418\ttotal: 9m 14s\tremaining: 35m 23s\n207:\tlearn: 0.0075413\ttotal: 9m 17s\tremaining: 35m 21s\n208:\tlearn: 0.0075408\ttotal: 9m 19s\tremaining: 35m 17s\n209:\tlearn: 0.0075395\ttotal: 9m 21s\tremaining: 35m 12s\n210:\tlearn: 0.0075368\ttotal: 9m 24s\tremaining: 35m 9s\n211:\tlearn: 0.0075360\ttotal: 9m 26s\tremaining: 35m 5s\n212:\tlearn: 0.0075347\ttotal: 9m 28s\tremaining: 35m 1s\n213:\tlearn: 0.0075333\ttotal: 9m 31s\tremaining: 34m 58s\n214:\tlearn: 0.0075310\ttotal: 9m 33s\tremaining: 34m 55s\n215:\tlearn: 0.0075300\ttotal: 9m 36s\tremaining: 34m 53s\n216:\tlearn: 0.0075296\ttotal: 9m 39s\tremaining: 34m 49s\n217:\tlearn: 0.0075288\ttotal: 9m 41s\tremaining: 34m 44s\n218:\tlearn: 0.0075286\ttotal: 9m 42s\tremaining: 34m 38s\n219:\tlearn: 0.0075283\ttotal: 9m 45s\tremaining: 34m 34s\n220:\tlearn: 0.0075281\ttotal: 9m 46s\tremaining: 34m 29s\n221:\tlearn: 0.0075272\ttotal: 9m 48s\tremaining: 34m 23s\n222:\tlearn: 0.0075271\ttotal: 9m 50s\tremaining: 34m 18s\n223:\tlearn: 0.0075261\ttotal: 9m 53s\tremaining: 34m 14s\n224:\tlearn: 0.0075252\ttotal: 9m 55s\tremaining: 34m 10s\n225:\tlearn: 0.0075241\ttotal: 9m 57s\tremaining: 34m 4s\n226:\tlearn: 0.0075232\ttotal: 9m 59s\tremaining: 34m 1s\n227:\tlearn: 0.0075223\ttotal: 10m 2s\tremaining: 33m 59s\n228:\tlearn: 0.0075219\ttotal: 10m 4s\tremaining: 33m 54s\n229:\tlearn: 0.0075215\ttotal: 10m 6s\tremaining: 33m 51s\n230:\tlearn: 0.0075205\ttotal: 10m 9s\tremaining: 33m 49s\n231:\tlearn: 0.0075202\ttotal: 10m 12s\tremaining: 33m 46s\n232:\tlearn: 0.0075199\ttotal: 10m 14s\tremaining: 33m 43s\n233:\tlearn: 0.0075197\ttotal: 10m 16s\tremaining: 33m 39s\n234:\tlearn: 0.0075185\ttotal: 10m 18s\tremaining: 33m 33s\n235:\tlearn: 0.0075181\ttotal: 10m 20s\tremaining: 33m 29s\n236:\tlearn: 0.0075169\ttotal: 10m 23s\tremaining: 33m 26s\n237:\tlearn: 0.0075164\ttotal: 10m 25s\tremaining: 33m 21s\n238:\tlearn: 0.0075156\ttotal: 10m 27s\tremaining: 33m 16s\n239:\tlearn: 0.0075153\ttotal: 10m 28s\tremaining: 33m 11s\n240:\tlearn: 0.0075137\ttotal: 10m 31s\tremaining: 33m 9s\n241:\tlearn: 0.0075116\ttotal: 10m 34s\tremaining: 33m 6s\n242:\tlearn: 0.0075102\ttotal: 10m 36s\tremaining: 33m 2s\n243:\tlearn: 0.0075082\ttotal: 10m 38s\tremaining: 32m 58s\n244:\tlearn: 0.0075073\ttotal: 10m 
40s\tremaining: 32m 53s\n245:\tlearn: 0.0075068\ttotal: 10m 43s\tremaining: 32m 51s\n246:\tlearn: 0.0075059\ttotal: 10m 45s\tremaining: 32m 48s\n247:\tlearn: 0.0075054\ttotal: 10m 47s\tremaining: 32m 43s\n248:\tlearn: 0.0075053\ttotal: 10m 49s\tremaining: 32m 38s\n249:\tlearn: 0.0075047\ttotal: 10m 51s\tremaining: 32m 33s\n250:\tlearn: 0.0075041\ttotal: 10m 52s\tremaining: 32m 28s\n251:\tlearn: 0.0075010\ttotal: 10m 55s\tremaining: 32m 25s\n252:\tlearn: 0.0075003\ttotal: 10m 57s\tremaining: 32m 21s\n253:\tlearn: 0.0075000\ttotal: 10m 59s\tremaining: 32m 18s\n254:\tlearn: 0.0074999\ttotal: 11m 1s\tremaining: 32m 13s\n255:\tlearn: 0.0074983\ttotal: 11m 3s\tremaining: 32m 9s\n256:\tlearn: 0.0074980\ttotal: 11m 6s\tremaining: 32m 6s\n257:\tlearn: 0.0074968\ttotal: 11m 8s\tremaining: 32m 1s\n258:\tlearn: 0.0074963\ttotal: 11m 10s\tremaining: 31m 58s\n259:\tlearn: 0.0074960\ttotal: 11m 12s\tremaining: 31m 54s\n260:\tlearn: 0.0074948\ttotal: 11m 15s\tremaining: 31m 51s\n261:\tlearn: 0.0074939\ttotal: 11m 17s\tremaining: 31m 47s\n262:\tlearn: 0.0074917\ttotal: 11m 19s\tremaining: 31m 44s\n263:\tlearn: 0.0074912\ttotal: 11m 21s\tremaining: 31m 39s\n264:\tlearn: 0.0074908\ttotal: 11m 23s\tremaining: 31m 36s\n265:\tlearn: 0.0074902\ttotal: 11m 25s\tremaining: 31m 32s\n266:\tlearn: 0.0074894\ttotal: 11m 27s\tremaining: 31m 28s\n267:\tlearn: 0.0074882\ttotal: 11m 30s\tremaining: 31m 25s\n268:\tlearn: 0.0074880\ttotal: 11m 31s\tremaining: 31m 20s\n269:\tlearn: 0.0074862\ttotal: 11m 34s\tremaining: 31m 16s\n270:\tlearn: 0.0074848\ttotal: 11m 35s\tremaining: 31m 11s\n271:\tlearn: 0.0074846\ttotal: 11m 37s\tremaining: 31m 7s\n272:\tlearn: 0.0074843\ttotal: 11m 39s\tremaining: 31m 3s\n273:\tlearn: 0.0074835\ttotal: 11m 41s\tremaining: 30m 59s\n274:\tlearn: 0.0074833\ttotal: 11m 43s\tremaining: 30m 54s\n275:\tlearn: 0.0074828\ttotal: 11m 45s\tremaining: 30m 50s\n276:\tlearn: 0.0074822\ttotal: 11m 47s\tremaining: 30m 46s\n277:\tlearn: 0.0074819\ttotal: 11m 49s\tremaining: 30m 41s\n278:\tlearn: 0.0074813\ttotal: 11m 50s\tremaining: 30m 36s\n279:\tlearn: 0.0074811\ttotal: 11m 52s\tremaining: 30m 31s\n280:\tlearn: 0.0074807\ttotal: 11m 54s\tremaining: 30m 27s\n281:\tlearn: 0.0074805\ttotal: 11m 56s\tremaining: 30m 23s\n282:\tlearn: 0.0074799\ttotal: 11m 57s\tremaining: 30m 18s\n283:\tlearn: 0.0074792\ttotal: 11m 59s\tremaining: 30m 14s\n284:\tlearn: 0.0074786\ttotal: 12m 1s\tremaining: 30m 11s\n285:\tlearn: 0.0074782\ttotal: 12m 4s\tremaining: 30m 8s\n286:\tlearn: 0.0074779\ttotal: 12m 6s\tremaining: 30m 3s\n287:\tlearn: 0.0074773\ttotal: 12m 8s\tremaining: 30m\n288:\tlearn: 0.0074769\ttotal: 12m 10s\tremaining: 29m 57s\n289:\tlearn: 0.0074759\ttotal: 12m 12s\tremaining: 29m 54s\n290:\tlearn: 0.0074752\ttotal: 12m 15s\tremaining: 29m 51s\n291:\tlearn: 0.0074747\ttotal: 12m 17s\tremaining: 29m 49s\n292:\tlearn: 0.0074744\ttotal: 12m 20s\tremaining: 29m 46s\n293:\tlearn: 0.0074726\ttotal: 12m 23s\tremaining: 29m 45s\n294:\tlearn: 0.0074723\ttotal: 12m 26s\tremaining: 29m 44s\n295:\tlearn: 0.0074722\ttotal: 12m 28s\tremaining: 29m 40s\n296:\tlearn: 0.0074709\ttotal: 12m 30s\tremaining: 29m 36s\n297:\tlearn: 0.0074706\ttotal: 12m 33s\tremaining: 29m 33s\n298:\tlearn: 0.0074700\ttotal: 12m 35s\tremaining: 29m 31s\n299:\tlearn: 0.0074697\ttotal: 12m 38s\tremaining: 29m 29s\n300:\tlearn: 0.0074683\ttotal: 12m 40s\tremaining: 29m 25s\n301:\tlearn: 0.0074682\ttotal: 12m 42s\tremaining: 29m 22s\n302:\tlearn: 0.0074677\ttotal: 12m 44s\tremaining: 29m 19s\n303:\tlearn: 0.0074673\ttotal: 12m 46s\tremaining: 29m 
15s\n304:\tlearn: 0.0074671\ttotal: 12m 49s\tremaining: 29m 13s\n305:\tlearn: 0.0074668\ttotal: 12m 51s\tremaining: 29m 10s\n306:\tlearn: 0.0074665\ttotal: 12m 53s\tremaining: 29m 5s\n307:\tlearn: 0.0074650\ttotal: 12m 55s\tremaining: 29m 2s\n308:\tlearn: 0.0074646\ttotal: 12m 57s\tremaining: 28m 58s\n309:\tlearn: 0.0074638\ttotal: 12m 59s\tremaining: 28m 54s\n310:\tlearn: 0.0074631\ttotal: 13m 1s\tremaining: 28m 51s\n311:\tlearn: 0.0074628\ttotal: 13m 3s\tremaining: 28m 47s\n312:\tlearn: 0.0074626\ttotal: 13m 5s\tremaining: 28m 43s\n313:\tlearn: 0.0074618\ttotal: 13m 7s\tremaining: 28m 40s\n314:\tlearn: 0.0074613\ttotal: 13m 9s\tremaining: 28m 36s\n315:\tlearn: 0.0074603\ttotal: 13m 11s\tremaining: 28m 32s\n316:\tlearn: 0.0074600\ttotal: 13m 13s\tremaining: 28m 28s\n317:\tlearn: 0.0074581\ttotal: 13m 14s\tremaining: 28m 24s\n318:\tlearn: 0.0074570\ttotal: 13m 16s\tremaining: 28m 21s\n319:\tlearn: 0.0074553\ttotal: 13m 18s\tremaining: 28m 17s\n320:\tlearn: 0.0074551\ttotal: 13m 20s\tremaining: 28m 13s\n321:\tlearn: 0.0074549\ttotal: 13m 22s\tremaining: 28m 9s\n322:\tlearn: 0.0074546\ttotal: 13m 24s\tremaining: 28m 5s\n323:\tlearn: 0.0074538\ttotal: 13m 25s\tremaining: 28m 1s\n324:\tlearn: 0.0074530\ttotal: 13m 28s\tremaining: 27m 58s\n325:\tlearn: 0.0074526\ttotal: 13m 30s\tremaining: 27m 55s\n326:\tlearn: 0.0074524\ttotal: 13m 32s\tremaining: 27m 51s\n327:\tlearn: 0.0074517\ttotal: 13m 34s\tremaining: 27m 48s\n328:\tlearn: 0.0074502\ttotal: 13m 36s\tremaining: 27m 45s\n329:\tlearn: 0.0074500\ttotal: 13m 38s\tremaining: 27m 41s\n330:\tlearn: 0.0074497\ttotal: 13m 40s\tremaining: 27m 37s\n331:\tlearn: 0.0074494\ttotal: 13m 42s\tremaining: 27m 33s\n332:\tlearn: 0.0074491\ttotal: 13m 43s\tremaining: 27m 30s\n333:\tlearn: 0.0074489\ttotal: 13m 45s\tremaining: 27m 26s\n334:\tlearn: 0.0074484\ttotal: 13m 47s\tremaining: 27m 22s\n335:\tlearn: 0.0074482\ttotal: 13m 49s\tremaining: 27m 19s\n336:\tlearn: 0.0074454\ttotal: 13m 51s\tremaining: 27m 16s\n337:\tlearn: 0.0074450\ttotal: 13m 53s\tremaining: 27m 12s\n338:\tlearn: 0.0074445\ttotal: 13m 55s\tremaining: 27m 9s\n339:\tlearn: 0.0074441\ttotal: 13m 57s\tremaining: 27m 5s\n340:\tlearn: 0.0074437\ttotal: 13m 59s\tremaining: 27m 2s\n341:\tlearn: 0.0074433\ttotal: 14m 1s\tremaining: 26m 58s\n342:\tlearn: 0.0074427\ttotal: 14m 3s\tremaining: 26m 55s\n343:\tlearn: 0.0074424\ttotal: 14m 5s\tremaining: 26m 51s\n344:\tlearn: 0.0074418\ttotal: 14m 7s\tremaining: 26m 48s\n345:\tlearn: 0.0074411\ttotal: 14m 9s\tremaining: 26m 45s\n346:\tlearn: 0.0074407\ttotal: 14m 11s\tremaining: 26m 42s\n347:\tlearn: 0.0074397\ttotal: 14m 14s\tremaining: 26m 40s\n348:\tlearn: 0.0074394\ttotal: 14m 15s\tremaining: 26m 36s\n349:\tlearn: 0.0074392\ttotal: 14m 18s\tremaining: 26m 34s\n350:\tlearn: 0.0074387\ttotal: 14m 20s\tremaining: 26m 30s\n351:\tlearn: 0.0074385\ttotal: 14m 22s\tremaining: 26m 27s\n352:\tlearn: 0.0074366\ttotal: 14m 24s\tremaining: 26m 24s\n353:\tlearn: 0.0074361\ttotal: 14m 26s\tremaining: 26m 21s\n354:\tlearn: 0.0074358\ttotal: 14m 28s\tremaining: 26m 18s\n355:\tlearn: 0.0074354\ttotal: 14m 30s\tremaining: 26m 15s\n356:\tlearn: 0.0074337\ttotal: 14m 32s\tremaining: 26m 12s\n357:\tlearn: 0.0074329\ttotal: 14m 34s\tremaining: 26m 8s\n358:\tlearn: 0.0074328\ttotal: 14m 36s\tremaining: 26m 5s\n359:\tlearn: 0.0074326\ttotal: 14m 38s\tremaining: 26m 1s\n360:\tlearn: 0.0074315\ttotal: 14m 41s\tremaining: 26m\n361:\tlearn: 0.0074305\ttotal: 14m 43s\tremaining: 25m 57s\n362:\tlearn: 0.0074300\ttotal: 14m 46s\tremaining: 25m 55s\n363:\tlearn: 0.0074285\ttotal: 14m 
48s\tremaining: 25m 53s\n364:\tlearn: 0.0074276\ttotal: 14m 50s\tremaining: 25m 49s\n365:\tlearn: 0.0074273\ttotal: 14m 53s\tremaining: 25m 47s\n366:\tlearn: 0.0074267\ttotal: 14m 55s\tremaining: 25m 44s\n367:\tlearn: 0.0074263\ttotal: 14m 58s\tremaining: 25m 42s\n368:\tlearn: 0.0074258\ttotal: 15m\tremaining: 25m 40s\n369:\tlearn: 0.0074252\ttotal: 15m 3s\tremaining: 25m 37s\n370:\tlearn: 0.0074249\ttotal: 15m 5s\tremaining: 25m 35s\n371:\tlearn: 0.0074245\ttotal: 15m 9s\tremaining: 25m 34s\n372:\tlearn: 0.0074239\ttotal: 15m 11s\tremaining: 25m 32s\n373:\tlearn: 0.0074233\ttotal: 15m 13s\tremaining: 25m 29s\n374:\tlearn: 0.0074230\ttotal: 15m 16s\tremaining: 25m 27s\n375:\tlearn: 0.0074220\ttotal: 15m 19s\tremaining: 25m 25s\n376:\tlearn: 0.0074214\ttotal: 15m 21s\tremaining: 25m 23s\n377:\tlearn: 0.0074204\ttotal: 15m 24s\tremaining: 25m 21s\n378:\tlearn: 0.0074201\ttotal: 15m 27s\tremaining: 25m 18s\n379:\tlearn: 0.0074199\ttotal: 15m 29s\tremaining: 25m 16s\n380:\tlearn: 0.0074187\ttotal: 15m 31s\tremaining: 25m 13s\n381:\tlearn: 0.0074171\ttotal: 15m 33s\tremaining: 25m 10s\n382:\tlearn: 0.0074165\ttotal: 15m 35s\tremaining: 25m 7s\n383:\tlearn: 0.0074162\ttotal: 15m 37s\tremaining: 25m 4s\n384:\tlearn: 0.0074161\ttotal: 15m 39s\tremaining: 25m 1s\n385:\tlearn: 0.0074157\ttotal: 15m 42s\tremaining: 24m 59s\n386:\tlearn: 0.0074153\ttotal: 15m 45s\tremaining: 24m 57s\n387:\tlearn: 0.0074149\ttotal: 15m 48s\tremaining: 24m 55s\n388:\tlearn: 0.0074146\ttotal: 15m 50s\tremaining: 24m 53s\n389:\tlearn: 0.0074145\ttotal: 15m 52s\tremaining: 24m 50s\n390:\tlearn: 0.0074139\ttotal: 15m 55s\tremaining: 24m 48s\n391:\tlearn: 0.0074132\ttotal: 15m 58s\tremaining: 24m 46s\n392:\tlearn: 0.0074103\ttotal: 16m\tremaining: 24m 43s\n393:\tlearn: 0.0074097\ttotal: 16m 3s\tremaining: 24m 41s\n394:\tlearn: 0.0074096\ttotal: 16m 6s\tremaining: 24m 39s\n395:\tlearn: 0.0074093\ttotal: 16m 9s\tremaining: 24m 38s\n396:\tlearn: 0.0074091\ttotal: 16m 11s\tremaining: 24m 35s\n397:\tlearn: 0.0074090\ttotal: 16m 13s\tremaining: 24m 32s\n398:\tlearn: 0.0074068\ttotal: 16m 15s\tremaining: 24m 30s\n399:\tlearn: 0.0074063\ttotal: 16m 17s\tremaining: 24m 26s\n400:\tlearn: 0.0074060\ttotal: 16m 20s\tremaining: 24m 24s\n401:\tlearn: 0.0074055\ttotal: 16m 22s\tremaining: 24m 20s\n402:\tlearn: 0.0074054\ttotal: 16m 24s\tremaining: 24m 17s\n403:\tlearn: 0.0074050\ttotal: 16m 26s\tremaining: 24m 14s\n404:\tlearn: 0.0074046\ttotal: 16m 28s\tremaining: 24m 11s\n405:\tlearn: 0.0074046\ttotal: 16m 30s\tremaining: 24m 8s\n406:\tlearn: 0.0074034\ttotal: 16m 32s\tremaining: 24m 6s\n407:\tlearn: 0.0074030\ttotal: 16m 35s\tremaining: 24m 4s\n408:\tlearn: 0.0074029\ttotal: 16m 37s\tremaining: 24m 1s\n409:\tlearn: 0.0074025\ttotal: 16m 39s\tremaining: 23m 58s\n410:\tlearn: 0.0074011\ttotal: 16m 41s\tremaining: 23m 55s\n411:\tlearn: 0.0074007\ttotal: 16m 44s\tremaining: 23m 53s\n412:\tlearn: 0.0074004\ttotal: 16m 47s\tremaining: 23m 51s\n413:\tlearn: 0.0073996\ttotal: 16m 49s\tremaining: 23m 49s\n414:\tlearn: 0.0073995\ttotal: 16m 52s\tremaining: 23m 46s\n415:\tlearn: 0.0073985\ttotal: 16m 54s\tremaining: 23m 43s\n416:\tlearn: 0.0073963\ttotal: 16m 56s\tremaining: 23m 40s\n417:\tlearn: 0.0073961\ttotal: 16m 58s\tremaining: 23m 37s\n418:\tlearn: 0.0073955\ttotal: 17m\tremaining: 23m 35s\n419:\tlearn: 0.0073953\ttotal: 17m 2s\tremaining: 23m 31s\n420:\tlearn: 0.0073948\ttotal: 17m 4s\tremaining: 23m 29s\n421:\tlearn: 0.0073943\ttotal: 17m 7s\tremaining: 23m 26s\n422:\tlearn: 0.0073940\ttotal: 17m 8s\tremaining: 23m 23s\n423:\tlearn: 
0.0073938\ttotal: 17m 11s\tremaining: 23m 21s\n424:\tlearn: 0.0073934\ttotal: 17m 14s\tremaining: 23m 19s\n425:\tlearn: 0.0073924\ttotal: 17m 16s\tremaining: 23m 17s\n426:\tlearn: 0.0073912\ttotal: 17m 18s\tremaining: 23m 14s\n427:\tlearn: 0.0073911\ttotal: 17m 20s\tremaining: 23m 10s\n428:\tlearn: 0.0073907\ttotal: 17m 22s\tremaining: 23m 8s\n429:\tlearn: 0.0073900\ttotal: 17m 24s\tremaining: 23m 5s\n430:\tlearn: 0.0073893\ttotal: 17m 27s\tremaining: 23m 2s\n431:\tlearn: 0.0073889\ttotal: 17m 29s\tremaining: 22m 59s\n432:\tlearn: 0.0073888\ttotal: 17m 31s\tremaining: 22m 57s\n433:\tlearn: 0.0073873\ttotal: 17m 33s\tremaining: 22m 54s\n434:\tlearn: 0.0073867\ttotal: 17m 35s\tremaining: 22m 51s\n435:\tlearn: 0.0073858\ttotal: 17m 37s\tremaining: 22m 48s\n436:\tlearn: 0.0073856\ttotal: 17m 39s\tremaining: 22m 45s\n437:\tlearn: 0.0073853\ttotal: 17m 42s\tremaining: 22m 43s\n438:\tlearn: 0.0073847\ttotal: 17m 44s\tremaining: 22m 40s\n439:\tlearn: 0.0073845\ttotal: 17m 46s\tremaining: 22m 37s\n440:\tlearn: 0.0073844\ttotal: 17m 48s\tremaining: 22m 34s\n441:\tlearn: 0.0073840\ttotal: 17m 51s\tremaining: 22m 32s\n442:\tlearn: 0.0073831\ttotal: 17m 54s\tremaining: 22m 31s\n443:\tlearn: 0.0073828\ttotal: 17m 57s\tremaining: 22m 28s\n444:\tlearn: 0.0073826\ttotal: 17m 59s\tremaining: 22m 26s\n445:\tlearn: 0.0073823\ttotal: 18m 2s\tremaining: 22m 24s\n446:\tlearn: 0.0073815\ttotal: 18m 4s\tremaining: 22m 21s\n447:\tlearn: 0.0073810\ttotal: 18m 6s\tremaining: 22m 18s\n448:\tlearn: 0.0073808\ttotal: 18m 9s\tremaining: 22m 16s\n449:\tlearn: 0.0073783\ttotal: 18m 11s\tremaining: 22m 14s\n450:\tlearn: 0.0073777\ttotal: 18m 14s\tremaining: 22m 12s\n451:\tlearn: 0.0073769\ttotal: 18m 16s\tremaining: 22m 9s\n452:\tlearn: 0.0073761\ttotal: 18m 19s\tremaining: 22m 7s\n453:\tlearn: 0.0073758\ttotal: 18m 22s\tremaining: 22m 6s\n454:\tlearn: 0.0073755\ttotal: 18m 24s\tremaining: 22m 3s\n455:\tlearn: 0.0073744\ttotal: 18m 26s\tremaining: 22m\n456:\tlearn: 0.0073739\ttotal: 18m 29s\tremaining: 21m 57s\n457:\tlearn: 0.0073735\ttotal: 18m 31s\tremaining: 21m 54s\n458:\tlearn: 0.0073722\ttotal: 18m 33s\tremaining: 21m 52s\n459:\tlearn: 0.0073718\ttotal: 18m 35s\tremaining: 21m 49s\n460:\tlearn: 0.0073716\ttotal: 18m 37s\tremaining: 21m 47s\n461:\tlearn: 0.0073713\ttotal: 18m 39s\tremaining: 21m 44s\n462:\tlearn: 0.0073707\ttotal: 18m 42s\tremaining: 21m 41s\n463:\tlearn: 0.0073706\ttotal: 18m 44s\tremaining: 21m 38s\n464:\tlearn: 0.0073699\ttotal: 18m 46s\tremaining: 21m 35s\n465:\tlearn: 0.0073697\ttotal: 18m 48s\tremaining: 21m 33s\n466:\tlearn: 0.0073695\ttotal: 18m 50s\tremaining: 21m 30s\n467:\tlearn: 0.0073689\ttotal: 18m 52s\tremaining: 21m 27s\n468:\tlearn: 0.0073687\ttotal: 18m 54s\tremaining: 21m 24s\n469:\tlearn: 0.0073676\ttotal: 18m 56s\tremaining: 21m 21s\n470:\tlearn: 0.0073674\ttotal: 18m 58s\tremaining: 21m 18s\n471:\tlearn: 0.0073671\ttotal: 19m\tremaining: 21m 16s\n472:\tlearn: 0.0073654\ttotal: 19m 3s\tremaining: 21m 13s\n473:\tlearn: 0.0073653\ttotal: 19m 4s\tremaining: 21m 10s\n474:\tlearn: 0.0073651\ttotal: 19m 6s\tremaining: 21m 7s\n475:\tlearn: 0.0073650\ttotal: 19m 8s\tremaining: 21m 4s\n476:\tlearn: 0.0073648\ttotal: 19m 10s\tremaining: 21m 1s\n477:\tlearn: 0.0073645\ttotal: 19m 12s\tremaining: 20m 59s\n478:\tlearn: 0.0073639\ttotal: 19m 15s\tremaining: 20m 56s\n479:\tlearn: 0.0073637\ttotal: 19m 16s\tremaining: 20m 53s\n480:\tlearn: 0.0073625\ttotal: 19m 19s\tremaining: 20m 50s\n481:\tlearn: 0.0073621\ttotal: 19m 21s\tremaining: 20m 47s\n482:\tlearn: 0.0073619\ttotal: 19m 24s\tremaining: 
20m 45s\n483:\tlearn: 0.0073617\ttotal: 19m 25s\tremaining: 20m 43s\n484:\tlearn: 0.0073615\ttotal: 19m 28s\tremaining: 20m 40s\n485:\tlearn: 0.0073611\ttotal: 19m 30s\tremaining: 20m 37s\n486:\tlearn: 0.0073608\ttotal: 19m 32s\tremaining: 20m 35s\n487:\tlearn: 0.0073603\ttotal: 19m 35s\tremaining: 20m 32s\n488:\tlearn: 0.0073600\ttotal: 19m 37s\tremaining: 20m 29s\n489:\tlearn: 0.0073592\ttotal: 19m 39s\tremaining: 20m 27s\n490:\tlearn: 0.0073589\ttotal: 19m 40s\tremaining: 20m 24s\n491:\tlearn: 0.0073588\ttotal: 19m 42s\tremaining: 20m 21s\n492:\tlearn: 0.0073586\ttotal: 19m 44s\tremaining: 20m 18s\n493:\tlearn: 0.0073581\ttotal: 19m 46s\tremaining: 20m 14s\n494:\tlearn: 0.0073578\ttotal: 19m 48s\tremaining: 20m 12s\n495:\tlearn: 0.0073567\ttotal: 19m 49s\tremaining: 20m 8s\n496:\tlearn: 0.0073563\ttotal: 19m 52s\tremaining: 20m 6s\n497:\tlearn: 0.0073559\ttotal: 19m 54s\tremaining: 20m 4s\n498:\tlearn: 0.0073554\ttotal: 19m 56s\tremaining: 20m 1s\n499:\tlearn: 0.0073535\ttotal: 19m 59s\tremaining: 19m 59s\n500:\tlearn: 0.0073531\ttotal: 20m\tremaining: 19m 56s\n501:\tlearn: 0.0073528\ttotal: 20m 3s\tremaining: 19m 53s\n502:\tlearn: 0.0073522\ttotal: 20m 5s\tremaining: 19m 51s\n503:\tlearn: 0.0073518\ttotal: 20m 8s\tremaining: 19m 48s\n504:\tlearn: 0.0073513\ttotal: 20m 10s\tremaining: 19m 46s\n505:\tlearn: 0.0073509\ttotal: 20m 12s\tremaining: 19m 43s\n506:\tlearn: 0.0073507\ttotal: 20m 14s\tremaining: 19m 40s\n507:\tlearn: 0.0073505\ttotal: 20m 16s\tremaining: 19m 37s\n508:\tlearn: 0.0073499\ttotal: 20m 18s\tremaining: 19m 34s\n509:\tlearn: 0.0073494\ttotal: 20m 19s\tremaining: 19m 32s\n510:\tlearn: 0.0073492\ttotal: 20m 22s\tremaining: 19m 29s\n511:\tlearn: 0.0073489\ttotal: 20m 24s\tremaining: 19m 27s\n512:\tlearn: 0.0073487\ttotal: 20m 26s\tremaining: 19m 24s\n513:\tlearn: 0.0073483\ttotal: 20m 29s\tremaining: 19m 22s\n514:\tlearn: 0.0073480\ttotal: 20m 30s\tremaining: 19m 19s\n515:\tlearn: 0.0073478\ttotal: 20m 32s\tremaining: 19m 16s\n516:\tlearn: 0.0073473\ttotal: 20m 34s\tremaining: 19m 13s\n517:\tlearn: 0.0073469\ttotal: 20m 36s\tremaining: 19m 10s\n518:\tlearn: 0.0073466\ttotal: 20m 38s\tremaining: 19m 8s\n519:\tlearn: 0.0073465\ttotal: 20m 41s\tremaining: 19m 5s\n520:\tlearn: 0.0073464\ttotal: 20m 43s\tremaining: 19m 2s\n521:\tlearn: 0.0073462\ttotal: 20m 44s\tremaining: 18m 59s\n522:\tlearn: 0.0073457\ttotal: 20m 46s\tremaining: 18m 57s\n523:\tlearn: 0.0073450\ttotal: 20m 49s\tremaining: 18m 54s\n524:\tlearn: 0.0073449\ttotal: 20m 50s\tremaining: 18m 51s\n525:\tlearn: 0.0073446\ttotal: 20m 53s\tremaining: 18m 49s\n526:\tlearn: 0.0073442\ttotal: 20m 56s\tremaining: 18m 47s\n527:\tlearn: 0.0073437\ttotal: 20m 59s\tremaining: 18m 45s\n528:\tlearn: 0.0073435\ttotal: 21m 1s\tremaining: 18m 43s\n529:\tlearn: 0.0073432\ttotal: 21m 4s\tremaining: 18m 41s\n530:\tlearn: 0.0073426\ttotal: 21m 6s\tremaining: 18m 38s\n531:\tlearn: 0.0073424\ttotal: 21m 9s\tremaining: 18m 36s\n532:\tlearn: 0.0073420\ttotal: 21m 11s\tremaining: 18m 33s\n533:\tlearn: 0.0073408\ttotal: 21m 13s\tremaining: 18m 31s\n534:\tlearn: 0.0073406\ttotal: 21m 15s\tremaining: 18m 28s\n535:\tlearn: 0.0073402\ttotal: 21m 17s\tremaining: 18m 26s\n536:\tlearn: 0.0073401\ttotal: 21m 20s\tremaining: 18m 23s\n537:\tlearn: 0.0073396\ttotal: 21m 22s\tremaining: 18m 21s\n538:\tlearn: 0.0073392\ttotal: 21m 24s\tremaining: 18m 18s\n539:\tlearn: 0.0073384\ttotal: 21m 27s\tremaining: 18m 16s\n540:\tlearn: 0.0073380\ttotal: 21m 29s\tremaining: 18m 13s\n541:\tlearn: 0.0073373\ttotal: 21m 31s\tremaining: 18m 11s\n542:\tlearn: 
0.0073369\ttotal: 21m 33s\tremaining: 18m 8s\n543:\tlearn: 0.0073366\ttotal: 21m 36s\tremaining: 18m 6s\n544:\tlearn: 0.0073362\ttotal: 21m 38s\tremaining: 18m 3s\n545:\tlearn: 0.0073357\ttotal: 21m 42s\tremaining: 18m 2s\n546:\tlearn: 0.0073356\ttotal: 21m 45s\tremaining: 18m 1s\n547:\tlearn: 0.0073346\ttotal: 21m 47s\tremaining: 17m 58s\n548:\tlearn: 0.0073341\ttotal: 21m 50s\tremaining: 17m 56s\n549:\tlearn: 0.0073336\ttotal: 21m 53s\tremaining: 17m 54s\n550:\tlearn: 0.0073326\ttotal: 21m 55s\tremaining: 17m 52s\n551:\tlearn: 0.0073323\ttotal: 21m 58s\tremaining: 17m 50s\n552:\tlearn: 0.0073319\ttotal: 22m 1s\tremaining: 17m 48s\n553:\tlearn: 0.0073317\ttotal: 22m 3s\tremaining: 17m 45s\n554:\tlearn: 0.0073312\ttotal: 22m 6s\tremaining: 17m 43s\n555:\tlearn: 0.0073304\ttotal: 22m 8s\tremaining: 17m 41s\n556:\tlearn: 0.0073302\ttotal: 22m 10s\tremaining: 17m 38s\n557:\tlearn: 0.0073300\ttotal: 22m 12s\tremaining: 17m 35s\n558:\tlearn: 0.0073298\ttotal: 22m 14s\tremaining: 17m 32s\n559:\tlearn: 0.0073288\ttotal: 22m 16s\tremaining: 17m 30s\n560:\tlearn: 0.0073285\ttotal: 22m 19s\tremaining: 17m 28s\n561:\tlearn: 0.0073283\ttotal: 22m 21s\tremaining: 17m 25s\n562:\tlearn: 0.0073279\ttotal: 22m 23s\tremaining: 17m 22s\n563:\tlearn: 0.0073278\ttotal: 22m 25s\tremaining: 17m 19s\n564:\tlearn: 0.0073277\ttotal: 22m 27s\tremaining: 17m 17s\n565:\tlearn: 0.0073274\ttotal: 22m 29s\tremaining: 17m 14s\n566:\tlearn: 0.0073269\ttotal: 22m 31s\tremaining: 17m 12s\n567:\tlearn: 0.0073263\ttotal: 22m 34s\tremaining: 17m 10s\n568:\tlearn: 0.0073261\ttotal: 22m 37s\tremaining: 17m 8s\n569:\tlearn: 0.0073258\ttotal: 22m 39s\tremaining: 17m 5s\n570:\tlearn: 0.0073257\ttotal: 22m 41s\tremaining: 17m 2s\n571:\tlearn: 0.0073255\ttotal: 22m 43s\tremaining: 16m 59s\n572:\tlearn: 0.0073247\ttotal: 22m 45s\tremaining: 16m 57s\n573:\tlearn: 0.0073247\ttotal: 22m 47s\tremaining: 16m 54s\n574:\tlearn: 0.0073241\ttotal: 22m 49s\tremaining: 16m 52s\n575:\tlearn: 0.0073238\ttotal: 22m 53s\tremaining: 16m 50s\n576:\tlearn: 0.0073233\ttotal: 22m 55s\tremaining: 16m 48s\n577:\tlearn: 0.0073228\ttotal: 22m 58s\tremaining: 16m 46s\n578:\tlearn: 0.0073224\ttotal: 23m\tremaining: 16m 43s\n579:\tlearn: 0.0073222\ttotal: 23m 2s\tremaining: 16m 41s\n580:\tlearn: 0.0073221\ttotal: 23m 5s\tremaining: 16m 38s\n581:\tlearn: 0.0073219\ttotal: 23m 7s\tremaining: 16m 36s\n582:\tlearn: 0.0073218\ttotal: 23m 9s\tremaining: 16m 33s\n583:\tlearn: 0.0073214\ttotal: 23m 11s\tremaining: 16m 31s\n584:\tlearn: 0.0073209\ttotal: 23m 14s\tremaining: 16m 29s\n585:\tlearn: 0.0073207\ttotal: 23m 17s\tremaining: 16m 26s\n586:\tlearn: 0.0073203\ttotal: 23m 20s\tremaining: 16m 25s\n587:\tlearn: 0.0073201\ttotal: 23m 23s\tremaining: 16m 23s\n588:\tlearn: 0.0073194\ttotal: 23m 25s\tremaining: 16m 20s\n589:\tlearn: 0.0073193\ttotal: 23m 28s\tremaining: 16m 18s\n590:\tlearn: 0.0073191\ttotal: 23m 30s\tremaining: 16m 15s\n591:\tlearn: 0.0073188\ttotal: 23m 32s\tremaining: 16m 13s\n592:\tlearn: 0.0073185\ttotal: 23m 34s\tremaining: 16m 10s\n593:\tlearn: 0.0073176\ttotal: 23m 37s\tremaining: 16m 8s\n594:\tlearn: 0.0073170\ttotal: 23m 39s\tremaining: 16m 6s\n595:\tlearn: 0.0073168\ttotal: 23m 42s\tremaining: 16m 3s\n596:\tlearn: 0.0073159\ttotal: 23m 44s\tremaining: 16m 1s\n597:\tlearn: 0.0073157\ttotal: 23m 46s\tremaining: 15m 58s\n598:\tlearn: 0.0073155\ttotal: 23m 48s\tremaining: 15m 56s\n599:\tlearn: 0.0073150\ttotal: 23m 50s\tremaining: 15m 53s\n600:\tlearn: 0.0073147\ttotal: 23m 52s\tremaining: 15m 51s\n601:\tlearn: 0.0073143\ttotal: 23m 55s\tremaining: 
15m 48s\n602:\tlearn: 0.0073142\ttotal: 23m 57s\tremaining: 15m 46s\n603:\tlearn: 0.0073142\ttotal: 24m\tremaining: 15m 44s\n604:\tlearn: 0.0073141\ttotal: 24m 1s\tremaining: 15m 41s\n605:\tlearn: 0.0073137\ttotal: 24m 3s\tremaining: 15m 38s\n606:\tlearn: 0.0073134\ttotal: 24m 6s\tremaining: 15m 36s\n607:\tlearn: 0.0073129\ttotal: 24m 8s\tremaining: 15m 34s\n608:\tlearn: 0.0073124\ttotal: 24m 11s\tremaining: 15m 31s\n609:\tlearn: 0.0073123\ttotal: 24m 13s\tremaining: 15m 29s\n610:\tlearn: 0.0073121\ttotal: 24m 16s\tremaining: 15m 27s\n611:\tlearn: 0.0073119\ttotal: 24m 19s\tremaining: 15m 25s\n612:\tlearn: 0.0073116\ttotal: 24m 21s\tremaining: 15m 22s\n613:\tlearn: 0.0073111\ttotal: 24m 24s\tremaining: 15m 20s\n614:\tlearn: 0.0073109\ttotal: 24m 27s\tremaining: 15m 18s\n615:\tlearn: 0.0073104\ttotal: 24m 29s\tremaining: 15m 15s\n616:\tlearn: 0.0073100\ttotal: 24m 31s\tremaining: 15m 13s\n617:\tlearn: 0.0073095\ttotal: 24m 32s\tremaining: 15m 10s\n618:\tlearn: 0.0073089\ttotal: 24m 34s\tremaining: 15m 7s\n619:\tlearn: 0.0073080\ttotal: 24m 36s\tremaining: 15m 5s\n620:\tlearn: 0.0073078\ttotal: 24m 39s\tremaining: 15m 2s\n621:\tlearn: 0.0073075\ttotal: 24m 40s\tremaining: 15m\n622:\tlearn: 0.0073070\ttotal: 24m 43s\tremaining: 14m 57s\n623:\tlearn: 0.0073068\ttotal: 24m 45s\tremaining: 14m 55s\n624:\tlearn: 0.0073064\ttotal: 24m 47s\tremaining: 14m 52s\n625:\tlearn: 0.0073056\ttotal: 24m 49s\tremaining: 14m 50s\n626:\tlearn: 0.0073054\ttotal: 24m 51s\tremaining: 14m 47s\n627:\tlearn: 0.0073052\ttotal: 24m 53s\tremaining: 14m 44s\n628:\tlearn: 0.0073043\ttotal: 24m 56s\tremaining: 14m 42s\n629:\tlearn: 0.0073042\ttotal: 24m 57s\tremaining: 14m 39s\n630:\tlearn: 0.0073038\ttotal: 24m 59s\tremaining: 14m 37s\n631:\tlearn: 0.0073037\ttotal: 25m 1s\tremaining: 14m 34s\n632:\tlearn: 0.0073033\ttotal: 25m 4s\tremaining: 14m 32s\n633:\tlearn: 0.0073032\ttotal: 25m 6s\tremaining: 14m 29s\n634:\tlearn: 0.0073027\ttotal: 25m 9s\tremaining: 14m 27s\n635:\tlearn: 0.0073025\ttotal: 25m 11s\tremaining: 14m 25s\n636:\tlearn: 0.0073019\ttotal: 25m 13s\tremaining: 14m 22s\n637:\tlearn: 0.0073016\ttotal: 25m 17s\tremaining: 14m 20s\n638:\tlearn: 0.0073010\ttotal: 25m 19s\tremaining: 14m 18s\n639:\tlearn: 0.0073004\ttotal: 25m 21s\tremaining: 14m 15s\n640:\tlearn: 0.0073001\ttotal: 25m 23s\tremaining: 14m 13s\n641:\tlearn: 0.0072998\ttotal: 25m 26s\tremaining: 14m 11s\n642:\tlearn: 0.0072995\ttotal: 25m 28s\tremaining: 14m 8s\n643:\tlearn: 0.0072992\ttotal: 25m 30s\tremaining: 14m 6s\n644:\tlearn: 0.0072987\ttotal: 25m 32s\tremaining: 14m 3s\n645:\tlearn: 0.0072983\ttotal: 25m 35s\tremaining: 14m 1s\n646:\tlearn: 0.0072981\ttotal: 25m 37s\tremaining: 13m 58s\n647:\tlearn: 0.0072979\ttotal: 25m 39s\tremaining: 13m 56s\n648:\tlearn: 0.0072977\ttotal: 25m 41s\tremaining: 13m 53s\n649:\tlearn: 0.0072972\ttotal: 25m 43s\tremaining: 13m 51s\n650:\tlearn: 0.0072969\ttotal: 25m 47s\tremaining: 13m 49s\n651:\tlearn: 0.0072964\ttotal: 25m 49s\tremaining: 13m 47s\n652:\tlearn: 0.0072962\ttotal: 25m 51s\tremaining: 13m 44s\n653:\tlearn: 0.0072959\ttotal: 25m 54s\tremaining: 13m 42s\n654:\tlearn: 0.0072956\ttotal: 25m 56s\tremaining: 13m 39s\n655:\tlearn: 0.0072952\ttotal: 25m 58s\tremaining: 13m 37s\n656:\tlearn: 0.0072950\ttotal: 26m\tremaining: 13m 34s\n657:\tlearn: 0.0072947\ttotal: 26m 3s\tremaining: 13m 32s\n658:\tlearn: 0.0072944\ttotal: 26m 5s\tremaining: 13m 30s\n659:\tlearn: 0.0072943\ttotal: 26m 8s\tremaining: 13m 27s\n660:\tlearn: 0.0072939\ttotal: 26m 10s\tremaining: 13m 25s\n661:\tlearn: 0.0072929\ttotal: 26m 
13s\tremaining: 13m 23s\n662:\tlearn: 0.0072928\ttotal: 26m 16s\tremaining: 13m 21s\n663:\tlearn: 0.0072924\ttotal: 26m 18s\tremaining: 13m 18s\n664:\tlearn: 0.0072920\ttotal: 26m 21s\tremaining: 13m 16s\n665:\tlearn: 0.0072916\ttotal: 26m 23s\tremaining: 13m 14s\n666:\tlearn: 0.0072916\ttotal: 26m 25s\tremaining: 13m 11s\n667:\tlearn: 0.0072904\ttotal: 26m 28s\tremaining: 13m 9s\n668:\tlearn: 0.0072900\ttotal: 26m 31s\tremaining: 13m 7s\n669:\tlearn: 0.0072896\ttotal: 26m 34s\tremaining: 13m 5s\n670:\tlearn: 0.0072895\ttotal: 26m 36s\tremaining: 13m 2s\n671:\tlearn: 0.0072892\ttotal: 26m 39s\tremaining: 13m\n672:\tlearn: 0.0072889\ttotal: 26m 41s\tremaining: 12m 58s\n673:\tlearn: 0.0072885\ttotal: 26m 44s\tremaining: 12m 56s\n674:\tlearn: 0.0072883\ttotal: 26m 47s\tremaining: 12m 54s\n675:\tlearn: 0.0072880\ttotal: 26m 50s\tremaining: 12m 51s\n676:\tlearn: 0.0072874\ttotal: 26m 52s\tremaining: 12m 49s\n677:\tlearn: 0.0072872\ttotal: 26m 55s\tremaining: 12m 47s\n678:\tlearn: 0.0072860\ttotal: 26m 57s\tremaining: 12m 44s\n679:\tlearn: 0.0072858\ttotal: 26m 59s\tremaining: 12m 42s\n680:\tlearn: 0.0072856\ttotal: 27m 2s\tremaining: 12m 40s\n681:\tlearn: 0.0072855\ttotal: 27m 5s\tremaining: 12m 37s\n682:\tlearn: 0.0072851\ttotal: 27m 7s\tremaining: 12m 35s\n683:\tlearn: 0.0072845\ttotal: 27m 9s\tremaining: 12m 32s\n684:\tlearn: 0.0072841\ttotal: 27m 11s\tremaining: 12m 30s\n685:\tlearn: 0.0072839\ttotal: 27m 14s\tremaining: 12m 28s\n686:\tlearn: 0.0072834\ttotal: 27m 16s\tremaining: 12m 25s\n687:\tlearn: 0.0072834\ttotal: 27m 19s\tremaining: 12m 23s\n688:\tlearn: 0.0072830\ttotal: 27m 21s\tremaining: 12m 21s\n689:\tlearn: 0.0072826\ttotal: 27m 23s\tremaining: 12m 18s\n690:\tlearn: 0.0072824\ttotal: 27m 25s\tremaining: 12m 15s\n691:\tlearn: 0.0072821\ttotal: 27m 28s\tremaining: 12m 13s\n692:\tlearn: 0.0072818\ttotal: 27m 30s\tremaining: 12m 10s\n693:\tlearn: 0.0072811\ttotal: 27m 32s\tremaining: 12m 8s\n694:\tlearn: 0.0072810\ttotal: 27m 33s\tremaining: 12m 5s\n695:\tlearn: 0.0072808\ttotal: 27m 35s\tremaining: 12m 3s\n696:\tlearn: 0.0072805\ttotal: 27m 37s\tremaining: 12m\n697:\tlearn: 0.0072802\ttotal: 27m 39s\tremaining: 11m 58s\n698:\tlearn: 0.0072799\ttotal: 27m 41s\tremaining: 11m 55s\n699:\tlearn: 0.0072798\ttotal: 27m 43s\tremaining: 11m 52s\n700:\tlearn: 0.0072796\ttotal: 27m 45s\tremaining: 11m 50s\n701:\tlearn: 0.0072793\ttotal: 27m 47s\tremaining: 11m 47s\n702:\tlearn: 0.0072787\ttotal: 27m 50s\tremaining: 11m 45s\n703:\tlearn: 0.0072783\ttotal: 27m 52s\tremaining: 11m 43s\n704:\tlearn: 0.0072780\ttotal: 27m 54s\tremaining: 11m 40s\n705:\tlearn: 0.0072776\ttotal: 27m 56s\tremaining: 11m 38s\n706:\tlearn: 0.0072767\ttotal: 27m 58s\tremaining: 11m 35s\n707:\tlearn: 0.0072758\ttotal: 28m 1s\tremaining: 11m 33s\n708:\tlearn: 0.0072750\ttotal: 28m 3s\tremaining: 11m 31s\n709:\tlearn: 0.0072746\ttotal: 28m 6s\tremaining: 11m 29s\n710:\tlearn: 0.0072741\ttotal: 28m 9s\tremaining: 11m 26s\n711:\tlearn: 0.0072739\ttotal: 28m 12s\tremaining: 11m 24s\n712:\tlearn: 0.0072736\ttotal: 28m 14s\tremaining: 11m 21s\n713:\tlearn: 0.0072731\ttotal: 28m 16s\tremaining: 11m 19s\n714:\tlearn: 0.0072726\ttotal: 28m 18s\tremaining: 11m 16s\n715:\tlearn: 0.0072714\ttotal: 28m 21s\tremaining: 11m 14s\n716:\tlearn: 0.0072705\ttotal: 28m 23s\tremaining: 11m 12s\n717:\tlearn: 0.0072702\ttotal: 28m 25s\tremaining: 11m 10s\n718:\tlearn: 0.0072694\ttotal: 28m 28s\tremaining: 11m 7s\n719:\tlearn: 0.0072693\ttotal: 28m 30s\tremaining: 11m 5s\n720:\tlearn: 0.0072692\ttotal: 28m 31s\tremaining: 11m 2s\n721:\tlearn: 
0.0072691\ttotal: 28m 33s\tremaining: 10m 59s\n722:\tlearn: 0.0072683\ttotal: 28m 35s\tremaining: 10m 57s\n723:\tlearn: 0.0072680\ttotal: 28m 38s\tremaining: 10m 55s\n724:\tlearn: 0.0072677\ttotal: 28m 40s\tremaining: 10m 52s\n725:\tlearn: 0.0072673\ttotal: 28m 42s\tremaining: 10m 50s\n726:\tlearn: 0.0072668\ttotal: 28m 44s\tremaining: 10m 47s\n727:\tlearn: 0.0072665\ttotal: 28m 47s\tremaining: 10m 45s\n728:\tlearn: 0.0072661\ttotal: 28m 49s\tremaining: 10m 43s\n729:\tlearn: 0.0072659\ttotal: 28m 52s\tremaining: 10m 40s\n730:\tlearn: 0.0072654\ttotal: 28m 54s\tremaining: 10m 38s\n731:\tlearn: 0.0072650\ttotal: 28m 56s\tremaining: 10m 35s\n732:\tlearn: 0.0072649\ttotal: 28m 58s\tremaining: 10m 33s\n733:\tlearn: 0.0072647\ttotal: 29m\tremaining: 10m 30s\n734:\tlearn: 0.0072643\ttotal: 29m 2s\tremaining: 10m 28s\n735:\tlearn: 0.0072640\ttotal: 29m 5s\tremaining: 10m 25s\n736:\tlearn: 0.0072637\ttotal: 29m 7s\tremaining: 10m 23s\n737:\tlearn: 0.0072635\ttotal: 29m 9s\tremaining: 10m 21s\n738:\tlearn: 0.0072632\ttotal: 29m 11s\tremaining: 10m 18s\n739:\tlearn: 0.0072630\ttotal: 29m 13s\tremaining: 10m 16s\n740:\tlearn: 0.0072628\ttotal: 29m 15s\tremaining: 10m 13s\n741:\tlearn: 0.0072618\ttotal: 29m 17s\tremaining: 10m 11s\n742:\tlearn: 0.0072613\ttotal: 29m 19s\tremaining: 10m 8s\n743:\tlearn: 0.0072612\ttotal: 29m 21s\tremaining: 10m 6s\n744:\tlearn: 0.0072608\ttotal: 29m 23s\tremaining: 10m 3s\n745:\tlearn: 0.0072604\ttotal: 29m 25s\tremaining: 10m 1s\n746:\tlearn: 0.0072602\ttotal: 29m 27s\tremaining: 9m 58s\n747:\tlearn: 0.0072599\ttotal: 29m 28s\tremaining: 9m 55s\n748:\tlearn: 0.0072596\ttotal: 29m 31s\tremaining: 9m 53s\n749:\tlearn: 0.0072593\ttotal: 29m 33s\tremaining: 9m 51s\n750:\tlearn: 0.0072591\ttotal: 29m 35s\tremaining: 9m 48s\n751:\tlearn: 0.0072589\ttotal: 29m 37s\tremaining: 9m 46s\n752:\tlearn: 0.0072581\ttotal: 29m 39s\tremaining: 9m 43s\n753:\tlearn: 0.0072577\ttotal: 29m 42s\tremaining: 9m 41s\n754:\tlearn: 0.0072576\ttotal: 29m 43s\tremaining: 9m 38s\n755:\tlearn: 0.0072575\ttotal: 29m 45s\tremaining: 9m 36s\n756:\tlearn: 0.0072573\ttotal: 29m 47s\tremaining: 9m 33s\n757:\tlearn: 0.0072572\ttotal: 29m 49s\tremaining: 9m 31s\n758:\tlearn: 0.0072569\ttotal: 29m 51s\tremaining: 9m 28s\n759:\tlearn: 0.0072565\ttotal: 29m 53s\tremaining: 9m 26s\n760:\tlearn: 0.0072562\ttotal: 29m 55s\tremaining: 9m 23s\n761:\tlearn: 0.0072544\ttotal: 29m 56s\tremaining: 9m 21s\n762:\tlearn: 0.0072541\ttotal: 29m 59s\tremaining: 9m 18s\n763:\tlearn: 0.0072539\ttotal: 30m\tremaining: 9m 16s\n764:\tlearn: 0.0072538\ttotal: 30m 2s\tremaining: 9m 13s\n765:\tlearn: 0.0072533\ttotal: 30m 4s\tremaining: 9m 11s\n766:\tlearn: 0.0072530\ttotal: 30m 6s\tremaining: 9m 8s\n767:\tlearn: 0.0072526\ttotal: 30m 8s\tremaining: 9m 6s\n768:\tlearn: 0.0072522\ttotal: 30m 10s\tremaining: 9m 3s\n769:\tlearn: 0.0072520\ttotal: 30m 12s\tremaining: 9m 1s\n770:\tlearn: 0.0072518\ttotal: 30m 15s\tremaining: 8m 59s\n771:\tlearn: 0.0072514\ttotal: 30m 17s\tremaining: 8m 56s\n772:\tlearn: 0.0072514\ttotal: 30m 19s\tremaining: 8m 54s\n773:\tlearn: 0.0072507\ttotal: 30m 21s\tremaining: 8m 51s\n774:\tlearn: 0.0072502\ttotal: 30m 24s\tremaining: 8m 49s\n775:\tlearn: 0.0072494\ttotal: 30m 26s\tremaining: 8m 47s\n776:\tlearn: 0.0072490\ttotal: 30m 28s\tremaining: 8m 44s\n777:\tlearn: 0.0072488\ttotal: 30m 30s\tremaining: 8m 42s\n778:\tlearn: 0.0072486\ttotal: 30m 32s\tremaining: 8m 39s\n779:\tlearn: 0.0072482\ttotal: 30m 35s\tremaining: 8m 37s\n780:\tlearn: 0.0072473\ttotal: 30m 37s\tremaining: 8m 35s\n781:\tlearn: 
0.0072472\ttotal: 30m 39s\tremaining: 8m 32s\n782:\tlearn: 0.0072472\ttotal: 30m 41s\tremaining: 8m 30s\n783:\tlearn: 0.0072463\ttotal: 30m 43s\tremaining: 8m 27s\n784:\tlearn: 0.0072460\ttotal: 30m 45s\tremaining: 8m 25s\n785:\tlearn: 0.0072458\ttotal: 30m 47s\tremaining: 8m 23s\n786:\tlearn: 0.0072455\ttotal: 30m 50s\tremaining: 8m 20s\n787:\tlearn: 0.0072446\ttotal: 30m 52s\tremaining: 8m 18s\n788:\tlearn: 0.0072443\ttotal: 30m 54s\tremaining: 8m 16s\n789:\tlearn: 0.0072441\ttotal: 30m 57s\tremaining: 8m 13s\n790:\tlearn: 0.0072439\ttotal: 30m 59s\tremaining: 8m 11s\n791:\tlearn: 0.0072437\ttotal: 31m 1s\tremaining: 8m 8s\n792:\tlearn: 0.0072430\ttotal: 31m 3s\tremaining: 8m 6s\n793:\tlearn: 0.0072429\ttotal: 31m 6s\tremaining: 8m 4s\n794:\tlearn: 0.0072426\ttotal: 31m 8s\tremaining: 8m 1s\n795:\tlearn: 0.0072424\ttotal: 31m 10s\tremaining: 7m 59s\n796:\tlearn: 0.0072417\ttotal: 31m 12s\tremaining: 7m 57s\n797:\tlearn: 0.0072414\ttotal: 31m 15s\tremaining: 7m 54s\n798:\tlearn: 0.0072411\ttotal: 31m 17s\tremaining: 7m 52s\n799:\tlearn: 0.0072408\ttotal: 31m 19s\tremaining: 7m 49s\n800:\tlearn: 0.0072401\ttotal: 31m 22s\tremaining: 7m 47s\n801:\tlearn: 0.0072397\ttotal: 31m 24s\tremaining: 7m 45s\n802:\tlearn: 0.0072394\ttotal: 31m 26s\tremaining: 7m 42s\n803:\tlearn: 0.0072392\ttotal: 31m 28s\tremaining: 7m 40s\n804:\tlearn: 0.0072384\ttotal: 31m 30s\tremaining: 7m 37s\n805:\tlearn: 0.0072382\ttotal: 31m 32s\tremaining: 7m 35s\n806:\tlearn: 0.0072380\ttotal: 31m 35s\tremaining: 7m 33s\n807:\tlearn: 0.0072379\ttotal: 31m 37s\tremaining: 7m 30s\n808:\tlearn: 0.0072377\ttotal: 31m 40s\tremaining: 7m 28s\n809:\tlearn: 0.0072375\ttotal: 31m 42s\tremaining: 7m 26s\n810:\tlearn: 0.0072373\ttotal: 31m 44s\tremaining: 7m 23s\n811:\tlearn: 0.0072369\ttotal: 31m 46s\tremaining: 7m 21s\n812:\tlearn: 0.0072366\ttotal: 31m 49s\tremaining: 7m 19s\n813:\tlearn: 0.0072357\ttotal: 31m 52s\tremaining: 7m 16s\n814:\tlearn: 0.0072355\ttotal: 31m 54s\tremaining: 7m 14s\n815:\tlearn: 0.0072353\ttotal: 31m 56s\tremaining: 7m 12s\n816:\tlearn: 0.0072350\ttotal: 31m 58s\tremaining: 7m 9s\n817:\tlearn: 0.0072346\ttotal: 32m\tremaining: 7m 7s\n818:\tlearn: 0.0072344\ttotal: 32m 2s\tremaining: 7m 4s\n819:\tlearn: 0.0072342\ttotal: 32m 4s\tremaining: 7m 2s\n820:\tlearn: 0.0072340\ttotal: 32m 6s\tremaining: 7m\n821:\tlearn: 0.0072338\ttotal: 32m 9s\tremaining: 6m 57s\n822:\tlearn: 0.0072336\ttotal: 32m 11s\tremaining: 6m 55s\n823:\tlearn: 0.0072333\ttotal: 32m 14s\tremaining: 6m 53s\n824:\tlearn: 0.0072331\ttotal: 32m 17s\tremaining: 6m 50s\n825:\tlearn: 0.0072328\ttotal: 32m 19s\tremaining: 6m 48s\n826:\tlearn: 0.0072323\ttotal: 32m 20s\tremaining: 6m 45s\n827:\tlearn: 0.0072310\ttotal: 32m 23s\tremaining: 6m 43s\n828:\tlearn: 0.0072307\ttotal: 32m 25s\tremaining: 6m 41s\n829:\tlearn: 0.0072305\ttotal: 32m 27s\tremaining: 6m 38s\n830:\tlearn: 0.0072303\ttotal: 32m 30s\tremaining: 6m 36s\n831:\tlearn: 0.0072300\ttotal: 32m 32s\tremaining: 6m 34s\n832:\tlearn: 0.0072298\ttotal: 32m 34s\tremaining: 6m 31s\n833:\tlearn: 0.0072294\ttotal: 32m 36s\tremaining: 6m 29s\n834:\tlearn: 0.0072290\ttotal: 32m 38s\tremaining: 6m 27s\n835:\tlearn: 0.0072287\ttotal: 32m 40s\tremaining: 6m 24s\n836:\tlearn: 0.0072284\ttotal: 32m 42s\tremaining: 6m 22s\n837:\tlearn: 0.0072283\ttotal: 32m 45s\tremaining: 6m 19s\n838:\tlearn: 0.0072280\ttotal: 32m 47s\tremaining: 6m 17s\n839:\tlearn: 0.0072275\ttotal: 32m 49s\tremaining: 6m 15s\n840:\tlearn: 0.0072271\ttotal: 32m 51s\tremaining: 6m 12s\n841:\tlearn: 0.0072265\ttotal: 32m 53s\tremaining: 
6m 10s\n842:\tlearn: 0.0072264\ttotal: 32m 55s\tremaining: 6m 7s\n843:\tlearn: 0.0072262\ttotal: 32m 57s\tremaining: 6m 5s\n844:\tlearn: 0.0072261\ttotal: 33m\tremaining: 6m 3s\n845:\tlearn: 0.0072258\ttotal: 33m 2s\tremaining: 6m\n846:\tlearn: 0.0072256\ttotal: 33m 4s\tremaining: 5m 58s\n847:\tlearn: 0.0072255\ttotal: 33m 5s\tremaining: 5m 55s\n848:\tlearn: 0.0072252\ttotal: 33m 8s\tremaining: 5m 53s\n849:\tlearn: 0.0072250\ttotal: 33m 10s\tremaining: 5m 51s\n850:\tlearn: 0.0072246\ttotal: 33m 12s\tremaining: 5m 48s\n851:\tlearn: 0.0072242\ttotal: 33m 14s\tremaining: 5m 46s\n852:\tlearn: 0.0072241\ttotal: 33m 16s\tremaining: 5m 44s\n853:\tlearn: 0.0072236\ttotal: 33m 18s\tremaining: 5m 41s\n854:\tlearn: 0.0072235\ttotal: 33m 21s\tremaining: 5m 39s\n855:\tlearn: 0.0072231\ttotal: 33m 23s\tremaining: 5m 37s\n856:\tlearn: 0.0072227\ttotal: 33m 26s\tremaining: 5m 34s\n857:\tlearn: 0.0072225\ttotal: 33m 29s\tremaining: 5m 32s\n858:\tlearn: 0.0072224\ttotal: 33m 31s\tremaining: 5m 30s\n859:\tlearn: 0.0072221\ttotal: 33m 33s\tremaining: 5m 27s\n860:\tlearn: 0.0072218\ttotal: 33m 36s\tremaining: 5m 25s\n861:\tlearn: 0.0072214\ttotal: 33m 39s\tremaining: 5m 23s\n862:\tlearn: 0.0072214\ttotal: 33m 41s\tremaining: 5m 20s\n863:\tlearn: 0.0072211\ttotal: 33m 44s\tremaining: 5m 18s\n864:\tlearn: 0.0072210\ttotal: 33m 46s\tremaining: 5m 16s\n865:\tlearn: 0.0072207\ttotal: 33m 48s\tremaining: 5m 13s\n866:\tlearn: 0.0072206\ttotal: 33m 51s\tremaining: 5m 11s\n867:\tlearn: 0.0072204\ttotal: 33m 53s\tremaining: 5m 9s\n868:\tlearn: 0.0072203\ttotal: 33m 55s\tremaining: 5m 6s\n869:\tlearn: 0.0072202\ttotal: 33m 57s\tremaining: 5m 4s\n870:\tlearn: 0.0072200\ttotal: 33m 59s\tremaining: 5m 2s\n871:\tlearn: 0.0072196\ttotal: 34m 2s\tremaining: 4m 59s\n872:\tlearn: 0.0072193\ttotal: 34m 4s\tremaining: 4m 57s\n873:\tlearn: 0.0072191\ttotal: 34m 6s\tremaining: 4m 54s\n874:\tlearn: 0.0072189\ttotal: 34m 8s\tremaining: 4m 52s\n875:\tlearn: 0.0072185\ttotal: 34m 10s\tremaining: 4m 50s\n876:\tlearn: 0.0072184\ttotal: 34m 12s\tremaining: 4m 47s\n877:\tlearn: 0.0072182\ttotal: 34m 14s\tremaining: 4m 45s\n878:\tlearn: 0.0072178\ttotal: 34m 16s\tremaining: 4m 43s\n879:\tlearn: 0.0072176\ttotal: 34m 18s\tremaining: 4m 40s\n880:\tlearn: 0.0072175\ttotal: 34m 20s\tremaining: 4m 38s\n881:\tlearn: 0.0072172\ttotal: 34m 22s\tremaining: 4m 35s\n882:\tlearn: 0.0072169\ttotal: 34m 24s\tremaining: 4m 33s\n883:\tlearn: 0.0072167\ttotal: 34m 26s\tremaining: 4m 31s\n884:\tlearn: 0.0072165\ttotal: 34m 28s\tremaining: 4m 28s\n885:\tlearn: 0.0072157\ttotal: 34m 30s\tremaining: 4m 26s\n886:\tlearn: 0.0072154\ttotal: 34m 33s\tremaining: 4m 24s\n887:\tlearn: 0.0072149\ttotal: 34m 35s\tremaining: 4m 21s\n888:\tlearn: 0.0072148\ttotal: 34m 37s\tremaining: 4m 19s\n889:\tlearn: 0.0072144\ttotal: 34m 39s\tremaining: 4m 17s\n890:\tlearn: 0.0072140\ttotal: 34m 42s\tremaining: 4m 14s\n891:\tlearn: 0.0072137\ttotal: 34m 44s\tremaining: 4m 12s\n892:\tlearn: 0.0072134\ttotal: 34m 46s\tremaining: 4m 9s\n893:\tlearn: 0.0072131\ttotal: 34m 48s\tremaining: 4m 7s\n894:\tlearn: 0.0072130\ttotal: 34m 50s\tremaining: 4m 5s\n895:\tlearn: 0.0072127\ttotal: 34m 52s\tremaining: 4m 2s\n896:\tlearn: 0.0072125\ttotal: 34m 53s\tremaining: 4m\n897:\tlearn: 0.0072121\ttotal: 34m 56s\tremaining: 3m 58s\n898:\tlearn: 0.0072120\ttotal: 34m 58s\tremaining: 3m 55s\n899:\tlearn: 0.0072117\ttotal: 35m\tremaining: 3m 53s\n900:\tlearn: 0.0072113\ttotal: 35m 2s\tremaining: 3m 51s\n901:\tlearn: 0.0072111\ttotal: 35m 5s\tremaining: 3m 48s\n902:\tlearn: 0.0072109\ttotal: 35m 
7s\tremaining: 3m 46s\n903:\tlearn: 0.0072108\ttotal: 35m 9s\tremaining: 3m 43s\n904:\tlearn: 0.0072107\ttotal: 35m 11s\tremaining: 3m 41s\n905:\tlearn: 0.0072104\ttotal: 35m 13s\tremaining: 3m 39s\n906:\tlearn: 0.0072102\ttotal: 35m 15s\tremaining: 3m 36s\n907:\tlearn: 0.0072092\ttotal: 35m 17s\tremaining: 3m 34s\n908:\tlearn: 0.0072090\ttotal: 35m 19s\tremaining: 3m 32s\n909:\tlearn: 0.0072088\ttotal: 35m 20s\tremaining: 3m 29s\n910:\tlearn: 0.0072084\ttotal: 35m 22s\tremaining: 3m 27s\n911:\tlearn: 0.0072082\ttotal: 35m 24s\tremaining: 3m 25s\n912:\tlearn: 0.0072080\ttotal: 35m 26s\tremaining: 3m 22s\n913:\tlearn: 0.0072079\ttotal: 35m 28s\tremaining: 3m 20s\n914:\tlearn: 0.0072076\ttotal: 35m 30s\tremaining: 3m 17s\n915:\tlearn: 0.0072068\ttotal: 35m 32s\tremaining: 3m 15s\n916:\tlearn: 0.0072063\ttotal: 35m 35s\tremaining: 3m 13s\n917:\tlearn: 0.0072061\ttotal: 35m 37s\tremaining: 3m 10s\n918:\tlearn: 0.0072059\ttotal: 35m 39s\tremaining: 3m 8s\n919:\tlearn: 0.0072057\ttotal: 35m 41s\tremaining: 3m 6s\n920:\tlearn: 0.0072050\ttotal: 35m 44s\tremaining: 3m 3s\n921:\tlearn: 0.0072045\ttotal: 35m 46s\tremaining: 3m 1s\n922:\tlearn: 0.0072043\ttotal: 35m 49s\tremaining: 2m 59s\n923:\tlearn: 0.0072041\ttotal: 35m 52s\tremaining: 2m 57s\n924:\tlearn: 0.0072038\ttotal: 35m 54s\tremaining: 2m 54s\n925:\tlearn: 0.0072035\ttotal: 35m 56s\tremaining: 2m 52s\n926:\tlearn: 0.0072032\ttotal: 35m 58s\tremaining: 2m 49s\n927:\tlearn: 0.0072029\ttotal: 36m\tremaining: 2m 47s\n928:\tlearn: 0.0072029\ttotal: 36m 2s\tremaining: 2m 45s\n929:\tlearn: 0.0072026\ttotal: 36m 4s\tremaining: 2m 42s\n930:\tlearn: 0.0072023\ttotal: 36m 7s\tremaining: 2m 40s\n931:\tlearn: 0.0072018\ttotal: 36m 10s\tremaining: 2m 38s\n932:\tlearn: 0.0072012\ttotal: 36m 12s\tremaining: 2m 36s\n933:\tlearn: 0.0072011\ttotal: 36m 15s\tremaining: 2m 33s\n934:\tlearn: 0.0072000\ttotal: 36m 18s\tremaining: 2m 31s\n935:\tlearn: 0.0071997\ttotal: 36m 21s\tremaining: 2m 29s\n936:\tlearn: 0.0071996\ttotal: 36m 24s\tremaining: 2m 26s\n937:\tlearn: 0.0071995\ttotal: 36m 27s\tremaining: 2m 24s\n938:\tlearn: 0.0071994\ttotal: 36m 30s\tremaining: 2m 22s\n939:\tlearn: 0.0071992\ttotal: 36m 32s\tremaining: 2m 19s\n940:\tlearn: 0.0071991\ttotal: 36m 34s\tremaining: 2m 17s\n941:\tlearn: 0.0071989\ttotal: 36m 37s\tremaining: 2m 15s\n942:\tlearn: 0.0071986\ttotal: 36m 39s\tremaining: 2m 12s\n943:\tlearn: 0.0071983\ttotal: 36m 42s\tremaining: 2m 10s\n944:\tlearn: 0.0071982\ttotal: 36m 44s\tremaining: 2m 8s\n945:\tlearn: 0.0071978\ttotal: 36m 46s\tremaining: 2m 5s\n946:\tlearn: 0.0071977\ttotal: 36m 48s\tremaining: 2m 3s\n947:\tlearn: 0.0071975\ttotal: 36m 50s\tremaining: 2m 1s\n948:\tlearn: 0.0071971\ttotal: 36m 52s\tremaining: 1m 58s\n949:\tlearn: 0.0071966\ttotal: 36m 55s\tremaining: 1m 56s\n950:\tlearn: 0.0071958\ttotal: 36m 57s\tremaining: 1m 54s\n951:\tlearn: 0.0071957\ttotal: 36m 59s\tremaining: 1m 51s\n952:\tlearn: 0.0071953\ttotal: 37m 1s\tremaining: 1m 49s\n953:\tlearn: 0.0071951\ttotal: 37m 3s\tremaining: 1m 47s\n954:\tlearn: 0.0071946\ttotal: 37m 6s\tremaining: 1m 44s\n955:\tlearn: 0.0071938\ttotal: 37m 8s\tremaining: 1m 42s\n956:\tlearn: 0.0071935\ttotal: 37m 10s\tremaining: 1m 40s\n957:\tlearn: 0.0071929\ttotal: 37m 13s\tremaining: 1m 37s\n958:\tlearn: 0.0071926\ttotal: 37m 15s\tremaining: 1m 35s\n959:\tlearn: 0.0071924\ttotal: 37m 18s\tremaining: 1m 33s\n960:\tlearn: 0.0071922\ttotal: 37m 20s\tremaining: 1m 30s\n961:\tlearn: 0.0071919\ttotal: 37m 23s\tremaining: 1m 28s\n962:\tlearn: 0.0071917\ttotal: 37m 26s\tremaining: 1m 
26s\n963:\tlearn: 0.0071913\ttotal: 37m 28s\tremaining: 1m 23s\n964:\tlearn: 0.0071911\ttotal: 37m 30s\tremaining: 1m 21s\n965:\tlearn: 0.0071910\ttotal: 37m 32s\tremaining: 1m 19s\n966:\tlearn: 0.0071906\ttotal: 37m 35s\tremaining: 1m 16s\n967:\tlearn: 0.0071903\ttotal: 37m 37s\tremaining: 1m 14s\n968:\tlearn: 0.0071901\ttotal: 37m 39s\tremaining: 1m 12s\n969:\tlearn: 0.0071897\ttotal: 37m 41s\tremaining: 1m 9s\n970:\tlearn: 0.0071894\ttotal: 37m 43s\tremaining: 1m 7s\n971:\tlearn: 0.0071888\ttotal: 37m 45s\tremaining: 1m 5s\n972:\tlearn: 0.0071886\ttotal: 37m 48s\tremaining: 1m 2s\n973:\tlearn: 0.0071881\ttotal: 37m 52s\tremaining: 1m\n974:\tlearn: 0.0071880\ttotal: 37m 54s\tremaining: 58.3s\n975:\tlearn: 0.0071878\ttotal: 37m 57s\tremaining: 56s\n976:\tlearn: 0.0071873\ttotal: 37m 59s\tremaining: 53.7s\n977:\tlearn: 0.0071872\ttotal: 38m 3s\tremaining: 51.4s\n978:\tlearn: 0.0071870\ttotal: 38m 5s\tremaining: 49s\n979:\tlearn: 0.0071869\ttotal: 38m 7s\tremaining: 46.7s\n980:\tlearn: 0.0071866\ttotal: 38m 9s\tremaining: 44.4s\n981:\tlearn: 0.0071864\ttotal: 38m 12s\tremaining: 42s\n982:\tlearn: 0.0071856\ttotal: 38m 15s\tremaining: 39.7s\n983:\tlearn: 0.0071850\ttotal: 38m 17s\tremaining: 37.4s\n984:\tlearn: 0.0071847\ttotal: 38m 19s\tremaining: 35s\n985:\tlearn: 0.0071845\ttotal: 38m 22s\tremaining: 32.7s\n986:\tlearn: 0.0071839\ttotal: 38m 24s\tremaining: 30.4s\n987:\tlearn: 0.0071837\ttotal: 38m 27s\tremaining: 28s\n988:\tlearn: 0.0071834\ttotal: 38m 28s\tremaining: 25.7s\n989:\tlearn: 0.0071831\ttotal: 38m 31s\tremaining: 23.3s\n990:\tlearn: 0.0071830\ttotal: 38m 33s\tremaining: 21s\n991:\tlearn: 0.0071829\ttotal: 38m 35s\tremaining: 18.7s\n992:\tlearn: 0.0071826\ttotal: 38m 37s\tremaining: 16.3s\n993:\tlearn: 0.0071824\ttotal: 38m 39s\tremaining: 14s\n994:\tlearn: 0.0071822\ttotal: 38m 41s\tremaining: 11.7s\n995:\tlearn: 0.0071821\ttotal: 38m 44s\tremaining: 9.33s\n996:\tlearn: 0.0071818\ttotal: 38m 46s\tremaining: 7s\n997:\tlearn: 0.0071818\ttotal: 38m 48s\tremaining: 4.67s\n998:\tlearn: 0.0071811\ttotal: 38m 50s\tremaining: 2.33s\n999:\tlearn: 0.0071809\ttotal: 38m 52s\tremaining: 0us\n"
],
[
"y_pred_prob = model.predict_proba(X_test)",
"_____no_output_____"
],
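[
"# Optional sanity check on the predictions; assumes y_pred_prob has shape (n, 2)\n# from predict_proba above, with column 1 holding P(is_attributed = 1).\nprint(y_pred_prob.shape)\nprint('positive-class probability min/mean/max:', y_pred_prob[:, 1].min(), y_pred_prob[:, 1].mean(), y_pred_prob[:, 1].max())",
"_____no_output_____"
],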
[
"gc.collect()\n\noutput = pd.DataFrame(test['click_id'])\noutput['is_attributed'] = y_pred_prob[:,1]\noutput = output.set_index('click_id')\n\noutput.to_csv(\"submission_stackF.csv\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
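"code",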
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04e4e9016e9205773510b8ef9d515c63733e57b | 44,361 | ipynb | Jupyter Notebook | executable/REST_example.ipynb | smartdatalake/best_region_search | cfcd21c242d478a47d5dddce601df7da3db8379e | [
"Apache-2.0"
] | 1 | 2020-07-28T14:59:29.000Z | 2020-07-28T14:59:29.000Z | executable/REST_example.ipynb | smartdatalake/best_region_search | cfcd21c242d478a47d5dddce601df7da3db8379e | [
"Apache-2.0"
] | null | null | null | executable/REST_example.ipynb | smartdatalake/best_region_search | cfcd21c242d478a47d5dddce601df7da3db8379e | [
"Apache-2.0"
] | null | null | null | 211.242857 | 38,651 | 0.931832 | [
[
[
"## Import libraries and define const values",
"_____no_output_____"
]
],
[
[
"import json\nimport folium\nfrom geopandas import GeoDataFrame\nfrom pysal.viz.mapclassify import Natural_Breaks\nimport requests\n\nid_field = 'id'\nvalue_field = 'score'\nnum_bins = 4\nfill_color = 'YlOrRd'\nfill_opacity = 0.9\nREST_API_ADDRESS= 'http://10.90.46.32:4646/'\nAlive_URL = REST_API_ADDRESS + 'alive'\nBRS_URL = REST_API_ADDRESS + 'BRS'\nFlush_URL = REST_API_ADDRESS + 'flushBuffer'\nChangeProteus_URL = REST_API_ADDRESS + 'changeProteus'",
"_____no_output_____"
]
],
[
[
"## Identify the areas where start-ups thrive",
"_____no_output_____"
]
],
[
[
"topk = 11 #\neps = 0.1 # We measure distance in radians, where 1 radian is around 100km, and epsilon is the length of each side of the region\nf = \"null\" # \ndist = True\nkeywordsColumn = \"flags\"\nkeywords = \"startup-registroimprese\"\nkeywordsColumn2 = \"\"\nkeywords2 = \"\"\ntable = \"BRSflags\"\n\ndata = {'topk' : topk, 'eps' : eps, 'f' : f, 'input' : table, \"keywordsColumn\" : keywordsColumn, \"keywords\" : keywords,\"keywordsColumn2\":keywordsColumn2,\"keywords2\":keywords2,\"dist\":dist}\nresponse = requests.get(BRS_URL, params=data)\nprint(response.text)\nres = json.loads(response.text)\nresults_geojson={\"type\":\"FeatureCollection\",\"features\":[]}\nfor region in res:\n results_geojson['features'].append({\"type\": \"Feature\", \"geometry\": { \"type\": \"Point\", \"coordinates\": region['center']},\n \"properties\": {\n \"id\": region['rank'],\n \"score\": region['score']\n }})",
"[\n{\n\"rank\":1,\n\"center\":[9.191005,45.47981],\n\"score\":77.0\n}\n,{\n\"rank\":2,\n\"center\":[12.50779,41.873835],\n\"score\":35.0\n}\n,{\n\"rank\":3,\n\"center\":[7.661105,45.064135],\n\"score\":16.0\n}\n,{\n\"rank\":4,\n\"center\":[14.238015,40.869564999999994],\n\"score\":12.0\n}\n,{\n\"rank\":5,\n\"center\":[11.382850000000001,44.483135],\n\"score\":9.0\n}\n,{\n\"rank\":6,\n\"center\":[9.652125,45.671640000000004],\n\"score\":7.0\n}\n,{\n\"rank\":7,\n\"center\":[11.92423,45.40219000000001],\n\"score\":6.0\n}\n,{\n\"rank\":8,\n\"center\":[18.183224735000003,40.369488649999994],\n\"score\":6.0\n}\n,{\n\"rank\":9,\n\"center\":[11.223689069999999,43.809649345],\n\"score\":6.0\n}\n,{\n\"rank\":10,\n\"center\":[13.353245000000003,38.117855000000006],\n\"score\":6.0\n}\n,{\n\"rank\":11,\n\"center\":[8.93764,44.41054],\n\"score\":5.0\n}\n]\n\n"
]
],
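[
[
"The constants cell above also defines an `Alive_URL`; a minimal health check, assuming the endpoint simply returns a short status payload, looks like this:",
"_____no_output_____"
]
],
[
[
"# quick sanity check that the BRS REST service is reachable\nstatus = requests.get(Alive_URL)\nprint(status.status_code, status.text)",
"_____no_output_____"
]
],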
[
[
"### Initialize the map and visualize the output regions",
"_____no_output_____"
]
],
[
[
"m = folium.Map(\n location=[45.474989560000004,9.205786594999998],\n tiles='Stamen Toner',\n zoom_start=11\n)\ngdf = GeoDataFrame.from_features(results_geojson['features'])\ngdf.crs = {'init': 'epsg:4326'}\ngdf['geometry'] = gdf.buffer(data['eps']/2).envelope\nthreshold_scale = Natural_Breaks(gdf[value_field], k=num_bins).bins.tolist()\nthreshold_scale.insert(0, gdf[value_field].min())\nchoropleth = folium.Choropleth(gdf, data=gdf, columns=[id_field, value_field],\n key_on='feature.properties.{}'.format(id_field),\n fill_color=fill_color, fill_opacity=fill_opacity,\n threshold_scale=threshold_scale).add_to(m)\nfields = list(gdf.columns.values)\nfields.remove('geometry')\ntooltip = folium.features.GeoJsonTooltip(fields=fields)\nchoropleth.geojson.add_child(tooltip)\nm",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d04e509b7ed74168dfec9f255c29bf5c2f80a445 | 5,613 | ipynb | Jupyter Notebook | 2020WinterIPS-Tech/xinguan-002.ipynb | UncleLincoln/trainee | eb9f4be00e80fddd0ab3d3e6ea9a20c55f5bcab8 | [
"MIT"
] | 36 | 2018-11-03T01:37:30.000Z | 2019-04-07T19:52:34.000Z | 2020WinterIPS-Tech/xinguan-002.ipynb | UncleLincoln/trainee | eb9f4be00e80fddd0ab3d3e6ea9a20c55f5bcab8 | [
"MIT"
] | 8 | 2020-11-13T19:06:32.000Z | 2022-01-13T03:24:20.000Z | 2020WinterIPS-Tech/xinguan-002.ipynb | BuErTech/trainee | eb9f4be00e80fddd0ab3d3e6ea9a20c55f5bcab8 | [
"MIT"
] | 86 | 2018-11-03T01:38:25.000Z | 2019-04-07T05:55:02.000Z | 35.751592 | 74 | 0.58026 | [
[
[
"\nimport requests \nimport json\n\n\nfor i in range(37):\n url = 'https://wuliang.art/ncov/rumor/getRumorList?page=%d'%i\n print('我们要去爬取的页面'+url)\n response = requests.get(url)\n \n datas = response.text\n\n pythonJsonDataObject = json.loads(datas)\n \n for row in pythonJsonDataObject['data']:\n rowString = ''\n \n rowString += row['date']+','\n rowString += row['explain']+','\n rowString += row['arttype']+','\n rowString += row['author']+','\n rowString += row['section']+','\n rowString += row['abstract']+','\n rowString += row['title']+','\n rowString += str(row['type'])+','\n rowString += row['coversqual']+','\n rowString += row['result']+','\n rowString += row['cover']+','\n rowString += str(row['iscolled'])+','\n rowString += row['videourl']+','\n rowString += row['authordesc']+','\n rowString += row['coverrect']+','\n rowString += row['markstyle']+','\n rowString += row['id']+','\n for st in row['tag']:\n rowString += st+'|'\n rowString += '\\n'\n with open('./result.csv','a') as f:\n f.write(rowString)\n \n \n \n ",
"我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=0\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=1\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=2\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=3\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=4\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=5\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=6\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=7\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=8\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=9\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=10\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=11\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=12\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=13\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=14\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=15\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=16\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=17\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=18\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=19\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=20\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=21\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=22\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=23\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=24\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=25\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=26\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=27\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=28\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=29\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=30\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=31\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=32\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=33\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=34\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=35\n我们要去爬取的页面https://wuliang.art/ncov/rumor/getRumorList?page=36\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d04e58760c6188b2a4f60017abdf7642bd0ac835 | 14,187 | ipynb | Jupyter Notebook | sierp_turtle.ipynb | tsgreenwood-flatiron-data-science/fractals | 0d647419715c04616b1ea6a1aeca43a98a8e044d | [
"MIT"
] | null | null | null | sierp_turtle.ipynb | tsgreenwood-flatiron-data-science/fractals | 0d647419715c04616b1ea6a1aeca43a98a8e044d | [
"MIT"
] | null | null | null | sierp_turtle.ipynb | tsgreenwood-flatiron-data-science/fractals | 0d647419715c04616b1ea6a1aeca43a98a8e044d | [
"MIT"
] | null | null | null | 84.952096 | 1,681 | 0.609008 | [
[
[
"# likely the simplest possible version?\n# import turtle as t\n# def sier(n,length):\n# if (n==0):\n# return\n# for i in range(3):\n# sier(n-1, length/2)\n# t.fd(length)\n# t.rt(120)",
"_____no_output_____"
],
[
"#!/usr/bin/env python\n##########################################################################################\n# a very complicated version\n# import necessary modules\n# ------------------------\nfrom numpy import *\nimport turtle\n \n##########################################################################################\n#\tFunctions defining the drawing actions\n# (used by the function DrawSierpinskiTriangle).\n#\t----------------------------------------------\ndef Left(turn, point, fwd, angle, turt):\n\tturt.left(angle)\n\treturn [turn, point, fwd, angle, turt]\ndef Right(turn, point, fwd, angle, turt):\n\tturt.right(angle)\n\treturn [turn, point, fwd, angle, turt]\ndef Forward(turn, point, fwd, angle, turt):\n\tturt.forward(fwd)\n\treturn [turn, point, fwd, angle, turt]",
"_____no_output_____"
],
[
"##########################################################################################\n#\t\tThe drawing function\n#\t\t--------------------\n#\n# level\t\tlevel of Sierpinski triangle (minimum value = 1)\n# ss\t\tscreensize (Draws on a screen of size ss x ss. Default value = 400.)\n#-----------------------------------------------------------------------------------------\ndef DrawSierpinskiTriangle(level, ss=400):\n\t# typical values\n\tturn = 0\t\t# initial turn (0 to start horizontally)\n\tangle=60.0 \t\t# in degrees\n \n\t# Initialize the turtle\n\tturtle.hideturtle()\n\tturtle.screensize(ss,ss)\n\tturtle.penup()\n\tturtle.degrees()\n \n\t# The starting point on the canvas\n\tfwd0 = float(ss)\n\tpoint=array([-fwd0/2.0, -fwd0/2.0])\n \n\t# Setting up the Lindenmayer system\n\t# Assuming that the triangle will be drawn in the following way:\n\t#\t1.) Start at a point\n\t#\t2.) Draw a straight line - the horizontal line (H)\n\t#\t3.) Bend twice by 60 degrees to the left (--)\n\t#\t4.) Draw a straight line - the slanted line (X)\n\t#\t5.) Bend twice by 60 degrees to the left (--)\n\t#\t6.) Draw a straight line - another slanted line (X)\n\t# \t\tThis produces the triangle in the first level. (so the axiom to begin with is H--X--X)\n\t#\t7.) For the next level replace each horizontal line using\n\t#\t\tX->XX\n\t#\t\tH -> H--X++H++X--H\n\t#\t\t\tThe lengths will be halved.\n \n \n\tdecode = {'-':Left, '+':Right, 'X':Forward, 'H':Forward}\n\taxiom = 'H--X--X'\n \n\t# Start the drawing\n\tturtle.goto(point[0], point[1])\n\tturtle.pendown()\n\tturtle.hideturtle()\n\tturt=turtle.getpen()\n\tstartposition=turt.clone()\n \n\t# Get the triangle in the Lindenmayer system\n\tfwd = fwd0/(2.0**level)\n\tpath = axiom\n\tfor i in range(0,level):\n\t\tpath=path.replace('X','XX')\n\t\tpath=path.replace('H','H--X++H++X--H')\n \n\t# Draw it.\n\tfor i in path:\n\t\t[turn, point, fwd, angle, turt]=decode[i](turn, point, fwd, angle, turt)\n##########################################################################################\n \nDrawSierpinskiTriangle(5)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d04e594d5b81912cf1ff6e4c58d6365ebe1fe80f | 102,017 | ipynb | Jupyter Notebook | Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb | DishaMukherjee/Analyze-A-B-Results | cb561766f7d06cc54ce56dba3e4328926d643071 | [
"FTL",
"CECILL-B"
] | null | null | null | Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb | DishaMukherjee/Analyze-A-B-Results | cb561766f7d06cc54ce56dba3e4328926d643071 | [
"FTL",
"CECILL-B"
] | null | null | null | Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb | DishaMukherjee/Analyze-A-B-Results | cb561766f7d06cc54ce56dba3e4328926d643071 | [
"FTL",
"CECILL-B"
] | null | null | null | 41.776003 | 9,548 | 0.570444 | [
[
[
"## Analyze A/B Test Results\n\nYou may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/#!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric). **Please save regularly.**\n\nThis project will assure you have mastered the subjects covered in the statistics lessons. The hope is to have this project be as comprehensive of these topics as possible. Good luck!\n\n## Table of Contents\n- [Introduction](#intro)\n- [Part I - Probability](#probability)\n- [Part II - A/B Test](#ab_test)\n- [Part III - Regression](#regression)\n\n\n<a id='intro'></a>\n### Introduction\n\nA/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these \n\nFor this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision.\n\n**As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question.** The labels for each classroom concept are provided for each question. This will assure you are on the right track as you work through the project, and you can feel more confident in your final submission meeting the criteria. As a final check, assure you meet all the criteria on the [RUBRIC](https://review.udacity.com/#!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric).\n\n<a id='probability'></a>\n#### Part I - Probability\n\nTo get started, let's import our libraries.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport random\nimport matplotlib.pyplot as plt\n%matplotlib inline\n#We are setting the seed to assure you get the same answers on quizzes as we set up\nrandom.seed(42)",
"_____no_output_____"
]
],
[
[
"`1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.**\n\na. Read in the dataset and take a look at the top few rows here:",
"_____no_output_____"
]
],
[
[
"#import the dataset\ndf = pd.read_csv('ab_data.csv')\n\n#show the first 5 rows\ndf.head()",
"_____no_output_____"
]
],
[
[
"b. Use the cell below to find the number of rows in the dataset.",
"_____no_output_____"
]
],
[
[
"#show the total number of rows\ndf.shape[0]",
"_____no_output_____"
]
],
[
[
"c. The number of unique users in the dataset.",
"_____no_output_____"
]
],
[
[
"#calculare the number of unique user_id \nlen(df['user_id'].unique())",
"_____no_output_____"
]
],
[
[
"d. The proportion of users converted.",
"_____no_output_____"
]
],
[
[
"#calculate the converted users\ndf['converted'].mean()",
"_____no_output_____"
]
],
[
[
"e. The number of times the `new_page` and `treatment` don't match.",
"_____no_output_____"
]
],
[
[
"#treatment in group will be called A and new_page in landing_page will be called B\n\ndf_A_not_B = df.query('group == \"treatment\" & landing_page != \"new_page\"')\n\ndf_B_not_A = df.query('group != \"treatment\" & landing_page == \"new_page\"')\n\n#calculate thenumber of time new_page and treatment don't line up\nlen(df_A_not_B) + len(df_B_not_A)",
"_____no_output_____"
]
],
[
[
"f. Do any of the rows have missing values?",
"_____no_output_____"
]
],
[
[
"#view if there is any missing value\ndf.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 294478 entries, 0 to 294477\nData columns (total 5 columns):\nuser_id 294478 non-null int64\ntimestamp 294478 non-null object\ngroup 294478 non-null object\nlanding_page 294478 non-null object\nconverted 294478 non-null int64\ndtypes: int64(2), object(3)\nmemory usage: 11.2+ MB\n"
]
],
[
[
"**No missing Values**",
"_____no_output_____"
],
[
"`2.` For the rows where **treatment** does not match with **new_page** or **control** does not match with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to figure out how we should handle these rows. \n\na. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in **df2**.",
"_____no_output_____"
]
],
[
[
"#remove the mismatch rows\ndf1 = df.drop(df[(df.group == \"treatment\") & (df.landing_page != \"new_page\")].index)\ndf2 = df1.drop(df1[(df1.group == \"control\") & (df1.landing_page != \"old_page\")].index)\n",
"_____no_output_____"
],
[
"# Double Check all of the correct rows were removed - this should be 0\ndf2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]",
"_____no_output_____"
]
],
[
[
"`3.` Use **df2** and the cells below to answer questions for **Quiz3** in the classroom.",
"_____no_output_____"
],
[
"a. How many unique **user_id**s are in **df2**?",
"_____no_output_____"
]
],
[
[
"#calculare the number of unique user_id \nlen(df2['user_id'].unique())",
"_____no_output_____"
]
],
[
[
"b. There is one **user_id** repeated in **df2**. What is it?",
"_____no_output_____"
]
],
[
[
"#find out the duplicate user_id\ndf2.loc[df2.user_id.duplicated()]",
"_____no_output_____"
]
],
[
[
"c. What is the row information for the repeat **user_id**? ",
"_____no_output_____"
]
],
[
[
"#find out the duplicate user_id\ndf2.loc[df2.user_id.duplicated()]",
"_____no_output_____"
]
],
[
[
"d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**.",
"_____no_output_____"
]
],
[
[
"# Now we remove duplicate rows\ndf2 = df2.drop_duplicates()",
"_____no_output_____"
],
[
"# Check agin if duplicated values are deleted or not\nsum(df2.duplicated())",
"_____no_output_____"
]
],
[
[
"`4.` Use **df2** in the cells below to answer the quiz questions related to **Quiz 4** in the classroom.\n\na. What is the probability of an individual converting regardless of the page they receive?",
"_____no_output_____"
]
],
[
[
"# Probability of an individual converting regardless of the page they receive\ndf2['converted'].mean()",
"_____no_output_____"
]
],
[
[
"b. Given that an individual was in the `control` group, what is the probability they converted?",
"_____no_output_____"
]
],
[
[
"# The probability of an individual converting given that an individual was in the control group\ncontrol_group = len(df2.query('group==\"control\" and converted==1'))/len(df2.query('group==\"control\"'))\ncontrol_group",
"_____no_output_____"
]
],
[
[
"c. Given that an individual was in the `treatment` group, what is the probability they converted?",
"_____no_output_____"
]
],
[
[
"# The probability of an individual converting given that an individual was in the treatment group\ntreatment_group = len(df2.query('group==\"treatment\" and converted==1'))/len(df2.query('group==\"treatment\"'))\ntreatment_group",
"_____no_output_____"
]
],
[
[
"d. What is the probability that an individual received the new page?",
"_____no_output_____"
]
],
[
[
"# The probability of individual received new page\nlen(df2.query('landing_page==\"new_page\"'))/len(df2.index)",
"_____no_output_____"
]
],
[
[
"e. Consider your results from parts (a) through (d) above, and explain below whether you think there is sufficient evidence to conclude that the new treatment page leads to more conversions.",
"_____no_output_____"
],
[
"**Your answer goes here.**",
"_____no_output_____"
],
[
"<a id='ab_test'></a>\n### Part II - A/B Test\n\nNotice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed. \n\nHowever, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another? \n\nThese questions are the difficult parts associated with A/B tests in general. \n\n\n`1.` For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of **$p_{old}$** and **$p_{new}$**, which are the converted rates for the old and new pages.",
"_____no_output_____"
],
[
"**Put your answer here.**",
"_____no_output_____"
],
[
"`2.` Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have \"true\" success rates equal to the **converted** success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the **converted** rate in **ab_data.csv** regardless of the page. <br><br>\n\nUse a sample size for each page equal to the ones in **ab_data.csv**. <br><br>\n\nPerform the sampling distribution for the difference in **converted** between the two pages over 10,000 iterations of calculating an estimate from the null. <br><br>\n\nUse the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use **Quiz 5** in the classroom to make sure you are on the right track.<br><br>",
"_____no_output_____"
],
[
"a. What is the **conversion rate** for $p_{new}$ under the null? ",
"_____no_output_____"
]
],
[
[
"p_new = len(df2.query( 'converted==1'))/len(df2.index)\np_new",
"_____no_output_____"
]
],
[
[
"b. What is the **conversion rate** for $p_{old}$ under the null? <br><br>",
"_____no_output_____"
]
],
[
[
"p_old = len(df2.query('converted==1'))/len(df2.index)\np_old\n",
"_____no_output_____"
],
[
"p_new = len(df2.query( 'converted==1'))/len(df2.index)\np_new",
"_____no_output_____"
],
[
"# probablity under null\np=np.mean([p_old,p_new])\np",
"_____no_output_____"
],
[
"# difference of p_new and p_old\np_diff=p_new-p_old",
"_____no_output_____"
]
],
[
[
"#### Under null p_old is equal to p_new",
"_____no_output_____"
],
[
"c. What is $n_{new}$, the number of individuals in the treatment group?",
"_____no_output_____"
]
],
[
[
"#calculate number of queries when landing_page is equal to new_page\nn_new = len(df2.query('landing_page==\"new_page\"'))\n#print n_new\nn_new",
"_____no_output_____"
]
],
[
[
"d. What is $n_{old}$, the number of individuals in the control group?",
"_____no_output_____"
]
],
[
[
"#calculate number of queries when landing_page is equal to old_page\nn_old = len(df2.query('landing_page==\"old_page\"'))\n#print n_old\nn_old",
"_____no_output_____"
]
],
[
[
"e. Simulate $n_{new}$ transactions with a conversion rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**.",
"_____no_output_____"
]
],
[
[
"## simulate n_old transactions with a convert rate of p_new under the null\nnew_page_converted = np.random.choice([0, 1], n_new, p = [p_new, 1-p_new])",
"_____no_output_____"
]
],
[
[
"f. Simulate $n_{old}$ transactions with a conversion rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**.",
"_____no_output_____"
]
],
[
[
"# simulate n_old transactions with a convert rate of p_old under the null\nold_page_converted = np.random.choice([0, 1], n_old, p = [p_old, 1-p_old])",
"_____no_output_____"
]
],
[
[
"g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).",
"_____no_output_____"
]
],
[
[
"# differences computed in from p_new and p_old\nobs_diff= new_page_converted.mean() - old_page_converted.mean()# differences computed in from p_new and p_old\nobs_diff",
"_____no_output_____"
]
],
[
[
"h. Create 10,000 $p_{new}$ - $p_{old}$ values using the same simulation process you used in parts (a) through (g) above. Store all 10,000 values in a NumPy array called **p_diffs**.",
"_____no_output_____"
]
],
[
[
"# Create sampling distribution for difference in p_new-p_old simulated values\n# with boostrapping\np_diffs = []\nfor i in range(10000):\n \n # 1st parameter dictates the choices you want. In this case [1, 0]\n p_new1 = np.random.choice([1, 0],n_new,replace = True,p = [p_new, 1-p_new])\n p_old1 = np.random.choice([1, 0],n_old,replace = True,p = [p_old, 1-p_old])\n p_new2 = p_new1.mean()\n p_old2 = p_old1.mean()\n p_diffs.append(p_new2-p_old2)\n#_p_diffs = np.array(_p_diffs)",
"_____no_output_____"
]
],
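[
[
"As a side note, the same null distribution can be drawn in one vectorized step instead of a Python loop; a sketch relying only on the `numpy` import above:",
"_____no_output_____"
]
],
[
[
"# each simulated conversion count is Binomial(n, p); dividing by n gives a rate\nsim_new = np.random.binomial(n_new, p_new, 10000) / n_new\nsim_old = np.random.binomial(n_old, p_old, 10000) / n_old\np_diffs_fast = sim_new - sim_old\np_diffs_fast.mean(), p_diffs_fast.std()",
"_____no_output_____"
]
],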
[
[
"i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.",
"_____no_output_____"
]
],
[
[
"p_diffs=np.array(p_diffs)\n#histogram of p_diff\nplt.hist(p_diffs)\nplt.title('Graph of p_diffs')#title of graphs\nplt.xlabel('Page difference') # x-label of graphs\nplt.ylabel('Count') # y-label of graphs",
"_____no_output_____"
]
],
[
[
"j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?",
"_____no_output_____"
]
],
[
[
"#histogram of p_diff\nplt.hist(p_diffs);\n\nplt.title('Graph of p_diffs') #title of graphs\nplt.xlabel('Page difference') # x-label of graphs\nplt.ylabel('Count') # y-label of graphs\n\nplt.axvline(x= obs_diff, color='r');",
"_____no_output_____"
]
],
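[
[
"The red line marks the observed difference; the proportion of simulated differences to its right is the simulated p-value discussed below:",
"_____no_output_____"
]
],
[
[
"# proportion of the null p_diffs greater than the actual observed difference\n(p_diffs > act_diff).mean()",
"_____no_output_____"
]
],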
[
[
"k. Please explain using the vocabulary you've learned in this course what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?",
"_____no_output_____"
],
[
"89.57% is the proportion of the p_diffs that are greater than the actual difference observed in ab_data.csv. In scientific studies this value is also called p-value. This value means that we cannot reject the null hypothesis and that we do not have sufficient evidence that the new_page has a higher conversion rate than the old_page. ",
"_____no_output_____"
],
[
"**Put your answer here.**",
"_____no_output_____"
],
[
"l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let `n_old` and `n_new` refer the the number of rows associated with the old page and new pages, respectively.",
"_____no_output_____"
]
],
[
[
"import statsmodels.api as sm\n\nconvert_old = len(df2.query('converted==1 and landing_page==\"old_page\"')) #rows converted with old_page\nconvert_new = len(df2.query('converted==1 and landing_page==\"new_page\"')) #rows converted with new_page\nn_old = len(df2.query('landing_page==\"old_page\"')) #rows_associated with old_page\nn_new = len(df2.query('landing_page==\"new_page\"')) #rows associated with new_page\nn_new",
"_____no_output_____"
]
],
[
[
"m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](https://docs.w3cub.com/statsmodels/generated/statsmodels.stats.proportion.proportions_ztest/) is a helpful link on using the built in.",
"_____no_output_____"
]
],
[
[
"#Computing z_score and p_value\nz_score, p_value = sm.stats.proportions_ztest([convert_old,convert_new], [n_old, n_new],alternative='smaller') \n\n#display z_score and p_value\nprint(z_score,p_value)",
"1.31160753391 0.905173705141\n"
]
],
[
[
"n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**?",
"_____no_output_____"
]
],
[
[
"from scipy.stats import norm\nnorm.cdf(z_score) #how significant our z_score is",
"_____no_output_____"
],
[
"norm.ppf(1-(0.05)) #critical value of 95% confidence",
"_____no_output_____"
]
],
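[
[
"To see what `proportions_ztest` computes under the hood, here is a hand-rolled sketch of the two-proportion z-statistic, using the pooled rate for the standard error under the null:",
"_____no_output_____"
]
],
[
[
"# pooled conversion rate and standard error under the null p_old = p_new\np_pool = (convert_old + convert_new) / (n_old + n_new)\nse = np.sqrt(p_pool * (1 - p_pool) * (1/n_old + 1/n_new))\n\n# same ordering as the call above: old minus new\n(convert_old/n_old - convert_new/n_new) / se",
"_____no_output_____"
]
],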
[
[
"The z-score and the p_value mean that one doesn't reject the Null. The Null being the converted rate of the old_page is the same or greater than the converted rate of the new_page. The p_value is 0.91 and is higher than 0.05 significance level. That means we can not be confident with a 95% confidence level that the converted rate of the new_page is larger than the old_page. ",
"_____no_output_____"
],
[
"<a id='regression'></a>\n### Part III - A regression approach\n\n`1.` In this final part, you will see that the result you achieved in the A/B test in Part II above can also be achieved by performing regression.<br><br> \n\na. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?",
"_____no_output_____"
],
[
"The dependent variable is a binary variable (converted vs not converted). Thus, you need to use a logistic regression. ",
"_____no_output_____"
],
[
"b. The goal is to use **statsmodels** to fit the regression model you specified in part **a.** to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create in df2 a column for the intercept, and create a dummy variable column for which page each user received. Add an **intercept** column, as well as an **ab_page** column, which is 1 when an individual receives the **treatment** and 0 if **control**.",
"_____no_output_____"
]
],
[
[
"#adding an intercept column\ndf2['intercept'] = 1\n\n#Create dummy variable column\ndf2['ab_page'] = pd.get_dummies(df2['group'])['treatment']\n\ndf2.head()",
"_____no_output_____"
]
],
[
[
"c. Use **statsmodels** to instantiate your regression model on the two columns you created in part b., then fit the model using the two columns you created in part **b.** to predict whether or not an individual converts. ",
"_____no_output_____"
]
],
[
[
"import statsmodels.api as sm\nmodel=sm.Logit(df2['converted'],df2[['intercept','ab_page']])\nresults=model.fit() ",
"Optimization terminated successfully.\n Current function value: 0.366118\n Iterations 6\n"
]
],
[
[
"d. Provide the summary of your model below, and use it as necessary to answer the following questions.",
"_____no_output_____"
]
],
[
[
"\nresults.summary()\n",
"_____no_output_____"
]
],
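[
[
"The `ab_page` coefficient is on the log-odds scale; exponentiating turns it into an odds ratio, which is often easier to read:",
"_____no_output_____"
]
],
[
[
"# odds ratios implied by the fitted coefficients\nnp.exp(results.params)",
"_____no_output_____"
]
],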
[
[
"e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**?<br><br> **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in **Part II**?",
"_____no_output_____"
],
[
"The p-value associated with ab_page is 0.19. It is higher than 0.05. Thus, the coefficient is not significant.\n\nAlternative hypothesis from part II: the conversion rate of the old_page is less than the conversion rate of the new_page. This assumes a one-tailed test. In Part III, the alternative hypothesis can be formulated as follows: (1) The landing_page type influences (positively or negatively) the conversion rate or (2) the conversion rate of the old_page is different to the conversion rate of the new_page. This assumes a two-tailed test.\n\nin both cases, the results do not support the alternative hypothesis sufficiently.\n\nThe p-value is very different. In part II the p-value is 0.91. This might be because the tests of the regression model (not the A/B test) assumes an intercept and because of differences in one or two-tailed testing.",
"_____no_output_____"
],
[
"f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?",
"_____no_output_____"
],
[
"It is a good idea to consider other factors in order to identify other potencial influences on the conversion rate. \n\nA disadvantage is that the model gets more complex. ",
"_____no_output_____"
],
[
"g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. You will need to read in the **countries.csv** dataset and merge together your datasets on the appropriate rows. [Here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) are the docs for joining tables. \n\nDoes it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - **Hint: You will need two columns for the three dummy variables.** Provide the statistical output as well as a written response to answer this question.",
"_____no_output_____"
]
],
[
[
"# Store Countries.csv data in dataframe\ncountries = pd.read_csv('countries.csv')\ncountries.head()\n",
"_____no_output_____"
],
[
"#Inner join two datas\nnew = countries.set_index('user_id').join(df2.set_index('user_id'), how = 'inner')\nnew.head()",
"_____no_output_____"
],
[
"#adding dummy variables with 'CA' as the baseline\nnew[['US', 'UK']] = pd.get_dummies(new['country'])[['US', \"UK\"]]\nnew.head()",
"_____no_output_____"
],
[
"new['US_ab_page'] = new['US']*new['ab_page']\nnew.head()",
"_____no_output_____"
],
[
"new['UK_ab_page'] = new['UK']*new['ab_page']\nnew.head()",
"_____no_output_____"
],
[
"new['intercept'] = 1\nlogit3 = sm.Logit(new['converted'], new[['intercept', 'ab_page', 'US', 'UK', 'US_ab_page', 'US_ab_page']])\nlogit3\n",
"_____no_output_____"
],
[
"#Check the result\nresults = logit3.fit()",
"Optimization terminated successfully.\n Current function value: 0.366111\n Iterations 6\n"
],
[
"#Check the result\nresults.summary()",
"_____no_output_____"
]
],
[
[
"h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there significant effects on conversion. Create the necessary additional columns, and fit the new model. \n\nProvide the summary results, and your conclusions based on the results.",
"_____no_output_____"
],
[
"**Conclusions:** None of the variables have significant p-values. Therefore, we will fail to reject the null and conclude that there is not sufficient evidence to suggest that there is an interaction between country and page received that will predict whether a user converts or not.\n\nIn the larger picture, based on the available information, we do not have sufficient evidence to suggest that the new page results in more conversions than the old page.",
"_____no_output_____"
]
],
[
[
"<a id='conclusions'></a>\n## Finishing Up\n\n> Congratulations! You have reached the end of the A/B Test Results project! You should be very proud of all you have accomplished!\n\n> **Tip**: Once you are satisfied with your work here, check over your report to make sure that it is satisfies all the areas of the rubric (found on the project submission page at the end of the lesson). You should also probably remove all of the \"Tips\" like this one so that the presentation is as polished as possible.\n\n\n## Directions to Submit\n\n> Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).\n\n> Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.\n\n> Once you've done this, you can submit your project by clicking on the \"Submit Project\" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!",
"_____no_output_____"
]
],
[
[
"from subprocess import call\ncall(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"markdown"
],
[
"code"
]
] |
d04e5add7d2803a9f4259a8204258e53343f68fc | 368,647 | ipynb | Jupyter Notebook | notebooks/ch02-diff.ipynb | evilboy1973/math_dl_book_info | f7a9aa1da42a14df8c5c5ebae4e59bb6d1463ce2 | [
"Apache-2.0"
] | 1 | 2021-01-10T07:47:37.000Z | 2021-01-10T07:47:37.000Z | notebooks/ch02-diff.ipynb | evilboy1973/math_dl_book_info | f7a9aa1da42a14df8c5c5ebae4e59bb6d1463ce2 | [
"Apache-2.0"
] | null | null | null | notebooks/ch02-diff.ipynb | evilboy1973/math_dl_book_info | f7a9aa1da42a14df8c5c5ebae4e59bb6d1463ce2 | [
"Apache-2.0"
] | null | null | null | 425.689376 | 24,384 | 0.949507 | [
[
[
"# 2章 微分積分",
"_____no_output_____"
],
[
"## 2.1 関数",
"_____no_output_____"
]
],
[
[
"# 必要ライブラリの宣言\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# PDF出力用\nfrom IPython.display import set_matplotlib_formats\nset_matplotlib_formats('png', 'pdf')",
"_____no_output_____"
],
[
"def f(x):\n return x**2 +1",
"_____no_output_____"
],
[
"f(1)",
"_____no_output_____"
],
[
"f(2)",
"_____no_output_____"
]
],
[
[
"### 図2-2 点(x, f(x))のプロットとy=f(x)のグラフ",
"_____no_output_____"
]
],
[
[
"x = np.linspace(-3, 3, 601)\ny = f(x)",
"_____no_output_____"
],
[
"x1 = np.linspace(-3, 3, 7)\ny1 = f(x1)\nplt.figure(figsize=(6,6))\nplt.ylim(-2,10)\nplt.plot([-3,3],[0,0],c='k')\nplt.plot([0,0],[-2,10],c='k')\nplt.scatter(x1,y1,c='k',s=50)\nplt.grid()\nplt.xlabel('x',fontsize=14)\nplt.ylabel('y',fontsize=14)\nplt.show()",
"_____no_output_____"
],
[
"x2 = np.linspace(-3, 3, 31)\ny2 = f(x2)\nplt.figure(figsize=(6,6))\nplt.ylim(-2,10)\nplt.plot([-3,3],[0,0],c='k')\nplt.plot([0,0],[-2,10],c='k')\nplt.scatter(x2,y2,c='k',s=50)\nplt.grid()\nplt.xlabel('x',fontsize=14)\nplt.ylabel('y',fontsize=14)\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(6,6))\nplt.plot(x,y,c='k')\nplt.ylim(-2,10)\nplt.plot([-3,3],[0,0],c='k')\nplt.plot([0,0],[-2,10],c='k')\nplt.scatter([1,2],[2,5],c='k',s=50)\nplt.grid()\nplt.xlabel('x',fontsize=14)\nplt.ylabel('y',fontsize=14)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 2.2 合成関数・逆関数",
"_____no_output_____"
],
[
"### 図2.6 逆関数のグラフ",
"_____no_output_____"
]
],
[
[
"def f(x):\n return(x**2 + 1)\ndef g(x):\n return(np.sqrt(x - 1))",
"_____no_output_____"
],
[
"xx1 = np.linspace(0.0, 4.0, 200)\nxx2 = np.linspace(1.0, 4.0, 200)\nyy1 = f(xx1)\nyy2 = g(xx2)",
"_____no_output_____"
],
[
"plt.figure(figsize=(6,6))\nplt.xlabel('$x$',fontsize=14)\nplt.ylabel('$y$',fontsize=14)\nplt.ylim(-2.0, 4.0)\nplt.xlim(-2.0, 4.0)\nplt.grid()\nplt.plot(xx1,yy1, linestyle='-', c='k', label='$y=x^2+1$')\nplt.plot(xx2,yy2, linestyle='-.', c='k', label='$y=\\sqrt{x-1}$')\nplt.plot([-2,4],[-2,4], color='black')\nplt.plot([-2,4],[0,0], color='black')\nplt.plot([0,0],[-2,4],color='black')\nplt.legend(fontsize=14)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 2.3 微分と極限",
"_____no_output_____"
],
[
"### 図2-7 関数のグラフを拡大したときの様子",
"_____no_output_____"
]
],
[
[
"from matplotlib import pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"def f(x):\n return(x**3 - x)",
"_____no_output_____"
],
[
"delta = 2.0\nx = np.linspace(0.5-delta, 0.5+delta, 200)\ny = f(x)\nfig = plt.figure(figsize=(6,6))\nplt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)\nplt.xlim(0.5-delta, 0.5+delta)\nplt.plot(x, y, 'b-', lw=1, c='k')\nplt.scatter([0.5], [-3.0/8.0])\nplt.xlabel('x',fontsize=14)\nplt.ylabel('y',fontsize=14)\nplt.grid()\nplt.title('delta = %.4f' % delta, fontsize=14)\nplt.show()",
"_____no_output_____"
],
[
"delta = 0.2\nx = np.linspace(0.5-delta, 0.5+delta, 200)\ny = f(x)\nfig = plt.figure(figsize=(6,6))\nplt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)\nplt.xlim(0.5-delta, 0.5+delta)\nplt.plot(x, y, 'b-', lw=1, c='k')\nplt.scatter([0.5], [-3.0/8.0])\nplt.xlabel('x',fontsize=14)\nplt.ylabel('y',fontsize=14)\nplt.grid()\nplt.title('delta = %.4f' % delta, fontsize=14)\nplt.show()",
"_____no_output_____"
],
[
"delta = 0.01\nx = np.linspace(0.5-delta, 0.5+delta, 200)\ny = f(x)\nfig = plt.figure(figsize=(6,6))\nplt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)\nplt.xlim(0.5-delta, 0.5+delta)\nplt.plot(x, y, 'b-', lw=1, c='k')\nplt.scatter(0.5, -3.0/8.0)\nplt.xlabel('x',fontsize=14)\nplt.ylabel('y',fontsize=14)\nplt.grid()\nplt.title('delta = %.4f' % delta, fontsize=14)\nplt.show()",
"_____no_output_____"
]
],
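[
[
"The zoom sequence above suggests the curve looks more and more like a straight line near $x=0.5$. A small numerical sketch of the same idea: shrink $h$ and watch the difference quotient $(f(x+h)-f(x))/h$ settle toward the derivative, which for $f(x)=x^3-x$ is $f'(x)=3x^2-1$, i.e. $-0.25$ at $x=0.5$:",
"_____no_output_____"
]
],
[
[
"x0 = 0.5\nfor h in [1.0, 0.1, 0.01, 0.001, 0.0001]:\n    print(h, (f(x0 + h) - f(x0)) / h)",
"_____no_output_____"
]
],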
[
[
"### 図2-8 関数のグラフ上の2点を結んだ直線の傾き ",
"_____no_output_____"
]
],
[
[
"delta = 2.0\nx = np.linspace(0.5-delta, 0.5+delta, 200)\nx1 = 0.6\nx2 = 1.0\ny = f(x)\nfig = plt.figure(figsize=(6,6))\nplt.ylim(-1, 0.5)\nplt.xlim(0, 1.5)\nplt.plot(x, y, 'b-', lw=1, c='k')\nplt.scatter([x1, x2], [f(x1), f(x2)], c='k', lw=1)\nplt.plot([x1, x2], [f(x1), f(x2)], c='k', lw=1)\nplt.plot([x1, x2, x2], [f(x1), f(x1), f(x2)], c='k', lw=1)\nplt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)\nplt.tick_params(color='white')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 図2-10 接線の方程式",
"_____no_output_____"
]
],
[
[
"def f(x):\n return(x**2 - 4*x)\ndef g(x):\n return(-2*x -1)",
"_____no_output_____"
],
[
"x = np.linspace(-2, 6, 500)\nfig = plt.figure(figsize=(6,6))\nplt.scatter([1],[-3],c='k')\nplt.plot(x, f(x), 'b-', lw=1, c='k')\nplt.plot(x, g(x), 'b-', lw=1, c='b')\nplt.plot([x.min(), x.max()], [0, 0], lw=2, c='k')\nplt.plot([0, 0], [g(x).min(), f(x).max()], lw=2, c='k')\nplt.grid(lw=2)\nplt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)\nplt.tick_params(color='white')\nplt.xlabel('X')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 2.4 極大・極小",
"_____no_output_____"
],
[
"### 図2-11 y= x3-3xのグラフと極大・極小",
"_____no_output_____"
]
],
[
[
"def f1(x):\n return(x**3 - 3*x)",
"_____no_output_____"
],
[
"x = np.linspace(-3, 3, 500)\ny = f1(x)\nfig = plt.figure(figsize=(6,6))\nplt.ylim(-4, 4)\nplt.xlim(-3, 3)\nplt.plot(x, y, 'b-', lw=1, c='k')\nplt.plot([0,0],[-4,4],c='k')\nplt.plot([-3,3],[0,0],c='k')\nplt.grid()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 図2-12 極大でも極小でもない例 (y=x3のグラフ)",
"_____no_output_____"
]
],
[
[
"def f2(x):\n return(x**3)",
"_____no_output_____"
],
[
"x = np.linspace(-3, 3, 500)\ny = f2(x)\nfig = plt.figure(figsize=(6,6))\nplt.ylim(-4, 4)\nplt.xlim(-3, 3)\nplt.plot(x, y, 'b-', lw=1, c='k')\nplt.plot([0,0],[-4,4],c='k')\nplt.plot([-3,3],[0,0],c='k')\nplt.grid()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 2.7 合成関数の微分",
"_____no_output_____"
],
[
"### 図2-14 逆関数の微分",
"_____no_output_____"
]
],
[
[
"#逆関数の微分\ndef f(x):\n return(x**2 + 1)\ndef g(x):\n return(np.sqrt(x - 1))",
"_____no_output_____"
],
[
"xx1 = np.linspace(0.0, 4.0, 200)\nxx2 = np.linspace(1.0, 4.0, 200)\nyy1 = f(xx1)\nyy2 = g(xx2)",
"_____no_output_____"
],
[
"plt.figure(figsize=(6,6))\nplt.xlabel('$x$',fontsize=14)\nplt.ylabel('$y$',fontsize=14)\nplt.ylim(-2.0, 4.0)\nplt.xlim(-2.0, 4.0)\nplt.grid()\nplt.plot(xx1,yy1, linestyle='-', color='blue')\nplt.plot(xx2,yy2, linestyle='-', color='blue')\nplt.plot([-2,4],[-2,4], color='black')\nplt.plot([-2,4],[0,0], color='black')\nplt.plot([0,0],[-2,4],color='black')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 2.9 積分",
"_____no_output_____"
],
[
"### 図2-15 面積を表す関数S(x)とf(x)の関係",
"_____no_output_____"
]
],
[
[
"def f(x) :\n return x**2 + 1\nxx = np.linspace(-4.0, 4.0, 200)\nyy = f(xx)",
"_____no_output_____"
],
[
"plt.figure(figsize=(6,6))\nplt.xlim(-2,2)\nplt.ylim(-1,4)\nplt.plot(xx, yy)\nplt.plot([-2,2],[0,0],c='k',lw=1)\nplt.plot([0,0],[-1,4],c='k',lw=1)\nplt.plot([0,0],[0,f(0)],c='b')\nplt.plot([1,1],[0,f(1)],c='b')\nplt.plot([1.5,1.5],[0,f(1.5)],c='b')\nplt.plot([1,1.5],[f(1),f(1)],c='b')\nplt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)\nplt.tick_params(color='white')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 図2-16 グラフの面積と定積分",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(6,6))\nplt.xlim(-2,2)\nplt.ylim(-1,4)\nplt.plot(xx, yy)\nplt.plot([-2,2],[0,0],c='k',lw=1)\nplt.plot([0,0],[-1,4],c='k',lw=1)\nplt.plot([0,0],[0,f(0)],c='b')\nplt.plot([1,1],[0,f(1)],c='b')\nplt.plot([1.5,1.5],[0,f(1.5)],c='b')\nplt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)\nplt.tick_params(color='white')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 図2-17 積分と面積の関係",
"_____no_output_____"
]
],
[
[
"def f(x) :\n return x**2 + 1\nx = np.linspace(-1.0, 2.0, 200)\ny = f(x)\nN = 10\nxx = np.linspace(0.5, 1.5, N+1)\nyy = f(xx)",
"_____no_output_____"
],
[
"print(xx)",
"[0.5 0.6 0.7 0.8 0.9 1. 1.1 1.2 1.3 1.4 1.5]\n"
],
[
"plt.figure(figsize=(6,6))\nplt.xlim(-1,2)\nplt.ylim(-1,4)\nplt.plot(x, y)\nplt.plot([-1,2],[0,0],c='k',lw=2)\nplt.plot([0,0],[-1,4],c='k',lw=2)\nplt.plot([0.5,0.5],[0,f(0.5)],c='b')\nplt.plot([1.5,1.5],[0,f(1.5)],c='b')\nplt.bar(xx[:-1], yy[:-1], align='edge', width=1/N*0.9)\nplt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)\nplt.tick_params(color='white')\nplt.grid()\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d04e6a4f40f202ff97a6a303d64bae442e79f713 | 8,380 | ipynb | Jupyter Notebook | Make all the coins.ipynb | Sukrut11/Python-Certification | 5cefbfaf813d74424388345b6bb396f31360fb3a | [
"Apache-2.0"
] | null | null | null | Make all the coins.ipynb | Sukrut11/Python-Certification | 5cefbfaf813d74424388345b6bb396f31360fb3a | [
"Apache-2.0"
] | null | null | null | Make all the coins.ipynb | Sukrut11/Python-Certification | 5cefbfaf813d74424388345b6bb396f31360fb3a | [
"Apache-2.0"
] | null | null | null | 35.210084 | 137 | 0.420048 | [
[
[
"import random\n\nclass Coin:\n\n def __init__(self, rare = False, clean = True, heads = True, **kwargs):\n\n for key,value in kwargs.items():\n setattr(self,key,value)\n \n self.is_rare = rare\n self.is_clean = clean\n self.heads = heads\n\n if self.is_rare:\n self.value = self.original_value * 1.25\n else:\n self.value = self.original_value\n\n if self.clean:\n self.colour = self.clean_colour\n else:\n self.colour = self.rusty_colour\n\n def rust(self):\n self.colour = self.rusty_colour\n\n def clean(self):\n self.colour = self.clean_colour\n\n def __del__(self):\n print(\"Coin spent!\")\n\n def flip(self):\n heads_options = [True, False]\n choice = random.choice(heads_options)\n self.heads = choice\n\n def __str__(self):\n if self.original_value >= 1:\n return \"£{} coin\".format(int(self.original_value))\n else:\n return \"{}p Coin\".format(int(self.original_value * 100))\n\n\nclass One_Pence(Coin):\n def __init__(self):\n \n data = {\n \"original_value\": 0.01,\n \"clean_colour\": \"bronze\",\n \"rusty_colour\": \"brownish\",\n \"num_edges\": 1,\n \"diameter\": 20.3, #mm\n \"thickness\": 1.52, #mm\n \"mass\": 3.56, #grams\n }\n super().__init__(**data)\n\nclass Two_Pence(Coin):\n def __init__(self):\n \n data = {\n \"original_value\": 0.02,\n \"clean_colour\": \"bronze\",\n \"rusty_colour\": \"brownish\",\n \"num_edges\": 1,\n \"diameter\": 25.9, #mm\n \"thickness\": 1.85, #mm\n \"mass\": 7.12, #grams\n }\n super().__init__(**data)\n\nclass Five_Pence(Coin):\n def __init__(self):\n \n data = {\n \"original_value\": 0.05,\n \"clean_colour\": \"silver\",\n \"rusty_colour\": None,\n \"num_edges\": 1,\n \"diameter\": 18.0, #mm\n \"thickness\": 1.77, #mm\n \"mass\": 3.25, #grams\n }\n super().__init__(**data)\n\n def rust(self):\n self.colour = self.clean_colour\n\n def clean(self):\n self.colour = self.clean_colour\n\nclass Ten_Pence(Coin):\n def __init__(self):\n \n data = {\n \"original_value\": 0.10,\n \"clean_colour\": \"silver\",\n \"rusty_colour\": None,\n \"num_edges\": 1,\n \"diameter\": 24.5, #mm\n \"thickness\": 1.85, #mm\n \"mass\": 6.50, #grams\n }\n super().__init__(**data)\n\n def rust(self):\n self.colour = self.clean_colour\n\n def clean(self):\n self.colour = self.clean_colour\n\nclass Twenty_Pence(Coin):\n def __init__(self):\n \n data = {\n \"original_value\": 0.20,\n \"clean_colour\": \"silver\",\n \"rusty_colour\": None,\n \"num_edges\": 7,\n \"diameter\": 21.4, #mm\n \"thickness\": 1.7, #mm\n \"mass\": 5.00, #grams\n }\n super().__init__(**data)\n\n def rust(self):\n self.colour = self.clean_colour\n\n def clean(self):\n self.colour = self.clean_colour\n\nclass Fifty_Pence(Coin):\n def __init__(self):\n \n data = {\n \"original_value\": 0.50,\n \"clean_colour\": \"silver\",\n \"rusty_colour\": None,\n \"num_edges\": 7,\n \"diameter\": 27.3, #mm\n \"thickness\": 1.78, #mm\n \"mass\": 8.00, #grams\n }\n super().__init__(**data)\n\n def rust(self):\n self.colour = self.clean_colour\n\n def clean(self):\n self.colour = self.clean_colour\n\n\n\nclass One_Pound(Coin):\n def __init__(self):\n data = {\n \"original_value\": 1.00,\n \"clean_colour\": \"gold\",\n \"rusty_colour\": \"greenish\",\n \"num_edges\": 1,\n \"diameter\": 22.5, #mm\n \"thickness\": 3.15, #mm\n \"mass\": 9.5, #grams\n }\n super().__init__(**data)\n\nclass Two_Pound(Coin):\n def __init__(self):\n data = {\n \"original_value\": 2.00,\n \"clean_colour\": \"gold & silver\",\n \"rusty_colour\": \"greenish\",\n \"num_edges\": 1,\n \"diameter\": 28.4, #mm\n \"thickness\": 2.50, #mm\n \"mass\": 12.00, #grams\n 
}\n super().__init__(**data)\n\ncoins =[One_Pence(), Two_Pence(), Five_Pence(), Ten_Pence(), Twenty_Pence(),\n Fifty_Pence(), One_Pound(), Two_Pound()]\n\nfor coin in coins:\n arguments = [coin, coin.colour, coin.value, coin.diameter, coin.thickness,\n coin.num_edges, coin.mass]\n\n string = \"{} - Colour: {}, value:{}, diameter(mm):{}, thickness(mm):{}, number of edges:{}, mass(g):{}\".format(*arguments)\n print(string)\n",
"1p Coin - Colour: bronze, value:0.01, diameter(mm):20.3, thickness(mm):1.52, number of edges:1, mass(g):3.56\n2p Coin - Colour: bronze, value:0.02, diameter(mm):25.9, thickness(mm):1.85, number of edges:1, mass(g):7.12\n5p Coin - Colour: silver, value:0.05, diameter(mm):18.0, thickness(mm):1.77, number of edges:1, mass(g):3.25\n10p Coin - Colour: silver, value:0.1, diameter(mm):24.5, thickness(mm):1.85, number of edges:1, mass(g):6.5\n20p Coin - Colour: silver, value:0.2, diameter(mm):21.4, thickness(mm):1.7, number of edges:7, mass(g):5.0\n50p Coin - Colour: silver, value:0.5, diameter(mm):27.3, thickness(mm):1.78, number of edges:7, mass(g):8.0\n£1 coin - Colour: gold, value:1.0, diameter(mm):22.5, thickness(mm):3.15, number of edges:1, mass(g):9.5\n£2 coin - Colour: gold & silver, value:2.0, diameter(mm):28.4, thickness(mm):2.5, number of edges:1, mass(g):12.0\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d04e71b1afe4e8219912ac1fe20bec27f348c1fb | 26,700 | ipynb | Jupyter Notebook | VacationPy/VacationPy.ipynb | kdturner83/PythonAPI_Challenge | af30e69c4337bc18cdf1be4a9cd616e0cc0ae728 | [
"ADSL"
] | null | null | null | VacationPy/VacationPy.ipynb | kdturner83/PythonAPI_Challenge | af30e69c4337bc18cdf1be4a9cd616e0cc0ae728 | [
"ADSL"
] | null | null | null | VacationPy/VacationPy.ipynb | kdturner83/PythonAPI_Challenge | af30e69c4337bc18cdf1be4a9cd616e0cc0ae728 | [
"ADSL"
] | null | null | null | 69.895288 | 1,514 | 0.641423 | [
[
[
"# VacationPy\n----\n\n#### Note\n* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.\n\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport requests\nimport gmaps\nimport os\n\n# Import API key\nfrom api_keys import g_key\n\n# Configure gmaps\ngmaps.configure(api_key=gkey)\nprint(gkey)",
"_____no_output_____"
]
],
[
[
"### Store Part I results into DataFrame\n* Load the csv exported in Part I to a DataFrame",
"_____no_output_____"
]
],
[
[
"# Create vacation dataframe\n#clean_city_data_df.to_csv('../Resources/city_output.csv')\nvacation_df = pd.read_csv('../Resources/city_output.csv')\n#vacation_df = vacation_df.drop(columns=\"Unnamed: 0\")\nvacation_df.head()",
"_____no_output_____"
]
],
[
[
"### Humidity Heatmap\n* Configure gmaps.\n* Use the Lat and Lng as locations and Humidity as the weight.\n* Add Heatmap layer to map.",
"_____no_output_____"
]
],
[
[
"# Store latitude and longitude in locations\nlocations = vacation_df[[\"lat\", \"long\"]]\nweights = vacation_df[\"humidity\"].astype(float)\n\nfig = gmaps.figure()\n\n# Create heat layer\nheat_layer = gmaps.heatmap_layer(locations, weights=weights, \n dissipating=False, max_intensity=10,\n point_radius=300)\n\nfig",
"_____no_output_____"
]
],
[
[
"### Create new DataFrame fitting weather criteria\n* Narrow down the cities to fit weather conditions.\n* Drop any rows will null values.",
"_____no_output_____"
]
],
[
[
"#vacation_df.dropna(inplace = True) max temp, cloudiness = 0, wind speed <10, 70> <80\ncity_weather_df = vacation_df.copy()\ncity_weather_df.dropna(inplace = True) \ncity_weather_df",
"_____no_output_____"
]
],
[
[
"### Hotel Map\n* Store into variable named `hotel_df`.\n* Add a \"Hotel Name\" column to the DataFrame.\n* Set parameters to search for hotels with 5000 meters.\n* Hit the Google Places API for each city's coordinates.\n* Store the first Hotel result into the DataFrame.\n* Plot markers on top of the heatmap.",
"_____no_output_____"
]
],
[
[
"#Search for hotel in cities and assign to a new column in hotel_df\nhotelname = []\nhotel_df = city_weather_df.copy()\nparams = {}\nbase_url = \"https://maps.googleapis.com/maps/api/place/nearbysearch/json?\"\n\nfor index, row in hotel_df.iterrows():\n # get city name, lat, lng from df\n lat = row[\"lat\"]\n lng = row[\"long\"]\n city_name = row[\"city\"]\n \n # add keyword to params dict\n params[\"location\"] = f\"{lat},{lng}\"\n params[\"radius\"] = \"5000\"\n params[\"type\"] = \"hotel\"\n params['keyword'] = 'hotel'\n params[\"key\"] = gkey \n \n url_params = urlencode(params)\n # assemble url and make API request\n #print(f\"Retrieving Results for Index {index}: {city_name}.\")\n query_string = base_url+url_params\n #pprint(query_string)\n \n # save the hotel name to dataframe\n try:\n response = requests.get(query_string).json() \n \n # extract results\n results = response['results'] \n\n #print(f\"Closest hotel in {city_name} is {results[0]['name']}.\")\n hotel_df.loc[index, \"Hotel Name\"] = results[0]['name']\n time.sleep(.2)\n \n # if there is no hotel available, show missing field\n except (KeyError, IndexError):\n print(f\"{index} - There isn't any hotel in a 5000m radius.\")\n #print(\"------------\")\n\n # Print end of search once searching is completed\n#print(\"-------End of Search-------\")\nhotel_df",
"_____no_output_____"
],
[
"hotel_df = hotel_df.dropna()\n# Make adjustmentss to hotel_df, calculate;\n# 1. A max temperature lower than 80 degrees but higher than 70.\n# 2. Wind speed less than 10 mph.\n# 3. Zero cloudiness.\nhotel_df = hotel_df.loc[(hotel_df.maxtemp < 290) & (hotel_df.maxtemp > 270)]\nhotel_df = hotel_df.loc[hotel_df.windspeed < 10]\nhotel_df = hotel_df.loc[hotel_df.cloudiness == 0]\nhotel_df",
"_____no_output_____"
],
[
"#NOTE: Do not change any of the code in this cell\n\n# Using the template add the hotel marks to the heatmap\ninfo_box_template = \"\"\"\n<dl>\n<dt>Name</dt><dd>{Hotel Name}</dd>\n<dt>City</dt><dd>{city}</dd>\n<dt>Country</dt><dd>{country}</dd>\n</dl>\n\"\"\"\n# Store the DataFrame Row\n# NOTE: be sure to update with your DataFrame name\nhotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]\nlocations = hotel_df[[\"lat\", \"long\"]]",
"_____no_output_____"
],
[
"# Create a map using state centroid coordinates to set markers\nmarker_locations = locations\n\n# Create a marker_layer \n#fig = gmaps.figure()\nmarkers = gmaps.marker_layer(marker_locations, info_box_content=hotel_info) \n\nfig.add_layer(markers)\nfig",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d04e9a701c440de0acda34a81fea18f61f85a2f8 | 440,308 | ipynb | Jupyter Notebook | notebooks/preprocessing-data-covid-map.ipynb | aneridand/msds593 | db87619f32592527ea1ec055ef35f4749955e0a5 | [
"MIT"
] | null | null | null | notebooks/preprocessing-data-covid-map.ipynb | aneridand/msds593 | db87619f32592527ea1ec055ef35f4749955e0a5 | [
"MIT"
] | null | null | null | notebooks/preprocessing-data-covid-map.ipynb | aneridand/msds593 | db87619f32592527ea1ec055ef35f4749955e0a5 | [
"MIT"
] | 1 | 2021-01-12T19:29:43.000Z | 2021-01-12T19:29:43.000Z | 121.030236 | 190,336 | 0.816908 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.basemap import Basemap as Basemap\nfrom matplotlib.patches import Polygon\nfrom matplotlib.colorbar import ColorbarBase\n\n%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
]
],
[
[
"To install basemap\n\n`conda install -c conda-forge proj4`\n\n`conda install -c anaconda basemap`",
"_____no_output_____"
],
[
"In this notebook we will preprocess data to be able to compute death rates by state due to covid. You will need this data for plotting a map in hw3. ",
"_____no_output_____"
],
[
"## Dataframes ",
"_____no_output_____"
],
[
"A DataFrame object is a two-dimensional matrix with rows and columns. Each column can have different data types, but all values within a column must be of the same data type. The columns behave like [series objects](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html).\n\nData frames columns are ordered and the name-to-column mapping is stored in an index. Data frames also have an index for the rows, just like a series has an index into the values of the series. So, a data frame has two indexes which lets us zero in, for example, on a specific element using row and column index values.",
"_____no_output_____"
],
[
"Let's use `pd.read_csv` to read a csv file with all covid cases per state. Taken from the [nytimes github]( https://github.com/nytimes/covid-19-data). `.head()` gives the top 5 rows of the dataframe.",
"_____no_output_____"
]
],
[
[
"covid = pd.read_csv(\"data/us-states.csv\")\ncovid.head()",
"_____no_output_____"
]
],
[
[
"This dataframe has population estimates and a lot of other info. See `data/nst-est2019-alldata.pdf` for a descrition of all columns.",
"_____no_output_____"
]
],
[
[
"population = pd.read_csv(\"data/nst-est2019-alldata.csv\")\npopulation.head()",
"_____no_output_____"
],
[
"## let's look at the columns. I am looking for the population of 2019 per state.\n#list(population.columns)",
"_____no_output_____"
]
],
[
[
"Always look at shapes of objects before and after you manipulate them. You will get `(number of row, number of columns).` How many states in United States of America?",
"_____no_output_____"
]
],
[
[
"covid.shape, population.shape",
"_____no_output_____"
],
[
"covid.describe()\n# note that the counts are different because there are missing values in some columns",
"_____no_output_____"
],
[
"# covid[\"confirmed_cases\"]\ncovid[\"confirmed_cases\"].isnull()",
"_____no_output_____"
],
[
"# count how many rows are null?\n(covid[\"confirmed_cases\"].isnull() == True).sum()",
"_____no_output_____"
],
[
"# similarly \n(covid[\"confirmed_cases\"].isnull()).sum()",
"_____no_output_____"
],
[
"# is.na() also works\n(covid[\"confirmed_cases\"].isna()).sum()",
"_____no_output_____"
],
[
"# take first 10 elements of the column \"confirmed_cases\"\nc = covid[\"confirmed_cases\"][:10]\nc",
"_____no_output_____"
],
[
"# be careful on how different functions behave with respect to NAs\nlen(c), c.count(), c.sum(), c.sum(skipna=False), np.sum(c), sum(c)",
"_____no_output_____"
],
[
"# if you want to fill the NAs you can do\ncovid = covid.fillna(-1)\ncovid.head()",
"_____no_output_____"
]
],
[
[
"### Exercise 1 \nHow to fill NAs with different values for different columns? ",
"_____no_output_____"
],
[
"## Subsetting and merging dataframes\nWe need info about deaths from the covid dataframe and info about population from other dataframe. Let's keep just that. Also we need a way to combine (merge) the two dataframes. The column `fips` is a unique identifier for the state so I will keep that. Also the state name can be useful.",
"_____no_output_____"
]
],
[
[
"covid.head()",
"_____no_output_____"
],
[
"# selecting columns\ncovid = covid[[\"state\", \"fips\", \"deaths\"]]\ncovid.head()",
"_____no_output_____"
],
[
"population.head()",
"_____no_output_____"
],
[
"# from the pdf we have the following info\n# STATE = State FIPS code\n# NAME = State name\n# POPESTIMATE2019 = 7/1/2019 resident total population estimate\n\npopulation = population[[\"STATE\", \"NAME\", \"POPESTIMATE2019\"]]\n# show first 10 rows\npopulation.iloc[:10]",
"_____no_output_____"
],
[
"# we are not interested in top values of the population table (aggregates)\npopulation = population.iloc[5:] # all rows after 5\npopulation.head()",
"_____no_output_____"
],
[
"covid.shape, population.shape",
"_____no_output_____"
]
],
[
[
"There are various ways to merge two dataframes. At the moment we want to preserve all the data.\n`outer`: use union of keys from both frames",
"_____no_output_____"
]
],
[
[
"# Can we merge on state name?\nrates = covid.merge(population, how=\"outer\", left_on='fips', right_on='STATE')",
"_____no_output_____"
],
[
"rates.iloc[:15]",
"_____no_output_____"
],
[
"# let's look at rows with NAs\nna_index = rates[\"POPESTIMATE2019\"].isnull()\nrates[na_index]",
"_____no_output_____"
],
[
"## Let's drop them\nrates = rates.dropna()\nrates.shape",
"_____no_output_____"
],
[
"# cleaning up some more\nrates = rates[[\"state\", \"fips\", \"deaths\", \"POPESTIMATE2019\"]]",
"_____no_output_____"
],
[
"rates[\"rates\"] = 1000*rates[\"deaths\"]/rates[\"POPESTIMATE2019\"] # set a new column\nrates",
"_____no_output_____"
],
[
"# sorting by rates\nrates = rates.sort_values(by=[\"rates\"])\n#rates",
"_____no_output_____"
],
[
"## mean value of the rate column\nrates[\"rates\"].mean(), rates[\"rates\"].median()",
"_____no_output_____"
],
[
"rates[\"rates\"].quantile(q=[0.1, 0.25, 0.5, 0.75, 0.9])",
"_____no_output_____"
],
[
"# if you want 7 groups of color you need 8 quantiles \nq = np.linspace(0, 1, 8, endpoint=True) # equidistant numbers between 0 and 1\nq",
"_____no_output_____"
],
[
"# compute quantile of covid rates\nrates[\"rates\"].quantile(q=q)",
"_____no_output_____"
],
[
"qq = rates[\"rates\"].quantile(q=q)\ntype(qq) # what is the type?",
"_____no_output_____"
],
[
"type(qq.values) # I prefer working with numpy arrays",
"_____no_output_____"
],
[
"boundaries = rates[\"rates\"].quantile(q=q).values\nboundaries",
"_____no_output_____"
],
[
"## let's define a new ordinal variable based on the quantiles of the rates\nrates[\"color\"] = pd.qcut(rates[\"rates\"], 7)\nrates[\"color\"]",
"_____no_output_____"
],
[
"rates[\"color\"].unique()",
"_____no_output_____"
],
[
"## let's directly put colors here for our plot\n\ncolors = [\"#ffffd4\", \"#fee391\", \"#fec44f\", \"#fe9929\", \"#ec7014\", \"#cc4c02\", \"#8c2d04\"] # from colorbrewer2.org\nrates[\"color\"] = pd.qcut(rates[\"rates\"], 7, labels=colors)\nrates[\"color\"].values",
"_____no_output_____"
]
],
[
[
"## Dictionary of color per state",
"_____no_output_____"
]
],
[
[
"# iterate through rows\nfor i, row in rates.iterrows():\n print(row[\"state\"], row[\"color\"])",
"Alaska #ffffd4\nHawaii #ffffd4\nWyoming #ffffd4\nVermont #ffffd4\nMaine #ffffd4\nOregon #ffffd4\nMontana #ffffd4\nUtah #ffffd4\nWest Virginia #fee391\nPuerto Rico #fee391\nKansas #fee391\nSouth Dakota #fee391\nWisconsin #fee391\nNorth Dakota #fee391\nOklahoma #fee391\nNebraska #fec44f\nIdaho #fec44f\nKentucky #fec44f\nWashington #fec44f\nMissouri #fec44f\nNorth Carolina #fec44f\nTennessee #fec44f\nVirginia #fe9929\nNew Hampshire #fe9929\nArkansas #fe9929\nColorado #fe9929\nMinnesota #fe9929\nCalifornia #fe9929\nOhio #fe9929\nIowa #fe9929\nNew Mexico #ec7014\nNevada #ec7014\nAlabama #ec7014\nTexas #ec7014\nIndiana #ec7014\nGeorgia #ec7014\nFlorida #ec7014\nSouth Carolina #cc4c02\nPennsylvania #cc4c02\nDelaware #cc4c02\nMaryland #cc4c02\nIllinois #cc4c02\nMichigan #cc4c02\nArizona #cc4c02\nDistrict of Columbia #8c2d04\nMississippi #8c2d04\nRhode Island #8c2d04\nLouisiana #8c2d04\nConnecticut #8c2d04\nMassachusetts #8c2d04\nNew York #8c2d04\nNew Jersey #8c2d04\n"
],
[
"# make a dictionary of color per state\nstate2color = {}\nfor i, row in rates.iterrows():\n state2color[row[\"state\"]] = row[\"color\"]",
"_____no_output_____"
],
[
"# here is a shortcut of the same\n# dictionary comprehension\nstate2color = {row[\"state\"]: row[\"color\"] for i, row in rates.iterrows()}",
"_____no_output_____"
]
],
[
[
"## Making a map in matplotlib",
"_____no_output_____"
],
[
"Based on these examples\n\nhttps://github.com/matplotlib/basemap/blob/master/examples/fillstates.py\n\nhttps://stackoverflow.com/questions/39742305/how-to-use-basemap-python-to-plot-us-with-50-states",
"_____no_output_____"
]
],
[
[
"# Lambert Conformal map of lower 48 states.\nm = Basemap(llcrnrlon=-119,llcrnrlat=22,urcrnrlon=-64,urcrnrlat=49,\n projection='lcc',lat_1=33,lat_2=45,lon_0=-95)\n\n# load the shapefile, use the name 'states'\nshape = m.readshapefile('st99_d00', name='states', drawbounds=True)\nax = plt.gca() # get current axes instance\n \n# list of states in the data\nstates = [shapedict['NAME'] for shapedict in m.states_info]\n \nfor i, seg in enumerate(m.states):\n state = states[i]\n color = state2color[state]\n poly = Polygon(seg, facecolor=color, edgecolor=color)\n ax.add_patch(poly)\n",
"_____no_output_____"
],
[
"states = [shapedict['NAME'] for shapedict in m.states_info] # list comprenhension\n#states",
"_____no_output_____"
]
],
[
[
"## How to make a column bar",
"_____no_output_____"
]
],
[
[
"colors = [\"#ffffd4\", \"#fee391\", \"#fec44f\", \"#fe9929\", \"#ec7014\", \"#cc4c02\", \"#8c2d04\"]\nbounds = [1,2,3,4,5,6,7,8]\nboundaries = [0.055, 0.139, 0.23, 0.316, 0.387, 0.588, 0.832, 1.804]",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(1, 8))\nfig.subplots_adjust(bottom=0.5)\n\ncmap = mpl.colors.ListedColormap(colors)\n\ncb2 = ColorbarBase(ax, cmap=cmap,\n boundaries=bounds,\n ticks=bounds,\n label=boundaries,\n orientation='vertical')\ncb2.set_label('Covid rates')\ncb2.set_ticklabels(boundaries)",
"_____no_output_____"
]
],
[
[
"## Put it together",
"_____no_output_____"
]
],
[
[
"# rounding\nboundaries = [0.00, 0.14, 0.23, 0.32, 0.39, 0.59, 0.83, 1.80]",
"_____no_output_____"
],
[
"# Lambert Conformal map of lower 48 states.\nfig, ax = plt.subplots(figsize=(12,6))\nm = Basemap(llcrnrlon=-119,llcrnrlat=22,urcrnrlon=-64,urcrnrlat=49,\n projection='lcc',lat_1=33,lat_2=45,lon_0=-95)\n\n# load the shapefile, use the name 'states'\nshape = m.readshapefile('st99_d00', name='states', drawbounds=True,\n linewidth=0.2,color='#808080')\n \n# list of states in the data\nstates = [shapedict['NAME'] for shapedict in m.states_info]\n \nfor i, seg in enumerate(m.states):\n state = states[i]\n color = state2color[state]\n poly = Polygon(seg, facecolor=color, edgecolor=color)\n ax.add_patch(poly)\n \nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\nax.spines[\"bottom\"].set_visible(False)\nax.spines[\"left\"].set_visible(False)\nplt.annotate(\"Covid death rates per thousands\", xy=(0, 1.05), xycoords='axes fraction', fontsize=20, color='#303030')\n\n\n# [left, bottom, width, height] \nax_c = fig.add_axes([0.25, 0.05, 0.5, 0.03])\n\ncmap = mpl.colors.ListedColormap(colors)\ncb2 = ColorbarBase(ax_c, cmap=cmap,\n boundaries=bounds,\n ticks=bounds,\n label=boundaries,\n orientation='horizontal')\ncb2.set_label(\"\")\ncb2.set_ticklabels(boundaries)",
"_____no_output_____"
]
],
[
[
"## More on dataframe manipulation",
"_____no_output_____"
],
[
"`.iloc` for slicing a dataframe",
"_____no_output_____"
]
],
[
[
"rates.head()",
"_____no_output_____"
],
[
"rates = rates.reset_index(drop=True)\nrates.head()",
"_____no_output_____"
],
[
"## keep the first 7 rows\nrates_top7 = rates.iloc[:7]\nrates_top7",
"_____no_output_____"
],
[
"## keep columns 2 and 3\nrates_top7_cols23 = rates_top7.iloc[:, 2:4]\nrates_top7_cols23",
"_____no_output_____"
],
[
"# we can do it at the same time\nrates.iloc[:7, 2:4]",
"_____no_output_____"
]
],
[
[
"**Exercise 2**: Make a map of `rate of covid cases` per state. Can you use a diverging palette to understand states that are abobe or below average? Which plot makes more sense for this problem, one with a diverging palette or a sequential one?",
"_____no_output_____"
],
[
"**Exercise 3**: (hard) Can you annotate this plot showing the top states with death rates.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d04e9e5e33d9ac50cf1a6019ff2b882429b8c33f | 112,441 | ipynb | Jupyter Notebook | 24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb | mohd-faizy/DataScience-With-Python | 13ebb10cf9083343056d5b782957241de1d595f9 | [
"MIT"
] | 5 | 2021-02-03T14:36:58.000Z | 2022-01-01T10:29:26.000Z | 24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb | mohd-faizy/DataScience-With-Python | 13ebb10cf9083343056d5b782957241de1d595f9 | [
"MIT"
] | null | null | null | 24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb | mohd-faizy/DataScience-With-Python | 13ebb10cf9083343056d5b782957241de1d595f9 | [
"MIT"
] | 3 | 2021-02-08T00:31:16.000Z | 2022-03-17T13:52:32.000Z | 112,441 | 112,441 | 0.918482 | [
[
[
"## 1. Meet Dr. Ignaz Semmelweis\n<p><img style=\"float: left;margin:5px 20px 5px 1px\" src=\"https://assets.datacamp.com/production/project_20/img/ignaz_semmelweis_1860.jpeg\"></p>\n<!--\n<img style=\"float: left;margin:5px 20px 5px 1px\" src=\"https://assets.datacamp.com/production/project_20/datasets/ignaz_semmelweis_1860.jpeg\">\n-->\n<p>This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about <em>childbed fever</em>: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and <em>wash their hands</em>!</p>\n<p>In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of <em>handwashing</em>. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.</p>",
"_____no_output_____"
]
],
[
[
"# Importing modules\nimport pandas as pd\n\n# Read datasets/yearly_deaths_by_clinic.csv into yearly\nyearly = pd.read_csv(\"datasets/yearly_deaths_by_clinic.csv\")\n\n# Print out yearly\nyearly",
"_____no_output_____"
]
],
[
[
"## 2. The alarming number of deaths\n<p>The table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an <em>alarming</em> number of women died as the result of childbirth, most of them from childbed fever.</p>\n<p>We see this more clearly if we look at the <em>proportion of deaths</em> out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.</p>",
"_____no_output_____"
]
],
[
[
"# Calculate proportion of deaths per no. births\nyearly[\"proportion_deaths\"] = yearly[\"deaths\"] / yearly[\"births\"]\n\n# Extract Clinic 1 data into clinic_1 and Clinic 2 data into clinic_2\nclinic_1 = yearly[yearly[\"clinic\"] == \"clinic 1\"]\nclinic_2 = yearly[yearly[\"clinic\"] == \"clinic 2\"]\n\n# Print out clinic_1\nclinic_1",
"_____no_output_____"
]
],
[
[
"## 3. Death at the clinics\n<p>If we now plot the proportion of deaths at both Clinic 1 and Clinic 2 we'll see a curious pattern…</p>",
"_____no_output_____"
]
],
[
[
"# This makes plots appear in the notebook\n%matplotlib inline\n\n# Plot yearly proportion of deaths at the two clinics\nax = clinic_1.plot(x=\"year\", y=\"proportion_deaths\", label=\"Clinic 1\")\nclinic_2.plot(x=\"year\", y=\"proportion_deaths\", label=\"Clinic 2\", ax=ax, ylabel=\"Proportion deaths\")",
"_____no_output_____"
]
],
[
[
"## 4. The handwashing begins\n<p>Why is the proportion of deaths consistently so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. </p>\n<p>Semmelweis started to suspect that something on the corpses spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: <em>Wash your hands!</em> This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. </p>\n<p>Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.</p>",
"_____no_output_____"
]
],
[
[
"# Read datasets/monthly_deaths.csv into monthly\nmonthly = pd.read_csv(\"datasets/monthly_deaths.csv\", parse_dates=[\"date\"])\n\n# Calculate proportion of deaths per no. births\nmonthly[\"proportion_deaths\"] = monthly[\"deaths\"] / monthly[\"births\"]\n\n# Print out the first rows in monthly\nmonthly.head()",
"_____no_output_____"
]
],
[
[
"## 5. The effect of handwashing\n<p>With the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!</p>",
"_____no_output_____"
]
],
[
[
"# Plot monthly proportion of deaths\nax = monthly.plot(x=\"date\", y=\"proportion_deaths\", ylabel=\"Proportion deaths\")",
"_____no_output_____"
]
],
[
[
"## 6. The effect of handwashing highlighted\n<p>Starting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. </p>\n<p>The effect of handwashing is made even more clear if we highlight this in the graph.</p>",
"_____no_output_____"
]
],
[
[
"# Date when handwashing was made mandatory\nhandwashing_start = pd.to_datetime('1847-06-01')\n\n# Split monthly into before and after handwashing_start\nbefore_washing = monthly[monthly[\"date\"] < handwashing_start]\nafter_washing = monthly[monthly[\"date\"] >= handwashing_start]\n\n# Plot monthly proportion of deaths before and after handwashing\nax = before_washing.plot(x=\"date\", y=\"proportion_deaths\",\n label=\"Before handwashing\")\nafter_washing.plot(x=\"date\", y=\"proportion_deaths\",\n label=\"After handwashing\", ax=ax, ylabel=\"Proportion deaths\")",
"_____no_output_____"
]
],
[
[
"## 7. More handwashing, fewer deaths?\n<p>Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?</p>",
"_____no_output_____"
]
],
[
[
"# Difference in mean monthly proportion of deaths due to handwashing\nbefore_proportion = before_washing[\"proportion_deaths\"]\nafter_proportion = after_washing[\"proportion_deaths\"]\nmean_diff = after_proportion.mean() - before_proportion.mean()\nmean_diff",
"_____no_output_____"
]
],
[
[
"## 8. A Bootstrap analysis of Semmelweis handwashing data\n<p>It reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). </p>\n<p>To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).</p>",
"_____no_output_____"
]
],
[
[
"# A bootstrap analysis of the reduction of deaths due to handwashing\nboot_mean_diff = []\nfor i in range(3000):\n boot_before = before_proportion.sample(frac=1, replace=True)\n boot_after = after_proportion.sample(frac=1, replace=True)\n boot_mean_diff.append( boot_after.mean() - boot_before.mean() )\n\n# Calculating a 95% confidence interval from boot_mean_diff \nconfidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])\nconfidence_interval",
"_____no_output_____"
]
],
[
[
"## 9. The fate of Dr. Semmelweis\n<p>So handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.</p>\n<p>The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some \"substance\" (what we today know as <em>bacteria</em>) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.</p>\n<p>One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.</p>",
"_____no_output_____"
]
],
[
[
"# The data Semmelweis collected points to that:\ndoctors_should_wash_their_hands = True",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d04ea507fc0d6bf87806640466000cea64017f5f | 41,970 | ipynb | Jupyter Notebook | carnot_efficiency.ipynb | MarkusLohmayer/master-thesis-code | b107d1b582064daf9ad4414e1c9f332ef0be8660 | [
"MIT"
] | 1 | 2020-11-14T15:56:07.000Z | 2020-11-14T15:56:07.000Z | carnot_efficiency.ipynb | MarkusLohmayer/master-thesis-code | b107d1b582064daf9ad4414e1c9f332ef0be8660 | [
"MIT"
] | null | null | null | carnot_efficiency.ipynb | MarkusLohmayer/master-thesis-code | b107d1b582064daf9ad4414e1c9f332ef0be8660 | [
"MIT"
] | null | null | null | 499.642857 | 40,412 | 0.953467 | [
[
[
"# Carnot efficiency as a function of temperature\n\nassuming a fixed reference temperature",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# standard temperature 0°C as reference\nθ_0 = 273.15 # Kelvin\n\n# temperature range: 0°C to 200°C \nθ = np.linspace(θ_0, θ_0+200, num=500)\n\n# Carnot efficiency\nη = (θ - θ_0) / θ",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(dpi=200)\nax.plot(θ, η);",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d04ea9e906a60b0a5aae399298777c97b9e56f9b | 88,236 | ipynb | Jupyter Notebook | data_2/Notebooks/JaccardDistanceAnalysis.ipynb | budh333/UnSilence_VOC | 3ba8f302f82df2d512d453c6b76dffb50d4f64db | [
"MIT"
] | 1 | 2021-07-29T09:27:06.000Z | 2021-07-29T09:27:06.000Z | data_2/Notebooks/JaccardDistanceAnalysis.ipynb | budh333/UnSilence_VOC | 3ba8f302f82df2d512d453c6b76dffb50d4f64db | [
"MIT"
] | 5 | 2021-08-12T13:38:54.000Z | 2021-08-30T08:55:34.000Z | data_2/Notebooks/JaccardDistanceAnalysis.ipynb | budh333/UnSilence_VOC | 3ba8f302f82df2d512d453c6b76dffb50d4f64db | [
"MIT"
] | null | null | null | 169.684615 | 23,128 | 0.886622 | [
[
[
"import os\nfrom glob import glob\n\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
]
],
[
[
"## Cleaning Up (& Stats About It)\n\n\n - For each annotator:\n - How many annotation files?\n - How many txt files?\n - Number of empty .ann files\n - How many non-empty .ann files have a `TranscriptionError_Document`/`DuplicatePage` tag?\n - How many .ann files have ONLY one of those two tags and are empty o/w? -> remove if so\n \n => remove corresponding .txt files \n => create new corpus",
"_____no_output_____"
]
],
[
[
"def get_all_files(annotator):\n \"\"\" collapsing folder structure per annotator\"\"\"\n data_dir = \"../Data/\"\n ann_dir = data_dir+annotator+\"/\"\n for cur_dir in glob(ann_dir+\"/6*\"):\n txt_files = sorted(glob(cur_dir+\"/*.txt\"))\n ann_files = sorted(glob(cur_dir+\"/*.ann\"))\n yield from zip(txt_files, ann_files)\n \n \ndef has_error_tag(any_string):\n \"\"\"Return strings with error tags\"\"\"\n return \"TranscriptionError_Document\" in any_string or\\\n \"DuplicatePage\" in any_string\n\n\ndef remove_error_tag_lines(ann_file_content):\n return [line for line in ann_file_content.strip().split(\"\\n\") \n if not has_error_tag(line)] ",
"_____no_output_____"
],
[
"annotators = \"A B C Silja Yolien\".split()",
"_____no_output_____"
],
[
"results = {}\n\nprint(\"Total Annotation Files Per Annotator\\n\")\nfor anno in annotators:\n empty = []\n cur_keep = []\n\n error_tag = []\n error_tag_but_non_empty = []\n\n ann_files = list(get_all_files(anno))\n print(anno, len(ann_files))\n\n for txt, ann in ann_files:\n with open(ann) as handle:\n contents = handle.read()\n \n if not contents.strip():\n empty.append((txt, ann)) \n elif has_error_tag(contents):\n \n error_tags_removed = remove_error_tag_lines(\n contents\n )\n \n if error_tags_removed == []:\n error_tag.append((txt, ann))\n else:\n error_tag_but_non_empty.append((txt, ann)) \n else:\n cur_keep.append((txt, ann))\n \n \n results[anno] = [cur_keep, empty, error_tag, error_tag_but_non_empty]\n ",
"Total Annotation Files Per Annotator\n\nEmma 983\nJonas 983\nRoos 1183\nSilja 983\nYolien 983\n"
],
[
"from tabulate import tabulate\n\nstats = pd.DataFrame([\n [k, sum(map(len, v))]+\n [len(v[0])+len(v[-1])]+\n list(map(len, v)) for k, v in results.items()\n \n],\ncolumns=[\"Annotator\", \"Total\", \"Keep\",\n \"Non-empty-No error\", \"Empty\", \"Error\", \"Err.&Non-Empty\"]).set_index(\"Annotator\")\nprint(stats)",
" Total Keep Non-empty-No error Empty Error Err.&Non-Empty\nAnnotator \nEmma 983 333 237 637 13 96\nJonas 983 321 205 648 14 116\nRoos 1183 727 449 294 162 278\nSilja 983 539 263 346 98 276\nYolien 983 570 253 388 25 317\n"
],
[
"stats_T = pd.melt(stats[[\"Total\", \"Empty\", \"Keep\", \"Error\"]].reset_index(), \n id_vars=[\"Annotator\"], value_name=\"Number\")\n\nplt.figure(figsize=(10, 7))\nsns.barplot(data=stats_T, x='Annotator', y=\"Number\", hue=\"variable\")",
"_____no_output_____"
],
[
"keep = {anno: v[0]+v[-1] for anno, v in results.items()}\n\n{k: len(v) for k, v in keep.items()}",
"_____no_output_____"
],
[
"# keep",
"_____no_output_____"
]
],
[
[
"### Make New Corpus\n\nby copying files",
"_____no_output_____"
]
],
[
[
"from shutil import copy2\n\nalready_copied = True\n\nif not already_copied:\n from tqdm import tqdm \n os.makedirs('Keep')\n\n for anno, ls in tqdm(keep.items()):\n cur_dir = f\"Keep/{anno}\"\n os.makedirs(cur_dir)\n\n for txt, ann in ls:\n copy2(txt, cur_dir)\n copy2(ann, cur_dir)\nelse:\n print(\"Already copied, doing nothing!\")",
"Already copied, doing nothing!\n"
]
],
[
[
"# Pairwise Intersections of Annotation Files",
"_____no_output_____"
]
],
[
[
"def only_names(file_list):\n \"returns only names of files in a particular list\"\n return [ann.split(\"/\")[-1] for txt, ann in file_list]\n\n\nls = []\nfor a1, fs1 in keep.items():\n for a2, fs2 in keep.items():\n if not a1 == a2:\n \n names1, names2 = only_names(fs1), only_names(fs2)\n inter = set(names1) & set(names2) #names of files are identical\n val = len(inter)/len(names1)\n \n total_names1 = only_names(tup for ls in results[a1] for tup in ls)\n total_names2 = only_names(tup for ls in results[a2] for tup in ls)\n \n total_inter = set(total_names1) & set(total_names2)\n total_val = len(total_inter)/len(total_names1)\n \n jacc_val = len(set(names1).intersection(set(names2)))/len(set(names1).union(set(names2)))\n jacc_val_2 = len(set(total_names1).intersection(set(total_names2)))/len(set(total_names1).union(set(total_names2)))\n \n \n \n ls.append([a1, a2, len(inter), val, \n len(total_inter), total_val, jacc_val, jacc_val_2])\n \n \ninter_stats = pd.DataFrame(ls, \n columns=[\"Anno1\", \"Anno2\", \n \"Intersection\", \"normed_Intersection\",\n \"total_Intersection\", \"total_normed_Intersection\", \"Jaccard_distance\", \"Jaccard_Distance_2\"])",
"_____no_output_____"
],
[
"# inter_stats",
"_____no_output_____"
]
],
[
[
"#### Jaccard Distance to Understand Overlap Pages between Annotators",
"_____no_output_____"
]
],
[
[
"inter_stats_T = inter_stats.pivot_table(\n values=\"Jaccard_distance\",\n index=\"Anno1\", columns=\"Anno2\"\n)\n\nsns.heatmap(inter_stats_T*100, annot=True, cmap=\"YlGnBu\")\n_ = plt.title(\"Before Clean Up: Jaccard Distance (percentage)\")\n\nplt.show()\n\ninter_stats_T = inter_stats.pivot_table(\n values=\"Jaccard_Distance_2\",\n index=\"Anno1\", columns=\"Anno2\"\n)\n\nsns.heatmap(inter_stats_T*100, annot=True, cmap=\"YlGnBu\")\n_ = plt.title(\"After Clean Up: Jaccard Distance (percentage)\")\n\nplt.show()\n\n\n# inter_stats_T = inter_stats.pivot_table(\n# values=\"Intersection\",\n# index=\"Anno1\", columns=\"Anno2\"\n# )\n\n# sns.heatmap(inter_stats_T, \n# annot=True, cmap=\"YlGnBu\")\n\n# _ = plt.title(\"Before Clean Up: Raw Counts\")",
"_____no_output_____"
]
],
[
[
"**Conclusion**: Each pair of annotators annotated on average have 6% overlap (over the total documents they annotated together).",
"_____no_output_____"
],
[
"## Check Tag Distributions",
"_____no_output_____"
]
],
[
[
"def get_lines(ann_file):\n with open(ann_file) as handle:\n for l in handle:\n if not l.strip(): continue\n yield l.strip().split(\"\\t\")\n\ndef get_entities(ann_file):\n for line in get_lines(ann_file):\n if line[0].startswith(\"T\") and len(line) >= 2:\n tag_type, tag, string = line\n yield tag.split()[0]\n\n\n \nents = {a: [e for txt, ann in files for e in get_entities(ann)]\n for a, files in keep.items()}",
"_____no_output_____"
],
[
"from collections import Counter\n\nentity_stats = pd.DataFrame(\n [[a, e, c] for a in ents for e, c in Counter(ents[a]).items() if not e in [\"DuplicatePage\", \"Noteworthy\", \"TranscriptionError_Document\"]],\n columns=[\"Annotator\", \"EntityType\", \"Count\"]\n)\n\n\n\nplt.figure(figsize=(10, 7))\n_ = sns.barplot(data=entity_stats, x='Annotator', y=\"Count\", hue=\"EntityType\")",
"_____no_output_____"
]
],
[
[
"**Conclusion**: \nHere we see that most annotators follow a similar trend in entities annotated, only annotator who stands out is '3'.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d04eb0be5760cdebabe2aa62e2d3d99c0f41b690 | 5,028 | ipynb | Jupyter Notebook | 5 Extending.ipynb | DavidMStraub/flavio-tutorial | bc266993fb29bda0b5efaf9c69fbf6ac6f6fb85c | [
"MIT"
] | 4 | 2018-12-25T20:50:48.000Z | 2021-02-05T07:57:28.000Z | 5 Extending.ipynb | DavidMStraub/flavio-tutorial | bc266993fb29bda0b5efaf9c69fbf6ac6f6fb85c | [
"MIT"
] | 2 | 2018-01-22T22:57:11.000Z | 2019-04-05T12:38:31.000Z | 5 Extending.ipynb | DavidMStraub/flavio-tutorial | bc266993fb29bda0b5efaf9c69fbf6ac6f6fb85c | [
"MIT"
] | 4 | 2018-12-25T20:50:50.000Z | 2020-10-26T18:45:35.000Z | 22.854545 | 146 | 0.544352 | [
[
[
"# flavio tutorial\n\n## Part 5: Extending flavio",
"_____no_output_____"
],
[
"### Adding an observable: photon polarization in $B\\to K\\pi\\pi\\gamma$\n\n$$\\lambda_\\gamma = \\frac{|G_L|^2 - |G_R|^2}{|G_L|^2 + |G_R|^2}$$\n\n$$G_L = C_7^\\text{eff} + \\ldots, \\qquad G_L = C_7' + \\ldots $$\n\n$\\ldots$ refer to non-factorizable hadronic contributions - let's ignore them for simplicity",
"_____no_output_____"
],
[
"Writing a function that takes a `WilsonCoefficients` instance and a dictionary of parameter values as input",
"_____no_output_____"
]
],
[
[
"import flavio\n\ndef ll_lgamma(wc_obj, par_dict):\n scale = flavio.config['renormalization scale']['bvgamma']\n wc_dict = flavio.physics.bdecays.wilsoncoefficients.wctot_dict(wc_obj, 'bsee', scale, par_dict)\n delta_C7 = flavio.physics.bdecays.matrixelements.delta_C7(\n par=par_dict, wc=wc_dict, q2=0, scale=scale, qiqj='bs')\n a = {}\n GL = abs(wc_dict['C7eff_bs'] + delta_C7)**2\n GR = abs(wc_dict['C7effp_bs'])**2\n return (GL**2 - GR**2) / (GL**2 + GR**2)",
"_____no_output_____"
]
],
[
[
"Defining the `Observable` and `Prediction` instances",
"_____no_output_____"
]
],
[
[
"obs = 'lambda_gamma'\nflavio.classes.Observable(obs)\nflavio.classes.Prediction(obs, ll_lgamma);",
"_____no_output_____"
],
[
"flavio.sm_prediction('lambda_gamma')",
"_____no_output_____"
],
[
"wc = flavio.WilsonCoefficients()\nwc.set_initial({'C7p_bs': 0.25}, 4.8)\nflavio.np_prediction('lambda_gamma', wc)",
"_____no_output_____"
]
],
[
[
"## Adding a new parameter",
"_____no_output_____"
]
],
[
[
"flavio.classes.Parameter('my_fudge_factor')\nflavio.default_parameters.set_constraint('my_fudge_factor', '0 +- 0.2')",
"_____no_output_____"
],
[
"flavio.default_parameters.get_central('my_fudge_factor')",
"_____no_output_____"
],
[
"[flavio.default_parameters.get_random_all()['my_fudge_factor'] for i in range(5)]",
"_____no_output_____"
]
],
[
[
"## Adding observables that depend on new operators\n\nIn principle, any observable where NP enters via local operators can be added to flavio:\n\n- $D$ mixing & decays\n- Non-leptonic $B$ decays\n- Charged lepton flavour violation\n- $(g-2)_\\ell$\n- Electric dipole moments\n- Electroweak precision tests (via SMEFT)",
"_____no_output_____"
],
[
"### Extending the operator basis\n\nTo extend the operator basis, the additional operators have to be define in the WCxf basis; See [wcxf.github.io](https://wcxf.github.io).\n\nOne of the next release will also allow to define *observables* themselves in terms of other WCxf EFTs or bases (e.g. SMEFT).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d04eba5c99b3b9e5f958ae5c069860531d1cd673 | 87,986 | ipynb | Jupyter Notebook | module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb | JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data | ee4e1ab6b564db421e8481a3953e12b1819cb00f | [
"MIT"
] | null | null | null | module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb | JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data | ee4e1ab6b564db421e8481a3953e12b1819cb00f | [
"MIT"
] | null | null | null | module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb | JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data | ee4e1ab6b564db421e8481a3953e12b1819cb00f | [
"MIT"
] | null | null | null | 39.830693 | 1,190 | 0.472689 | [
[
[
"# Lambda School Data Science - Loading, Cleaning and Visualizing Data\n\nObjectives for today:\n- Load data from multiple sources into a Python notebook \n - From a URL (github or otherwise)\n - CSV upload method\n - !wget method\n- \"Clean\" a dataset using common Python libraries\n - Removing NaN values \"Data Imputation\"\n- Create basic plots appropriate for different data types\n - Scatter Plot\n - Histogram\n - Density Plot\n - Pairplot (if we have time)",
"_____no_output_____"
],
[
"# Part 1 - Loading Data\n\nData comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.\n\nData set sources:\n\n- https://archive.ics.uci.edu/ml/datasets.html\n- https://github.com/awesomedata/awesome-public-datasets\n- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)\n\nLet's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags).",
"_____no_output_____"
],
[
"## Lecture example - flag data",
"_____no_output_____"
]
],
[
[
"# Step 1 - find the actual file to download\n\n# From navigating the page, clicking \"Data Folder\"\nflag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'\n\n# You can \"shell out\" in a notebook for more powerful tools\n# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html\n\n# Funny extension, but on inspection looks like a csv\n!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data\n\n# Extensions are just a norm! You have to inspect to be sure what something is",
"Afghanistan,5,1,648,16,10,2,0,3,5,1,1,0,1,1,1,0,green,0,0,0,0,1,0,0,1,0,0,black,green\nAlbania,3,1,29,3,6,6,0,0,3,1,0,0,1,0,1,0,red,0,0,0,0,1,0,0,0,1,0,red,red\nAlgeria,4,1,2388,20,8,2,2,0,3,1,1,0,0,1,0,0,green,0,0,0,0,1,1,0,0,0,0,green,white\nAmerican-Samoa,6,3,0,0,1,1,0,0,5,1,0,1,1,1,0,1,blue,0,0,0,0,0,0,1,1,1,0,blue,red\nAndorra,3,1,0,0,6,0,3,0,3,1,0,1,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,blue,red\nAngola,4,2,1247,7,10,5,0,2,3,1,0,0,1,0,1,0,red,0,0,0,0,1,0,0,1,0,0,red,black\nAnguilla,1,4,0,0,1,1,0,1,3,0,0,1,0,1,0,1,white,0,0,0,0,0,0,0,0,1,0,white,blue\nAntigua-Barbuda,1,4,0,0,1,1,0,1,5,1,0,1,1,1,1,0,red,0,0,0,0,1,0,1,0,0,0,black,red\nArgentina,2,3,2777,28,2,0,0,3,2,0,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue\nArgentine,2,3,2777,28,2,0,0,3,3,0,0,1,1,1,0,0,blue,0,0,0,0,1,0,0,0,0,0,blue,blue\nAustralia,6,2,7690,15,1,1,0,0,3,1,0,1,0,1,0,0,blue,0,1,1,1,6,0,0,0,0,0,white,blue\nAustria,3,1,84,8,4,0,0,3,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,red\nBahamas,1,4,19,0,1,1,0,3,3,0,0,1,1,0,1,0,blue,0,0,0,0,0,0,1,0,0,0,blue,blue\nBahrain,5,1,1,0,8,2,0,0,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,white,red\nBangladesh,5,1,143,90,6,2,0,0,2,1,1,0,0,0,0,0,green,1,0,0,0,0,0,0,0,0,0,green,green\nBarbados,1,4,0,0,1,1,3,0,3,0,0,1,1,0,1,0,blue,0,0,0,0,0,0,0,1,0,0,blue,blue\nBelgium,3,1,31,10,6,0,3,0,3,1,0,0,1,0,1,0,gold,0,0,0,0,0,0,0,0,0,0,black,red\nBelize,1,4,23,0,1,1,0,2,8,1,1,1,1,1,1,1,blue,1,0,0,0,0,0,0,1,1,1,red,red\nBenin,4,1,113,3,3,5,0,0,2,1,1,0,0,0,0,0,green,0,0,0,0,1,0,0,0,0,0,green,green\nBermuda,1,4,0,0,1,1,0,0,6,1,1,1,1,1,1,0,red,1,1,1,1,0,0,0,1,1,0,white,red\nBhutan,5,1,47,1,10,3,0,0,4,1,0,0,0,1,1,1,orange,4,0,0,0,0,0,0,0,1,0,orange,red\nBolivia,2,3,1099,6,2,0,0,3,3,1,1,0,1,0,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green\nBotswana,4,2,600,1,10,5,0,5,3,0,0,1,0,1,1,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue\nBrazil,2,3,8512,119,6,0,0,0,4,0,1,1,1,1,0,0,green,1,0,0,0,22,0,0,0,0,1,green,green\nBritish-Virgin-Isles,1,4,0,0,1,1,0,0,6,1,1,1,1,1,0,1,blue,0,1,1,1,0,0,0,1,1,1,white,blue\nBrunei,5,1,6,0,10,2,0,0,4,1,0,0,1,1,1,0,gold,0,0,0,0,0,0,1,1,1,1,white,gold\nBulgaria,3,1,111,9,5,6,0,3,5,1,1,1,1,1,0,0,red,0,0,0,0,1,0,0,1,1,0,white,red\nBurkina,4,4,274,7,3,5,0,2,3,1,1,0,1,0,0,0,red,0,0,0,0,1,0,0,0,0,0,red,green\nBurma,5,1,678,35,10,3,0,0,3,1,0,1,0,1,0,0,red,0,0,0,1,14,0,0,1,1,0,blue,red\nBurundi,4,2,28,4,10,5,0,0,3,1,1,0,0,1,0,0,red,1,0,1,0,3,0,0,0,0,0,white,white\nCameroon,4,1,474,8,3,1,3,0,3,1,1,0,1,0,0,0,gold,0,0,0,0,1,0,0,0,0,0,green,gold\nCanada,1,4,9976,24,1,1,2,0,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,1,0,red,red\nCape-Verde-Islands,4,4,4,0,6,0,1,2,5,1,1,0,1,0,1,1,gold,0,0,0,0,1,0,0,0,1,0,red,green\nCayman-Islands,1,4,0,0,1,1,0,0,6,1,1,1,1,1,0,1,blue,1,1,1,1,4,0,0,1,1,1,white,blue\nCentral-African-Republic,4,1,623,2,10,5,1,0,5,1,1,1,1,1,0,0,gold,0,0,0,0,1,0,0,0,0,0,blue,gold\nChad,4,1,1284,4,3,5,3,0,3,1,0,1,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,blue,red\nChile,2,3,757,11,2,0,0,2,3,1,0,1,0,1,0,0,red,0,0,0,1,1,0,0,0,0,0,blue,red\nChina,5,1,9561,1008,7,6,0,0,2,1,0,0,1,0,0,0,red,0,0,0,0,5,0,0,0,0,0,red,red\nColombia,2,4,1139,28,2,0,0,3,3,1,0,1,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,gold,red\nComorro-Islands,4,2,2,0,3,2,0,0,2,0,1,0,0,1,0,0,green,0,0,0,0,4,1,0,0,0,0,green,green\nCongo,4,2,342,2,10,5,0,0,3,1,1,0,1,0,0,0,red,0,0,0,0,1,0,0,1,1,0,red,red\nCook-Islands,6,3,0,0,1,1,0,0,4,1,0,1,0,1,0,0,blue,1,1,1,1,15,0,0,0,0,0,white,blue\nCosta-Rica,1,4,51,2,2,0,0,5,3,1,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue\nCuba,1,4,115,10,2,6,0,5,3,1,0,1,0,1,0,0,blue,0,0,0,0,1,0,1,0,0,0,blue,blue\nCyprus,3,1,9,1,6,1,0,0,3,0,1,0,1,1,0,0,whi
te,0,0,0,0,0,0,0,1,1,0,white,white\nCzechoslovakia,3,1,128,15,5,6,0,0,3,1,0,1,0,1,0,0,white,0,0,0,0,0,0,1,0,0,0,white,red\nDenmark,3,1,43,5,6,1,0,0,2,1,0,0,0,1,0,0,red,0,1,0,0,0,0,0,0,0,0,red,red\nDjibouti,4,1,22,0,3,2,0,0,4,1,1,1,0,1,0,0,blue,0,0,0,0,1,0,1,0,0,0,white,green\nDominica,1,4,0,0,1,1,0,0,6,1,1,1,1,1,1,0,green,1,0,0,0,10,0,0,0,1,0,green,green\nDominican-Republic,1,4,49,6,2,0,0,0,3,1,0,1,0,1,0,0,blue,0,1,0,0,0,0,0,0,0,0,blue,blue\nEcuador,2,3,284,8,2,0,0,3,3,1,0,1,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,gold,red\nEgypt,4,1,1001,47,8,2,0,3,4,1,0,0,1,1,1,0,black,0,0,0,0,0,0,0,0,1,1,red,black\nEl-Salvador,1,4,21,5,2,0,0,3,2,0,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue\nEquatorial-Guinea,4,1,28,0,10,5,0,3,4,1,1,1,0,1,0,0,green,0,0,0,0,0,0,1,0,0,0,green,red\nEthiopia,4,1,1222,31,10,1,0,3,3,1,1,0,1,0,0,0,green,0,0,0,0,0,0,0,0,0,0,green,red\nFaeroes,3,4,1,0,6,1,0,0,3,1,0,1,0,1,0,0,white,0,1,0,0,0,0,0,0,0,0,white,white\nFalklands-Malvinas,2,3,12,0,1,1,0,0,6,1,1,1,1,1,0,0,blue,1,1,1,1,0,0,0,1,1,1,white,blue\nFiji,6,2,18,1,1,1,0,0,7,1,1,1,1,1,0,1,blue,0,2,1,1,0,0,0,1,1,0,white,blue\nFinland,3,1,337,5,9,1,0,0,2,0,0,1,0,1,0,0,white,0,1,0,0,0,0,0,0,0,0,white,white\nFrance,3,1,547,54,3,0,3,0,3,1,0,1,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,blue,red\nFrench-Guiana,2,4,91,0,3,0,3,0,3,1,0,1,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,blue,red\nFrench-Polynesia,6,3,4,0,3,0,0,3,5,1,0,1,1,1,1,0,red,1,0,0,0,1,0,0,1,0,0,red,red\nGabon,4,2,268,1,10,5,0,3,3,0,1,1,1,0,0,0,green,0,0,0,0,0,0,0,0,0,0,green,blue\nGambia,4,4,10,1,1,5,0,5,4,1,1,1,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green\nGermany-DDR,3,1,108,17,4,6,0,3,3,1,0,0,1,0,1,0,gold,0,0,0,0,0,0,0,1,0,0,black,gold\nGermany-FRG,3,1,249,61,4,1,0,3,3,1,0,0,1,0,1,0,black,0,0,0,0,0,0,0,0,0,0,black,gold\nGhana,4,4,239,14,1,5,0,3,4,1,1,0,1,0,1,0,red,0,0,0,0,1,0,0,0,0,0,red,green\nGibraltar,3,4,0,0,1,1,0,1,3,1,0,0,1,1,0,0,white,0,0,0,0,0,0,0,1,0,0,white,red\nGreece,3,1,132,10,6,1,0,9,2,0,0,1,0,1,0,0,blue,0,1,0,1,0,0,0,0,0,0,blue,blue\nGreenland,1,4,2176,0,6,1,0,0,2,1,0,0,0,1,0,0,white,1,0,0,0,0,0,0,0,0,0,white,red\nGrenada,1,4,0,0,1,1,0,0,3,1,1,0,1,0,0,0,gold,1,0,0,0,7,0,1,0,1,0,red,red\nGuam,6,1,0,0,1,1,0,0,7,1,1,1,1,1,0,1,blue,0,0,0,0,0,0,0,1,1,1,red,red\nGuatemala,1,4,109,8,2,0,3,0,2,0,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue\nGuinea,4,4,246,6,3,2,3,0,3,1,1,0,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,red,green\nGuinea-Bissau,4,4,36,1,6,5,1,2,4,1,1,0,1,0,1,0,gold,0,0,0,0,1,0,0,0,0,0,red,green\nGuyana,2,4,215,1,1,4,0,0,5,1,1,0,1,1,1,0,green,0,0,0,0,0,0,1,0,0,0,black,green\nHaiti,1,4,28,6,3,0,2,0,2,1,0,0,0,0,1,0,black,0,0,0,0,0,0,0,0,0,0,black,red\nHonduras,1,4,112,4,2,0,0,3,2,0,0,1,0,1,0,0,blue,0,0,0,0,5,0,0,0,0,0,blue,blue\nHong-Kong,5,1,1,5,7,3,0,0,6,1,1,1,1,1,0,1,blue,1,1,1,1,0,0,0,1,1,1,white,blue\nHungary,3,1,93,11,9,6,0,3,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green\nIceland,3,4,103,0,6,1,0,0,3,1,0,1,0,1,0,0,blue,0,1,0,0,0,0,0,0,0,0,blue,blue\nIndia,5,1,3268,684,6,4,0,3,4,0,1,1,0,1,0,1,orange,1,0,0,0,0,0,0,1,0,0,orange,green\nIndonesia,6,2,1904,157,10,2,0,2,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,white\nIran,5,1,1648,39,6,2,0,3,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,1,0,1,green,red\nIraq,5,1,435,14,8,2,0,3,4,1,1,0,0,1,1,0,red,0,0,0,0,3,0,0,0,0,0,red,black\nIreland,3,4,70,3,1,0,3,0,3,0,1,0,0,1,0,1,white,0,0,0,0,0,0,0,0,0,0,green,orange\nIsrael,5,1,21,4,10,7,0,2,2,0,0,1,0,1,0,0,white,0,0,0,0,1,0,0,0,0,0,blue,blue\nItaly,3,1,301,57,6,0,3,0,3,1,1,0,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,green,red\nIvory-Coast,4,4,323,7,3,5,3,0,3,1,1,0,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,red,green\nJam
aica,1,4,11,2,1,1,0,0,3,0,1,0,1,0,1,0,green,0,0,1,0,0,0,1,0,0,0,gold,gold\nJapan,5,1,372,118,9,7,0,0,2,1,0,0,0,1,0,0,white,1,0,0,0,1,0,0,0,0,0,white,white\nJordan,5,1,98,2,8,2,0,3,4,1,1,0,0,1,1,0,black,0,0,0,0,1,0,1,0,0,0,black,green\nKampuchea,5,1,181,6,10,3,0,0,2,1,0,0,1,0,0,0,red,0,0,0,0,0,0,0,1,0,0,red,red\nKenya,4,1,583,17,10,5,0,5,4,1,1,0,0,1,1,0,red,1,0,0,0,0,0,0,1,0,0,black,green\nKiribati,6,1,0,0,1,1,0,0,4,1,0,1,1,1,0,0,red,0,0,0,0,1,0,0,1,1,0,red,blue\nKuwait,5,1,18,2,8,2,0,3,4,1,1,0,0,1,1,0,green,0,0,0,0,0,0,0,0,0,0,green,red\nLaos,5,1,236,3,10,6,0,3,3,1,0,1,0,1,0,0,red,1,0,0,0,0,0,0,0,0,0,red,red\nLebanon,5,1,10,3,8,2,0,2,4,1,1,0,0,1,0,1,red,0,0,0,0,0,0,0,0,1,0,red,red\nLesotho,4,2,30,1,10,5,2,0,4,1,1,1,0,1,0,0,blue,0,0,0,0,0,0,0,1,0,0,green,blue\nLiberia,4,4,111,1,10,5,0,11,3,1,0,1,0,1,0,0,red,0,0,0,1,1,0,0,0,0,0,blue,red\nLibya,4,1,1760,3,8,2,0,0,1,0,1,0,0,0,0,0,green,0,0,0,0,0,0,0,0,0,0,green,green\nLiechtenstein,3,1,0,0,4,0,0,2,3,1,0,1,1,0,0,0,red,0,0,0,0,0,0,0,1,0,0,blue,red\nLuxembourg,3,1,3,0,4,0,0,3,3,1,0,1,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,blue\nMalagasy,4,2,587,9,10,1,1,2,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,white,green\nMalawi,4,2,118,6,10,5,0,3,3,1,1,0,0,0,1,0,red,0,0,0,0,1,0,0,0,0,0,black,green\nMalaysia,5,1,333,13,10,2,0,14,4,1,0,1,1,1,0,0,red,0,0,0,1,1,1,0,0,0,0,blue,white\nMaldive-Islands,5,1,0,0,10,2,0,0,3,1,1,0,0,1,0,0,red,0,0,0,0,0,1,0,0,0,0,red,red\nMali,4,4,1240,7,3,2,3,0,3,1,1,0,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,green,red\nMalta,3,1,0,0,10,0,2,0,3,1,0,0,0,1,1,0,red,0,1,0,0,0,0,0,1,0,0,white,red\nMarianas,6,1,0,0,10,1,0,0,3,0,0,1,0,1,0,0,blue,0,0,0,0,1,0,0,1,0,0,blue,blue\nMauritania,4,4,1031,2,8,2,0,0,2,0,1,0,1,0,0,0,green,0,0,0,0,1,1,0,0,0,0,green,green\nMauritius,4,2,2,1,1,4,0,4,4,1,1,1,1,0,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green\nMexico,1,4,1973,77,2,0,3,0,4,1,1,0,0,1,0,1,green,0,0,0,0,0,0,0,0,1,0,green,red\nMicronesia,6,1,1,0,10,1,0,0,2,0,0,1,0,1,0,0,blue,0,0,0,0,4,0,0,0,0,0,blue,blue\nMonaco,3,1,0,0,3,0,0,2,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,white\nMongolia,5,1,1566,2,10,6,3,0,3,1,0,1,1,0,0,0,red,2,0,0,0,1,1,1,1,0,0,red,red\nMontserrat,1,4,0,0,1,1,0,0,7,1,1,1,1,1,1,0,blue,0,2,1,1,0,0,0,1,1,0,white,blue\nMorocco,4,4,447,20,8,2,0,0,2,1,1,0,0,0,0,0,red,0,0,0,0,1,0,0,0,0,0,red,red\nMozambique,4,2,783,12,10,5,0,5,5,1,1,0,1,1,1,0,gold,0,0,0,0,1,0,1,1,0,0,green,gold\nNauru,6,2,0,0,10,1,0,3,3,0,0,1,1,1,0,0,blue,0,0,0,0,1,0,0,0,0,0,blue,blue\nNepal,5,1,140,16,10,4,0,0,3,0,0,1,0,1,0,1,brown,0,0,0,0,2,1,0,0,0,0,blue,blue\nNetherlands,3,1,41,14,6,1,0,3,3,1,0,1,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,blue\nNetherlands-Antilles,1,4,0,0,6,1,0,1,3,1,0,1,0,1,0,0,white,0,0,0,0,6,0,0,0,0,0,white,white\nNew-Zealand,6,2,268,2,1,1,0,0,3,1,0,1,0,1,0,0,blue,0,1,1,1,4,0,0,0,0,0,white,blue\nNicaragua,1,4,128,3,2,0,0,3,2,0,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue\nNiger,4,1,1267,5,3,2,0,3,3,0,1,0,0,1,0,1,orange,1,0,0,0,0,0,0,0,0,0,orange,green\nNigeria,4,1,925,56,10,2,3,0,2,0,1,0,0,1,0,0,green,0,0,0,0,0,0,0,0,0,0,green,green\nNiue,6,3,0,0,1,1,0,0,4,1,0,1,1,1,0,0,gold,1,1,1,1,5,0,0,0,0,0,white,gold\nNorth-Korea,5,1,121,18,10,6,0,5,3,1,0,1,0,1,0,0,blue,1,0,0,0,1,0,0,0,0,0,blue,blue\nNorth-Yemen,5,1,195,9,8,2,0,3,4,1,1,0,0,1,1,0,red,0,0,0,0,1,0,0,0,0,0,red,black\nNorway,3,1,324,4,6,1,0,0,3,1,0,1,0,1,0,0,red,0,1,0,0,0,0,0,0,0,0,red,red\nOman,5,1,212,1,8,2,0,2,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,1,0,0,red,green\nPakistan,5,1,804,84,6,2,1,0,2,0,1,0,0,1,0,0,green,0,0,0,0,1,1,0,0,0,0,white,green\nPanama,2,4,76,2,2,0,0,0,3,1,0,1,0,1,0,0,red,0,0,0,4,2,0,0,0,0,0,white,white\nPap
ua-New-Guinea,6,2,463,3,1,5,0,0,4,1,0,0,1,1,1,0,black,0,0,0,0,5,0,1,0,1,0,red,black\nParguay,2,3,407,3,2,0,0,3,6,1,1,1,1,1,1,0,red,1,0,0,0,1,0,0,1,1,1,red,blue\nPeru,2,3,1285,14,2,0,3,0,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,red\nPhilippines,6,1,300,48,10,0,0,0,4,1,0,1,1,1,0,0,blue,0,0,0,0,4,0,1,0,0,0,blue,red\nPoland,3,1,313,36,5,6,0,2,2,1,0,0,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,white,red\nPortugal,3,4,92,10,6,0,0,0,5,1,1,1,1,1,0,0,red,1,0,0,0,0,0,0,1,0,0,green,red\nPuerto-Rico,1,4,9,3,2,0,0,5,3,1,0,1,0,1,0,0,red,0,0,0,0,1,0,1,0,0,0,red,red\nQatar,5,1,11,0,8,2,0,0,2,0,0,0,0,1,0,1,brown,0,0,0,0,0,0,0,0,0,0,white,brown\nRomania,3,1,237,22,6,6,3,0,7,1,1,1,1,1,0,1,red,0,0,0,0,2,0,0,1,1,1,blue,red\nRwanda,4,2,26,5,10,5,3,0,4,1,1,0,1,0,1,0,red,0,0,0,0,0,0,0,0,0,1,red,green\nSan-Marino,3,1,0,0,6,0,0,2,2,0,0,1,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,white,blue\nSao-Tome,4,1,0,0,6,0,0,3,4,1,1,0,1,0,1,0,green,0,0,0,0,2,0,1,0,0,0,green,green\nSaudi-Arabia,5,1,2150,9,8,2,0,0,2,0,1,0,0,1,0,0,green,0,0,0,0,0,0,0,1,0,1,green,green\nSenegal,4,4,196,6,3,2,3,0,3,1,1,0,1,0,0,0,green,0,0,0,0,1,0,0,0,0,0,green,red\nSeychelles,4,2,0,0,1,1,0,0,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green\nSierra-Leone,4,4,72,3,1,5,0,3,3,0,1,1,0,1,0,0,green,0,0,0,0,0,0,0,0,0,0,green,blue\nSingapore,5,1,1,3,7,3,0,2,2,1,0,0,0,1,0,0,white,0,0,0,0,5,1,0,0,0,0,red,white\nSoloman-Islands,6,2,30,0,1,1,0,0,4,0,1,1,1,1,0,0,green,0,0,0,0,5,0,1,0,0,0,blue,green\nSomalia,4,1,637,5,10,2,0,0,2,0,0,1,0,1,0,0,blue,0,0,0,0,1,0,0,0,0,0,blue,blue\nSouth-Africa,4,2,1221,29,6,1,0,3,5,1,1,1,0,1,0,1,orange,0,1,1,0,0,0,0,0,0,0,orange,blue\nSouth-Korea,5,1,99,39,10,7,0,0,4,1,0,1,0,1,1,0,white,1,0,0,0,0,0,0,1,0,0,white,white\nSouth-Yemen,5,1,288,2,8,2,0,3,4,1,0,1,0,1,1,0,red,0,0,0,0,1,0,1,0,0,0,red,black\nSpain,3,4,505,38,2,0,0,3,2,1,0,0,1,0,0,0,red,0,0,0,0,0,0,0,0,0,0,red,red\nSri-Lanka,5,1,66,15,10,3,2,0,4,0,1,0,1,0,0,1,gold,0,0,0,0,0,0,0,1,1,0,gold,gold\nSt-Helena,4,3,0,0,1,1,0,0,7,1,1,1,1,1,0,1,blue,0,1,1,1,0,0,0,1,0,0,white,blue\nSt-Kitts-Nevis,1,4,0,0,1,1,0,0,5,1,1,0,1,1,1,0,green,0,0,0,0,2,0,1,0,0,0,green,red\nSt-Lucia,1,4,0,0,1,1,0,0,4,0,0,1,1,1,1,0,blue,0,0,0,0,0,0,1,0,0,0,blue,blue\nSt-Vincent,1,4,0,0,1,1,5,0,4,0,1,1,1,1,0,0,green,0,0,0,0,0,0,0,1,1,1,blue,green\nSudan,4,1,2506,20,8,2,0,3,4,1,1,0,0,1,1,0,red,0,0,0,0,0,0,1,0,0,0,red,black\nSurinam,2,4,63,0,6,1,0,5,4,1,1,0,1,1,0,0,red,0,0,0,0,1,0,0,0,0,0,green,green\nSwaziland,4,2,17,1,10,1,0,5,7,1,0,1,1,1,1,1,blue,0,0,0,0,0,0,0,1,0,0,blue,blue\nSweden,3,1,450,8,6,1,0,0,2,0,0,1,1,0,0,0,blue,0,1,0,0,0,0,0,0,0,0,blue,blue\nSwitzerland,3,1,41,6,4,1,0,0,2,1,0,0,0,1,0,0,red,0,1,0,0,0,0,0,0,0,0,red,red\nSyria,5,1,185,10,8,2,0,3,4,1,1,0,0,1,1,0,red,0,0,0,0,2,0,0,0,0,0,red,black\nTaiwan,5,1,36,18,7,3,0,0,3,1,0,1,0,1,0,0,red,1,0,0,1,1,0,0,0,0,0,blue,red\nTanzania,4,2,945,18,10,5,0,0,4,0,1,1,1,0,1,0,green,0,0,0,0,0,0,1,0,0,0,green,blue\nThailand,5,1,514,49,10,3,0,5,3,1,0,1,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,red\nTogo,4,1,57,2,3,7,0,5,4,1,1,0,1,1,0,0,green,0,0,0,1,1,0,0,0,0,0,red,green\nTonga,6,2,1,0,10,1,0,0,2,1,0,0,0,1,0,0,red,0,1,0,1,0,0,0,0,0,0,white,red\nTrinidad-Tobago,2,4,5,1,1,1,0,0,3,1,0,0,0,1,1,0,red,0,0,0,0,0,0,1,0,0,0,white,white\nTunisia,4,1,164,7,8,2,0,0,2,1,0,0,0,1,0,0,red,1,0,0,0,1,1,0,0,0,0,red,red\nTurkey,5,1,781,45,9,2,0,0,2,1,0,0,0,1,0,0,red,0,0,0,0,1,1,0,0,0,0,red,red\nTurks-Cocos-Islands,1,4,0,0,1,1,0,0,6,1,1,1,1,1,0,1,blue,0,1,1,1,0,0,0,1,1,0,white,blue\nTuvalu,6,2,0,0,1,1,0,0,5,1,0,1,1,1,0,0,blue,0,1,1,1,9,0,0,0,0,0,white,blue\nUAE,5,1,84,1,8,2,1,3,4,1,1,0,0,1,1,0,green,0,0,0,0,0,0,
0,0,0,0,red,black\nUganda,4,1,236,13,10,5,0,6,5,1,0,0,1,1,1,0,gold,1,0,0,0,0,0,0,0,1,0,black,red\nUK,3,4,245,56,1,1,0,0,3,1,0,1,0,1,0,0,red,0,1,1,0,0,0,0,0,0,0,white,red\nUruguay,2,3,178,3,2,0,0,9,3,0,0,1,1,1,0,0,white,0,0,0,1,1,0,0,0,0,0,white,white\nUS-Virgin-Isles,1,4,0,0,1,1,0,0,6,1,1,1,1,1,0,0,white,0,0,0,0,0,0,0,1,1,1,white,white\nUSA,1,4,9363,231,1,1,0,13,3,1,0,1,0,1,0,0,white,0,0,0,1,50,0,0,0,0,0,blue,red\nUSSR,5,1,22402,274,5,6,0,0,2,1,0,0,1,0,0,0,red,0,0,0,0,1,0,0,1,0,0,red,red\nVanuatu,6,2,15,0,6,1,0,0,4,1,1,0,1,0,1,0,red,0,0,0,0,0,0,1,0,1,0,black,green\nVatican-City,3,1,0,0,6,0,2,0,4,1,0,0,1,1,1,0,gold,0,0,0,0,0,0,0,1,0,0,gold,white\nVenezuela,2,4,912,15,2,0,0,3,7,1,1,1,1,1,1,1,red,0,0,0,0,7,0,0,1,1,0,gold,red\nVietnam,5,1,333,60,10,6,0,0,2,1,0,0,1,0,0,0,red,0,0,0,0,1,0,0,0,0,0,red,red\nWestern-Samoa,6,3,3,0,1,1,0,0,3,1,0,1,0,1,0,0,red,0,0,0,1,5,0,0,0,0,0,blue,red\nYugoslavia,3,1,256,22,6,6,0,3,4,1,0,1,1,1,0,0,red,0,0,0,0,1,0,0,0,0,0,blue,red\nZaire,4,2,905,28,10,5,0,0,4,1,1,0,1,0,0,1,green,1,0,0,0,0,0,0,1,1,0,green,green\nZambia,4,2,753,6,10,5,3,0,4,1,1,0,0,0,1,1,green,0,0,0,0,0,0,0,0,1,0,green,brown\nZimbabwe,4,2,391,8,10,5,0,7,5,1,1,0,1,1,1,0,green,0,0,0,0,1,0,1,1,1,0,green,green\n"
],
[
"# Step 2 - load the data\n\n# How to deal with a csv? 🐼\nimport pandas as pd\nflag_data = pd.read_csv(flag_data_url)",
"_____no_output_____"
],
[
"# Step 3 - verify we've got *something*\nflag_data.head()",
"_____no_output_____"
],
[
"# Step 4 - Looks a bit odd - verify that it is what we want\nflag_data.count()",
"_____no_output_____"
],
[
"!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc",
"'wc' is not recognized as an internal or external command,\noperable program or batch file.\n"
],
[
"# So we have 193 observations with funny names, file has 194 rows\n# Looks like the file has no header row, but read_csv assumes it does\nhelp(pd.read_csv)",
"Help on function read_csv in module pandas.io.parsers:\n\nread_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='\"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=None, error_bad_lines=True, warn_bad_lines=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)\n Read a comma-separated values (csv) file into DataFrame.\n \n Also supports optionally iterating or breaking of the file\n into chunks.\n \n Additional help can be found in the online docs for\n `IO Tools <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.\n \n Parameters\n ----------\n filepath_or_buffer : str, path object, or file-like object\n Any valid string path is acceptable. The string could be a URL. Valid\n URL schemes include http, ftp, s3, and file. For file URLs, a host is\n expected. A local file could be: file://localhost/path/to/table.csv.\n \n If you want to pass in a path object, pandas accepts either\n ``pathlib.Path`` or ``py._path.local.LocalPath``.\n \n By file-like object, we refer to objects with a ``read()`` method, such as\n a file handler (e.g. via builtin ``open`` function) or ``StringIO``.\n sep : str, default ','\n Delimiter to use. If sep is None, the C engine cannot automatically detect\n the separator, but the Python parsing engine can, meaning the latter will\n be used and automatically detect the separator by Python's builtin sniffer\n tool, ``csv.Sniffer``. In addition, separators longer than 1 character and\n different from ``'\\s+'`` will be interpreted as regular expressions and\n will also force the use of the Python parsing engine. Note that regex\n delimiters are prone to ignoring quoted data. Regex example: ``'\\r\\t'``.\n delimiter : str, default ``None``\n Alias for sep.\n header : int, list of int, default 'infer'\n Row number(s) to use as the column names, and the start of the\n data. Default behavior is to infer the column names: if no names\n are passed the behavior is identical to ``header=0`` and column\n names are inferred from the first line of the file, if column\n names are passed explicitly then the behavior is identical to\n ``header=None``. Explicitly pass ``header=0`` to be able to\n replace existing names. The header can be a list of integers that\n specify row locations for a multi-index on the columns\n e.g. [0,1,3]. Intervening rows that are not specified will be\n skipped (e.g. 2 in this example is skipped). Note that this\n parameter ignores commented lines and empty lines if\n ``skip_blank_lines=True``, so ``header=0`` denotes the first line of\n data rather than the first line of the file.\n names : array-like, optional\n List of column names to use. If file contains no header row, then you\n should explicitly pass ``header=None``. Duplicates in this list will cause\n a ``UserWarning`` to be issued.\n index_col : int, sequence or bool, optional\n Column to use as the row labels of the DataFrame. 
If a sequence is given, a\n MultiIndex is used. If you have a malformed file with delimiters at the end\n of each line, you might consider ``index_col=False`` to force pandas to\n not use the first column as the index (row names).\n usecols : list-like or callable, optional\n Return a subset of the columns. If list-like, all elements must either\n be positional (i.e. integer indices into the document columns) or strings\n that correspond to column names provided either by the user in `names` or\n inferred from the document header row(s). For example, a valid list-like\n `usecols` parameter would be ``[0, 1, 2]`` or ``['foo', 'bar', 'baz']``.\n Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.\n To instantiate a DataFrame from ``data`` with element order preserved use\n ``pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']]`` for columns\n in ``['foo', 'bar']`` order or\n ``pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']]``\n for ``['bar', 'foo']`` order.\n \n If callable, the callable function will be evaluated against the column\n names, returning names where the callable function evaluates to True. An\n example of a valid callable argument would be ``lambda x: x.upper() in\n ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster\n parsing time and lower memory usage.\n squeeze : bool, default False\n If the parsed data only contains one column then return a Series.\n prefix : str, optional\n Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...\n mangle_dupe_cols : bool, default True\n Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than\n 'X'...'X'. Passing in False will cause data to be overwritten if there\n are duplicate names in the columns.\n dtype : Type name or dict of column -> type, optional\n Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32,\n 'c': 'Int64'}\n Use `str` or `object` together with suitable `na_values` settings\n to preserve and not interpret dtype.\n If converters are specified, they will be applied INSTEAD\n of dtype conversion.\n engine : {'c', 'python'}, optional\n Parser engine to use. The C engine is faster while the python engine is\n currently more feature-complete.\n converters : dict, optional\n Dict of functions for converting values in certain columns. Keys can either\n be integers or column labels.\n true_values : list, optional\n Values to consider as True.\n false_values : list, optional\n Values to consider as False.\n skipinitialspace : bool, default False\n Skip spaces after delimiter.\n skiprows : list-like, int or callable, optional\n Line numbers to skip (0-indexed) or number of lines to skip (int)\n at the start of the file.\n \n If callable, the callable function will be evaluated against the row\n indices, returning True if the row should be skipped and False otherwise.\n An example of a valid callable argument would be ``lambda x: x in [0, 2]``.\n skipfooter : int, default 0\n Number of lines at bottom of file to skip (Unsupported with engine='c').\n nrows : int, optional\n Number of rows of file to read. Useful for reading pieces of large files.\n na_values : scalar, str, list-like, or dict, optional\n Additional strings to recognize as NA/NaN. If dict passed, specific\n per-column NA values. 
By default the following values are interpreted as\n NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',\n '1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan',\n 'null'.\n keep_default_na : bool, default True\n Whether or not to include the default NaN values when parsing the data.\n Depending on whether `na_values` is passed in, the behavior is as follows:\n \n * If `keep_default_na` is True, and `na_values` are specified, `na_values`\n is appended to the default NaN values used for parsing.\n * If `keep_default_na` is True, and `na_values` are not specified, only\n the default NaN values are used for parsing.\n * If `keep_default_na` is False, and `na_values` are specified, only\n the NaN values specified `na_values` are used for parsing.\n * If `keep_default_na` is False, and `na_values` are not specified, no\n strings will be parsed as NaN.\n \n Note that if `na_filter` is passed in as False, the `keep_default_na` and\n `na_values` parameters will be ignored.\n na_filter : bool, default True\n Detect missing value markers (empty strings and the value of na_values). In\n data without any NAs, passing na_filter=False can improve the performance\n of reading a large file.\n verbose : bool, default False\n Indicate number of NA values placed in non-numeric columns.\n skip_blank_lines : bool, default True\n If True, skip over blank lines rather than interpreting as NaN values.\n parse_dates : bool or list of int or names or list of lists or dict, default False\n The behavior is as follows:\n \n * boolean. If True -> try parsing the index.\n * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3\n each as a separate date column.\n * list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as\n a single date column.\n * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call\n result 'foo'\n \n If a column or index cannot be represented as an array of datetimes,\n say because of an unparseable value or a mixture of timezones, the column\n or index will be returned unaltered as an object data type. For\n non-standard datetime parsing, use ``pd.to_datetime`` after\n ``pd.read_csv``. To parse an index or column with a mixture of timezones,\n specify ``date_parser`` to be a partially-applied\n :func:`pandas.to_datetime` with ``utc=True``. See\n :ref:`io.csv.mixed_timezones` for more.\n \n Note: A fast-path exists for iso8601-formatted dates.\n infer_datetime_format : bool, default False\n If True and `parse_dates` is enabled, pandas will attempt to infer the\n format of the datetime strings in the columns, and if it can be inferred,\n switch to a faster method of parsing them. In some cases this can increase\n the parsing speed by 5-10x.\n keep_date_col : bool, default False\n If True and `parse_dates` specifies combining multiple columns then\n keep the original columns.\n date_parser : function, optional\n Function to use for converting a sequence of string columns to an array of\n datetime instances. The default uses ``dateutil.parser.parser`` to do the\n conversion. 
Pandas will try to call `date_parser` in three different ways,\n advancing to the next if an exception occurs: 1) Pass one or more arrays\n (as defined by `parse_dates`) as arguments; 2) concatenate (row-wise) the\n string values from the columns defined by `parse_dates` into a single array\n and pass that; and 3) call `date_parser` once for each row using one or\n more strings (corresponding to the columns defined by `parse_dates`) as\n arguments.\n dayfirst : bool, default False\n DD/MM format dates, international and European format.\n iterator : bool, default False\n Return TextFileReader object for iteration or getting chunks with\n ``get_chunk()``.\n chunksize : int, optional\n Return TextFileReader object for iteration.\n See the `IO Tools docs\n <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_\n for more information on ``iterator`` and ``chunksize``.\n compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'\n For on-the-fly decompression of on-disk data. If 'infer' and\n `filepath_or_buffer` is path-like, then detect compression from the\n following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise no\n decompression). If using 'zip', the ZIP file must contain only one data\n file to be read in. Set to None for no decompression.\n \n .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.\n \n thousands : str, optional\n Thousands separator.\n decimal : str, default '.'\n Character to recognize as decimal point (e.g. use ',' for European data).\n lineterminator : str (length 1), optional\n Character to break file into lines. Only valid with C parser.\n quotechar : str (length 1), optional\n The character used to denote the start and end of a quoted item. Quoted\n items can include the delimiter and it will be ignored.\n quoting : int or csv.QUOTE_* instance, default 0\n Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of\n QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).\n doublequote : bool, default ``True``\n When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate\n whether or not to interpret two consecutive quotechar elements INSIDE a\n field as a single ``quotechar`` element.\n escapechar : str (length 1), optional\n One-character string used to escape other characters.\n comment : str, optional\n Indicates remainder of line should not be parsed. If found at the beginning\n of a line, the line will be ignored altogether. This parameter must be a\n single character. Like empty lines (as long as ``skip_blank_lines=True``),\n fully commented lines are ignored by the parameter `header` but not by\n `skiprows`. For example, if ``comment='#'``, parsing\n ``#empty\\na,b,c\\n1,2,3`` with ``header=0`` will result in 'a,b,c' being\n treated as the header.\n encoding : str, optional\n Encoding to use for UTF when reading/writing (ex. 'utf-8'). `List of Python\n standard encodings\n <https://docs.python.org/3/library/codecs.html#standard-encodings>`_ .\n dialect : str or csv.Dialect, optional\n If provided, this parameter will override values (default or not) for the\n following parameters: `delimiter`, `doublequote`, `escapechar`,\n `skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to\n override values, a ParserWarning will be issued. See csv.Dialect\n documentation for more details.\n tupleize_cols : bool, default False\n Leave a list of tuples on columns as is (default is to convert to\n a MultiIndex on the columns).\n \n .. 
deprecated:: 0.21.0\n This argument will be removed and will always convert to MultiIndex\n \n error_bad_lines : bool, default True\n Lines with too many fields (e.g. a csv line with too many commas) will by\n default cause an exception to be raised, and no DataFrame will be returned.\n If False, then these \"bad lines\" will dropped from the DataFrame that is\n returned.\n warn_bad_lines : bool, default True\n If error_bad_lines is False, and warn_bad_lines is True, a warning for each\n \"bad line\" will be output.\n delim_whitespace : bool, default False\n Specifies whether or not whitespace (e.g. ``' '`` or ``' '``) will be\n used as the sep. Equivalent to setting ``sep='\\s+'``. If this option\n is set to True, nothing should be passed in for the ``delimiter``\n parameter.\n \n .. versionadded:: 0.18.1 support for the Python parser.\n \n low_memory : bool, default True\n Internally process the file in chunks, resulting in lower memory use\n while parsing, but possibly mixed type inference. To ensure no mixed\n types either set False, or specify the type with the `dtype` parameter.\n Note that the entire file is read into a single DataFrame regardless,\n use the `chunksize` or `iterator` parameter to return the data in chunks.\n (Only valid with C parser).\n memory_map : bool, default False\n If a filepath is provided for `filepath_or_buffer`, map the file object\n directly onto memory and access the data directly from there. Using this\n option can improve performance because there is no longer any I/O overhead.\n float_precision : str, optional\n Specifies which converter the C engine should use for floating-point\n values. The options are `None` for the ordinary converter,\n `high` for the high-precision converter, and `round_trip` for the\n round-trip converter.\n \n Returns\n -------\n DataFrame or TextParser\n A comma-separated values (csv) file is returned as two-dimensional\n data structure with labeled axes.\n \n See Also\n --------\n to_csv : Write DataFrame to a comma-separated values (csv) file.\n read_csv : Read a comma-separated values (csv) file into DataFrame.\n read_fwf : Read a table of fixed-width formatted lines into DataFrame.\n \n Examples\n --------\n >>> pd.read_csv('data.csv') # doctest: +SKIP\n\n"
],
[
"# Alright, we can pass header=None to fix this\nflag_data = pd.read_csv(flag_data_url, header=None)\nflag_data.head()",
"_____no_output_____"
],
[
"flag_data.count()",
"_____no_output_____"
],
[
"flag_data.isna().sum()",
"_____no_output_____"
]
],
[
[
"### Yes, but what does it *mean*?\n\nThis data is fairly nice - it was \"donated\" and is already \"clean\" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).\n\n```\n1. name: Name of the country concerned\n2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 4=Asia, 6=Oceania\n3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW\n4. area: in thousands of square km\n5. population: in round millions\n6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others\n7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others\n8. bars: Number of vertical bars in the flag\n9. stripes: Number of horizontal stripes in the flag\n10. colours: Number of different colours in the flag\n11. red: 0 if red absent, 1 if red present in the flag\n12. green: same for green\n13. blue: same for blue\n14. gold: same for gold (also yellow)\n15. white: same for white\n16. black: same for black\n17. orange: same for orange (also brown)\n18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)\n19. circles: Number of circles in the flag\n20. crosses: Number of (upright) crosses\n21. saltires: Number of diagonal crosses\n22. quarters: Number of quartered sections\n23. sunstars: Number of sun or star symbols\n24. crescent: 1 if a crescent moon symbol present, else 0\n25. triangle: 1 if any triangles present, 0 otherwise\n26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 0\n27. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise\n28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise\n29. topleft: colour in the top-left corner (moving right to decide tie-breaks)\n30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)\n```\n\nExercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1...",
"_____no_output_____"
],
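[
"# One possible solution to the exercise, using names copied from the codebook above:\nflag_cols = ['name', 'landmass', 'zone', 'area', 'population', 'language',\n 'religion', 'bars', 'stripes', 'colours', 'red', 'green', 'blue', 'gold',\n 'white', 'black', 'orange', 'mainhue', 'circles', 'crosses', 'saltires',\n 'quarters', 'sunstars', 'crescent', 'triangle', 'icon', 'animate', 'text',\n 'topleft', 'botright']\nflag_data = pd.read_csv(flag_data_url, header=None, names=flag_cols)\nflag_data.head()",
"_____no_output_____"
],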
[
"## Steps of Loading and Exploring a Dataset:\n\n- Find a dataset that looks interesting\n- Learn what you can about it \n - What's in it? \n - How many rows and columns? \n - What types of variables?\n- Look at the raw contents of the file\n- Load it into your workspace (notebook)\n - Handle any challenges with headers\n - Handle any problems with missing values\n- Then you can start to explore the data\n - Look at the summary statistics\n - Look at counts of different categories\n - Make some plots to look at the distribution of the data",
"_____no_output_____"
],
[
"## 3 ways of loading a dataset",
"_____no_output_____"
],
[
"### From its URL",
"_____no_output_____"
]
],
[
[
"dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'\n\ncolumn_headers = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', \n 'marital-status', 'occupation', 'relationship', 'race', 'sex', \n 'capital-gain', 'capital-loss', 'hours-per-week', \n 'native-country', 'income']\n\ndf = pd.read_csv(dataset_url, names=column_headers)\nprint(df.shape)\ndf.head()",
"(32561, 15)\n"
]
],
[
[
"### From a local file",
"_____no_output_____"
]
],
[
[
"from google.colab import files\nuploaded ",
"_____no_output_____"
]
],
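[
[
"# A sketch of reading the uploaded bytes into pandas\n# (assumes 'adult.data' was the file picked in the upload dialog above):\nimport io\ndf_local = pd.read_csv(io.BytesIO(uploaded['adult.data']), names=column_headers)\ndf_local.head()",
"_____no_output_____"
]
],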
[
[
"### Using the `!wget` command",
"_____no_output_____"
]
],
[
[
"import wget\n\nwget https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
"_____no_output_____"
]
],
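[
[
"# The download lands in the working directory, so it loads like any local file:\ndf_wget = pd.read_csv('adult.data', names=column_headers)\ndf_wget.shape",
"_____no_output_____"
]
],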
[
[
"# Part 2 - Deal with Missing Values",
"_____no_output_____"
],
[
"## Diagnose Missing Values\n\nLets use the Adult Dataset from UCI. <https://github.com/ryanleeallred/datasets>",
"_____no_output_____"
]
],
[
[
"df.isnull().sum()",
"_____no_output_____"
]
],
[
[
"## Fill Missing Values",
"_____no_output_____"
]
],
[
[
"dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'\n\ncolumn_headers = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', \n 'marital-status', 'occupation', 'relationship', 'race', 'sex', \n 'capital-gain', 'capital-loss', 'hours-per-week', \n 'native-country', 'income']\n\ndf = pd.read_csv(dataset_url, names=column_headers, na_values=[' ?'])\nprint(df.shape)\ndf.head(20)",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"df.iloc[14][13]",
"_____no_output_____"
]
],
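[
[
"# Sketch: actually fill the missing values, as the section title promises.\n# Only the object columns have NaNs here, so a blanket fill is safe enough for exploration:\ndf = df.fillna('Unknown')\ndf.isnull().sum()",
"_____no_output_____"
]
],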
[
[
"# Part 3 - Explore the Dataset:",
"_____no_output_____"
],
[
"## Look at \"Summary Statistics",
"_____no_output_____"
],
[
"### Numeric",
"_____no_output_____"
]
],
[
[
"df.describe()",
"_____no_output_____"
]
],
[
[
"###Non-Numeric",
"_____no_output_____"
]
],
[
[
"df.describe(exclude=\"number\")",
"_____no_output_____"
]
],
[
[
"## Look at Categorical Values",
"_____no_output_____"
],
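[
"# Counts per category (a sketch; any object column from df.dtypes works here):\ndf['workclass'].value_counts()",
"_____no_output_____"
],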
[
"# Part 4 - Basic Visualizations (using the Pandas Library)",
"_____no_output_____"
],
[
"## Histogram",
"_____no_output_____"
]
],
[
[
"# Pandas Histogram ",
"_____no_output_____"
]
],
[
[
"## Density Plot (KDE)",
"_____no_output_____"
]
],
[
[
"# Pandas Density Plot",
"_____no_output_____"
]
],
[
[
"## Scatter Plot",
"_____no_output_____"
]
],
[
[
"# Pandas Scatterplot",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d04ebacc51b22d70d441c20cf214a5ef0ee9c6de | 274,120 | ipynb | Jupyter Notebook | projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb | tobias-fyi/data-may-differ | afd10656be583bc6a6fe2da0c90632a56b7854be | [
"MIT"
] | null | null | null | projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb | tobias-fyi/data-may-differ | afd10656be583bc6a6fe2da0c90632a56b7854be | [
"MIT"
] | null | null | null | projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb | tobias-fyi/data-may-differ | afd10656be583bc6a6fe2da0c90632a56b7854be | [
"MIT"
] | null | null | null | 90.349374 | 165,993 | 0.574281 | [
[
[
"# Sci-Fi IRL #1: Technology Terminology Velocity\n\n### A Data Storytelling Project by Tobias Reaper\n\n### ---- Datalogue 008 ----\n\n---\n---",
"_____no_output_____"
],
[
"### Imports and Configuration",
"_____no_output_____"
]
],
[
[
"# Three Musketeers\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\n# For using the API\nimport requests",
"_____no_output_____"
],
[
"# More advanced vizualizations with Bokeh\nfrom bokeh.plotting import figure, output_file, output_notebook, show\nfrom bokeh.layouts import column\nfrom bokeh.models.glyphs import Patches",
"_____no_output_____"
],
[
"# Import color library\nimport colorcet as cc",
"_____no_output_____"
],
[
"# Define color palette\npalette = [cc.bkr[i*15] for i in range(17)]\npalette",
"_____no_output_____"
],
[
"# Set pandas display options to allow for more columns and rows\npd.set_option(\"display.max_columns\", 100)\npd.set_option(\"display.max_rows\", 500)",
"_____no_output_____"
]
],
[
[
"---\n\n### Functions",
"_____no_output_____"
]
],
[
[
"def pushshift_api_request(query, subreddit, frequency=\"month\", aggs=\"created_utc\"):\n \"\"\"\n Returns the JSON response of a PushShift API aggregate comment search as a Python dictionary.\n \n Note: if you're reading this note, that means that this function is still only written\n with the intention of automating a specific set of actions for a specific project.\n \n ---- Arguments ----\n query: (str) keyword to search.\n subreddit: (str) subreddit name\n frequency: (str) set the size of the time buckets.\n aggs: (str) aggregate function name. Default is \"created_utc\".\n (For more information, read the PushShift API Documentation.)\n -------------------\n \"\"\"\n \n # Build the query url based on endpoints and parameters \n url = f\"https://api.pushshift.io/reddit/search/comment/?q={query}&subreddit={subreddit}&aggs={aggs}&frequency={frequency}&size=100\"\n \n # Send the request and save the response into the response object\n response = requests.get(url)\n \n # Check the response; stop execution if failed\n assert response.status_code == 200\n \n # Parse the JSON into a Python dictionary\n # and return it for further processing\n return response.json()",
"_____no_output_____"
],
[
"def create_df(data, keyword, frequency=\"month\"):\n \"\"\"\n Returns cleaned Pandas DataFrame of keyword frequency over time, given correctly-formatted Python dictionary.\n Renames the frequency column to keyword; converts month to datetime.\n \n Note: if you're reading this note, that means that this function is still only written\n with the intention of automating a specific set of actions for a specific project.\n \n ---- Arguments ----\n data: (dict) Python dictionary converted from JSON API response.\n keyword: (str) the keyword that was queried.\n time_bucket: (str) size of time buckets, which is also the name of the resulting DataFrame column. Defaults to \"month\".\n -------------------\n \"\"\"\n \n # Convert the python object into a pandas dataframe\n df = pd.DataFrame(data[\"aggs\"][\"created_utc\"])\n\n # Convert \"key\" into a datetime column\n df[\"key\"] = pd.to_datetime(df[\"key\"], unit=\"s\", origin=\"unix\")\n\n # Rename \"key\" to reflect the fact that it is the beginning of the time bucket\n df = df.rename(mapper={\"key\": frequency, \"doc_count\": keyword}, axis=\"columns\")\n \n # Return the DataFrame\n return df",
"_____no_output_____"
],
[
"def comments_df(data):\n \"\"\"\n Returns Reddit comments in Pandas DataFrame, given the correctly-formatted Python dictionary.\n \n Note: if you're reading this note, that means that this function is still only written\n with the intention of automating a specific set of actions for a specific project.\n \n ---- Arguments ----\n data: (dict) Python dictionary converted from JSON API response.\n -------------------\n \"\"\"\n \n # Convert the comments into a pandas dataframe\n df = pd.DataFrame(data[\"data\"])\n\n # Return the DataFrame\n return df",
"_____no_output_____"
],
[
"def df_to_csv(data, filename):\n \"\"\"\n Basically just a wrapper around the Pandas `.to_csv()` method,\n created to standardize the inputs and outputs.\n \n ---- Arguments ----\n data: (pd.DataFrame) Pandas DataFrame to be saved as a csv.\n filepath: (str) name or path of the file to be saved.\n -------------------\n \"\"\"\n \n # Saves the DataFrame to csv\n data.to_csv(path_or_buf=filename, index=False)\n \n # And that's it, folks!",
"_____no_output_____"
],
[
"def reddit_data_setter(keywords, subreddits, csv=False, frequency=\"month\", aggs=\"created_utc\"):\n \"\"\"\n Creates two DataFrames that hold combined data of all combinations of keywords / subreddits.\n \n Note: if you're reading this note, that means that this function is still only written\n with the intention of automating a specific set of actions for a specific project.\n \n ---- Arguments ----\n keywords: (list) keyword(s) to search.\n subreddits: (list) name of subreddit(s) to include.\n csv: (bool) if True, save the resulting dataframes as csv file.\n frequency: (str) set the size of the time buckets.\n aggs: (str) aggregate function name. Default is \"created_utc\".\n (For more information, read the PushShift API Documentation.)\n -------------------\n \"\"\"\n from time import sleep\n\n comment_df_list = [] # Empty list to hold comment dataframes\n word_df_list = [] # Empty list to hold monthly word count dataframes\n df_comm = pd.DataFrame() # Empty dataframe for comment data\n df_main = pd.DataFrame() # Empty dataframe for keyword counts\n\n # Create the \"month\" (datetime) column - to be used when joining\n df_main[\"month\"] = pd.date_range(start=\"2005-01-01\", end=\"2019-09-01\", freq=\"MS\")\n \n # Run query for individual keywords on each subreddit\n # Subreddit (outer) -> keyword (inner) = all keywords in one subreddit at a time\n for subreddit in subreddits:\n for word in keywords:\n # Create unique column name for each subreddit / word combo\n col_name = f\"{subreddit}_{word.replace(' ', '')}\"\n \n # Indicates current subreddit / keyword\n start = f\"{col_name}...\"\n print(start)\n sleep(0.5) # Add sleep time to reduce API load \n\n # Make request and convert response to dictionary\n dictionary = pushshift_api_request(word, subreddit)\n\n # Append aggs word count df to word_df_list\n word_df_list.append(create_df(dictionary, col_name))\n\n # Append comments df to comment_df_list\n comment_df_list.append(comments_df(dictionary))\n \n sleep(0.5) # More sleep to reduce API load\n sleep(0.5)\n \n # Set \"month\" as index in order to concatenate list of dataframes\n df_main = pd.concat([df.set_index(\"month\") for df in word_df_list],\n axis=1, join=\"outer\").reset_index()\n \n # Concatenate comment_df_list dataframes\n df_comm = pd.concat(comment_df_list, axis=0, sort=False,\n join=\"outer\", ignore_index=True)\n \n # If csv parameter is set to True, save datasets to filesystem as csv\n if csv:\n df_to_csv(df_main, f\"{keywords[0]}-monthly.csv\")\n df_to_csv(df_comm, f\"{keywords[0]}-comments.csv\")\n \n # Return df_main, df_comm, respectively\n return df_main, df_comm",
"_____no_output_____"
]
],
[
[
"---\n---",
"_____no_output_____"
],
[
"## Term Velocity: Algorithm\n\nThe velocity of the term \"algorithm\" in each of the target subreddits.",
"_____no_output_____"
]
],
[
[
"# Define keywords and subreddits as python lists\nwords = [\n \"algorithm\",\n]\n\nsubs = [\n \"Futurology\",\n \"technology\",\n \"science\",\n \"askscience\",\n \"gadgets\",\n \"books\",\n \"scifi\",\n \"movies\",\n \"gaming\",\n \"television\",\n \"news\",\n \"worldnews\",\n \"politics\",\n \"philosophy\",\n \"AskReddit\",\n \"todayilearned\",\n \"explainlikeimfive\",\n]",
"_____no_output_____"
],
[
"# Run the function to create and save the dataset\ndf_main, df_comm = reddit_data_setter(words, subs, True)",
"_____no_output_____"
],
[
"# Take a look to be sure it worked as expected\nprint(df_main.shape)\ndf_main.head()",
"(156, 18)\n"
]
],
[
[
"---",
"_____no_output_____"
],
[
"### Visualizations",
"_____no_output_____"
]
],
[
[
"# Load csv\ndf_main = pd.read_csv(\"008-Session_Exports/algorithm-monthly.csv\")",
"_____no_output_____"
],
[
"df_main[\"month\"] = pd.to_datetime(df_main[\"month\"], infer_datetime_format=True)\ndf_main.head()",
"_____no_output_____"
],
[
"df_main.dtypes",
"_____no_output_____"
],
[
"# Color assignments\nsubs_colors = {}\n\nfor i in range(len(subs)):\n subs_colors[f\"{subs[i]}\"] = f\"{palette[i]}\"",
"_____no_output_____"
],
[
"# Output to current notebook\noutput_notebook()\noutput_file(f\"{words[0]}-velocity-viz.html\")\n\np = {} # dict to hold plots\np_names = [] # list for plot names\n\nfor sub in subs_colors:\n p[f\"{sub}\"] = figure(title=f\"Comments that mention '{words[0]}' in r/{sub}\",\n plot_width=1000, plot_height=200, \n x_axis_type=\"datetime\", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))\n p[f\"{sub}\"].line(df_main[\"month\"], df_main[f\"{sub}_{words[0]}\"], line_width=2, line_color=f\"{subs_colors[sub]}\")\n p_names.append(p[f\"{sub}\"])\n\n# Show the results\nshow(column(p_names))",
"_____no_output_____"
]
],
[
[
"---\n---",
"_____no_output_____"
],
[
"## Term Velocity: AI\n\nThe velocity of the term \"AI\" (abbreviation of artificial intelligence) in each of the target subreddits.",
"_____no_output_____"
]
],
[
[
"# Define keywords and subreddits as python lists\nwords = [\n \"AI\",\n]\n\nsubs = [\n \"Futurology\",\n \"technology\",\n \"science\",\n \"askscience\",\n \"gadgets\",\n \"books\",\n \"scifi\",\n \"movies\",\n \"gaming\",\n \"television\",\n \"news\",\n \"worldnews\",\n \"politics\",\n \"philosophy\",\n \"AskReddit\",\n \"todayilearned\",\n \"explainlikeimfive\",\n]",
"_____no_output_____"
],
[
"# Run the function to create and save the dataset\ndf_main, df_comm = reddit_data_setter(words, subs, True)",
"_____no_output_____"
],
[
"# Take a look to be sure it worked as expected\nprint(df_main.shape)\ndf_main.head()",
"(156, 18)\n"
]
],
[
[
"---",
"_____no_output_____"
],
[
"### Visualizations",
"_____no_output_____"
]
],
[
[
"# Color assignments\nsubs_colors = {}\n\nfor i in range(len(subs)):\n subs_colors[f\"{subs[i]}\"] = f\"{palette[i]}\"",
"_____no_output_____"
],
[
"# Output to current notebook\noutput_notebook()\noutput_file(f\"{words[0]}-velocity-viz.html\")\n\np = {} # dict to hold plots\np_names = [] # list for plot names\n\nfor sub in subs_colors:\n p[f\"{sub}\"] = figure(title=f\"Comments that mention '{words[0]}' in r/{sub}\",\n plot_width=1000, plot_height=200, \n x_axis_type=\"datetime\", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))\n p[f\"{sub}\"].line(df_main[\"month\"], df_main[f\"{sub}_{words[0]}\"], line_width=2, line_color=f\"{subs_colors[sub]}\")\n p_names.append(p[f\"{sub}\"])\n\n# Show the results\nshow(column(p_names))",
"_____no_output_____"
]
],
[
[
"---\n---",
"_____no_output_____"
],
[
"## Term Velocity: AR\n\nThe velocity of the term \"AR\" (abbreviation of augmented reality) in each of the target subreddits.",
"_____no_output_____"
]
],
[
[
"# Define keywords and subreddits as python lists\nwords = [\n \"AR\",\n]\n\nsubs = [\n \"Futurology\",\n \"technology\",\n \"science\",\n \"askscience\",\n \"gadgets\",\n \"books\",\n \"scifi\",\n \"movies\",\n \"gaming\",\n \"television\",\n \"news\",\n \"worldnews\",\n \"politics\",\n \"philosophy\",\n \"AskReddit\",\n \"todayilearned\",\n \"explainlikeimfive\",\n]",
"_____no_output_____"
],
[
"# Run the function to create and save the dataset\ndf_main, df_comm = reddit_data_setter(words, subs, True)",
"_____no_output_____"
],
[
"# Take a look to be sure it worked as expected\nprint(df_main.shape)\ndf_main.head()",
"(156, 18)\n"
]
],
[
[
"---",
"_____no_output_____"
],
[
"### Visualizations",
"_____no_output_____"
]
],
[
[
"# Color assignments\nsubs_colors = {}\n\nfor i in range(len(subs)):\n subs_colors[f\"{subs[i]}\"] = f\"{palette[i]}\"",
"_____no_output_____"
],
[
"# Output to current notebook\noutput_notebook()\noutput_file(f\"{words[0]}-velocity-viz.html\")\n\np = {} # dict to hold plots\np_names = [] # list for plot names\n\nfor sub in subs_colors:\n p[f\"{sub}\"] = figure(title=f\"Comments that mention '{words[0]}' in r/{sub}\",\n plot_width=1000, plot_height=200, \n x_axis_type=\"datetime\", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))\n p[f\"{sub}\"].line(df_main[\"month\"], df_main[f\"{sub}_{words[0]}\"], line_width=2, line_color=f\"{subs_colors[sub]}\")\n p_names.append(p[f\"{sub}\"])\n\n# Show the results\nshow(column(p_names))",
"_____no_output_____"
]
],
[
[
"---\n---",
"_____no_output_____"
],
[
"## Term Velocity: Automation\n\nThe velocity of the term \"automation\" in each of the target subreddits.",
"_____no_output_____"
]
],
[
[
"# Define keywords and subreddits as python lists\nwords = [\n \"automation\",\n]\n\nsubs = [\n \"Futurology\",\n \"technology\",\n \"science\",\n \"askscience\",\n \"gadgets\",\n \"books\",\n \"scifi\",\n \"movies\",\n \"gaming\",\n \"television\",\n \"news\",\n \"worldnews\",\n \"politics\",\n \"philosophy\",\n \"AskReddit\",\n \"todayilearned\",\n \"explainlikeimfive\",\n]",
"_____no_output_____"
],
[
"# Run the function to create and save the dataset\ndf_main, df_comm = reddit_data_setter(words, subs, True)",
"_____no_output_____"
],
[
"# Take a look to be sure it worked as expected\nprint(df_main.shape)\ndf_main.head()",
"(151, 18)\n"
]
],
[
[
"---",
"_____no_output_____"
],
[
"### Visualizations",
"_____no_output_____"
]
],
[
[
"# Output to current notebook\noutput_notebook()\noutput_file(f\"{words[0]}-velocity-viz.html\")\n\np = {} # dict to hold plots\np_names = [] # list for plot names\n\nfor sub in subs_colors:\n p[f\"{sub}\"] = figure(title=f\"Comments that mention '{words[0]}' in r/{sub}\",\n plot_width=1000, plot_height=200, \n x_axis_type=\"datetime\", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))\n p[f\"{sub}\"].line(df_main[\"month\"], df_main[f\"{sub}_{words[0]}\"], line_width=2, line_color=f\"{subs_colors[sub]}\")\n p_names.append(p[f\"{sub}\"])\n\n# Show the results\nshow(column(p_names))",
"_____no_output_____"
]
],
[
[
"---\n---",
"_____no_output_____"
],
[
"## Term Velocity: Big Data\n\nThe velocity of the term \"big data\" in each of the target subreddits.",
"_____no_output_____"
]
],
[
[
"# Define keywords and subreddits as python lists\nwords = [\n \"big data\",\n]\n\nsubs = [\n \"Futurology\",\n \"technology\",\n \"science\",\n \"askscience\",\n \"gadgets\",\n \"books\",\n \"scifi\",\n \"movies\",\n \"gaming\",\n \"television\",\n \"news\",\n \"worldnews\",\n \"politics\",\n \"philosophy\",\n \"AskReddit\",\n \"todayilearned\",\n \"explainlikeimfive\",\n]",
"_____no_output_____"
],
[
"# Run the function to create and save the dataset\ndf_main, df_comm = reddit_data_setter(words, subs, True)",
"Futurology_bigdata...\ntechnology_bigdata...\nscience_bigdata...\naskscience_bigdata...\ngadgets_bigdata...\nbooks_bigdata...\nscifi_bigdata...\nmovies_bigdata...\ngaming_bigdata...\ntelevision_bigdata...\nnews_bigdata...\nworldnews_bigdata...\npolitics_bigdata...\nphilosophy_bigdata...\nAskReddit_bigdata...\ntodayilearned_bigdata...\nexplainlikeimfive_bigdata...\n"
],
[
"# Take a look to be sure it worked as expected\nprint(df_main.shape)\ndf_main.head()",
"(153, 18)\n"
]
],
[
[
"---",
"_____no_output_____"
],
[
"### Visualizations",
"_____no_output_____"
]
],
[
[
"# Output to current notebook\noutput_notebook()\noutput_file(f\"{words[0].replace(' ', '')}-velocity-viz.html\")\n\np = {} # dict to hold plots\np_names = [] # list for plot names\n\nfor sub in subs_colors:\n p[f\"{sub}\"] = figure(title=f\"Comments that mention '{words[0]}' in r/{sub}\",\n plot_width=1000, plot_height=200, \n x_axis_type=\"datetime\", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))\n p[f\"{sub}\"].line(df_main[\"month\"], df_main[f\"{sub}_{words[0].replace(' ', '')}\"], line_width=2, line_color=f\"{subs_colors[sub]}\")\n p_names.append(p[f\"{sub}\"])\n\n# Show the results\nshow(column(p_names))",
"_____no_output_____"
]
],
[
[
"---\n---",
"_____no_output_____"
],
[
"## Overall Subreddit Comment Velocity\n\nThe total number of comments made in each of the subreddits. This is one way I can normalize the data.",
"_____no_output_____"
]
],
[
[
"# Define keywords and subreddits as python lists\nwords = [\"\"] # Passing in an empty list this time to look at all comments\n\nsubs = [\n \"Futurology\",\n \"technology\",\n \"science\",\n \"askscience\",\n \"gadgets\",\n \"books\",\n \"scifi\",\n \"movies\",\n \"gaming\",\n \"television\",\n \"news\",\n \"worldnews\",\n \"politics\",\n \"philosophy\",\n \"AskReddit\",\n \"todayilearned\",\n \"explainlikeimfive\",\n]",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
]
],
[
[
"def all_comments_monthly(subreddit, frequency=\"month\", aggs=\"created_utc\"):\n \"\"\"\n Returns the JSON response of a PushShift API aggregate comment search as a Python dictionary.\n \n Note: if you're reading this note, that means that this function is still only written\n with the intention of automating a specific set of actions for a specific project.\n \n ---- Arguments ----\n query: (str) keyword to search.\n subreddit: (str) subreddit name\n frequency: (str) set the size of the time buckets.\n aggs: (str) aggregate function name. Default is \"created_utc\".\n (For more information, read the PushShift API Documentation.)\n -------------------\n \"\"\"\n \n # Build the query url based on endpoints and parameters \n url = f\"https://api.pushshift.io/reddit/search/comment/?subreddit={subreddit}&aggs={aggs}&frequency={frequency}&size=100\"\n \n # Send the request and save the response into the response object\n response = requests.get(url)\n \n # Check the response; stop execution if failed\n assert response.status_code == 200\n \n # Parse the JSON into a Python dictionary and return it for further processing\n return response.json()",
"_____no_output_____"
],
[
"def all_comments_aggregator(keywords, subreddits, csv=False, frequency=\"month\", aggs=\"created_utc\"):\n \"\"\"\n Creates two DataFrames that hold combined data of all comments in all the target subreddits.\n \n Note: if you're reading this note, that means that this function is still only written\n with the intention of automating a specific set of actions for a specific project.\n \n ---- Arguments ----\n keywords: (list) keyword(s) to search.\n subreddits: (list) name of subreddit(s) to include.\n csv: (bool) if True, save the resulting dataframes as csv file.\n frequency: (str) set the size of the time buckets.\n aggs: (str) aggregate function name. Default is \"created_utc\".\n (For more information, read the PushShift API Documentation.)\n -------------------\n \"\"\"\n from time import sleep\n\n comment_df_list = [] # Empty list to hold comment dataframes\n word_df_list = [] # Empty list to hold monthly word count dataframes\n df_comm = pd.DataFrame() # Empty dataframe for comment data\n df_main = pd.DataFrame() # Empty dataframe for keyword counts\n\n # Create the \"month\" (datetime) column - to be used when joining\n df_main[\"month\"] = pd.date_range(start=\"2005-01-01\", end=\"2019-09-01\", freq=\"MS\")\n \n # Run query for individual keywords on each subreddit\n # Subreddit (outer) -> keyword (inner) = all keywords in one subreddit at a time\n for subreddit in subreddits:\n for word in keywords:\n # Create unique column name for each subreddit / word combo\n col_name = f\"{subreddit}_{word.replace(' ', '')}\"\n \n # Indicates current subreddit / keyword\n start = f\"{col_name}...\"\n print(start)\n sleep(0.5) # Add sleep time to reduce API load \n\n # Make request and convert response to dictionary\n dictionary = pushshift_api_request(word, subreddit)\n\n # Append aggs word count df to word_df_list\n word_df_list.append(create_df(dictionary, col_name))\n\n # Append comments df to comment_df_list\n comment_df_list.append(comments_df(dictionary))\n \n sleep(0.5) # More sleep to reduce API load\n sleep(0.5)\n \n # Set \"month\" as index in order to concatenate list of dataframes\n df_main = pd.concat([df.set_index(\"month\") for df in word_df_list],\n axis=1, join=\"outer\").reset_index()\n \n # Concatenate comment_df_list dataframes\n df_comm = pd.concat(comment_df_list, axis=0, sort=False,\n join=\"outer\", ignore_index=True)\n \n # If csv parameter is set to True, save datasets to filesystem as csv\n if csv:\n df_to_csv(df_main, f\"{keywords[0]}-monthly.csv\")\n df_to_csv(df_comm, f\"{keywords[0]}-comments.csv\")\n \n # Return df_main, df_comm, respectively\n return df_main, df_comm",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
]
],
[
[
"# Run the function to create and save the dataset\ndf_main, df_comm = reddit_data_setter(words, subs, True)",
"Futurology_...\ntechnology_...\nscience_...\naskscience_...\ngadgets_...\nbooks_...\nscifi_...\nmovies_...\ngaming_...\ntelevision_...\nnews_...\nworldnews_...\npolitics_...\nphilosophy_...\nAskReddit_...\ntodayilearned_...\nexplainlikeimfive_...\n"
],
[
"# Take a look to be sure it worked as expected\nprint(df_main.shape)\ndf_main.head()",
"(156, 18)\n"
]
],
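[
[
"# Sketch of the normalization idea from above: divide keyword counts by total comments.\n# Assumes the 'algorithm-monthly.csv' written earlier in this notebook is still on disk.\ntotals = df_main.set_index('month')\nkw = pd.read_csv('algorithm-monthly.csv', parse_dates=['month']).set_index('month')\nkw.columns = [c.replace('algorithm', '') for c in kw.columns]\nnormalized = kw / totals\nnormalized.head()",
"_____no_output_____"
]
],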
[
[
"---",
"_____no_output_____"
],
[
"### Visualizations",
"_____no_output_____"
]
],
[
[
"# Output to current notebook\noutput_notebook()\noutput_file(\"overall-subreddit-velocity-viz.html\")\n\np = {} # dict to hold plots\np_names = [] # list for plot names\n\nfor sub in subs_colors:\n p[f\"{sub}\"] = figure(title=f\"Comments in r/{sub}\",\n plot_width=1000, plot_height=200, \n x_axis_type=\"datetime\", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))\n p[f\"{sub}\"].line(df_main[\"month\"], df_main[f\"{sub}_\"], line_width=2, line_color=f\"{subs_colors[sub]}\")\n p_names.append(p[f\"{sub}\"])\n\n# Show the results\nshow(column(p_names))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d04ed28f19a2651c4aba924181ce26b8c674a807 | 187,029 | ipynb | Jupyter Notebook | Notebooks/ORF_CNN_104.ipynb | ShepherdCode/Soars2021 | ab4f304eaa09e52d260152397a6c53d7a05457da | [
"MIT"
] | 1 | 2021-08-16T14:49:04.000Z | 2021-08-16T14:49:04.000Z | Notebooks/ORF_CNN_104.ipynb | ShepherdCode/Soars2021 | ab4f304eaa09e52d260152397a6c53d7a05457da | [
"MIT"
] | null | null | null | Notebooks/ORF_CNN_104.ipynb | ShepherdCode/Soars2021 | ab4f304eaa09e52d260152397a6c53d7a05457da | [
"MIT"
] | null | null | null | 257.261348 | 27,594 | 0.883318 | [
[
[
"# ORF recognition by CNN\nCompare to ORF_CNN_101.\nUse 2-layer CNN.\nRun on Mac.",
"_____no_output_____"
]
],
[
[
"PC_SEQUENCES=20000 # how many protein-coding sequences\nNC_SEQUENCES=20000 # how many non-coding sequences\nPC_TESTS=1000\nNC_TESTS=1000\nBASES=1000 # how long is each sequence\nALPHABET=4 # how many different letters are possible\nINPUT_SHAPE_2D = (BASES,ALPHABET,1) # Conv2D needs 3D inputs\nINPUT_SHAPE = (BASES,ALPHABET) # Conv1D needs 2D inputs\nFILTERS = 32 # how many different patterns the model looks for\nNEURONS = 16\nWIDTH = 3 # how wide each pattern is, in bases\nSTRIDE_2D = (1,1) # For Conv2D how far in each direction\nSTRIDE = 1 # For Conv1D, how far between pattern matches, in bases\nEPOCHS=10 # how many times to train on all the data\nSPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3 \nFOLDS=5 # train the model this many times (range 1 to SPLITS)",
"_____no_output_____"
],
[
"import sys\ntry:\n from google.colab import drive\n IN_COLAB = True\n print(\"On Google CoLab, mount cloud-local file, get our code from GitHub.\")\n PATH='/content/drive/'\n #drive.mount(PATH,force_remount=True) # hardly ever need this\n #drive.mount(PATH) # Google will require login credentials\n DATAPATH=PATH+'My Drive/data/' # must end in \"/\"\n import requests\n r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')\n with open('RNA_gen.py', 'w') as f:\n f.write(r.text) \n from RNA_gen import *\n r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')\n with open('RNA_describe.py', 'w') as f:\n f.write(r.text) \n from RNA_describe import *\n r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')\n with open('RNA_prep.py', 'w') as f:\n f.write(r.text) \n from RNA_prep import *\nexcept:\n print(\"CoLab not working. On my PC, use relative paths.\")\n IN_COLAB = False\n DATAPATH='data/' # must end in \"/\"\n sys.path.append(\"..\") # append parent dir in order to use sibling dirs\n from SimTools.RNA_gen import *\n from SimTools.RNA_describe import *\n from SimTools.RNA_prep import *\n\nMODELPATH=\"BestModel\" # saved on cloud instance and lost after logout\n#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login\n\nif not assert_imported_RNA_gen():\n print(\"ERROR: Cannot use RNA_gen.\")\nif not assert_imported_RNA_prep():\n print(\"ERROR: Cannot use RNA_prep.\")",
"On Google CoLab, mount cloud-local file, get our code from GitHub.\n"
],
[
"from os import listdir\nimport time # datetime\nimport csv\nfrom zipfile import ZipFile\n\nimport numpy as np\nimport pandas as pd\nfrom scipy import stats # mode\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import KFold\nfrom sklearn.model_selection import cross_val_score\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense,Embedding\nfrom keras.layers import Conv1D,Conv2D\nfrom keras.layers import Flatten,MaxPooling1D,MaxPooling2D\nfrom keras.losses import BinaryCrossentropy\n# tf.keras.losses.BinaryCrossentropy\n\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors\nmycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1\nnp.set_printoptions(precision=2)\n",
"_____no_output_____"
],
[
"t = time.time()\ntime.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))",
"_____no_output_____"
],
[
"# Use code from our SimTools library.\ndef make_generators(seq_len):\n pcgen = Collection_Generator() \n pcgen.get_len_oracle().set_mean(seq_len)\n pcgen.set_seq_oracle(Transcript_Oracle())\n ncgen = Collection_Generator() \n ncgen.get_len_oracle().set_mean(seq_len)\n return pcgen,ncgen\n\npc_sim,nc_sim = make_generators(BASES)\npc_train = pc_sim.get_sequences(PC_SEQUENCES)\nnc_train = nc_sim.get_sequences(NC_SEQUENCES)\nprint(\"Train on\",len(pc_train),\"PC seqs\")\nprint(\"Train on\",len(nc_train),\"NC seqs\")",
"Train on 20000 PC seqs\nTrain on 20000 NC seqs\n"
],
[
"# Use code from our LearnTools library.\nX,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles\nprint(\"Data ready.\")",
"Data ready.\n"
],
[
"def make_DNN():\n print(\"make_DNN\")\n print(\"input shape:\",INPUT_SHAPE)\n dnn = Sequential()\n #dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE)) \n dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding=\"same\",\n input_shape=INPUT_SHAPE))\n dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding=\"same\"))\n dnn.add(MaxPooling1D())\n dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding=\"same\"))\n dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding=\"same\"))\n dnn.add(MaxPooling1D())\n dnn.add(Flatten())\n dnn.add(Dense(NEURONS,activation=\"sigmoid\",dtype=np.float32)) \n dnn.add(Dense(1,activation=\"sigmoid\",dtype=np.float32)) \n dnn.compile(optimizer='adam',\n loss=BinaryCrossentropy(from_logits=False),\n metrics=['accuracy']) # add to default metrics=loss\n dnn.build(input_shape=INPUT_SHAPE)\n #ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)\n #bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)\n #model.compile(loss=bc, optimizer=ln_rate, metrics=[\"accuracy\"])\n return dnn\nmodel = make_DNN()\nprint(model.summary())",
"make_DNN\ninput shape: (1000, 4)\nModel: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv1d (Conv1D) (None, 1000, 32) 416 \n_________________________________________________________________\nconv1d_1 (Conv1D) (None, 1000, 32) 3104 \n_________________________________________________________________\nmax_pooling1d (MaxPooling1D) (None, 500, 32) 0 \n_________________________________________________________________\nconv1d_2 (Conv1D) (None, 500, 32) 3104 \n_________________________________________________________________\nconv1d_3 (Conv1D) (None, 500, 32) 3104 \n_________________________________________________________________\nmax_pooling1d_1 (MaxPooling1 (None, 250, 32) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 8000) 0 \n_________________________________________________________________\ndense (Dense) (None, 16) 128016 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 17 \n=================================================================\nTotal params: 137,761\nTrainable params: 137,761\nNon-trainable params: 0\n_________________________________________________________________\nNone\n"
],
[
"from keras.callbacks import ModelCheckpoint\ndef do_cross_validation(X,y):\n cv_scores = []\n fold=0\n mycallbacks = [ModelCheckpoint(\n filepath=MODELPATH, save_best_only=True, \n monitor='val_accuracy', mode='max')] \n splitter = KFold(n_splits=SPLITS) # this does not shuffle\n for train_index,valid_index in splitter.split(X):\n if fold < FOLDS:\n fold += 1\n X_train=X[train_index] # inputs for training\n y_train=y[train_index] # labels for training\n X_valid=X[valid_index] # inputs for validation\n y_valid=y[valid_index] # labels for validation\n print(\"MODEL\")\n # Call constructor on each CV. Else, continually improves the same model.\n model = model = make_DNN()\n print(\"FIT\") # model.fit() implements learning\n start_time=time.time()\n history=model.fit(X_train, y_train, \n epochs=EPOCHS, \n verbose=1, # ascii art while learning\n callbacks=mycallbacks, # called at end of each epoch\n validation_data=(X_valid,y_valid))\n end_time=time.time()\n elapsed_time=(end_time-start_time) \n print(\"Fold %d, %d epochs, %d sec\"%(fold,EPOCHS,elapsed_time))\n # print(history.history.keys()) # all these keys will be shown in figure\n pd.DataFrame(history.history).plot(figsize=(8,5))\n plt.grid(True)\n plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale\n plt.show()\n",
"_____no_output_____"
],
[
"do_cross_validation(X,y)",
"MODEL\nmake_DNN\ninput shape: (1000, 4)\nFIT\nEpoch 1/10\n1000/1000 [==============================] - 89s 88ms/step - loss: 0.6474 - accuracy: 0.6086 - val_loss: 0.5048 - val_accuracy: 0.7552\nINFO:tensorflow:Assets written to: BestModel/assets\nEpoch 2/10\n1000/1000 [==============================] - 88s 88ms/step - loss: 0.4840 - accuracy: 0.7699 - val_loss: 0.4902 - val_accuracy: 0.7669\nINFO:tensorflow:Assets written to: BestModel/assets\nEpoch 3/10\n1000/1000 [==============================] - 88s 88ms/step - loss: 0.3951 - accuracy: 0.8257 - val_loss: 0.4273 - val_accuracy: 0.8112\nINFO:tensorflow:Assets written to: BestModel/assets\nEpoch 4/10\n1000/1000 [==============================] - 90s 90ms/step - loss: 0.2797 - accuracy: 0.8938 - val_loss: 0.3144 - val_accuracy: 0.8736\nINFO:tensorflow:Assets written to: BestModel/assets\nEpoch 5/10\n1000/1000 [==============================] - 90s 90ms/step - loss: 0.1786 - accuracy: 0.9437 - val_loss: 0.2970 - val_accuracy: 0.8799\nINFO:tensorflow:Assets written to: BestModel/assets\nEpoch 6/10\n1000/1000 [==============================] - 92s 92ms/step - loss: 0.1335 - accuracy: 0.9609 - val_loss: 0.2811 - val_accuracy: 0.8916\nINFO:tensorflow:Assets written to: BestModel/assets\nEpoch 7/10\n1000/1000 [==============================] - 91s 91ms/step - loss: 0.1165 - accuracy: 0.9632 - val_loss: 0.2818 - val_accuracy: 0.8895\nEpoch 8/10\n1000/1000 [==============================] - 91s 91ms/step - loss: 0.0955 - accuracy: 0.9720 - val_loss: 0.2989 - val_accuracy: 0.8913\nEpoch 9/10\n1000/1000 [==============================] - 91s 91ms/step - loss: 0.0862 - accuracy: 0.9730 - val_loss: 0.3020 - val_accuracy: 0.8855\nEpoch 10/10\n1000/1000 [==============================] - 90s 90ms/step - loss: 0.0835 - accuracy: 0.9728 - val_loss: 0.2870 - val_accuracy: 0.8957\nINFO:tensorflow:Assets written to: BestModel/assets\nFold 1, 10 epochs, 911 sec\n"
],
[
"from keras.models import load_model\npc_test = pc_sim.get_sequences(PC_TESTS)\nnc_test = nc_sim.get_sequences(NC_TESTS)\nX,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)\nbest_model=load_model(MODELPATH)\nscores = best_model.evaluate(X, y, verbose=0)\nprint(\"The best model parameters were saved during cross-validation.\")\nprint(\"Best was defined as maximum validation accuracy at end of any epoch.\")\nprint(\"Now re-load the best model and test it on previously unseen data.\")\nprint(\"Test on\",len(pc_test),\"PC seqs\")\nprint(\"Test on\",len(nc_test),\"NC seqs\")\nprint(\"%s: %.2f%%\" % (best_model.metrics_names[1], scores[1]*100))\n",
"The best model parameters were saved during cross-validation.\nBest was defined as maximum validation accuracy at end of any epoch.\nNow re-load the best model and test it on previously unseen data.\nTest on 1000 PC seqs\nTest on 1000 NC seqs\naccuracy: 92.40%\n"
],
[
"from sklearn.metrics import roc_curve\nfrom sklearn.metrics import roc_auc_score\nns_probs = [0 for _ in range(len(y))]\nbm_probs = best_model.predict(X)\nns_auc = roc_auc_score(y, ns_probs)\nbm_auc = roc_auc_score(y, bm_probs)\nns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)\nbm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)\nplt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)\nplt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)\nplt.title('ROC')\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.legend()\nplt.show()\nprint(\"%s: %.2f%%\" %('AUC',bm_auc))\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04ee152cc4e090de517f5a61e2d6bf44f526be0 | 107,629 | ipynb | Jupyter Notebook | tutorials/01_NeMo_Models.ipynb | mcdavid109/NeMo | a7df3e0271ab6133f7fe057ec697f764c8637d54 | [
"Apache-2.0"
] | 2 | 2020-10-08T13:38:46.000Z | 2020-10-14T15:09:34.000Z | tutorials/01_NeMo_Models.ipynb | purn3ndu/NeMo | fd98a89adf80012987851a2cd3c3f4dc63bb8db6 | [
"Apache-2.0"
] | null | null | null | tutorials/01_NeMo_Models.ipynb | purn3ndu/NeMo | fd98a89adf80012987851a2cd3c3f4dc63bb8db6 | [
"Apache-2.0"
] | 1 | 2020-12-18T14:23:37.000Z | 2020-12-18T14:23:37.000Z | 38.08528 | 482 | 0.533778 | [
[
[
"\"\"\"\nYou can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n\nInstructions for setting up Colab are as follows:\n1. Open a new Python 3 notebook.\n2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n4. Run this cell to set up dependencies.\n\"\"\"\n# If you're using Google Colab and not running locally, run this cell.\n\n## Install dependencies\n!pip install wget\n!apt-get install sox libsndfile1 ffmpeg\n!pip install unidecode\n\n# ## Install NeMo\n!python -m pip install --upgrade git+https://github.com/NVIDIA/NeMo.git#egg=nemo_toolkit[all]\n\n## Install TorchAudio\n!pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n## Grab the config we'll use in this example\n!mkdir configs",
"_____no_output_____"
]
],
[
[
"# minGPT License\n\n*This notebook port's the [minGPT codebase](https://github.com/karpathy/minGPT) into equivalent NeMo code. The license for minGPT has therefore been attached here.*\n\n```\nThe MIT License (MIT) Copyright (c) 2020 Andrej Karpathy\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n```",
"_____no_output_____"
],
[
"# torch-rnn License\n*This notebook utilizes the `tiny-shakespeare` dataset from the [torch-rnn](https://github.com/jcjohnson/torch-rnn) codebase. The license for torch-rnn has therefore been attached here.*\n\n```\nThe MIT License (MIT)\n\nCopyright (c) 2016 Justin Johnson\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n```\n",
"_____no_output_____"
],
[
"-------\n\n***Note: This notebook will intentionally introduce some errors to show the power of Neural Types or model development concepts, inside the cells marked with `[ERROR CELL]`. The explanation of and resolution of such errors can be found in the subsequent cells.***\n\n-----",
"_____no_output_____"
],
[
"# The NeMo Model\n\nNeMo comes with many state of the art pre-trained Conversational AI models for users to quickly be able to start training and fine-tuning on their own datasets. \n\nIn the previous [NeMo Primer](https://colab.research.google.com/github/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb) notebook, we learned how to download pretrained checkpoints with NeMo and we also discussed the fundamental concepts of the NeMo Model. The previous tutorial showed us how to use, modify, save, and restore NeMo Models.\n\nIn this tutorial we will learn how to develop a non-trivial NeMo model from scratch. This helps us to understand the underlying components and how they interact with the overall PyTorch ecosystem.\n",
"_____no_output_____"
],
[
"-------\nAt the heart of NeMo lies the concept of the \"Model\". For NeMo developers, a \"Model\" is the neural network(s) as well as all the infrastructure supporting those network(s), wrapped into a singular, cohesive unit. As such, most NeMo models are constructed to contain the following out of the box (note: some NeMo models support additional functionality specific to the domain/use case!) - \n\n - Neural Network architecture - all of the modules that are required for the model.\n\n - Dataset + Data Loaders - all of the components that prepare the data for consumption during training or evaluation.\n\n - Preprocessing + Postprocessing - any of the components that process the datasets so the modules can easily consume them.\n\n - Optimizer + Schedulers - basic defaults that work out of the box and allow further experimentation with ease.\n\n - Any other supporting infrastructure - tokenizers, language model configuration, data augmentation, etc.",
"_____no_output_____"
],
[
"# Constructing a NeMo Model\n\nNeMo \"Models\" are comprised of a few key components, so let's tackle them one by one. We will attempt to go in the order that's stated above.\n\nTo make this slightly challenging, let's port a model from the NLP domain this time. Transformers are all the rage, with BERT and his friends from Sesame Street forming the core infrastructure for many NLP tasks. \n\nAn excellent (yet simple) implementation of one such model - GPT - can be found in the `minGPT` repository - https://github.com/karpathy/minGPT. While the script is short, it explains and succinctly explores all of the core components we expect in a NeMo model, so it's a prime candidate for NeMo! Sidenote: NeMo supports GPT in its NLP collection, and as such, this notebook aims to be an in-depth development walkthrough for such models.\n\nIn the following notebook, we will attempt to port minGPT to NeMo, and along the way, discuss some core concepts of NeMo itself.",
"_____no_output_____"
],
[
"# Constructing the Neural Network Architecture\n\nFirst, on the list - the neural network that forms the backbone of the NeMo Model.\n\nSo how do we create such a model? Using PyTorch! As you'll see below, NeMo components are compatible with all of PyTorch, so you can augment your workflow without ever losing the flexibility of PyTorch itself!\n\nLet's start with a couple of imports - ",
"_____no_output_____"
]
],
[
[
"import torch\nimport nemo\nfrom nemo.core import NeuralModule\nfrom nemo.core import typecheck",
"_____no_output_____"
]
],
[
[
"## Neural Module\nWait, what's `NeuralModule`? Where is the wonderful `torch.nn.Module`? \n\n`NeuralModule` is a subclass of `torch.nn.Module`, and it brings with it a few additional functionalities.\n\nIn addition to being a `torch.nn.Module`, thereby being entirely compatible with the PyTorch ecosystem, it has the following capabilities - \n\n1) `Typing` - It adds support for `Neural Type Checking` to the model. `Typing` is optional but quite useful, as we will discuss below!\n\n2) `Serialization` - Remember the `OmegaConf` config dict and YAML config files? Well, all `NeuralModules` inherently supports serialization/deserialization from such config dictionaries!\n\n3) `FileIO` - This is another entirely optional file serialization system. Does your `NeuralModule` require some way to preserve data that can't be saved into a PyTorch checkpoint? Write your serialization and deserialization logic in two handy methods! **Note**: When you create the final NeMo Model, this will be implemented for you! Automatic serialization and deserialization support of NeMo models!\n",
"_____no_output_____"
]
],
[
[
"class MyEmptyModule(NeuralModule):\n\n def forward(self):\n print(\"Neural Module ~ hello world!\")",
"_____no_output_____"
],
[
"x = MyEmptyModule()\nx()",
"_____no_output_____"
]
],
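[
[
"# A NeuralModule is still a torch.nn.Module under the hood, so it remains fully\n# compatible with the wider PyTorch ecosystem - a quick check to confirm.\nprint(\"NeuralModule subclasses torch.nn.Module :\", issubclass(NeuralModule, torch.nn.Module))",
"_____no_output_____"
]
],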
[
[
"## Neural Types\n\nNeural Types? You might be wondering what that term refers to.\n\nAlmost all NeMo components inherit the class `Typing`. `Typing` is a simple class that adds two properties to the class that inherits it - `input_types` and `output_types`. A NeuralType, by its shortest definition, is simply a semantic tensor. It contains information regarding the semantic shape the tensor should hold, as well as the semantic information of what that tensor represents. That's it.\n\nSo what semantic information does such a typed tensor contain? Let's take an example below.\n\n\n",
"_____no_output_____"
],
[
"------\nAcross the Deep Learning domain, we often encounter cases where tensor shapes may match, but the semantics don't match at all. For example take a look at the following rank 3 tensors - ",
"_____no_output_____"
]
],
[
[
"# Case 1:\nembedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=30)\nx = torch.randint(high=10, size=(1, 5))\nprint(\"x :\", x)\nprint(\"embedding(x) :\", embedding(x).shape)",
"_____no_output_____"
],
[
"# Case 2\nlstm = torch.nn.LSTM(1, 30, batch_first=True)\nx = torch.randn(1, 5, 1)\nprint(\"x :\", x)\nprint(\"lstm(x) :\", lstm(x)[0].shape) # Let's take all timestep outputs of the LSTM",
"_____no_output_____"
]
],
[
[
"-------\nAs you can see, the output of Case 1 is an embedding of shape [1, 5, 30], and the output of Case 2 is an LSTM output (state `h` over all time steps), also of the same shape [1, 5, 30].\n\nDo they have the same shape? **Yes**. <br>If we do a Case 1 .shape == Case 2 .shape, will we get True as an output? **Yes**. <br>\nDo they represent the same concept? **No**. <br>\n\n\nThe ability to recognize that the two tensors do not represent the same semantic information is precisely why we utilize Neural Types. It contains the information of both the shape and the semantic concept of what that tensor represents. If we performed a neural type check between the two outputs of those tensors, it would raise an error saying semantically they were different things (more technically, it would say that they are `INCOMPATIBLE` with each other)!\n",
"_____no_output_____"
],
[
"--------\n\nYou may have read of concepts such as [Named Tensors](https://pytorch.org/docs/stable/named_tensor.html). While conceptually similar, Neural Types attached by NeMo are not as tightly bound to the PyTorch ecosystem - practically any object of a class can be attached with a neural type!\n",
"_____no_output_____"
],
[
"## Neural Types - Usage\n\nNeural Types sound interesting, so how do we go about adding them? Let's take a few cases below. \n\nNeural Types are one of the core foundations of NeMo - you will find them in a vast majority of Neural Modules, and every NeMo Model will have its Neural Types defined. While they are entirely optional and unintrusive, NeMo takes great care to support it so that there is no semantic incompatibility between components being used by users.",
"_____no_output_____"
],
[
"Let's start with a basic example of a type checked module.",
"_____no_output_____"
]
],
[
[
"from nemo.core.neural_types import NeuralType\nfrom nemo.core.neural_types import *",
"_____no_output_____"
],
[
"class EmbeddingModule(NeuralModule):\n def __init__(self):\n super().__init__()\n self.embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=30)\n\n @typecheck()\n def forward(self, x):\n return self.embedding(x)\n\n @property\n def input_types(self):\n return {\n 'x': NeuralType(axes=('B', 'T'), elements_type=Index())\n }\n\n @property\n def output_types(self):\n return {\n 'y': NeuralType(axes=('B', 'T', 'C'), elements_type=EmbeddedTextType())\n }",
"_____no_output_____"
]
],
[
[
"To show the benefit of Neural Types, we are going to replicate the above cases inside NeuralModules.\n\nLet's discuss how we added type checking support to the above class.\n\n1) `forward` has a decorator `@typecheck()` on it.\n\n2) `input_types` and `output_types` properties are defined.\n\nThat's it!",
"_____no_output_____"
],
[
"-------\n\nLet's expand on each of the above steps.\n\n- `@typecheck()` is a simple decorator that takes any class that inherits `Typing` (NeuralModule does this for us) and adds the two default properties of `input_types` and `output_types`, which by default returns None.\n\nThe `@typecheck()` decorator's explicit use ensures that, by default, neural type checking is **disabled**. NeMo does not wish to intrude on the development process of models. So users can \"opt-in\" to type checking by overriding the two properties. Therefore, the decorator ensures that users are not burdened with type checking before they wish to have it.\n\nSo what is `@typecheck()`? Simply put, you can wrap **any** function of a class that inherits `Typing` with this decorator, and it will look up the definition of the types of that class and enforce them. Typically, `torch.nn.Module` subclasses only implement `forward()` so it is most common to wrap that method, but `@typecheck()` is a very flexible decorator. Inside NeMo, we will show some advanced use cases (which are quite crucial to particular domains such as TTS).",
"_____no_output_____"
],
[
"------\n\nAs we see above, `@typecheck()` enforces the types. How then, do we provide this type of information to NeMo? \n\nBy overriding `input_types` and `output_types` properties of the class, we can return a dictionary mapping a string name to a `NeuralType`.\n\nIn the above case, we define a `NeuralType` as two components - \n\n- `axes`: This is the semantic information of the carried by the axes themselves. The most common axes information is from single character notation.\n\n> `B` = Batch <br>\n> `C` / `D` - Channel / Dimension (treated the same) <br>\n> `T` - Time <br>\n> `H` / `W` - Height / Width <br>\n\n- `elements_type`: This is the semantic information of \"what the tensor represents\". All such types are derived from the basic `ElementType`, and merely subclassing `ElementType` allows us to build a hierarchy of custom semantic types that can be used by NeMo!\n\nHere, we declare that the input is an element_type of `Index` (index of the character in the vocabulary) and that the output is an element_type of `EmbeddedTextType` (the text embedding)",
"_____no_output_____"
]
],
[
[
"embedding_module = EmbeddingModule()",
"_____no_output_____"
]
],
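[
[
"# The type definitions we just wrote are exposed as simple properties - let's peek at them.\nprint(\"input_types  :\", embedding_module.input_types)\nprint(\"output_types :\", embedding_module.output_types)",
"_____no_output_____"
]
],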
[
[
"Now let's construct the equivalent of the Case 2 above, but as a `NeuralModule`.",
"_____no_output_____"
]
],
[
[
"class LSTMModule(NeuralModule):\n def __init__(self):\n super().__init__()\n self.lstm = torch.nn.LSTM(1, 30, batch_first=True)\n\n @typecheck()\n def forward(self, x):\n return self.lstm(x)\n\n @property\n def input_types(self):\n return {\n 'x': NeuralType(axes=('B', 'T', 'C'), elements_type=SpectrogramType())\n }\n\n @property\n def output_types(self):\n return {\n 'y': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation())\n }",
"_____no_output_____"
]
],
[
[
"------\nHere, we define the LSTM module from the Case 2 above.\n\nWe changed the input to be a rank three tensor, now representing a \"SpectrogramType\". We intentionally keep it generic - it can be a `MelSpectrogramType` or a `MFCCSpectrogramType` as it's input!\n\nThe output of an LSTM is now an `EncodedRepresentation`. Practically, this can be the output of a CNN layer, a Transformer block, or in this case, an LSTM layer. We can, of course, specialize by subclassing EncodedRepresentation and then using that!",
"_____no_output_____"
]
],
[
[
"lstm_module = LSTMModule()",
"_____no_output_____"
]
],
[
[
"------\nNow for the test !",
"_____no_output_____"
]
],
[
[
"# Case 1 [ERROR CELL]\nx1 = torch.randint(high=10, size=(1, 5))\nprint(\"x :\", x1)\nprint(\"embedding(x) :\", embedding_module(x1).shape)",
"_____no_output_____"
]
],
[
[
"-----\nYou might be wondering why we get a `TypeError` right off the bat. This `TypeError` is raised by design.\n\nPositional arguments can cause significant issues during model development, mostly when the model/module design is not finalized. To reduce the potential for mistakes caused by wrong positional arguments and enforce the name of arguments provided to the function, `Typing` requires you to **call all of your type-checked functions by kwargs only**.",
"_____no_output_____"
]
],
[
[
"# Case 1\nprint(\"x :\", x1)\nprint(\"embedding(x) :\", embedding_module(x=x1).shape)",
"_____no_output_____"
]
],
[
[
"Now let's try the same for the `LSTMModule` in Case 2",
"_____no_output_____"
]
],
[
[
"# Case 2 [ERROR CELL]\nx2 = torch.randn(1, 5, 1)\nprint(\"x :\", x2)\nprint(\"lstm(x) :\", lstm_module(x=x2)[0].shape) # Let's take all timestep outputs of the LSTM",
"_____no_output_____"
]
],
[
[
"-----\nNow we get a type error stating that the number of output arguments provided does not match what is expected.\n\nWhat exactly is going on here? Well, inside our `LSTMModule` class, we declare the output types to be a single NeuralType - an `EncodedRepresentation` of shape [B, T, C].\n\nBut the output of an LSTM layer is a tuple of two state values - the hidden state `h` and the cell state `c`!\n\nSo the neural type system raises an error saying that the number of output arguments does not match what is expected.\n\nLet's fix the above.",
"_____no_output_____"
]
],
[
[
"class CorrectLSTMModule(LSTMModule): # Let's inherit the wrong class to make it easy to override\n @property\n def output_types(self):\n return {\n 'h': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()),\n 'c': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()),\n }",
"_____no_output_____"
],
[
"lstm_module = CorrectLSTMModule()",
"_____no_output_____"
],
[
"# Case 2\nx2 = torch.randn(1, 5, 1)\nprint(\"x :\", x2)\nprint(\"lstm(x) :\", lstm_module(x=x2)[0].shape) # Let's take all timestep outputs of the LSTM `h` gate",
"_____no_output_____"
]
],
[
[
"------\nGreat! So now, the type checking system is happy.\n\nIf you looked closely, the outputs were ordinary Torch Tensors (this is good news; we don't want to be incompatible with torch Tensors after all!). So, where exactly is the type of information stored?\n\nWhen the `output_types` is overridden, and valid torch tensors are returned as a result, these tensors are attached with the attribute `neural_type`. Let's inspect this -",
"_____no_output_____"
]
],
[
[
"emb_out = embedding_module(x=x1)\nlstm_out = lstm_module(x=x2)[0]\n\nassert hasattr(emb_out, 'neural_type')\nassert hasattr(lstm_out, 'neural_type')",
"_____no_output_____"
],
[
"print(\"Embedding tensor :\", emb_out.neural_type)\nprint(\"LSTM tensor :\", lstm_out.neural_type)",
"_____no_output_____"
]
],
[
[
"-------\nSo we see that these tensors now have this attribute called `neural_type` and are the same shape.\n\nThis exercise's entire goal was to assert that the two outputs are semantically **not** the same object, even if they are the same shape. \n\nLet's test this!",
"_____no_output_____"
]
],
[
[
"emb_out.neural_type.compare(lstm_out.neural_type)",
"_____no_output_____"
],
[
"emb_out.neural_type == lstm_out.neural_type",
"_____no_output_____"
]
],
[
[
"## Neural Types - Limitations\n\nYou might have noticed one interesting fact - our inputs were just `torch.Tensor` to both typed function calls, and they had no `neural_type` assigned to them.\n\nSo why did the type check system not raise any error? \n\nThis is to maintain compatibility - type checking is meant to work on a chain of function calls - and each of these functions should themselves be wrapped with the `@typecheck()` decorator. This is also done because we don't want to overtax the forward call with dozens of checks, and therefore we only type modules that perform some higher-order logical computation. \n\n------\n\nAs an example, it is mostly unnecessary (but still possible) to type the input and output of every residual block of a ResNet model. However, it is practically important to type the encoder (no matter how many layers is inside it) and the decoder (the classification head) separately so that when one does fine-tuning, there is no semantic mismatch of the tensors input to the encoder and bound to the decoder.",
"_____no_output_____"
],
[
"-------\nFor this case, since it would be impractical to extend a class to attach a type to the input tensor, we can take a shortcut and directly attach the neural type to the input!",
"_____no_output_____"
]
],
[
[
"embedding_module = EmbeddingModule()\nx1 = torch.randint(high=10, size=(1, 5))\n\n# Attach correct neural type\nx1.neural_type = NeuralType(('B', 'T'), Index())\n\nprint(\"embedding(x) :\", embedding_module(x=x1).shape)",
"_____no_output_____"
],
[
"# Attach wrong neural type [ERROR CELL]\nx1.neural_type = NeuralType(('B', 'T'), LabelsType())\n\nprint(\"embedding(x) :\", embedding_module(x=x1).shape)",
"_____no_output_____"
]
],
[
[
"## Let's create the minGPT components\n\nNow that we have a somewhat firm grasp of neural type checking, let's begin porting the minGPT example code. Once again, most of the code will be a direct port from the [minGPT repository](https://github.com/karpathy/minGPT).\n\nHere, you will notice one thing. By just changing class imports, one `@typecheck()` on forward, and adding `input_types` and `output_types` (which are also entirely optional!), we are almost entirely done with the PyTorch Lightning port!",
"_____no_output_____"
]
],
[
[
"import math\nfrom typing import List, Set, Dict, Tuple, Optional\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import functional as F",
"_____no_output_____"
]
],
[
[
"## Creating Element Types\n\nTill now, we have used the Neural Types provided by the NeMo core. But we need not be restricted to the pre-defined element types !\n\nUsers have total flexibility in defining any hierarchy of element types as they please!",
"_____no_output_____"
]
],
[
[
"class AttentionType(EncodedRepresentation):\n \"\"\"Basic Attention Element Type\"\"\"\n\nclass SelfAttentionType(AttentionType):\n \"\"\"Self Attention Element Type\"\"\"\n\nclass CausalSelfAttentionType(SelfAttentionType):\n \"\"\"Causal Self Attention Element Type\"\"\"",
"_____no_output_____"
]
],
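[
[
"# A small extra check (not part of the original minGPT port): subclassed element types\n# remain compatible with their parents, so a CausalSelfAttentionType tensor can flow\n# anywhere an EncodedRepresentation is expected. The comparison below should report\n# anything other than INCOMPATIBLE.\nparent = NeuralType(('B', 'T', 'C'), EncodedRepresentation())\nchild = NeuralType(('B', 'T', 'C'), CausalSelfAttentionType())\nprint(\"parent.compare(child) :\", parent.compare(child))",
"_____no_output_____"
]
],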
[
[
"## Creating the modules\n\nNeural Modules are generally top-level modules but can be used at any level of the module hierarchy.\n\nFor demonstration, we will treat an encoder comprising a block of Causal Self Attention modules as a typed Neural Module. Of course, we can also treat each Causal Self Attention layer itself as a neural module if we require it, but top-level modules are generally preferred.",
"_____no_output_____"
]
],
[
[
"class CausalSelfAttention(nn.Module):\n \"\"\"\n A vanilla multi-head masked self-attention layer with a projection at the end.\n It is possible to use torch.nn.MultiheadAttention here but I am including an\n explicit implementation here to show that there is nothing too scary here.\n \"\"\"\n\n def __init__(self, n_embd, block_size, n_head, attn_pdrop, resid_pdrop):\n super().__init__()\n assert n_embd % n_head == 0\n self.n_head = n_head\n # key, query, value projections for all heads\n self.key = nn.Linear(n_embd, n_embd)\n self.query = nn.Linear(n_embd, n_embd)\n self.value = nn.Linear(n_embd, n_embd)\n # regularization\n self.attn_drop = nn.Dropout(attn_pdrop)\n self.resid_drop = nn.Dropout(resid_pdrop)\n # output projection\n self.proj = nn.Linear(n_embd, n_embd)\n # causal mask to ensure that attention is only applied to the left in the input sequence\n self.register_buffer(\"mask\", torch.tril(torch.ones(block_size, block_size))\n .view(1, 1, block_size, block_size))\n def forward(self, x, layer_past=None):\n B, T, C = x.size()\n\n # calculate query, key, values for all heads in batch and move head forward to be the batch dim\n k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)\n q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)\n v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)\n\n # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)\n att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))\n att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf'))\n att = F.softmax(att, dim=-1)\n att = self.attn_drop(att)\n y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)\n y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side\n\n # output projection\n y = self.resid_drop(self.proj(y))\n return y\n \n\nclass Block(nn.Module):\n \"\"\" an unassuming Transformer block \"\"\"\n\n def __init__(self, n_embd, block_size, n_head, attn_pdrop, resid_pdrop):\n super().__init__()\n self.ln1 = nn.LayerNorm(n_embd)\n self.ln2 = nn.LayerNorm(n_embd)\n self.attn = CausalSelfAttention(n_embd, block_size, n_head, attn_pdrop, resid_pdrop)\n self.mlp = nn.Sequential(\n nn.Linear(n_embd, 4 * n_embd),\n nn.GELU(),\n nn.Linear(4 * n_embd, n_embd),\n nn.Dropout(resid_pdrop),\n )\n\n def forward(self, x):\n x = x + self.attn(self.ln1(x))\n x = x + self.mlp(self.ln2(x))\n return x",
"_____no_output_____"
]
],
[
[
"## Building the NeMo Model\n\nSince a NeMo Model is comprised of various parts, we are going to iterate on the model step by step inside this notebook. As such, we will have multiple intermediate NeMo \"Models\", which will be partial implementations, and they will inherit each other iteratively.\n\nIn a complete implementation of a NeMo Model (as found in the NeMo collections), all of these components will generally be found in a single class.\n\nLet's start by inheriting `ModelPT` - the core class of a PyTorch NeMo Model, which inherits the PyTorch Lightning Module.",
"_____no_output_____"
],
[
"-------\n**Remember**:\n\n - The NeMo equivalent of `torch.nn.Module` is the `NeuralModule.\n - The NeMo equivalent of the `LightningModule` is `ModelPT`.\n",
"_____no_output_____"
]
],
[
[
"import pytorch_lightning as ptl\nfrom nemo.core import ModelPT\nfrom omegaconf import OmegaConf",
"_____no_output_____"
]
],
[
[
"------\nNext, let's construct the bare minimum implementation of the NeMo Model - just the constructor, the initializer of weights, and the forward method.\n\nInitially, we will follow the steps followed by the minGPT implementation, and progressively refactor for NeMo ",
"_____no_output_____"
]
],
[
[
"class PTLGPT(ptl.LightningModule):\n def __init__(self,\n # model definition args\n vocab_size: int, # size of the vocabulary (number of possible tokens)\n block_size: int, # length of the model's context window in time\n n_layer: int, # depth of the model; number of Transformer blocks in sequence\n n_embd: int, # the \"width\" of the model, number of channels in each Transformer\n n_head: int, # number of heads in each multi-head attention inside each Transformer block\n # model optimization args\n learning_rate: float = 3e-4, # the base learning rate of the model\n weight_decay: float = 0.1, # amount of regularizing L2 weight decay on MatMul ops\n betas: Tuple[float, float] = (0.9, 0.95), # momentum terms (betas) for the Adam optimizer\n embd_pdrop: float = 0.1, # \\in [0,1]: amount of dropout on input embeddings\n resid_pdrop: float = 0.1, # \\in [0,1]: amount of dropout in each residual connection\n attn_pdrop: float = 0.1, # \\in [0,1]: amount of dropout on the attention matrix\n ):\n super().__init__()\n\n # save these for optimizer init later\n self.learning_rate = learning_rate\n self.weight_decay = weight_decay\n self.betas = betas\n\n # input embedding stem: drop(content + position)\n self.tok_emb = nn.Embedding(vocab_size, n_embd)\n self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd))\n self.drop = nn.Dropout(embd_pdrop)\n # deep transformer: just a sequence of transformer blocks\n self.blocks = nn.Sequential(*[Block(n_embd, block_size, n_head, attn_pdrop, resid_pdrop) for _ in range(n_layer)])\n # decoder: at the end one more layernorm and decode the answers\n self.ln_f = nn.LayerNorm(n_embd)\n self.head = nn.Linear(n_embd, vocab_size, bias=False) # no need for extra bias due to one in ln_f\n\n self.block_size = block_size\n self.apply(self._init_weights)\n\n print(\"number of parameters: %e\" % sum(p.numel() for p in self.parameters()))\n\n def forward(self, idx):\n b, t = idx.size()\n assert t <= self.block_size, \"Cannot forward, model block size is exhausted.\"\n\n # forward the GPT model\n token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector\n position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector\n x = self.drop(token_embeddings + position_embeddings)\n x = self.blocks(x)\n x = self.ln_f(x)\n logits = self.head(x)\n\n return logits\n\n def get_block_size(self):\n return self.block_size\n\n def _init_weights(self, module):\n \"\"\"\n Vanilla model initialization:\n - all MatMul weights \\in N(0, 0.02) and biases to zero\n - all LayerNorm post-normalization scaling set to identity, so weight=1, bias=0\n \"\"\"\n if isinstance(module, (nn.Linear, nn.Embedding)):\n module.weight.data.normal_(mean=0.0, std=0.02)\n if isinstance(module, nn.Linear) and module.bias is not None:\n module.bias.data.zero_()\n elif isinstance(module, nn.LayerNorm):\n module.bias.data.zero_()\n module.weight.data.fill_(1.0)",
"_____no_output_____"
]
],
[
[
"------\nLet's create a PyTorch Lightning Model above, just to make sure it works !",
"_____no_output_____"
]
],
[
[
"m = PTLGPT(vocab_size=100, block_size=32, n_layer=1, n_embd=32, n_head=4)",
"_____no_output_____"
]
],
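[
[
"# A quick smoke test (extra to the tutorial): push random token indices through the\n# freshly initialized model and check the logits shape.\nidx = torch.randint(0, 100, size=(2, 16))  # (batch, time), well within block_size=32\nlogits = m(idx)\nprint(\"logits :\", logits.shape)  # expected: torch.Size([2, 16, 100])",
"_____no_output_____"
]
],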
[
[
"------\nNow, let's convert the above easily into a NeMo Model.\n\nA NeMo Model constructor generally accepts only two things - \n\n1) `cfg`: An OmegaConf DictConfig object that defines precisely the components required by the model to define its neural network architecture, data loader setup, optimizer setup, and any additional components needed for the model itself.\n\n2) `trainer`: An optional Trainer from PyTorch Lightning if the NeMo model will be used for training. It can be set after construction (if required) using the `set_trainer` method. For this notebook, we will not be constructing the config for the Trainer object.",
"_____no_output_____"
],
[
"## Refactoring Neural Modules\n\nAs we discussed above, Neural Modules are generally higher-level components of the Model and can potentially be replaced by equivalent Neural Modules.\n\nAs we see above, the embedding modules, deep transformer network, and final decoder layer have all been combined inside the PyTorch Lightning implementation constructor.\n\n------\n\nHowever, the decoder could have been an RNN instead of a simple Linear layer, or it could have been a 1D-CNN instead.\n\nLikewise, the deep encoder could potentially have a different implementation of Self Attention modules.\n\nThese changes cannot be easily implemented any more inside the above implementation. However, if we refactor these components into their respective NeuralModules, then we can easily replace them with equivalent modules we construct in the future!",
"_____no_output_____"
],
[
"### Refactoring the Embedding module\n\nLet's first refactor out the embedding module from the above implementation",
"_____no_output_____"
]
],
[
[
"class GPTEmbedding(NeuralModule):\n def __init__(self, vocab_size: int, n_embd: int, block_size: int, embd_pdrop: float = 0.0):\n super().__init__()\n\n # input embedding stem: drop(content + position)\n self.tok_emb = nn.Embedding(vocab_size, n_embd)\n self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd))\n self.drop = nn.Dropout(embd_pdrop)\n\n @typecheck()\n def forward(self, idx):\n b, t = idx.size()\n \n # forward the GPT model\n token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector\n position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector\n x = self.drop(token_embeddings + position_embeddings)\n return x\n\n @property\n def input_types(self):\n return {\n 'idx': NeuralType(('B', 'T'), Index())\n }\n\n @property\n def output_types(self):\n return {\n 'embeddings': NeuralType(('B', 'T', 'C'), EmbeddedTextType())\n }",
"_____no_output_____"
]
],
[
[
"### Refactoring the Encoder\n\nNext, let's refactor the Encoder - the multi layer Transformer Encoder",
"_____no_output_____"
]
],
[
[
"class GPTTransformerEncoder(NeuralModule):\n def __init__(self, n_embd: int, block_size: int, n_head: int, n_layer: int, attn_pdrop: float = 0.0, resid_pdrop: float = 0.0):\n super().__init__()\n\n self.blocks = nn.Sequential(*[Block(n_embd, block_size, n_head, attn_pdrop, resid_pdrop) \n for _ in range(n_layer)])\n \n @typecheck()\n def forward(self, embed):\n return self.blocks(embed)\n\n @property\n def input_types(self):\n return {\n 'embed': NeuralType(('B', 'T', 'C'), EmbeddedTextType())\n }\n\n @property\n def output_types(self):\n return {\n 'encoding': NeuralType(('B', 'T', 'C'), CausalSelfAttentionType())\n }",
"_____no_output_____"
]
],
[
[
"### Refactoring the Decoder\n\nFinally, let's refactor the Decoder - the small one-layer feed-forward network to decode the answer.\n\n-------\n\nNote an interesting detail - The `input_types` of the Decoder accepts the generic `EncoderRepresentation()`, where as the `neural_type` of the `GPTTransformerEncoder` has the `output_type` of `CausalSelfAttentionType`.\n\nThis is semantically *not* a mismatch! As you can see above in the inheritance chart, we declare `EncodedRepresentation` -> `AttentionType` -> `SelfAttentionType` -> `CausalSelfAttentionType`. \n\nSuch an inheritance hierarchy for the `element_type` allows future encoders (which also have a neural output type of at least `EncodedRepresentation`) to be swapped in place of the current GPT Causal Self Attention Encoder while keeping the rest of the NeMo model working just fine!",
"_____no_output_____"
]
],
[
[
"class GPTDecoder(NeuralModule):\n def __init__(self, n_embd: int, vocab_size: int):\n super().__init__()\n self.ln_f = nn.LayerNorm(n_embd)\n self.head = nn.Linear(n_embd, vocab_size, bias=False) # no need for extra bias due to one in ln_f\n\n @typecheck()\n def forward(self, encoding):\n x = self.ln_f(encoding)\n logits = self.head(x)\n return logits\n\n @property\n def input_types(self):\n return {\n 'encoding': NeuralType(('B', 'T', 'C'), EncodedRepresentation())\n }\n \n @property\n def output_types(self):\n return {\n 'logits': NeuralType(('B', 'T', 'C'), LogitsType())\n }\n",
"_____no_output_____"
]
],
[
[
"### Refactoring the NeMo GPT Model\n\nNow that we have 3 NeuralModules for the embedding, the encoder, and the decoder, let's refactor the NeMo model to take advantage of this refactor!\n\nThis time, we inherit from `ModelPT` instead of the general `LightningModule`.",
"_____no_output_____"
]
],
[
[
"class AbstractNeMoGPT(ModelPT):\n def __init__(self, cfg: OmegaConf, trainer: ptl.Trainer = None):\n super().__init__(cfg=cfg, trainer=trainer)\n\n # input embedding stem: drop(content + position)\n self.embedding = self.from_config_dict(self.cfg.embedding)\n # deep transformer: just a sequence of transformer blocks\n self.encoder = self.from_config_dict(self.cfg.encoder)\n # decoder: at the end one more layernorm and decode the answers\n self.decoder = self.from_config_dict(self.cfg.decoder)\n\n self.block_size = self.cfg.embedding.block_size\n self.apply(self._init_weights)\n\n print(\"number of parameters: %e\" % self.num_weights)\n\n @typecheck()\n def forward(self, idx):\n b, t = idx.size()\n assert t <= self.block_size, \"Cannot forward, model block size is exhausted.\"\n\n # forward the GPT model\n # Remember: Only kwargs are allowed !\n e = self.embedding(idx=idx)\n x = self.encoder(embed=e)\n logits = self.decoder(encoding=x)\n\n return logits\n\n def get_block_size(self):\n return self.block_size\n\n def _init_weights(self, module):\n \"\"\"\n Vanilla model initialization:\n - all MatMul weights \\in N(0, 0.02) and biases to zero\n - all LayerNorm post-normalization scaling set to identity, so weight=1, bias=0\n \"\"\"\n if isinstance(module, (nn.Linear, nn.Embedding)):\n module.weight.data.normal_(mean=0.0, std=0.02)\n if isinstance(module, nn.Linear) and module.bias is not None:\n module.bias.data.zero_()\n elif isinstance(module, nn.LayerNorm):\n module.bias.data.zero_()\n module.weight.data.fill_(1.0)\n\n @property\n def input_types(self):\n return {\n 'idx': NeuralType(('B', 'T'), Index())\n }\n\n @property\n def output_types(self):\n return {\n 'logits': NeuralType(('B', 'T', 'C'), LogitsType())\n }",
"_____no_output_____"
]
],
[
[
"## Creating a config for a Model\n\nAt first glance, not much changed compared to the PyTorch Lightning implementation above. Other than the constructor, which now accepts a config, nothing changed at all!\n\nNeMo operates on the concept of a NeMo Model being accompanied by a corresponding config dict (instantiated as an OmegaConf object). This enables us to prototype the model by utilizing Hydra rapidly. This includes various other benefits - such as hyperparameter optimization and serialization/deserialization of NeMo models.\n\nLet's look at how actually to construct such config objects!",
"_____no_output_____"
]
],
[
[
"# model definition args (required)\n# ================================\n# vocab_size: int # size of the vocabulary (number of possible tokens)\n# block_size: int # length of the model's context window in time\n# n_layer: int # depth of the model; number of Transformer blocks in sequence\n# n_embd: int # the \"width\" of the model, number of channels in each Transformer\n# n_head: int # number of heads in each multi-head attention inside each Transformer block \n\n# model definition args (optional)\n# ================================\n# embd_pdrop: float = 0.1, # \\in [0,1]: amount of dropout on input embeddings\n# resid_pdrop: float = 0.1, # \\in [0,1]: amount of dropout in each residual connection\n# attn_pdrop: float = 0.1, # \\in [0,1]: amount of dropout on the attention matrix",
"_____no_output_____"
]
],
[
[
"------\nAs we look at the required parameters above, we need a way to tell OmegaConf that these values are currently not set, but the user should set them before we use them.\n\nOmegaConf supports such behavior using the `MISSING` value. A similar effect can be achieved in YAML configs by using `???` as a placeholder.",
"_____no_output_____"
]
],
[
[
"from omegaconf import MISSING",
"_____no_output_____"
],
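[
"# A tiny demonstration (not part of the port itself): reading a mandatory-but-unset\n# MISSING value raises an error until the user fills it in.\ndemo = OmegaConf.create({'required_value': MISSING})\ntry:\n    _ = demo.required_value\nexcept Exception as e:\n    print(\"Accessing a MISSING value raises :\", type(e).__name__)",
"_____no_output_____"
],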
[
"# Let's create a utility for building the class path\ndef get_class_path(cls):\n return f'{cls.__module__}.{cls.__name__}'",
"_____no_output_____"
]
],
[
[
"### Structure of a Model config\n\nLet's first create a config for the common components of the model level config -",
"_____no_output_____"
]
],
[
[
"common_config = OmegaConf.create({\n 'vocab_size': MISSING,\n 'block_size': MISSING,\n 'n_layer': MISSING,\n 'n_embd': MISSING,\n 'n_head': MISSING,\n})",
"_____no_output_____"
]
],
[
[
"-----\nThe model config right now is still being built - it needs to contain a lot more details!\n\nA complete Model Config should have the sub-configs of all of its top-level modules as well. This means the configs of the `embedding`, `encoder`, and the `decoder`.\n",
"_____no_output_____"
],
[
"### Structure of sub-module config\n\nFor top-level models, we generally don't change the actual module very often, and instead, primarily change the hyperparameters of that model.\n\nSo we will make use of `Hydra`'s Class instantiation method - which can easily be accessed via the class method `ModelPT.from_config_dict()`.\n\nLet's take a few examples below -",
"_____no_output_____"
]
],
[
[
"embedding_config = OmegaConf.create({\n '_target_': get_class_path(GPTEmbedding),\n 'vocab_size': '${model.vocab_size}',\n 'n_embd': '${model.n_embd}',\n 'block_size': '${model.block_size}',\n 'embd_pdrop': 0.1\n})\n\nencoder_config = OmegaConf.create({\n '_target_': get_class_path(GPTTransformerEncoder),\n 'n_embd': '${model.n_embd}',\n 'block_size': '${model.block_size}',\n 'n_head': '${model.n_head}',\n 'n_layer': '${model.n_layer}',\n 'attn_pdrop': 0.1,\n 'resid_pdrop': 0.1\n})\n\ndecoder_config = OmegaConf.create({\n '_target_': get_class_path(GPTDecoder),\n # n_embd: int, vocab_size: int\n 'n_embd': '${model.n_embd}',\n 'vocab_size': '${model.vocab_size}'\n})",
"_____no_output_____"
]
],
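[
[
"# `_target_` is just the full classpath string that Hydra will import and instantiate -\n# it matches what our `get_class_path` utility computes.\nprint(\"embedding _target_ :\", embedding_config['_target_'])\nprint(\"computed classpath :\", get_class_path(GPTEmbedding))",
"_____no_output_____"
]
],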
[
[
"##### What is `_target_`?\n--------\n\nIn the above config, we see a `_target_` in the config. `_target_` is usually a full classpath to the actual class in the python package/user local directory. It is required for Hydra to locate and instantiate the model from its path correctly.\n\nSo why do we want to set a classpath?\n\nIn general, when developing models, we don't often change the encoder or the decoder, but we do change the hyperparameters of the encoder and decoder.\n\nThis notation helps us keep the Model level declaration of the forward step neat and precise. It also logically helps us demark which parts of the model can be easily replaced - in the future, we can easily replace the encoder with some other type of self-attention block or the decoder with an RNN or 1D-CNN neural module (as long as they have the same Neural Type definition as the current blocks).\n",
"_____no_output_____"
],
[
"##### What is the `${}` syntax?\n-------\n\nOmegaConf, and by extension, Hydra, supports Variable Interpolation. As you can see in the `__init__` of embedding, encoder, and decoder neural modules, they often share many parameters between each other.\n\nIt would become tedious and error-prone to set each of these constructors' values separately in each of the embedding, encoder, and decoder configs.\n\nSo instead, we define standard keys inside of the `model` level config and then interpolate these values inside of the respective configs!",
"_____no_output_____"
],
[
"### Attaching the model and module-level configs\n\nSo now, we have a Model level and per-module level configs for the core components. Sub-module configs generally fall under the \"model\" namespace, but you have the flexibility to define the structure as you require.\n\nLet's attach them!\n",
"_____no_output_____"
]
],
[
[
"model_config = OmegaConf.create({\n 'model': common_config\n})\n\n# Then let's attach the sub-module configs\nmodel_config.model.embedding = embedding_config\nmodel_config.model.encoder = encoder_config\nmodel_config.model.decoder = decoder_config",
"_____no_output_____"
]
],
[
[
"-----\nLet's print this config!",
"_____no_output_____"
]
],
[
[
"print(OmegaConf.to_yaml(model_config))",
"_____no_output_____"
]
],
[
[
"-----\nWait, why did OmegaConf not fill in the value of the variable interpolation for the configs yet?\n\nThis is because OmegaConf takes a deferred approach to variable interpolation. To force it ahead of time, we can use the following snippet - ",
"_____no_output_____"
]
],
[
[
"temp_config = OmegaConf.create(OmegaConf.to_container(model_config, resolve=True))\nprint(OmegaConf.to_yaml(temp_config))",
"_____no_output_____"
]
],
[
[
"-----\nNow that we have a config, let's try to create an object of the NeMo Model !",
"_____no_output_____"
]
],
[
[
"import copy",
"_____no_output_____"
],
[
"# Let's work on a copy of the model config and update it before we send it into the Model.\ncfg = copy.deepcopy(model_config)",
"_____no_output_____"
],
[
"# Let's set the values of the config (for some plausible small model)\ncfg.model.vocab_size = 100\ncfg.model.block_size = 128\ncfg.model.n_layer = 1\ncfg.model.n_embd = 32\ncfg.model.n_head = 4",
"_____no_output_____"
],
[
"print(OmegaConf.to_yaml(cfg))",
"_____no_output_____"
],
[
"# Try to create a model with this config [ERROR CELL]\nm = AbstractNeMoGPT(cfg.model)",
"_____no_output_____"
]
],
[
[
"-----\n\nYou will note that we added the `Abstract` tag for a reason to this NeMo Model and that when we try to instantiate it - it raises an error that we need to implement specific methods.\n\n1) `setup_training_data` & `setup_validation_data` - All NeMo models should implement two data loaders - the training data loader and the validation data loader. Optionally, they can go one step further and also implement the `setup_test_data` method to add support for evaluating the Model on its own.\n\nWhy do we enforce this? NeMo Models are meant to be a unified, cohesive object containing the details about the neural network underlying that Model and the data loaders to train, validate, and optionally test those models.\n\nIn doing so, once the Model is created/deserialized, it would take just a few more steps to train the Model from scratch / fine-tune/evaluate the Model on any data that the user provides, as long as this user-provided dataset is in a format supported by the Dataset / DataLoader that is used by this Model!\n\n2) `list_available_models` - This is a utility method to provide a list of pre-trained NeMo models to the user from the cloud.\n\nTypically, NeMo models can be easily packaged into a tar file (which we call a .nemo file in the earlier primer notebook). These tar files contain the model config + the pre-trained checkpoint weights of the Model, and can easily be downloaded from some cloud service. \n\nFor this notebook, we will not be implementing this method.\n\n--------\nFinally, let's create a concrete implementation of the above NeMo Model!",
"_____no_output_____"
]
],
[
[
"from nemo.core.classes.common import PretrainedModelInfo",
"_____no_output_____"
],
[
"class BasicNeMoGPT(AbstractNeMoGPT):\n\n @classmethod\n def list_available_models(cls) -> PretrainedModelInfo:\n return None\n\n def setup_training_data(self, train_data_config: OmegaConf):\n self._train_dl = None\n \n def setup_validation_data(self, val_data_config: OmegaConf):\n self._validation_dl = None\n \n def setup_test_data(self, test_data_config: OmegaConf):\n self._test_dl = None",
"_____no_output_____"
]
],
[
[
"------\nNow let's try to create an object of the `BasicNeMoGPT` model",
"_____no_output_____"
]
],
[
[
"m = BasicNeMoGPT(cfg.model)",
"_____no_output_____"
]
],
[
[
"## Setting up train-val-test steps\n\nThe above `BasicNeMoGPT` Model is a basic PyTorch Lightning Module, with some added functionality - \n\n1) Neural Type checks support - as defined in the Model as well as the internal modules.\n\n2) Save and restore of the Model (in the trivial case) to a tarfile.\n\nBut as the Model is right now, it crucially does not support PyTorch Lightning's `Trainer`. As such, while this Model can be called manually, it cannot be easily trained or evaluated by using the PyTorch Lightning framework.\n\n------\n\nLet's begin adding support for this then -",
"_____no_output_____"
]
],
[
[
"class BasicNeMoGPTWithSteps(BasicNeMoGPT):\n\n def step_(self, split, batch, batch_idx=None):\n idx, targets = batch\n logits = self(idx=idx)\n loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))\n key = 'loss' if split == 'train' else f\"{split}_loss\"\n return {key: loss}\n\n def training_step(self, *args, **kwargs):\n return self.step_('train', *args, **kwargs)\n\n def validation_step(self, *args, **kwargs):\n return self.step_('val', *args, **kwargs)\n\n def test_step(self, *args, **kwargs):\n return self.step_('test', *args, **kwargs)\n \n # This is useful for multiple validation data loader setup\n def multi_validation_epoch_end(self, outputs, dataloader_idx: int = 0):\n val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean()\n return {'val_loss': val_loss_mean}\n\n # This is useful for multiple test data loader setup\n def multi_test_epoch_end(self, outputs, dataloader_idx: int = 0):\n test_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean()\n return {'test_loss': test_loss_mean}",
"_____no_output_____"
],
[
"m = BasicNeMoGPTWithSteps(cfg=cfg.model)",
"_____no_output_____"
]
],
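[
[
"# Smoke test of the typed train step (extra to the tutorial): random tokens in,\n# a scalar cross-entropy loss out - no Trainer needed for this quick check.\nidx = torch.randint(0, cfg.model.vocab_size, size=(2, 16))\ntargets = torch.randint(0, cfg.model.vocab_size, size=(2, 16))\nprint(m.step_('train', (idx, targets)))",
"_____no_output_____"
]
],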
[
[
"### Setup for Multi Validation and Multi Test data loaders\n\nAs discussed in the NeMo Primer, NeMo has in-built support for multiple data loaders for validation and test steps. Therefore, as an example of how easy it is to add such support, we include the `multi_validation_epoch_end` and `multi_test_epoch_end` overrides.\n\nIt is also practically essential to collate results from more than one distributed GPUs, and then aggregate results properly at the end of the epoch. NeMo strictly enforces the correct collation of results, even if you will work on only one device! Future-proofing is baked into the model design for this case!\n\nTherefore NeMo provides the above two generic methods to support aggregation and simultaneously support multiple datasets!\n\n**Please note, you can prepend your already existing `validation_epoch_end` and `test_epoch_end` implementations with the `multi_` in the name, and that alone is sufficient to enable multi-dataset and multi-GPU support!**\n\n------\n**Note: To disable multi-dataset support, simply override `validation_epoch_end` and `test_epoch_end` instead of `multi_validation_epoch_end` and `multi_test_epoch_end`!**",
"_____no_output_____"
],
[
"## Setting up the optimizer / scheduler\n\nWe are relatively close to reaching feature parity with the MinGPT Model! But we are missing a crucial piece - the optimizer.\n\nAll NeMo Model's come with a default implementation of `setup_optimization()`, which will parse the provided model config to obtain the `optim` and `sched` sub-configs, and automatically configure the optimizer and scheduler.\n\nIf training GPT was as simple as plugging in an Adam optimizer over all the parameters with a cosine weight decay schedule, we could do that from the config alone.\n\n-------\n\nBut GPT is not such a trivial model - more specifically, it requires weight decay to be applied to the weight matrices but not to the biases, the embedding matrix, or the LayerNorm layers.\n\nWe can drop the support that Nemo provides for such special cases and instead utilize the PyTorch Lightning method `configure_optimizers` to perform the same task.\n\n-------\n\nNote, for NeMo Models; the `configure_optimizers` is implemented as a trivial call to `setup_optimization()` followed by returning the generated optimizer and scheduler! So we can override the `configure_optimizer` method and manage the optimizer creation manually!\n\nNeMo's goal is to provide usable defaults for the general case and simply back off to either PyTorch Lightning or PyTorch nn.Module itself in cases which the additional flexibility becomes necessary!",
"_____no_output_____"
]
],
[
[
"class BasicNeMoGPTWithOptim(BasicNeMoGPTWithSteps):\n\n def configure_optimizers(self):\n \"\"\"\n This long function is unfortunately doing something very simple and is being very defensive:\n We are separating out all parameters of the model into two buckets: those that will experience\n weight decay for regularization and those that won't (biases, and layernorm/embedding weights).\n We are then returning the PyTorch optimizer object.\n \"\"\"\n\n # separate out all parameters to those that will and won't experience weight decay\n decay = set()\n no_decay = set()\n whitelist_weight_modules = (torch.nn.Linear, )\n blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding)\n for mn, m in self.named_modules():\n for pn, p in m.named_parameters():\n fpn = '%s.%s' % (mn, pn) if mn else pn # full param name\n\n if pn.endswith('bias'):\n # all biases will not be decayed\n no_decay.add(fpn)\n elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules):\n # weights of whitelist modules will be weight decayed\n decay.add(fpn)\n elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules):\n # weights of blacklist modules will NOT be weight decayed\n no_decay.add(fpn)\n\n # special case the position embedding parameter in the root GPT module as not decayed\n no_decay.add('embedding.pos_emb')\n\n # validate that we considered every parameter\n param_dict = {pn: p for pn, p in self.named_parameters()}\n inter_params = decay & no_decay\n union_params = decay | no_decay\n assert len(inter_params) == 0, \"parameters %s made it into both decay/no_decay sets!\" % (str(inter_params), )\n assert len(param_dict.keys() - union_params) == 0, \"parameters %s were not separated into either decay/no_decay set!\" \\\n % (str(param_dict.keys() - union_params), )\n\n # create the pytorch optimizer object\n optim_groups = [\n {\"params\": [param_dict[pn] for pn in sorted(list(decay))], \"weight_decay\": self.cfg.optim.weight_decay},\n {\"params\": [param_dict[pn] for pn in sorted(list(no_decay))], \"weight_decay\": 0.0},\n ]\n optimizer = torch.optim.AdamW(optim_groups, lr=self.cfg.optim.lr, betas=self.cfg.optim.betas)\n return optimizer\n",
"_____no_output_____"
],
[
"m = BasicNeMoGPTWithOptim(cfg=cfg.model)",
"_____no_output_____"
]
],
[
[
"-----\nNow let's setup the config for the optimizer !",
"_____no_output_____"
]
],
[
[
"OmegaConf.set_struct(cfg.model, False)\n\noptim_config = OmegaConf.create({\n 'lr': 3e-4,\n 'weight_decay': 0.1,\n 'betas': [0.9, 0.95]\n})\n\ncfg.model.optim = optim_config\n\nOmegaConf.set_struct(cfg.model, True)",
"_____no_output_____"
]
],
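[
[
"# Sanity check - the optimizer sub-config is now part of the model config.\nprint(OmegaConf.to_yaml(cfg.model.optim))",
"_____no_output_____"
]
],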
[
[
"## Setting up the dataset / data loaders\n\nSo we were able almost entirely to replicate the MinGPT implementation. \n\nRemember, NeMo models should contain all of the logic to load the Dataset and DataLoader for at least the train and validation step.\n\nWe temporarily provided empty implementations to get around it till now, but let's fill that in now!\n\n-------\n\n**Note for datasets**: Below, we will show an example using a very small dataset called `tiny_shakespeare`, found at the original [char-rnn repository](https://github.com/karpathy/char-rnn), but practically you could use any text corpus. The one suggested in minGPT is available at http://mattmahoney.net/dc/textdata.html",
"_____no_output_____"
],
[
"### Creating the Dataset\n\nNeMo has Neural Type checking support, even for Datasets! It's just a minor change of the import in most cases and one difference in how we handle `collate_fn`.\n\nWe could paste the dataset info from minGPT, and you'd only need to make 2 changes!\n\n-----\nIn this example, we will be writing a thin subclass over the datasets provided by `nlp` from HuggingFace!",
"_____no_output_____"
]
],
[
[
"from nemo.core import Dataset\nfrom torch.utils import data\nfrom torch.utils.data.dataloader import DataLoader",
"_____no_output_____"
],
[
"class TinyShakespeareDataset(Dataset):\n\n def __init__(self, data_path, block_size, crop=None, override_vocab=None):\n\n # load the data and crop it appropriately\n with open(data_path, 'r') as f:\n if crop is None:\n data = f.read()\n else:\n f.seek(crop[0])\n data = f.read(crop[1])\n\n # build a vocabulary from data or inherit it\n vocab = sorted(list(set(data))) if override_vocab is None else override_vocab\n\n # Add UNK\n special_tokens = ['<PAD>', '<UNK>'] # We use just <UNK> and <PAD> in the call, but can add others.\n if not override_vocab:\n vocab = [*special_tokens, *vocab] # Update train vocab with special tokens\n\n data_size, vocab_size = len(data), len(vocab)\n print('data of crop %s has %d characters, vocab of size %d.' % (str(crop), data_size, vocab_size))\n print('Num samples in dataset : %d' % (data_size // block_size))\n\n self.stoi = { ch:i for i,ch in enumerate(vocab) }\n self.itos = { i:ch for i,ch in enumerate(vocab) }\n self.block_size = block_size\n self.vocab_size = vocab_size\n self.data = data\n self.vocab = vocab\n self.special_tokens = special_tokens\n\n def __len__(self):\n return len(self.data) // self.block_size\n\n def __getitem__(self, idx):\n # attempt to fetch a chunk of (block_size + 1) items, but (block_size) will work too\n chunk = self.data[idx*self.block_size : min(len(self.data), (idx+1)*self.block_size + 1)]\n # map the string into a sequence of integers\n ixes = [self.stoi[s] if s in self.stoi else self.stoi['<UNK>'] for s in chunk ]\n # if stars align (last idx and len(self.data) % self.block_size == 0), pad with <PAD>\n if len(ixes) < self.block_size + 1:\n assert len(ixes) == self.block_size # i believe this is the only way this could happen, make sure\n ixes.append(self.stoi['<PAD>'])\n dix = torch.tensor(ixes, dtype=torch.long)\n return dix[:-1], dix[1:]\n\n @property\n def output_types(self):\n return {\n 'input': NeuralType(('B', 'T'), Index()),\n 'target': NeuralType(('B', 'T'), LabelsType())\n }",
"_____no_output_____"
]
],
[
[
"------\nWe didn't have to change anything until here. How then is type-checking done? \n\nNeMo does type-checking inside of the collate function implementation itself! In this case, it is not necessary to override the `collate_fn` inside the Dataset, but if we did need to override it, **NeMo requires that the private method `_collate_fn` be overridden instead**.\n\nWe can then use data loaders with minor modifications!\n\n**Also, there is no need to implement the `input_types` for Dataset, as they are the ones generating the input for the model!**",
"_____no_output_____"
],
[
"-----\nLet's prepare the dataset that we are going to use - Tiny Shakespeare from the following codebase [char-rnn](https://github.com/karpathy/char-rnn).",
"_____no_output_____"
]
],
[
[
"import os",
"_____no_output_____"
],
[
"if not os.path.exists('tiny-shakespeare.txt'):\n !wget https://raw.githubusercontent.com/jcjohnson/torch-rnn/master/data/tiny-shakespeare.txt",
"_____no_output_____"
],
[
"!head -n 5 tiny-shakespeare.txt",
"_____no_output_____"
],
[
"train_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(0, int(1e6)))\nval_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(int(1e6), int(50e3)), override_vocab=train_dataset.vocab)\ntest_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(int(1.05e6), int(100e3)), override_vocab=train_dataset.vocab)",
"_____no_output_____"
]
],
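[
[
"# Inspect a single sample (an extra check): the input and target are the same chunk of\n# text, shifted by one character for next-character prediction.\nsample_input, sample_target = train_dataset[0]\nprint(\"input  :\", sample_input.shape, sample_input[:8])\nprint(\"target :\", sample_target.shape, sample_target[:8])",
"_____no_output_____"
]
],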
[
[
"### Setting up dataset/data loader support in the Model\n\nSo we now know our data loader works. Let's integrate it as part of the Model itself!\n\nTo do this, we use the three special attributes of the NeMo Model - `self._train_dl`, `self._validation_dl` and `self._test_dl`. Once you construct your DataLoader, place your data loader to these three variables. \n\nFor multi-data loader support, the same applies! NeMo will automatically handle the management of multiple data loaders for you!",
"_____no_output_____"
]
],
[
[
"class NeMoGPT(BasicNeMoGPTWithOptim):\n\n def _setup_data_loader(self, cfg):\n if self.vocab is None:\n override_vocab = None\n else:\n override_vocab = self.vocab\n\n dataset = TinyShakespeareDataset(\n data_path=cfg.data_path,\n block_size=cfg.block_size,\n crop=tuple(cfg.crop) if 'crop' in cfg else None,\n override_vocab=override_vocab\n )\n\n if self.vocab is None:\n self.vocab = dataset.vocab\n\n return DataLoader(\n dataset=dataset,\n batch_size=cfg.batch_size,\n shuffle=cfg.shuffle,\n collate_fn=dataset.collate_fn, # <-- this is necessary for type checking\n pin_memory=cfg.pin_memory if 'pin_memory' in cfg else False,\n num_workers=cfg.num_workers if 'num_workers' in cfg else 0\n )\n \n def setup_training_data(self, train_data_config: OmegaConf):\n self.vocab = None\n self._train_dl = self._setup_data_loader(train_data_config)\n \n def setup_validation_data(self, val_data_config: OmegaConf):\n self._validation_dl = self._setup_data_loader(val_data_config)\n \n def setup_test_data(self, test_data_config: OmegaConf):\n self._test_dl = self._setup_data_loader(test_data_config)\n",
"_____no_output_____"
]
],
[
[
"### Creating the dataset / dataloader config\n\nThe final step to setup this model is to add the `train_ds`, `validation_ds` and `test_ds` configs inside the model config!",
"_____no_output_____"
]
],
[
[
"OmegaConf.set_struct(cfg.model, False)\n\n# Set the data path and update vocabular size\ncfg.model.data_path = 'tiny-shakespeare.txt'\ncfg.model.vocab_size = train_dataset.vocab_size\n\nOmegaConf.set_struct(cfg.model, True)",
"_____no_output_____"
],
[
"train_ds = OmegaConf.create({\n 'data_path': '${model.data_path}',\n 'block_size': '${model.block_size}',\n 'crop': [0, int(1e6)],\n 'batch_size': 64,\n 'shuffle': True,\n})\n\nvalidation_ds = OmegaConf.create({\n 'data_path': '${model.data_path}',\n 'block_size': '${model.block_size}',\n 'crop': [int(1e6), int(50e3)],\n 'batch_size': 4,\n 'shuffle': False,\n})\n\ntest_ds = OmegaConf.create({\n 'data_path': '${model.data_path}',\n 'block_size': '${model.block_size}',\n 'crop': [int(1.05e6), int(100e3)],\n 'batch_size': 4,\n 'shuffle': False,\n})",
"_____no_output_____"
],
[
"# Attach to the model config\nOmegaConf.set_struct(cfg.model, False)\n\ncfg.model.train_ds = train_ds\ncfg.model.validation_ds = validation_ds\ncfg.model.test_ds = test_ds\n\nOmegaConf.set_struct(cfg.model, True)",
"_____no_output_____"
],
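[
"# A quick check (sketch): OmegaConf interpolations such as '${model.data_path}'\n# resolve lazily on access, so after attaching the configs this should print the\n# real file path rather than the interpolation string.\nprint(cfg.model.train_ds.data_path)",
"_____no_output_____"
],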
[
"# Let's see the config now !\nprint(OmegaConf.to_yaml(cfg))",
"_____no_output_____"
],
[
"# Let's try creating a model now !\nmodel = NeMoGPT(cfg=cfg.model)",
"_____no_output_____"
]
],
[
[
"-----\nAll the data loaders load properly ! Yay!",
"_____no_output_____"
],
[
"# Evaluate the model - end to end!\n\nNow that the data loaders have been set up, all that's left is to train and test the model! We have most of the components required by this model - the train, val and test data loaders, the optimizer, and the type-checked forward step to perform the train-validation-test steps! \n\nBut training a GPT model from scratch is not the goal of this primer, so instead, let's do a sanity check by merely testing the model for a few steps using random initial weights.\n\nThe above will ensure that - \n\n1) Our data loaders work as intended\n\n2) The type checking system assures us that our Neural Modules are performing their forward step correctly.\n\n3) The loss is calculated, and therefore the model runs end to end, ultimately supporting PyTorch Lightning.",
"_____no_output_____"
]
],
[
[
"if torch.cuda.is_available():\n cuda = 1\nelse:\n cuda = 0\n\ntrainer = ptl.Trainer(gpus=cuda, test_percent_check=1.0)",
"_____no_output_____"
],
[
"trainer.test(model)",
"_____no_output_____"
]
],
[
[
"# Saving and restoring models\n\nNeMo internally keeps track of the model configuration, as well as the model checkpoints and parameters.\n\nAs long as your NeMo follows the above general guidelines, you can call the `save_to` and `restore_from` methods to save and restore your models!",
"_____no_output_____"
]
],
[
[
"model.save_to('gpt_model.nemo')",
"_____no_output_____"
],
[
"!ls -d -- *.nemo",
"_____no_output_____"
],
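[
"# A peek inside the checkpoint (sketch): a .nemo file is a tar archive, so\n# listing it shows the packaged config and weights.\n!tar -tvf gpt_model.nemo",
"_____no_output_____"
],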
[
"temp_model = NeMoGPT.restore_from('gpt_model.nemo')",
"_____no_output_____"
],
[
"# [ERROR CELL]\ntemp_model.setup_test_data(temp_model.cfg.test_ds)",
"_____no_output_____"
]
],
[
[
"-----\n\nHmm, it seems it wasn't so easy in this case. Non-trivial models have non-trivial issues!\n\nRemember, our NeMoGPT model sets its self.vocab inside the `setup_train_data` step. But that depends on the vocabulary generated by the train set... which is **not** restored during model restoration (unless you call `setup_train_data` explicitly!).\n\nWe can quickly resolve this issue by constructing an external data file to enable save and restore support, and NeMo supports that too! We will use the `register_artifact` API in NeMo to support external files being attached to the .nemo checkpoint.",
"_____no_output_____"
]
],
[
[
"class NeMoGPTv2(NeMoGPT):\n \n def setup_training_data(self, train_data_config: OmegaConf):\n self.vocab = None\n self._train_dl = self._setup_data_loader(train_data_config)\n\n # Save the vocab into a text file for now\n with open('vocab.txt', 'w') as f:\n for token in self.vocab:\n f.write(f\"{token}<SEP>\")\n \n # This is going to register the file into .nemo!\n # When you later use .save_to(), it will copy this file into the tar file.\n self.register_artifact(None, 'vocab.txt')\n \n def setup_validation_data(self, val_data_config: OmegaConf):\n # This is going to try to find the same file, and if it fails, \n # it will use the copy in .nemo\n vocab_file = self.register_artifact(None, 'vocab.txt')\n \n with open(vocab_file, 'r') as f:\n vocab = []\n vocab = f.read().split('<SEP>')[:-1] # the -1 here is for the dangling <SEP> token in the file\n self.vocab = vocab\n\n self._validation_dl = self._setup_data_loader(val_data_config)\n \n def setup_test_data(self, test_data_config: OmegaConf):\n # This is going to try to find the same file, and if it fails, \n # it will use the copy in .nemo\n vocab_file = self.register_artifact(None, 'vocab.txt')\n\n with open(vocab_file, 'r') as f:\n vocab = []\n vocab = f.read().split('<SEP>')[:-1] # the -1 here is for the dangling <SEP> token in the file\n self.vocab = vocab\n\n self._test_dl = self._setup_data_loader(test_data_config)\n",
"_____no_output_____"
],
[
"# Let's try creating a model now !\nmodel = NeMoGPTv2(cfg=cfg.model)",
"_____no_output_____"
],
[
"# Now let's try to save and restore !\nmodel.save_to('gpt_model.nemo')",
"_____no_output_____"
],
[
"temp_model = NeMoGPTv2.restore_from('gpt_model.nemo')",
"_____no_output_____"
],
[
"temp_model.setup_multiple_test_data(temp_model.cfg.test_ds)",
"_____no_output_____"
],
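[
"# Sanity check (sketch): after the test data setup above, the vocab should have\n# been rebuilt from the vocab.txt artifact packaged inside the .nemo file.\nprint(len(temp_model.vocab))",
"_____no_output_____"
],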
[
"if torch.cuda.is_available():\n cuda = 1\nelse:\n cuda = 0\n\ntrainer = ptl.Trainer(gpus=cuda, test_percent_check=1.0)",
"_____no_output_____"
],
[
"trainer.test(model)",
"_____no_output_____"
]
],
[
[
"------\nThere we go ! Now our model's can be serialized and de-serialized without any issue, even with an external vocab file !",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d04eeb7f45453f6494b30db4c8757cc931a90e17 | 381,793 | ipynb | Jupyter Notebook | Kaggle_Weibull_Wind.ipynb | khanfarhan10/wind_analysis | 82582d020e773d30b8425b9943736ba9c45267fe | [
"FTL"
] | 2 | 2020-10-04T11:48:50.000Z | 2021-06-09T15:34:04.000Z | Kaggle_Weibull_Wind.ipynb | khanfarhan10/wind_analysis | 82582d020e773d30b8425b9943736ba9c45267fe | [
"FTL"
] | null | null | null | Kaggle_Weibull_Wind.ipynb | khanfarhan10/wind_analysis | 82582d020e773d30b8425b9943736ba9c45267fe | [
"FTL"
] | 2 | 2021-01-10T16:32:48.000Z | 2021-04-29T18:29:49.000Z | 174.096215 | 86,214 | 0.858405 | [
[
[
"import pandas as pd\nimport numpy as np\n#upload the csv file or \n#!git clone \n#and locate the csv and change location\ndf=pd.read_csv(\"/content/T1.csv\", engine='python')\ndf.head()",
"_____no_output_____"
],
[
"lst=df[\"Wind Speed (m/s)\"]",
"_____no_output_____"
],
[
"lst",
"_____no_output_____"
],
[
"max(lst)",
"_____no_output_____"
],
[
"min(lst)",
"_____no_output_____"
],
[
"lst=list(df[\"Wind Speed (m/s)\"])\n\n# Python program to get average of a list \ndef Average(lst): \n\treturn sum(lst) / len(lst) \n\n# Driver Code \naverage = Average(lst) \n\n# Printing average of the list \nprint(\"Average of the list =\", round(average, 2)) \n\n\n",
"Average of the list = 7.56\n"
],
[
"for i in range(len(lst)):\n lst[i]=round(lst[i],0)",
"_____no_output_____"
],
[
"lst",
"_____no_output_____"
],
[
"# Python program to count the frequency of \n# elements in a list using a dictionary \n \ndef CountFrequency(my_list): \n \n # Creating an empty dictionary \n freq = {} \n for item in my_list: \n if (item in freq): \n freq[item] += 1\n else: \n freq[item] = 1\n \n for key, value in freq.items(): \n print (\"% d : % d\"%(key, value))\n\n return freq \nf=CountFrequency(lst)",
" 5 : 3815\n 6 : 4643\n 7 : 4863\n 8 : 4411\n 4 : 3961\n 3 : 4313\n 9 : 3695\n 10 : 3291\n 11 : 2964\n 12 : 2451\n 13 : 1978\n 14 : 1271\n 15 : 943\n 16 : 690\n 2 : 3595\n 1 : 1866\n 0 : 121\n 17 : 464\n 18 : 428\n 19 : 357\n 20 : 232\n 22 : 31\n 21 : 102\n 23 : 29\n 24 : 14\n 25 : 2\n"
],
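[
"# An equivalent one-liner (sketch): collections.Counter builds the same\n# frequency table as the manual loop above.\nfrom collections import Counter\nprint(Counter(lst) == f)",
"_____no_output_____"
],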
[
"",
"_____no_output_____"
],
[
"dictionary_items = f.items()\nsorted_items = sorted(dictionary_items)\nsorted_items",
"_____no_output_____"
],
[
"#x wind speed\n#y frequency\nx=[]\ny=[]\nfor each in sorted_items:\n print(each)\n x.append(each[0])\n y.append(each[1])",
"(0.0, 121)\n(1.0, 1866)\n(2.0, 3595)\n(3.0, 4313)\n(4.0, 3961)\n(5.0, 3815)\n(6.0, 4643)\n(7.0, 4863)\n(8.0, 4411)\n(9.0, 3695)\n(10.0, 3291)\n(11.0, 2964)\n(12.0, 2451)\n(13.0, 1978)\n(14.0, 1271)\n(15.0, 943)\n(16.0, 690)\n(17.0, 464)\n(18.0, 428)\n(19.0, 357)\n(20.0, 232)\n(21.0, 102)\n(22.0, 31)\n(23.0, 29)\n(24.0, 14)\n(25.0, 2)\n"
],
[
"x",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
],
[
"ybar=np.array(y)/5\nybar=ybar/10\nxbar=np.array(x)\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(20,8))\nplt.style.use('dark_background')\n\n#plt.rcParams[\"font.family\"] = \"Times New Roman\"\nplt.rcParams[\"font.size\"] = \"16\"\n\nplt.title('Actual Distribution of Wind Speed in a Practical Scenario', fontsize=30)\nplt.grid(False)\n\nfrom scipy.interpolate import make_interp_spline, BSpline\nT,power=xbar,ybar\n# 300 represents number of points to make between T.min and T.max\nxnew = np.linspace(T.min(), T.max(), 300) \n\nspl = make_interp_spline(T, power, k=3) # type: BSpline\npower_smooth = spl(xnew)\n\nplt.plot(xnew, power_smooth,color=\"w\")\n#plt.show()\n\n#plt.plot(xbar, ybar)\n#plt.hist(x, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);\nwidth=0.8\nbar1=plt.bar(xbar, ybar, width,color=\"y\")\nfor rect,val in zip(bar1,ybar):\n height = rect.get_height()\n #print(val)\n if(val==0):\n plt.text(rect.get_x() + rect.get_width()/2.0, height+0.01, str(\"-\"), ha='center', va='bottom',fontsize=20)\n else:\n plt.text(rect.get_x() + rect.get_width()/2.0, height+2, str(int(round(val,0))), ha='center', va='bottom',fontsize=12)\n#plt.xticks(np.arange(25) + width , list(range(25)))\nplt.rcParams['xtick.labelsize']=16\nplt.rcParams['ytick.labelsize']=16\n\n\nplt.xlabel('Wind Speed(m/s)', fontsize=18)\nplt.ylabel('Frequency[%]', fontsize=18)\n\n\n\nplt.show()\n",
"_____no_output_____"
],
[
"def percentage(y):\n #print(y)\n tot=y.sum()\n #print(tot)\n y=y/tot\n return y*100\n\nybar=percentage(np.array(y))\n#print(ybar)\nxbar=np.array(x)\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(20,8))\nplt.style.use('dark_background')\n\n#plt.rcParams[\"font.family\"] = \"Times New Roman\"\nplt.rcParams[\"font.size\"] = \"16\"\n\nplt.title('Actual Distribution of Wind Speed in a Practical Scenario', fontsize=30)\nplt.grid(False)\n\nfrom scipy.interpolate import make_interp_spline, BSpline\nT,power=xbar,ybar\n# 300 represents number of points to make between T.min and T.max\nxnew = np.linspace(T.min(), T.max(), 300) \n\nspl = make_interp_spline(T, power, k=3) # type: BSpline\npower_smooth = spl(xnew)\n\nplt.plot(xnew, power_smooth,color=\"w\")\n#plt.show()\n\n#plt.plot(xbar, ybar)\n#plt.hist(x, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);\nwidth=0.8\nbar1=plt.bar(xbar, ybar, width,color=\"y\")\nfor rect,val in zip(bar1,ybar):\n height = rect.get_height()\n #print(val)\n if(val==0):\n plt.text(rect.get_x() + rect.get_width()/2.0, height+0.01, str(\"-\"), ha='center', va='bottom',fontsize=20)\n else:\n plt.text(rect.get_x() + rect.get_width()/2.0, height+0.2, str(round(val,1)), ha='center', va='bottom',fontsize=12)\n#plt.xticks(np.arange(25) + width , list(range(25)))\nplt.rcParams['xtick.labelsize']=16\nplt.rcParams['ytick.labelsize']=16\n\n\nplt.xlabel('Wind Speed(m/s)', fontsize=18)\nplt.ylabel('Frequency[%]', fontsize=18)\n\nplt.savefig(\"actual_distribution.png\" ,dpi=100)\n\nplt.show()\n",
"_____no_output_____"
],
[
"from scipy import stats\nimport matplotlib.pyplot as plt\n\n#input for pseudo data\nN = 100\nKappa_in = 2.08\nLambda_in = 8.97\na_in = 1\nloc_in = 0 \n\n#Generate data from given input\ndata = stats.exponweib.rvs(a=a_in,c=Kappa_in, loc=loc_in, scale=Lambda_in, size = N)\n\n#The a and loc are fixed in the fit since it is standard to assume they are known\na_out, Kappa_out, loc_out, Lambda_out = stats.exponweib.fit(data, f0=a_in,floc=loc_in)\n\n#Plot\nbins = range(25)\nfig = plt.figure() \nax = fig.add_subplot(1, 1, 1)\ny=stats.exponweib.pdf(bins, a=a_out,c=Kappa_out,loc=loc_out,scale = Lambda_out)\nax.plot(bins,y*1000)\n#ax.hist(data, bins = bins , alpha=0.5)\n#ax.annotate(\"Shape: $k = %.2f$ \\n Scale: $\\lambda = %.2f$\"%(Kappa_out,Lambda_out), xy=(0.7, 0.85), xycoords=ax.transAxes)\nplt.show()",
"_____no_output_____"
],
[
"def percentage(y):\n #print(y)\n tot=y.sum()\n #print(tot)\n y=y/tot\n return y*100\n\nybar=percentage(np.array(y))\n\nfrom scipy import stats\nimport matplotlib.pyplot as plt\n\n#input for pseudo data\nN = 100\nKappa_in = 2.08\nLambda_in = 8.97\na_in = 1\nloc_in = 0 \n\n#Generate data from given input\ndata = stats.exponweib.rvs(a=a_in,c=Kappa_in, loc=loc_in, scale=Lambda_in, size = N)\n\n#The a and loc are fixed in the fit since it is standard to assume they are known\na_out, Kappa_out, loc_out, Lambda_out = stats.exponweib.fit(data, f0=a_in,floc=loc_in)\n\n#Plot\n\n\n#print(ybar)\nxbar=np.array(x)\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(20,8))\nplt.style.use('dark_background')\n\nbins = range(25)\n#fig = plt.figure() \n#ax = fig.add_subplot(1, 1, 1)\nyhat=stats.exponweib.pdf(bins, a=a_out,c=Kappa_out,loc=loc_out,scale = Lambda_out)\nplt.plot(bins,yhat*100, linewidth=4,markersize=12,marker='o',color='green')\n#ax.hist(data, bins = bins , alpha=0.5)\n#ax.annotate(\"Shape: $k = %.2f$ \\n Scale: $\\lambda = %.2f$\"%(Kappa_out,Lambda_out), xy=(0.7, 0.85), xycoords=ax.transAxes)\n#plt.show()\n\n\n#plt.rcParams[\"font.family\"] = \"Times New Roman\"\nplt.rcParams[\"font.size\"] = \"16\"\n\nplt.title('Comparitive Distribution of Wind Speed', fontsize=30)\nplt.grid(False)\n\nfrom scipy.interpolate import make_interp_spline, BSpline\nT,power=xbar[:-1],ybar\nprint(xbar.shape,ybar.shape)\n# 300 represents number of points to make between T.min and T.max\nxnew = np.linspace(T.min(), T.max(), 300) \n\nspl = make_interp_spline(T, power, k=3) # type: BSpline\npower_smooth = spl(xnew)\n\nplt.plot(xnew, power_smooth,color=\"red\" ,linewidth=4,markersize=12,marker='+')\n#plt.show()\n\n#plt.plot(xbar, ybar)\n#plt.hist(x, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);\nwidth=0.8\n#bar1=plt.bar(xbar, ybar, width,color=\"y\")\n\"\"\"\nfor rect,val in zip(bar1,ybar):\n height = rect.get_height()\n #print(val)\n if(val==0):\n plt.text(rect.get_x() + rect.get_width()/2.0, height+0.01, str(\"-\"), ha='center', va='bottom',fontsize=20)\n else:\n plt.text(rect.get_x() + rect.get_width()/2.0, height+0.2, str(round(val,1)), ha='center', va='bottom',fontsize=12)\n\"\"\"\n#plt.xticks(np.arange(25) + width , list(range(25)))\nplt.rcParams['xtick.labelsize']=16\nplt.rcParams['ytick.labelsize']=16\n\n\nplt.xlabel('Wind Speed(m/s)', fontsize=18)\nplt.ylabel('Frequency[%]', fontsize=18)\n\nplt.savefig(\"new_distribution.png\" ,dpi=100)\n\nplt.show()\n",
"(26,) (25,)\n"
],
[
"def percentage(y):\n #print(y)\n tot=y.sum()\n #print(tot)\n y=y/tot\n return y*100\n\nybar=percentage(np.array(y))\n\nfrom scipy import stats\nimport matplotlib.pyplot as plt\n\n#input for pseudo data\nN = 100\nKappa_in = 2.08\nLambda_in = 8.97\na_in = 1\nloc_in = 0 \n\n#Generate data from given input\ndata = stats.exponweib.rvs(a=a_in,c=Kappa_in, loc=loc_in, scale=Lambda_in, size = N)\n\n#The a and loc are fixed in the fit since it is standard to assume they are known\na_out, Kappa_out, loc_out, Lambda_out = stats.exponweib.fit(data, f0=a_in,floc=loc_in)\n\n#Plot\n\n\n#print(ybar)\nxbar=np.array(x)\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(20,8))\nplt.style.use('dark_background')\n\nbins = range(25)\n#fig = plt.figure() \n#ax = fig.add_subplot(1, 1, 1)\nyhat=stats.exponweib.pdf(bins, a=a_out,c=Kappa_out,loc=loc_out,scale = Lambda_out)\nplt.plot(bins,yhat*100, linewidth=4,color='chartreuse',label=\"Theoretical Weibull Distribution\")\n#ax.hist(data, bins = bins , alpha=0.5)\n#ax.annotate(\"Shape: $k = %.2f$ \\n Scale: $\\lambda = %.2f$\"%(Kappa_out,Lambda_out), xy=(0.7, 0.85), xycoords=ax.transAxes)\n#plt.show()\n\n\n#plt.rcParams[\"font.family\"] = \"Times New Roman\"\nplt.rcParams[\"font.size\"] = \"16\"\n\nplt.title('Comparative Distribution of Wind Speed', fontsize=30)\nplt.grid(False)\n\nfrom scipy.interpolate import make_interp_spline, BSpline\nT,power=xbar[:-1],ybar\n# 300 represents number of points to make between T.min and T.max\nxnew = np.linspace(T.min(), T.max(), 300) \n\nspl = make_interp_spline(T, power, k=3) # type: BSpline\npower_smooth = spl(xnew)\n\nplt.plot(xnew, power_smooth,color=\"red\" ,linewidth=4,label=\" Practical Distribution\")\n#plt.show()\n\n#plt.plot(xbar, ybar)\n#plt.hist(x, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);\nwidth=0.8\n#bar1=plt.bar(xbar, ybar, width,color=\"y\")\n\"\"\"\nfor rect,val in zip(bar1,ybar):\n height = rect.get_height()\n #print(val)\n if(val==0):\n plt.text(rect.get_x() + rect.get_width()/2.0, height+0.01, str(\"-\"), ha='center', va='bottom',fontsize=20)\n else:\n plt.text(rect.get_x() + rect.get_width()/2.0, height+0.2, str(round(val,1)), ha='center', va='bottom',fontsize=12)\n\"\"\"\n#plt.xticks(np.arange(25) + width , list(range(25)))\nplt.rcParams['xtick.labelsize']=16\nplt.rcParams['ytick.labelsize']=16\n\nlg=plt.legend(loc='best',title='Distribution Type', prop={'size': 20})\nlg.get_title().set_fontsize(20)\nlg._legend_box.align = \"center\"\n\n\nplt.xlabel('Wind Speed(m/s)', fontsize=18)\nplt.ylabel('Frequency[%]', fontsize=18)\n\nplt.savefig(\"new_distribution.png\" ,dpi=100)\n\nplt.show()\n",
"_____no_output_____"
],
[
"1. Sort data in ascending order\n2. Assign them a rank, such that the lowest data point is 1, second lowest is 2, etc.\n3. Assign each data point a probability. For beginners, i recommend (i-0.5)/n, where i and n are rank and sample size, respectively.\n4. Take natural log of data.\n5. Calculate ln (-ln (1-P)) for every data, where P is probabiliyy calculated in step 3.\n6. Linear regression with results of Step 5 as Y and results of Step 4 as X. Altrrnatively, you can fit a trendline in Excel.\n7. Slope of the regression line is the shape parameter, aka Weibull modulus. The intercept is the negative of the product of shape parameter and natural log of scale parameter.",
"_____no_output_____"
],
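[
"# A minimal sketch (my own illustration, not part of the original analysis) of\n# the rank-regression recipe above. Zero wind speeds are dropped because step 4\n# takes a natural log of the data.\nspeeds = np.sort(np.array([v for v in lst if v > 0]))\nn = len(speeds)\np = (np.arange(1, n + 1) - 0.5) / n  # step 3: probability (i - 0.5) / n\nx_reg = np.log(speeds)  # step 4\ny_reg = np.log(-np.log(1 - p))  # step 5\nk_hat, intercept = np.polyfit(x_reg, y_reg, 1)  # step 6: slope is the shape k\nlambda_hat = np.exp(-intercept / k_hat)  # step 7: scale from the intercept\nprint(k_hat, lambda_hat)",
"_____no_output_____"
],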
[
"from scipy.interpolate import make_interp_spline, BSpline\nT,power=xbar,ybar\n# 300 represents number of points to make between T.min and T.max\nxnew = np.linspace(T.min(), T.max(), 300) \n\nspl = make_interp_spline(T, power, k=3) # type: BSpline\npower_smooth = spl(xnew)\n\nplt.plot(xnew, power_smooth)\nplt.show()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"#x = np.random.normal(size=100)\nimport seaborn as sns\nsns.distplot(x);",
"_____no_output_____"
],
[
"sns.jointplot(x=x, y=y);",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04ef136b01887a79da503529b13e05639b41f73 | 1,987 | ipynb | Jupyter Notebook | docs/_downloads/06ac615a764ea75899d9a8dd43c871d1/plot__cartesian_coordinates.ipynb | IKupriyanov-HORIS/lets-plot-docs | 30fd31cb03dc649a03518b0c9348639ebfe09d53 | [
"MIT"
] | null | null | null | docs/_downloads/06ac615a764ea75899d9a8dd43c871d1/plot__cartesian_coordinates.ipynb | IKupriyanov-HORIS/lets-plot-docs | 30fd31cb03dc649a03518b0c9348639ebfe09d53 | [
"MIT"
] | null | null | null | docs/_downloads/06ac615a764ea75899d9a8dd43c871d1/plot__cartesian_coordinates.ipynb | IKupriyanov-HORIS/lets-plot-docs | 30fd31cb03dc649a03518b0c9348639ebfe09d53 | [
"MIT"
] | null | null | null | 26.144737 | 269 | 0.514847 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Cartesian Coordinates\n\nThe default coordinate system.\n\nSee\n`coord_cartesian() <https://jetbrains.github.io/lets-plot-docs/pages/api/lets_plot.coord_cartesian.html#lets_plot.coord_cartesian>`__.\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nfrom lets_plot import *\nLetsPlot.setup_html()",
"_____no_output_____"
],
[
"df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')",
"_____no_output_____"
],
[
"p = ggplot(df, aes(x='fl')) + geom_bar()\np1 = p + ggtitle('Default')\np2 = p + coord_cartesian(ylim=[0, 250]) + ggtitle('With Specified Coordinates')\n\nw, h = 400, 300\nbunch = GGBunch()\nbunch.add_plot(p1, 0, 0, w, h)\nbunch.add_plot(p2, w, 0, w, h)\nbunch",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d04efc173eb0891062136c1dc6fe6c885ed5fb12 | 216,019 | ipynb | Jupyter Notebook | ch_1-introduction.ipynb | arturbaccarin/introduction-to-machine-learning | 5a1d723da283143c570e998c41bcaec3ce1df0fa | [
"MIT"
] | null | null | null | ch_1-introduction.ipynb | arturbaccarin/introduction-to-machine-learning | 5a1d723da283143c570e998c41bcaec3ce1df0fa | [
"MIT"
] | null | null | null | ch_1-introduction.ipynb | arturbaccarin/introduction-to-machine-learning | 5a1d723da283143c570e998c41bcaec3ce1df0fa | [
"MIT"
] | null | null | null | 837.282946 | 209,354 | 0.952208 | [
[
[
"from sklearn.datasets import load_iris\niris_dataset = load_iris()",
"_____no_output_____"
],
[
"'''\nThis is an example of a classifi cation problem. The possi‐\nble outputs (different species of irises) are called classes. Every iris in the dataset\nbelongs to one of three classes, so this problem is a three-class classification problem.\n\nThe desired output for a single data point (an iris) is the species of this flower.\nFor a particular data point, the species it belongs to is called its label.\n'''\n\niris_dataset.keys()",
"_____no_output_____"
],
[
"## target: species of flower that we want to predict\niris_dataset['target_names']",
"_____no_output_____"
],
[
"# each entity row is known as sample\n# each columns is known as features\niris_dataset['feature_names']\n\n\n'''\nall of the elements in a NumPy array should be homogeneous. The mathematical operations that are meant to be \nperformed on arrays would be extremely inefficient if the arrays weren’t homogeneous.\n\nNumPy uses much less memory to store data and it provides a mechanism of specifying the data types.\nThis allows the code to be optimized even further.\n'''\ntype(iris_dataset['data'])\niris_dataset['data'].shape\niris_dataset['data']\n",
"_____no_output_____"
],
[
"iris_dataset['data'].shape",
"_____no_output_____"
],
[
"# Target: is a one-dimension array\niris_dataset['target']",
"_____no_output_____"
],
[
"# we cannot use the data we used to build the model to evaluate\n# we need to show a new data with labels\n# this is usually done splitting the data - Training Data and Training Set\n\n'''\nIn scikit-learn , data is usually denoted with a capital X , while labels are denoted by\na lowercase y . This is inspired by the standard formulation f(x)=y in mathematics,\nwhere x is the input to a function and y is the output. \n\nwe use a capital X because the data is a two-dimensional array (a\nmatrix) and a lowercase y because the target is a one-dimensional array (a vector).\n'''\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(iris_dataset['data'], iris_dataset['target'], random_state=0)\n# 75% - training set\n# 25% - test set\n\n",
"_____no_output_____"
],
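[
"# Quick check (sketch): with the default split, roughly 75% of the 150 samples\n# should land in the training set and 25% in the test set.\nprint(X_train.shape, X_test.shape)",
"_____no_output_____"
],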
[
"# inspect the data\n# inspecting your data is a good way to find abnormalities and peculiarities.\n# One of the best ways to inspect data is to visualize it.\n# pair plot\nimport pandas as pd\nfrom pandas.plotting import scatter_matrix\n\niris_dataframe = pd.DataFrame(X_train, columns=iris_dataset.feature_names)\ngrr = scatter_matrix(iris_dataframe, c=y_train, figsize=(15, 15), marker='o', hist_kwds={'bins': 20}, s=60, alpha=.8)\n",
"_____no_output_____"
],
[
"# k-Nearest Neighbors\nfrom sklearn.neighbors import KNeighborsClassifier\nknn = KNeighborsClassifier(n_neighbors=1)\n\n# we call the fit method of the knn object,\nknn.fit(X_train, y_train)\niris_dataset['target_names']\nX_new = [[5, 2.9, 1, 0.2]] # as scikit-learn always expects two-dimensional arrays for the data.\nknn.predict(X_new)\niris_dataset['target_names'][knn.predict(X_new)]",
"_____no_output_____"
],
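[
"# An equivalent accuracy check (sketch): score() runs predict on X_test and\n# compares against y_test internally.\nprint(knn.score(X_test, y_test))",
"_____no_output_____"
],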
[
"# Evaluating the Model\nimport numpy as np\n\ny_pred = knn.predict(X_test)\nnp.mean(y_pred == y_test)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04f060a7a4e056c1555cf9b35259f17964c3e97 | 706,022 | ipynb | Jupyter Notebook | notebooks/Fundus/Single/Fundus_Analysis_Myopia.ipynb | mfc2496/EyeSee-Server | fbe146fd6397a2312d95a335bbf7893d03af8a57 | [
"MIT"
] | null | null | null | notebooks/Fundus/Single/Fundus_Analysis_Myopia.ipynb | mfc2496/EyeSee-Server | fbe146fd6397a2312d95a335bbf7893d03af8a57 | [
"MIT"
] | null | null | null | notebooks/Fundus/Single/Fundus_Analysis_Myopia.ipynb | mfc2496/EyeSee-Server | fbe146fd6397a2312d95a335bbf7893d03af8a57 | [
"MIT"
] | 1 | 2021-09-09T14:18:45.000Z | 2021-09-09T14:18:45.000Z | 706,022 | 706,022 | 0.870719 | [
[
[
"# Fundus Analysis - Pathological Myopia\r\n",
"_____no_output_____"
]
],
[
[
"!nvidia-smi",
"Wed Jan 20 14:13:47 2021 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 460.32.03 Driver Version: 418.67 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n| N/A 61C P8 10W / 70W | 0MiB / 15079MiB | 0% Default |\n| | | ERR! |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n"
]
],
[
[
"**Import Data from Google Drive**",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\r\ndrive.mount('/content/gdrive')",
"Mounted at /content/gdrive\n"
],
[
"import os\r\nos.environ['KAGGLE_CONFIG_DIR'] = \"/content/gdrive/My Drive/Kaggle\"",
"_____no_output_____"
],
[
"%cd /content/gdrive/My Drive/Kaggle",
"/content/gdrive/My Drive/Kaggle\n"
],
[
"pwd",
"_____no_output_____"
]
],
[
[
"**Download Data in Colab**",
"_____no_output_____"
]
],
[
[
"!kaggle datasets download -d andrewmvd/ocular-disease-recognition-odir5k",
"Downloading ocular-disease-recognition-odir5k.zip to /content/gdrive/My Drive/Kaggle\n100% 1.62G/1.62G [00:17<00:00, 52.1MB/s]\n100% 1.62G/1.62G [00:17<00:00, 101MB/s] \n"
],
[
"!ls",
"full_df.csv\nimagenet_class_index.json\ninception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5\ninception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5\ninception_v3_weights_tf_dim_ordering_tf_kernels.h5\ninception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5\nkaggle.json\nKuszma.JPG\nocular-disease-recognition-odir5k.zip\nODIR-5K\npreprocessed_images\nresnet50_weights_tf_dim_ordering_tf_kernels.h5\nresnet50_weights_tf_dim_ordering_tf_kernels_notop.h5\nvgg16_weights_tf_dim_ordering_tf_kernels_notop.h5\nxception_weights_tf_dim_ordering_tf_kernels.h5\nxception_weights_tf_dim_ordering_tf_kernels_notop.h5\n"
]
],
[
[
"**Un-zip the Data**",
"_____no_output_____"
]
],
[
[
"!unzip \\*.zip && rm *.zip",
"\u001b[1;30;43mStreaming output truncated to the last 5000 lines.\u001b[0m\n inflating: preprocessed_images/2179_left.jpg \n inflating: preprocessed_images/2179_right.jpg \n inflating: preprocessed_images/217_left.jpg \n inflating: preprocessed_images/217_right.jpg \n inflating: preprocessed_images/2180_left.jpg \n inflating: preprocessed_images/2180_right.jpg \n inflating: preprocessed_images/2181_left.jpg \n inflating: preprocessed_images/2181_right.jpg \n inflating: preprocessed_images/2182_left.jpg \n inflating: preprocessed_images/2182_right.jpg \n inflating: preprocessed_images/2183_left.jpg \n inflating: preprocessed_images/2183_right.jpg \n inflating: preprocessed_images/2184_left.jpg \n inflating: preprocessed_images/2184_right.jpg \n inflating: preprocessed_images/2185_left.jpg \n inflating: preprocessed_images/2185_right.jpg \n inflating: preprocessed_images/2187_left.jpg \n inflating: preprocessed_images/2187_right.jpg \n inflating: preprocessed_images/2189_left.jpg \n inflating: preprocessed_images/2189_right.jpg \n inflating: preprocessed_images/218_left.jpg \n inflating: preprocessed_images/218_right.jpg \n inflating: preprocessed_images/2190_left.jpg \n inflating: preprocessed_images/2190_right.jpg \n inflating: preprocessed_images/2191_left.jpg \n inflating: preprocessed_images/2191_right.jpg \n inflating: preprocessed_images/2192_left.jpg \n inflating: preprocessed_images/2192_right.jpg \n inflating: preprocessed_images/2193_left.jpg \n inflating: preprocessed_images/2193_right.jpg \n inflating: preprocessed_images/2194_left.jpg \n inflating: preprocessed_images/2194_right.jpg \n inflating: preprocessed_images/2195_left.jpg \n inflating: preprocessed_images/2195_right.jpg \n inflating: preprocessed_images/2196_left.jpg \n inflating: preprocessed_images/2196_right.jpg \n inflating: preprocessed_images/2197_left.jpg \n inflating: preprocessed_images/2197_right.jpg \n inflating: preprocessed_images/2198_left.jpg \n inflating: preprocessed_images/2198_right.jpg \n inflating: preprocessed_images/2199_left.jpg \n inflating: preprocessed_images/2199_right.jpg \n inflating: preprocessed_images/219_left.jpg \n inflating: preprocessed_images/219_right.jpg \n inflating: preprocessed_images/21_left.jpg \n inflating: preprocessed_images/21_right.jpg \n inflating: preprocessed_images/2200_left.jpg \n inflating: preprocessed_images/2200_right.jpg \n inflating: preprocessed_images/2201_left.jpg \n inflating: preprocessed_images/2201_right.jpg \n inflating: preprocessed_images/2203_left.jpg \n inflating: preprocessed_images/2203_right.jpg \n inflating: preprocessed_images/2204_left.jpg \n inflating: preprocessed_images/2204_right.jpg \n inflating: preprocessed_images/2205_left.jpg \n inflating: preprocessed_images/2205_right.jpg \n inflating: preprocessed_images/2206_left.jpg \n inflating: preprocessed_images/2206_right.jpg \n inflating: preprocessed_images/2207_left.jpg \n inflating: preprocessed_images/2207_right.jpg \n inflating: preprocessed_images/2208_left.jpg \n inflating: preprocessed_images/2208_right.jpg \n inflating: preprocessed_images/2209_left.jpg \n inflating: preprocessed_images/2209_right.jpg \n inflating: preprocessed_images/2210_left.jpg \n inflating: preprocessed_images/2210_right.jpg \n inflating: preprocessed_images/2211_left.jpg \n inflating: preprocessed_images/2211_right.jpg \n inflating: preprocessed_images/2212_left.jpg \n inflating: preprocessed_images/2212_right.jpg \n inflating: preprocessed_images/2213_left.jpg \n inflating: 
preprocessed_images/2213_right.jpg \n inflating: preprocessed_images/2215_left.jpg \n inflating: preprocessed_images/2215_right.jpg \n inflating: preprocessed_images/2216_left.jpg \n inflating: preprocessed_images/2216_right.jpg \n inflating: preprocessed_images/2217_left.jpg \n inflating: preprocessed_images/2217_right.jpg \n inflating: preprocessed_images/2218_left.jpg \n inflating: preprocessed_images/2218_right.jpg \n inflating: preprocessed_images/2219_left.jpg \n inflating: preprocessed_images/2219_right.jpg \n inflating: preprocessed_images/221_left.jpg \n inflating: preprocessed_images/221_right.jpg \n inflating: preprocessed_images/2220_left.jpg \n inflating: preprocessed_images/2220_right.jpg \n inflating: preprocessed_images/2221_left.jpg \n inflating: preprocessed_images/2221_right.jpg \n inflating: preprocessed_images/2222_left.jpg \n inflating: preprocessed_images/2222_right.jpg \n inflating: preprocessed_images/2223_left.jpg \n inflating: preprocessed_images/2223_right.jpg \n inflating: preprocessed_images/2225_left.jpg \n inflating: preprocessed_images/2225_right.jpg \n inflating: preprocessed_images/2226_left.jpg \n inflating: preprocessed_images/2226_right.jpg \n inflating: preprocessed_images/2227_left.jpg \n inflating: preprocessed_images/2227_right.jpg \n inflating: preprocessed_images/2228_left.jpg \n inflating: preprocessed_images/2228_right.jpg \n inflating: preprocessed_images/2229_left.jpg \n inflating: preprocessed_images/222_right.jpg \n inflating: preprocessed_images/2231_right.jpg \n inflating: preprocessed_images/2232_left.jpg \n inflating: preprocessed_images/2232_right.jpg \n inflating: preprocessed_images/2233_left.jpg \n inflating: preprocessed_images/2233_right.jpg \n inflating: preprocessed_images/2234_left.jpg \n inflating: preprocessed_images/2234_right.jpg \n inflating: preprocessed_images/2235_left.jpg \n inflating: preprocessed_images/2235_right.jpg \n inflating: preprocessed_images/2236_left.jpg \n inflating: preprocessed_images/2236_right.jpg \n inflating: preprocessed_images/2237_left.jpg \n inflating: preprocessed_images/2237_right.jpg \n inflating: preprocessed_images/2239_left.jpg \n inflating: preprocessed_images/2239_right.jpg \n inflating: preprocessed_images/223_left.jpg \n inflating: preprocessed_images/223_right.jpg \n inflating: preprocessed_images/2240_left.jpg \n inflating: preprocessed_images/2240_right.jpg \n inflating: preprocessed_images/2241_left.jpg \n inflating: preprocessed_images/2242_left.jpg \n inflating: preprocessed_images/2242_right.jpg \n inflating: preprocessed_images/2243_left.jpg \n inflating: preprocessed_images/2243_right.jpg \n inflating: preprocessed_images/2244_right.jpg \n inflating: preprocessed_images/2246_left.jpg \n inflating: preprocessed_images/2246_right.jpg \n inflating: preprocessed_images/2247_left.jpg \n inflating: preprocessed_images/2247_right.jpg \n inflating: preprocessed_images/2248_left.jpg \n inflating: preprocessed_images/2248_right.jpg \n inflating: preprocessed_images/224_right.jpg \n inflating: preprocessed_images/2251_right.jpg \n inflating: preprocessed_images/225_left.jpg \n inflating: preprocessed_images/225_right.jpg \n inflating: preprocessed_images/2262_left.jpg \n inflating: preprocessed_images/2262_right.jpg \n inflating: preprocessed_images/226_left.jpg \n inflating: preprocessed_images/226_right.jpg \n inflating: preprocessed_images/227_left.jpg \n inflating: preprocessed_images/227_right.jpg \n inflating: preprocessed_images/2282_left.jpg \n inflating: 
preprocessed_images/2282_right.jpg \n inflating: preprocessed_images/228_left.jpg \n inflating: preprocessed_images/228_right.jpg \n inflating: preprocessed_images/229_left.jpg \n inflating: preprocessed_images/229_right.jpg \n inflating: preprocessed_images/22_left.jpg \n inflating: preprocessed_images/230_left.jpg \n inflating: preprocessed_images/230_right.jpg \n inflating: preprocessed_images/231_left.jpg \n inflating: preprocessed_images/231_right.jpg \n inflating: preprocessed_images/2329_left.jpg \n inflating: preprocessed_images/2329_right.jpg \n inflating: preprocessed_images/232_left.jpg \n inflating: preprocessed_images/232_right.jpg \n inflating: preprocessed_images/2330_left.jpg \n inflating: preprocessed_images/2330_right.jpg \n inflating: preprocessed_images/2331_left.jpg \n inflating: preprocessed_images/2331_right.jpg \n inflating: preprocessed_images/2332_left.jpg \n inflating: preprocessed_images/2332_right.jpg \n inflating: preprocessed_images/2333_left.jpg \n inflating: preprocessed_images/2333_right.jpg \n inflating: preprocessed_images/2334_left.jpg \n inflating: preprocessed_images/2334_right.jpg \n inflating: preprocessed_images/2335_left.jpg \n inflating: preprocessed_images/2335_right.jpg \n inflating: preprocessed_images/2336_left.jpg \n inflating: preprocessed_images/2336_right.jpg \n inflating: preprocessed_images/2337_left.jpg \n inflating: preprocessed_images/2337_right.jpg \n inflating: preprocessed_images/2338_left.jpg \n inflating: preprocessed_images/2338_right.jpg \n inflating: preprocessed_images/2339_left.jpg \n inflating: preprocessed_images/2339_right.jpg \n inflating: preprocessed_images/233_left.jpg \n inflating: preprocessed_images/233_right.jpg \n inflating: preprocessed_images/2340_left.jpg \n inflating: preprocessed_images/2340_right.jpg \n inflating: preprocessed_images/2341_left.jpg \n inflating: preprocessed_images/2341_right.jpg \n inflating: preprocessed_images/2342_left.jpg \n inflating: preprocessed_images/2342_right.jpg \n inflating: preprocessed_images/2343_left.jpg \n inflating: preprocessed_images/2343_right.jpg \n inflating: preprocessed_images/2345_left.jpg \n inflating: preprocessed_images/2345_right.jpg \n inflating: preprocessed_images/2346_left.jpg \n inflating: preprocessed_images/2346_right.jpg \n inflating: preprocessed_images/2347_left.jpg \n inflating: preprocessed_images/2347_right.jpg \n inflating: preprocessed_images/2348_left.jpg \n inflating: preprocessed_images/2348_right.jpg \n inflating: preprocessed_images/2349_left.jpg \n inflating: preprocessed_images/2349_right.jpg \n inflating: preprocessed_images/234_left.jpg \n inflating: preprocessed_images/234_right.jpg \n inflating: preprocessed_images/2350_left.jpg \n inflating: preprocessed_images/2351_left.jpg \n inflating: preprocessed_images/2351_right.jpg \n inflating: preprocessed_images/2352_left.jpg \n inflating: preprocessed_images/2352_right.jpg \n inflating: preprocessed_images/2353_left.jpg \n inflating: preprocessed_images/2353_right.jpg \n inflating: preprocessed_images/2354_left.jpg \n inflating: preprocessed_images/2354_right.jpg \n inflating: preprocessed_images/2355_left.jpg \n inflating: preprocessed_images/2355_right.jpg \n inflating: preprocessed_images/2356_left.jpg \n inflating: preprocessed_images/2356_right.jpg \n inflating: preprocessed_images/2357_left.jpg \n inflating: preprocessed_images/2357_right.jpg \n inflating: preprocessed_images/2359_left.jpg \n inflating: preprocessed_images/2359_right.jpg \n inflating: 
preprocessed_images/235_left.jpg \n inflating: preprocessed_images/235_right.jpg \n inflating: preprocessed_images/2360_left.jpg \n inflating: preprocessed_images/2360_right.jpg \n inflating: preprocessed_images/2361_left.jpg \n inflating: preprocessed_images/2361_right.jpg \n inflating: preprocessed_images/2362_left.jpg \n inflating: preprocessed_images/2362_right.jpg \n inflating: preprocessed_images/2363_left.jpg \n inflating: preprocessed_images/2363_right.jpg \n inflating: preprocessed_images/2364_left.jpg \n inflating: preprocessed_images/2364_right.jpg \n inflating: preprocessed_images/2365_left.jpg \n inflating: preprocessed_images/2365_right.jpg \n inflating: preprocessed_images/2366_left.jpg \n inflating: preprocessed_images/2366_right.jpg \n inflating: preprocessed_images/2367_left.jpg \n inflating: preprocessed_images/2367_right.jpg \n inflating: preprocessed_images/2368_left.jpg \n inflating: preprocessed_images/2368_right.jpg \n inflating: preprocessed_images/2369_left.jpg \n inflating: preprocessed_images/2369_right.jpg \n inflating: preprocessed_images/236_right.jpg \n inflating: preprocessed_images/2370_left.jpg \n inflating: preprocessed_images/2370_right.jpg \n inflating: preprocessed_images/2371_left.jpg \n inflating: preprocessed_images/2371_right.jpg \n inflating: preprocessed_images/2372_left.jpg \n inflating: preprocessed_images/2372_right.jpg \n inflating: preprocessed_images/2373_left.jpg \n inflating: preprocessed_images/2373_right.jpg \n inflating: preprocessed_images/2374_left.jpg \n inflating: preprocessed_images/2374_right.jpg \n inflating: preprocessed_images/2375_left.jpg \n inflating: preprocessed_images/2375_right.jpg \n inflating: preprocessed_images/2376_left.jpg \n inflating: preprocessed_images/2376_right.jpg \n inflating: preprocessed_images/2377_left.jpg \n inflating: preprocessed_images/2378_left.jpg \n inflating: preprocessed_images/2378_right.jpg \n inflating: preprocessed_images/2379_left.jpg \n inflating: preprocessed_images/2379_right.jpg \n inflating: preprocessed_images/237_left.jpg \n inflating: preprocessed_images/237_right.jpg \n inflating: preprocessed_images/2380_left.jpg \n inflating: preprocessed_images/2380_right.jpg \n inflating: preprocessed_images/2381_left.jpg \n inflating: preprocessed_images/2381_right.jpg \n inflating: preprocessed_images/2382_left.jpg \n inflating: preprocessed_images/2382_right.jpg \n inflating: preprocessed_images/2383_left.jpg \n inflating: preprocessed_images/2383_right.jpg \n inflating: preprocessed_images/2384_left.jpg \n inflating: preprocessed_images/2384_right.jpg \n inflating: preprocessed_images/2385_left.jpg \n inflating: preprocessed_images/2385_right.jpg \n inflating: preprocessed_images/2386_left.jpg \n inflating: preprocessed_images/2386_right.jpg \n inflating: preprocessed_images/2387_left.jpg \n inflating: preprocessed_images/2387_right.jpg \n inflating: preprocessed_images/2388_left.jpg \n inflating: preprocessed_images/2388_right.jpg \n inflating: preprocessed_images/2389_left.jpg \n inflating: preprocessed_images/2389_right.jpg \n inflating: preprocessed_images/238_left.jpg \n inflating: preprocessed_images/238_right.jpg \n inflating: preprocessed_images/2390_left.jpg \n inflating: preprocessed_images/2390_right.jpg \n inflating: preprocessed_images/2391_left.jpg \n inflating: preprocessed_images/2391_right.jpg \n inflating: preprocessed_images/2392_left.jpg \n inflating: preprocessed_images/2392_right.jpg \n inflating: preprocessed_images/2393_left.jpg \n inflating: 
preprocessed_images/2393_right.jpg \n inflating: preprocessed_images/2394_left.jpg \n inflating: preprocessed_images/2394_right.jpg \n inflating: preprocessed_images/2395_left.jpg \n inflating: preprocessed_images/2395_right.jpg \n inflating: preprocessed_images/2396_left.jpg \n inflating: preprocessed_images/2396_right.jpg \n inflating: preprocessed_images/2397_left.jpg \n inflating: preprocessed_images/2397_right.jpg \n inflating: preprocessed_images/2398_left.jpg \n inflating: preprocessed_images/2398_right.jpg \n inflating: preprocessed_images/2399_left.jpg \n inflating: preprocessed_images/2399_right.jpg \n inflating: preprocessed_images/239_left.jpg \n inflating: preprocessed_images/239_right.jpg \n inflating: preprocessed_images/23_left.jpg \n inflating: preprocessed_images/23_right.jpg \n inflating: preprocessed_images/2400_right.jpg \n inflating: preprocessed_images/2401_left.jpg \n inflating: preprocessed_images/2401_right.jpg \n inflating: preprocessed_images/2402_left.jpg \n inflating: preprocessed_images/2402_right.jpg \n inflating: preprocessed_images/2403_left.jpg \n inflating: preprocessed_images/2403_right.jpg \n inflating: preprocessed_images/2404_left.jpg \n inflating: preprocessed_images/2404_right.jpg \n inflating: preprocessed_images/2405_left.jpg \n inflating: preprocessed_images/2405_right.jpg \n inflating: preprocessed_images/2406_left.jpg \n inflating: preprocessed_images/2406_right.jpg \n inflating: preprocessed_images/2407_left.jpg \n inflating: preprocessed_images/2407_right.jpg \n inflating: preprocessed_images/2408_left.jpg \n inflating: preprocessed_images/2408_right.jpg \n inflating: preprocessed_images/2409_left.jpg \n inflating: preprocessed_images/2409_right.jpg \n inflating: preprocessed_images/240_left.jpg \n inflating: preprocessed_images/240_right.jpg \n inflating: preprocessed_images/2410_left.jpg \n inflating: preprocessed_images/2410_right.jpg \n inflating: preprocessed_images/2411_left.jpg \n inflating: preprocessed_images/2411_right.jpg \n inflating: preprocessed_images/2412_left.jpg \n inflating: preprocessed_images/2412_right.jpg \n inflating: preprocessed_images/2413_left.jpg \n inflating: preprocessed_images/2413_right.jpg \n inflating: preprocessed_images/2414_left.jpg \n inflating: preprocessed_images/2414_right.jpg \n inflating: preprocessed_images/2415_left.jpg \n inflating: preprocessed_images/2415_right.jpg \n inflating: preprocessed_images/2416_left.jpg \n inflating: preprocessed_images/2416_right.jpg \n inflating: preprocessed_images/2417_left.jpg \n inflating: preprocessed_images/2417_right.jpg \n inflating: preprocessed_images/2418_left.jpg \n inflating: preprocessed_images/2418_right.jpg \n inflating: preprocessed_images/2419_left.jpg \n inflating: preprocessed_images/2419_right.jpg \n inflating: preprocessed_images/241_left.jpg \n inflating: preprocessed_images/2420_left.jpg \n inflating: preprocessed_images/2420_right.jpg \n inflating: preprocessed_images/2421_left.jpg \n inflating: preprocessed_images/2421_right.jpg \n inflating: preprocessed_images/2422_left.jpg \n inflating: preprocessed_images/2422_right.jpg \n inflating: preprocessed_images/2423_left.jpg \n inflating: preprocessed_images/2423_right.jpg \n inflating: preprocessed_images/2424_left.jpg \n inflating: preprocessed_images/2424_right.jpg \n inflating: preprocessed_images/2425_left.jpg \n inflating: preprocessed_images/2425_right.jpg \n inflating: preprocessed_images/2426_left.jpg \n inflating: preprocessed_images/2426_right.jpg \n inflating: 
preprocessed_images/2427_left.jpg \n inflating: preprocessed_images/2427_right.jpg \n inflating: preprocessed_images/2428_left.jpg \n inflating: preprocessed_images/2429_left.jpg \n inflating: preprocessed_images/2429_right.jpg \n inflating: preprocessed_images/242_right.jpg \n inflating: preprocessed_images/2430_left.jpg \n inflating: preprocessed_images/2430_right.jpg \n inflating: preprocessed_images/2431_left.jpg \n inflating: preprocessed_images/2431_right.jpg \n inflating: preprocessed_images/2432_left.jpg \n inflating: preprocessed_images/2432_right.jpg \n inflating: preprocessed_images/2433_left.jpg \n inflating: preprocessed_images/2433_right.jpg \n inflating: preprocessed_images/2434_left.jpg \n inflating: preprocessed_images/2434_right.jpg \n inflating: preprocessed_images/2435_left.jpg \n inflating: preprocessed_images/2435_right.jpg \n inflating: preprocessed_images/2436_left.jpg \n inflating: preprocessed_images/2436_right.jpg \n inflating: preprocessed_images/2437_left.jpg \n inflating: preprocessed_images/2437_right.jpg \n inflating: preprocessed_images/2438_left.jpg \n inflating: preprocessed_images/2438_right.jpg \n inflating: preprocessed_images/2439_left.jpg \n inflating: preprocessed_images/2439_right.jpg \n inflating: preprocessed_images/243_left.jpg \n inflating: preprocessed_images/243_right.jpg \n inflating: preprocessed_images/2440_left.jpg \n inflating: preprocessed_images/2440_right.jpg \n inflating: preprocessed_images/2441_left.jpg \n inflating: preprocessed_images/2441_right.jpg \n inflating: preprocessed_images/2442_left.jpg \n inflating: preprocessed_images/2442_right.jpg \n inflating: preprocessed_images/2443_left.jpg \n inflating: preprocessed_images/2443_right.jpg \n inflating: preprocessed_images/2444_left.jpg \n inflating: preprocessed_images/2444_right.jpg \n inflating: preprocessed_images/2445_left.jpg \n inflating: preprocessed_images/2445_right.jpg \n inflating: preprocessed_images/2446_left.jpg \n inflating: preprocessed_images/2446_right.jpg \n inflating: preprocessed_images/2447_left.jpg \n inflating: preprocessed_images/2447_right.jpg \n inflating: preprocessed_images/2448_left.jpg \n inflating: preprocessed_images/2449_left.jpg \n inflating: preprocessed_images/2449_right.jpg \n inflating: preprocessed_images/244_left.jpg \n inflating: preprocessed_images/244_right.jpg \n inflating: preprocessed_images/2451_left.jpg \n inflating: preprocessed_images/2451_right.jpg \n inflating: preprocessed_images/2452_left.jpg \n inflating: preprocessed_images/2452_right.jpg \n inflating: preprocessed_images/2454_left.jpg \n inflating: preprocessed_images/2454_right.jpg \n inflating: preprocessed_images/2456_left.jpg \n inflating: preprocessed_images/2456_right.jpg \n inflating: preprocessed_images/2457_left.jpg \n inflating: preprocessed_images/2457_right.jpg \n inflating: preprocessed_images/2458_left.jpg \n inflating: preprocessed_images/2458_right.jpg \n inflating: preprocessed_images/2459_left.jpg \n inflating: preprocessed_images/2459_right.jpg \n inflating: preprocessed_images/245_left.jpg \n inflating: preprocessed_images/245_right.jpg \n inflating: preprocessed_images/2460_left.jpg \n inflating: preprocessed_images/2460_right.jpg \n inflating: preprocessed_images/2461_left.jpg \n inflating: preprocessed_images/2461_right.jpg \n inflating: preprocessed_images/2462_left.jpg \n inflating: preprocessed_images/2462_right.jpg \n inflating: preprocessed_images/2463_left.jpg \n inflating: preprocessed_images/2463_right.jpg \n inflating: 
preprocessed_images/2465_left.jpg \n inflating: preprocessed_images/2465_right.jpg \n inflating: preprocessed_images/2466_left.jpg \n inflating: preprocessed_images/2466_right.jpg \n inflating: preprocessed_images/2467_left.jpg \n inflating: preprocessed_images/2467_right.jpg \n inflating: preprocessed_images/2469_left.jpg \n inflating: preprocessed_images/2469_right.jpg \n inflating: preprocessed_images/246_left.jpg \n inflating: preprocessed_images/2470_left.jpg \n inflating: preprocessed_images/2470_right.jpg \n inflating: preprocessed_images/2471_left.jpg \n inflating: preprocessed_images/2471_right.jpg \n inflating: preprocessed_images/2472_left.jpg \n inflating: preprocessed_images/2472_right.jpg \n inflating: preprocessed_images/2473_left.jpg \n inflating: preprocessed_images/2473_right.jpg \n inflating: preprocessed_images/2474_left.jpg \n inflating: preprocessed_images/2474_right.jpg \n inflating: preprocessed_images/2475_left.jpg \n inflating: preprocessed_images/2475_right.jpg \n inflating: preprocessed_images/2476_left.jpg \n inflating: preprocessed_images/2476_right.jpg \n inflating: preprocessed_images/2477_left.jpg \n inflating: preprocessed_images/2478_left.jpg \n inflating: preprocessed_images/2479_left.jpg \n inflating: preprocessed_images/2479_right.jpg \n inflating: preprocessed_images/247_left.jpg \n inflating: preprocessed_images/247_right.jpg \n inflating: preprocessed_images/2480_left.jpg \n inflating: preprocessed_images/2480_right.jpg \n inflating: preprocessed_images/2481_left.jpg \n inflating: preprocessed_images/2481_right.jpg \n inflating: preprocessed_images/2482_left.jpg \n inflating: preprocessed_images/2482_right.jpg \n inflating: preprocessed_images/2483_left.jpg \n inflating: preprocessed_images/2483_right.jpg \n inflating: preprocessed_images/2484_left.jpg \n inflating: preprocessed_images/2484_right.jpg \n inflating: preprocessed_images/2485_left.jpg \n inflating: preprocessed_images/2485_right.jpg \n inflating: preprocessed_images/2486_left.jpg \n inflating: preprocessed_images/2486_right.jpg \n inflating: preprocessed_images/2487_left.jpg \n inflating: preprocessed_images/2487_right.jpg \n inflating: preprocessed_images/2488_left.jpg \n inflating: preprocessed_images/2488_right.jpg \n inflating: preprocessed_images/2489_left.jpg \n inflating: preprocessed_images/2489_right.jpg \n inflating: preprocessed_images/2490_left.jpg \n inflating: preprocessed_images/2490_right.jpg \n inflating: preprocessed_images/2492_left.jpg \n inflating: preprocessed_images/2492_right.jpg \n inflating: preprocessed_images/2493_left.jpg \n inflating: preprocessed_images/2493_right.jpg \n inflating: preprocessed_images/2495_right.jpg \n inflating: preprocessed_images/2496_right.jpg \n inflating: preprocessed_images/2497_left.jpg \n inflating: preprocessed_images/2497_right.jpg \n inflating: preprocessed_images/2498_left.jpg \n inflating: preprocessed_images/2498_right.jpg \n inflating: preprocessed_images/2499_left.jpg \n inflating: preprocessed_images/2499_right.jpg \n inflating: preprocessed_images/249_left.jpg \n inflating: preprocessed_images/249_right.jpg \n inflating: preprocessed_images/24_left.jpg \n inflating: preprocessed_images/24_right.jpg \n inflating: preprocessed_images/2500_left.jpg \n inflating: preprocessed_images/2500_right.jpg \n inflating: preprocessed_images/2501_left.jpg \n inflating: preprocessed_images/2501_right.jpg \n inflating: preprocessed_images/2502_left.jpg \n inflating: preprocessed_images/2502_right.jpg \n inflating: 
preprocessed_images/2503_left.jpg \n inflating: preprocessed_images/2503_right.jpg \n inflating: preprocessed_images/2504_left.jpg \n inflating: preprocessed_images/2504_right.jpg \n inflating: preprocessed_images/2505_left.jpg \n inflating: preprocessed_images/2505_right.jpg \n inflating: preprocessed_images/2506_left.jpg \n inflating: preprocessed_images/2506_right.jpg \n inflating: preprocessed_images/2507_left.jpg \n inflating: preprocessed_images/2507_right.jpg \n inflating: preprocessed_images/2508_left.jpg \n inflating: preprocessed_images/2508_right.jpg \n inflating: preprocessed_images/2509_left.jpg \n inflating: preprocessed_images/2509_right.jpg \n inflating: preprocessed_images/250_left.jpg \n inflating: preprocessed_images/250_right.jpg \n inflating: preprocessed_images/2510_left.jpg \n inflating: preprocessed_images/2510_right.jpg \n inflating: preprocessed_images/2511_left.jpg \n inflating: preprocessed_images/2511_right.jpg \n inflating: preprocessed_images/2512_left.jpg \n inflating: preprocessed_images/2512_right.jpg \n inflating: preprocessed_images/2513_left.jpg \n inflating: preprocessed_images/2513_right.jpg \n inflating: preprocessed_images/2514_left.jpg \n inflating: preprocessed_images/2514_right.jpg \n inflating: preprocessed_images/2515_left.jpg \n inflating: preprocessed_images/2516_left.jpg \n inflating: preprocessed_images/2517_left.jpg \n inflating: preprocessed_images/2517_right.jpg \n inflating: preprocessed_images/2518_left.jpg \n inflating: preprocessed_images/2518_right.jpg \n inflating: preprocessed_images/2519_left.jpg \n inflating: preprocessed_images/2519_right.jpg \n inflating: preprocessed_images/251_left.jpg \n inflating: preprocessed_images/251_right.jpg \n inflating: preprocessed_images/2520_left.jpg \n inflating: preprocessed_images/2520_right.jpg \n inflating: preprocessed_images/2521_left.jpg \n inflating: preprocessed_images/2521_right.jpg \n inflating: preprocessed_images/2522_left.jpg \n inflating: preprocessed_images/2522_right.jpg \n inflating: preprocessed_images/2523_left.jpg \n inflating: preprocessed_images/2523_right.jpg \n inflating: preprocessed_images/2524_left.jpg \n inflating: preprocessed_images/2524_right.jpg \n inflating: preprocessed_images/2525_left.jpg \n inflating: preprocessed_images/2525_right.jpg \n inflating: preprocessed_images/2526_left.jpg \n inflating: preprocessed_images/2526_right.jpg \n inflating: preprocessed_images/2527_left.jpg \n inflating: preprocessed_images/2527_right.jpg \n inflating: preprocessed_images/2528_left.jpg \n inflating: preprocessed_images/2528_right.jpg \n inflating: preprocessed_images/2529_left.jpg \n inflating: preprocessed_images/2529_right.jpg \n inflating: preprocessed_images/252_left.jpg \n inflating: preprocessed_images/252_right.jpg \n inflating: preprocessed_images/2530_left.jpg \n inflating: preprocessed_images/2530_right.jpg \n inflating: preprocessed_images/2531_left.jpg \n inflating: preprocessed_images/2531_right.jpg \n inflating: preprocessed_images/2532_left.jpg \n inflating: preprocessed_images/2532_right.jpg \n inflating: preprocessed_images/2533_left.jpg \n inflating: preprocessed_images/2533_right.jpg \n inflating: preprocessed_images/2534_left.jpg \n inflating: preprocessed_images/2534_right.jpg \n inflating: preprocessed_images/2535_left.jpg \n inflating: preprocessed_images/2535_right.jpg \n inflating: preprocessed_images/2537_left.jpg \n inflating: preprocessed_images/2537_right.jpg \n inflating: preprocessed_images/2538_left.jpg \n inflating: 
preprocessed_images/2538_right.jpg \n inflating: preprocessed_images/2539_left.jpg \n inflating: preprocessed_images/2539_right.jpg \n inflating: preprocessed_images/253_left.jpg \n inflating: preprocessed_images/253_right.jpg \n inflating: preprocessed_images/2540_left.jpg \n inflating: preprocessed_images/2540_right.jpg \n inflating: preprocessed_images/2541_left.jpg \n inflating: preprocessed_images/2541_right.jpg \n inflating: preprocessed_images/2542_left.jpg \n inflating: preprocessed_images/2542_right.jpg \n inflating: preprocessed_images/2543_left.jpg \n inflating: preprocessed_images/2543_right.jpg \n inflating: preprocessed_images/2544_left.jpg \n inflating: preprocessed_images/2544_right.jpg \n inflating: preprocessed_images/2545_left.jpg \n inflating: preprocessed_images/2545_right.jpg \n inflating: preprocessed_images/2546_left.jpg \n inflating: preprocessed_images/2547_left.jpg \n inflating: preprocessed_images/2547_right.jpg \n inflating: preprocessed_images/2548_left.jpg \n inflating: preprocessed_images/2548_right.jpg \n inflating: preprocessed_images/2549_left.jpg \n inflating: preprocessed_images/2549_right.jpg \n inflating: preprocessed_images/254_left.jpg \n inflating: preprocessed_images/254_right.jpg \n inflating: preprocessed_images/2551_left.jpg \n inflating: preprocessed_images/2551_right.jpg \n inflating: preprocessed_images/2552_left.jpg \n inflating: preprocessed_images/2552_right.jpg \n inflating: preprocessed_images/2553_left.jpg \n inflating: preprocessed_images/2553_right.jpg \n inflating: preprocessed_images/2554_left.jpg \n inflating: preprocessed_images/2554_right.jpg \n inflating: preprocessed_images/2555_left.jpg \n inflating: preprocessed_images/2555_right.jpg \n inflating: preprocessed_images/2556_left.jpg \n inflating: preprocessed_images/2556_right.jpg \n inflating: preprocessed_images/2557_left.jpg \n inflating: preprocessed_images/2557_right.jpg \n inflating: preprocessed_images/2558_left.jpg \n inflating: preprocessed_images/2558_right.jpg \n inflating: preprocessed_images/2559_left.jpg \n inflating: preprocessed_images/2559_right.jpg \n inflating: preprocessed_images/255_left.jpg \n inflating: preprocessed_images/255_right.jpg \n inflating: preprocessed_images/2560_left.jpg \n inflating: preprocessed_images/2560_right.jpg \n inflating: preprocessed_images/2561_left.jpg \n inflating: preprocessed_images/2561_right.jpg \n inflating: preprocessed_images/2562_left.jpg \n inflating: preprocessed_images/2562_right.jpg \n inflating: preprocessed_images/2563_left.jpg \n inflating: preprocessed_images/2563_right.jpg \n inflating: preprocessed_images/2564_left.jpg \n inflating: preprocessed_images/2564_right.jpg \n inflating: preprocessed_images/2565_left.jpg \n inflating: preprocessed_images/2565_right.jpg \n inflating: preprocessed_images/2566_left.jpg \n inflating: preprocessed_images/2566_right.jpg \n inflating: preprocessed_images/2567_left.jpg \n inflating: preprocessed_images/2567_right.jpg \n inflating: preprocessed_images/2568_left.jpg \n inflating: preprocessed_images/2568_right.jpg \n inflating: preprocessed_images/2569_left.jpg \n inflating: preprocessed_images/2569_right.jpg \n inflating: preprocessed_images/256_left.jpg \n inflating: preprocessed_images/256_right.jpg \n inflating: preprocessed_images/2570_left.jpg \n inflating: preprocessed_images/2570_right.jpg \n inflating: preprocessed_images/2571_right.jpg \n inflating: preprocessed_images/2572_left.jpg \n inflating: preprocessed_images/2572_right.jpg \n inflating: 
preprocessed_images/2573_left.jpg \n inflating: preprocessed_images/2573_right.jpg \n inflating: preprocessed_images/2574_left.jpg \n inflating: preprocessed_images/2574_right.jpg \n inflating: preprocessed_images/2575_left.jpg \n inflating: preprocessed_images/2575_right.jpg \n inflating: preprocessed_images/2576_left.jpg \n inflating: preprocessed_images/2576_right.jpg \n inflating: preprocessed_images/2577_left.jpg \n inflating: preprocessed_images/2577_right.jpg \n inflating: preprocessed_images/2579_left.jpg \n inflating: preprocessed_images/2579_right.jpg \n inflating: preprocessed_images/257_left.jpg \n inflating: preprocessed_images/257_right.jpg \n inflating: preprocessed_images/2580_right.jpg \n inflating: preprocessed_images/2581_left.jpg \n inflating: preprocessed_images/2581_right.jpg \n inflating: preprocessed_images/2583_left.jpg \n inflating: preprocessed_images/2583_right.jpg \n inflating: preprocessed_images/2584_left.jpg \n inflating: preprocessed_images/2584_right.jpg \n inflating: preprocessed_images/2585_left.jpg \n inflating: preprocessed_images/2585_right.jpg \n inflating: preprocessed_images/2586_left.jpg \n inflating: preprocessed_images/2586_right.jpg \n inflating: preprocessed_images/2587_left.jpg \n inflating: preprocessed_images/2587_right.jpg \n inflating: preprocessed_images/2588_left.jpg \n inflating: preprocessed_images/2588_right.jpg \n inflating: preprocessed_images/2589_right.jpg \n inflating: preprocessed_images/258_left.jpg \n inflating: preprocessed_images/258_right.jpg \n inflating: preprocessed_images/2590_left.jpg \n inflating: preprocessed_images/2590_right.jpg \n inflating: preprocessed_images/2591_left.jpg \n inflating: preprocessed_images/2591_right.jpg \n inflating: preprocessed_images/2592_left.jpg \n inflating: preprocessed_images/2592_right.jpg \n inflating: preprocessed_images/2593_left.jpg \n inflating: preprocessed_images/2593_right.jpg \n inflating: preprocessed_images/2594_left.jpg \n inflating: preprocessed_images/2594_right.jpg \n inflating: preprocessed_images/2595_left.jpg \n inflating: preprocessed_images/2595_right.jpg \n inflating: preprocessed_images/2596_left.jpg \n inflating: preprocessed_images/2596_right.jpg \n inflating: preprocessed_images/2597_left.jpg \n inflating: preprocessed_images/2597_right.jpg \n inflating: preprocessed_images/2598_left.jpg \n inflating: preprocessed_images/2598_right.jpg \n inflating: preprocessed_images/2599_left.jpg \n inflating: preprocessed_images/2599_right.jpg \n inflating: preprocessed_images/259_left.jpg \n inflating: preprocessed_images/259_right.jpg \n inflating: preprocessed_images/25_left.jpg \n inflating: preprocessed_images/2600_left.jpg \n inflating: preprocessed_images/2600_right.jpg \n inflating: preprocessed_images/2601_left.jpg \n inflating: preprocessed_images/2601_right.jpg \n inflating: preprocessed_images/2602_left.jpg \n inflating: preprocessed_images/2602_right.jpg \n inflating: preprocessed_images/2603_left.jpg \n inflating: preprocessed_images/2603_right.jpg \n inflating: preprocessed_images/2604_left.jpg \n inflating: preprocessed_images/2604_right.jpg \n inflating: preprocessed_images/2605_left.jpg \n inflating: preprocessed_images/2605_right.jpg \n inflating: preprocessed_images/2608_left.jpg \n inflating: preprocessed_images/2608_right.jpg \n inflating: preprocessed_images/2609_left.jpg \n inflating: preprocessed_images/2609_right.jpg \n inflating: preprocessed_images/260_left.jpg \n inflating: preprocessed_images/260_right.jpg \n inflating: 
preprocessed_images/2610_left.jpg \n inflating: preprocessed_images/2610_right.jpg \n inflating: preprocessed_images/2611_right.jpg \n inflating: preprocessed_images/2612_left.jpg \n inflating: preprocessed_images/2612_right.jpg \n inflating: preprocessed_images/2613_left.jpg \n inflating: preprocessed_images/2613_right.jpg \n inflating: preprocessed_images/2614_left.jpg \n inflating: preprocessed_images/2614_right.jpg \n inflating: preprocessed_images/2615_left.jpg \n inflating: preprocessed_images/2615_right.jpg \n inflating: preprocessed_images/2616_left.jpg \n inflating: preprocessed_images/2616_right.jpg \n inflating: preprocessed_images/2617_left.jpg \n inflating: preprocessed_images/2617_right.jpg \n inflating: preprocessed_images/2618_left.jpg \n inflating: preprocessed_images/2618_right.jpg \n inflating: preprocessed_images/2619_left.jpg \n inflating: preprocessed_images/2619_right.jpg \n inflating: preprocessed_images/261_right.jpg \n inflating: preprocessed_images/2621_left.jpg \n inflating: preprocessed_images/2621_right.jpg \n inflating: preprocessed_images/2622_left.jpg \n inflating: preprocessed_images/2622_right.jpg \n inflating: preprocessed_images/2623_left.jpg \n inflating: preprocessed_images/2623_right.jpg \n inflating: preprocessed_images/2624_left.jpg \n inflating: preprocessed_images/2624_right.jpg \n inflating: preprocessed_images/2625_left.jpg \n inflating: preprocessed_images/2625_right.jpg \n inflating: preprocessed_images/2626_left.jpg \n inflating: preprocessed_images/2626_right.jpg \n inflating: preprocessed_images/2627_left.jpg \n inflating: preprocessed_images/2627_right.jpg \n inflating: preprocessed_images/2628_left.jpg \n inflating: preprocessed_images/2628_right.jpg \n inflating: preprocessed_images/2629_right.jpg \n inflating: preprocessed_images/262_left.jpg \n inflating: preprocessed_images/262_right.jpg \n inflating: preprocessed_images/2630_left.jpg \n inflating: preprocessed_images/2630_right.jpg \n inflating: preprocessed_images/2631_left.jpg \n inflating: preprocessed_images/2631_right.jpg \n inflating: preprocessed_images/2632_left.jpg \n inflating: preprocessed_images/2632_right.jpg \n inflating: preprocessed_images/2633_left.jpg \n inflating: preprocessed_images/2633_right.jpg \n inflating: preprocessed_images/2634_left.jpg \n inflating: preprocessed_images/2634_right.jpg \n inflating: preprocessed_images/2635_left.jpg \n inflating: preprocessed_images/2635_right.jpg \n inflating: preprocessed_images/2636_left.jpg \n inflating: preprocessed_images/2636_right.jpg \n inflating: preprocessed_images/2637_left.jpg \n inflating: preprocessed_images/2637_right.jpg \n inflating: preprocessed_images/2638_left.jpg \n inflating: preprocessed_images/2638_right.jpg \n inflating: preprocessed_images/2639_left.jpg \n inflating: preprocessed_images/2639_right.jpg \n inflating: preprocessed_images/263_left.jpg \n inflating: preprocessed_images/263_right.jpg \n inflating: preprocessed_images/2640_left.jpg \n inflating: preprocessed_images/2640_right.jpg \n inflating: preprocessed_images/2641_left.jpg \n inflating: preprocessed_images/2641_right.jpg \n inflating: preprocessed_images/2642_left.jpg \n inflating: preprocessed_images/2642_right.jpg \n inflating: preprocessed_images/2643_left.jpg \n inflating: preprocessed_images/2643_right.jpg \n inflating: preprocessed_images/2644_left.jpg \n inflating: preprocessed_images/2644_right.jpg \n inflating: preprocessed_images/2645_left.jpg \n inflating: preprocessed_images/2645_right.jpg \n inflating: 
preprocessed_images/2646_left.jpg \n inflating: preprocessed_images/2646_right.jpg \n inflating: preprocessed_images/2647_left.jpg \n inflating: preprocessed_images/2647_right.jpg \n inflating: preprocessed_images/2649_left.jpg \n inflating: preprocessed_images/2649_right.jpg \n inflating: preprocessed_images/264_left.jpg \n inflating: preprocessed_images/264_right.jpg \n inflating: preprocessed_images/2650_left.jpg \n inflating: preprocessed_images/2650_right.jpg \n inflating: preprocessed_images/2651_left.jpg \n inflating: preprocessed_images/2651_right.jpg \n inflating: preprocessed_images/2652_left.jpg \n inflating: preprocessed_images/2652_right.jpg \n inflating: preprocessed_images/2653_left.jpg \n inflating: preprocessed_images/2653_right.jpg \n inflating: preprocessed_images/2654_left.jpg \n inflating: preprocessed_images/2654_right.jpg \n inflating: preprocessed_images/2655_left.jpg \n inflating: preprocessed_images/2655_right.jpg \n inflating: preprocessed_images/2657_left.jpg \n inflating: preprocessed_images/2657_right.jpg \n inflating: preprocessed_images/2658_left.jpg \n inflating: preprocessed_images/2658_right.jpg \n inflating: preprocessed_images/2659_left.jpg \n inflating: preprocessed_images/2659_right.jpg \n inflating: preprocessed_images/265_left.jpg \n inflating: preprocessed_images/265_right.jpg \n inflating: preprocessed_images/2660_left.jpg \n inflating: preprocessed_images/2660_right.jpg \n inflating: preprocessed_images/2661_left.jpg \n inflating: preprocessed_images/2661_right.jpg \n inflating: preprocessed_images/2662_left.jpg \n inflating: preprocessed_images/2662_right.jpg \n inflating: preprocessed_images/2663_left.jpg \n inflating: preprocessed_images/2663_right.jpg \n inflating: preprocessed_images/2664_left.jpg \n inflating: preprocessed_images/2665_left.jpg \n inflating: preprocessed_images/2665_right.jpg \n inflating: preprocessed_images/2666_left.jpg \n inflating: preprocessed_images/2666_right.jpg \n inflating: preprocessed_images/2667_left.jpg \n inflating: preprocessed_images/2667_right.jpg \n inflating: preprocessed_images/2668_left.jpg \n inflating: preprocessed_images/2668_right.jpg \n inflating: preprocessed_images/2669_left.jpg \n inflating: preprocessed_images/2669_right.jpg \n inflating: preprocessed_images/266_left.jpg \n inflating: preprocessed_images/266_right.jpg \n inflating: preprocessed_images/2670_left.jpg \n inflating: preprocessed_images/2670_right.jpg \n inflating: preprocessed_images/2671_left.jpg \n inflating: preprocessed_images/2671_right.jpg \n inflating: preprocessed_images/2672_left.jpg \n inflating: preprocessed_images/2672_right.jpg \n inflating: preprocessed_images/2673_left.jpg \n inflating: preprocessed_images/2673_right.jpg \n inflating: preprocessed_images/2675_left.jpg \n inflating: preprocessed_images/2675_right.jpg \n inflating: preprocessed_images/2676_right.jpg \n inflating: preprocessed_images/2677_left.jpg \n inflating: preprocessed_images/2677_right.jpg \n inflating: preprocessed_images/2678_left.jpg \n inflating: preprocessed_images/2678_right.jpg \n inflating: preprocessed_images/2679_left.jpg \n inflating: preprocessed_images/2679_right.jpg \n inflating: preprocessed_images/267_left.jpg \n inflating: preprocessed_images/2680_left.jpg \n inflating: preprocessed_images/2680_right.jpg \n inflating: preprocessed_images/2681_left.jpg \n inflating: preprocessed_images/2681_right.jpg \n inflating: preprocessed_images/2682_left.jpg \n inflating: preprocessed_images/2682_right.jpg \n inflating: 
preprocessed_images/2683_left.jpg \n inflating: preprocessed_images/2683_right.jpg \n inflating: preprocessed_images/2684_left.jpg \n inflating: preprocessed_images/2684_right.jpg \n inflating: preprocessed_images/2685_left.jpg \n inflating: preprocessed_images/2685_right.jpg \n inflating: preprocessed_images/2687_left.jpg \n inflating: preprocessed_images/2687_right.jpg \n inflating: preprocessed_images/2688_left.jpg \n inflating: preprocessed_images/2688_right.jpg \n inflating: preprocessed_images/2689_left.jpg \n inflating: preprocessed_images/2689_right.jpg \n inflating: preprocessed_images/268_left.jpg \n inflating: preprocessed_images/268_right.jpg \n inflating: preprocessed_images/2690_left.jpg \n inflating: preprocessed_images/2690_right.jpg \n inflating: preprocessed_images/2691_left.jpg \n inflating: preprocessed_images/2691_right.jpg \n inflating: preprocessed_images/2692_left.jpg \n inflating: preprocessed_images/2692_right.jpg \n inflating: preprocessed_images/2693_left.jpg \n inflating: preprocessed_images/2693_right.jpg \n inflating: preprocessed_images/2695_left.jpg \n inflating: preprocessed_images/2695_right.jpg \n inflating: preprocessed_images/2696_left.jpg \n inflating: preprocessed_images/2696_right.jpg \n inflating: preprocessed_images/2697_left.jpg \n inflating: preprocessed_images/2697_right.jpg \n inflating: preprocessed_images/2698_left.jpg \n inflating: preprocessed_images/2698_right.jpg \n inflating: preprocessed_images/2699_left.jpg \n inflating: preprocessed_images/2699_right.jpg \n inflating: preprocessed_images/269_left.jpg \n inflating: preprocessed_images/269_right.jpg \n inflating: preprocessed_images/26_left.jpg \n inflating: preprocessed_images/26_right.jpg \n inflating: preprocessed_images/2701_left.jpg \n inflating: preprocessed_images/2701_right.jpg \n inflating: preprocessed_images/2702_left.jpg \n inflating: preprocessed_images/2702_right.jpg \n inflating: preprocessed_images/2703_left.jpg \n inflating: preprocessed_images/2703_right.jpg \n inflating: preprocessed_images/2704_left.jpg \n inflating: preprocessed_images/2704_right.jpg \n inflating: preprocessed_images/2705_left.jpg \n inflating: preprocessed_images/2705_right.jpg \n inflating: preprocessed_images/2706_left.jpg \n inflating: preprocessed_images/2706_right.jpg \n inflating: preprocessed_images/2707_left.jpg \n inflating: preprocessed_images/2707_right.jpg \n inflating: preprocessed_images/2708_left.jpg \n inflating: preprocessed_images/2708_right.jpg \n inflating: preprocessed_images/2709_left.jpg \n inflating: preprocessed_images/2709_right.jpg \n inflating: preprocessed_images/270_left.jpg \n inflating: preprocessed_images/270_right.jpg \n inflating: preprocessed_images/2710_left.jpg \n inflating: preprocessed_images/2710_right.jpg \n inflating: preprocessed_images/2711_left.jpg \n inflating: preprocessed_images/2711_right.jpg \n inflating: preprocessed_images/2712_left.jpg \n inflating: preprocessed_images/2712_right.jpg \n inflating: preprocessed_images/2713_left.jpg \n inflating: preprocessed_images/2713_right.jpg \n inflating: preprocessed_images/2714_left.jpg \n inflating: preprocessed_images/2714_right.jpg \n inflating: preprocessed_images/2715_left.jpg \n inflating: preprocessed_images/2715_right.jpg \n inflating: preprocessed_images/2716_left.jpg \n inflating: preprocessed_images/2716_right.jpg \n inflating: preprocessed_images/2717_left.jpg \n inflating: preprocessed_images/2717_right.jpg \n inflating: preprocessed_images/2718_left.jpg \n inflating: 
preprocessed_images/2718_right.jpg \n inflating: preprocessed_images/2719_left.jpg \n inflating: preprocessed_images/2719_right.jpg \n inflating: preprocessed_images/271_left.jpg \n inflating: preprocessed_images/271_right.jpg \n inflating: preprocessed_images/2720_left.jpg \n inflating: preprocessed_images/2720_right.jpg \n inflating: preprocessed_images/2721_left.jpg \n inflating: preprocessed_images/2722_left.jpg \n inflating: preprocessed_images/2722_right.jpg \n inflating: preprocessed_images/2723_left.jpg \n inflating: preprocessed_images/2723_right.jpg \n inflating: preprocessed_images/2724_left.jpg \n inflating: preprocessed_images/2725_left.jpg \n inflating: preprocessed_images/2725_right.jpg \n inflating: preprocessed_images/2726_left.jpg \n inflating: preprocessed_images/2726_right.jpg \n inflating: preprocessed_images/2727_left.jpg \n inflating: preprocessed_images/2728_left.jpg \n inflating: preprocessed_images/2728_right.jpg \n inflating: preprocessed_images/2729_left.jpg \n inflating: preprocessed_images/272_left.jpg \n inflating: preprocessed_images/272_right.jpg \n inflating: preprocessed_images/2730_left.jpg \n inflating: preprocessed_images/2730_right.jpg \n inflating: preprocessed_images/2731_left.jpg \n inflating: preprocessed_images/2731_right.jpg \n inflating: preprocessed_images/2732_left.jpg \n inflating: preprocessed_images/2732_right.jpg \n inflating: preprocessed_images/2733_left.jpg \n inflating: preprocessed_images/2733_right.jpg \n inflating: preprocessed_images/2735_left.jpg \n inflating: preprocessed_images/2735_right.jpg \n inflating: preprocessed_images/2736_left.jpg \n inflating: preprocessed_images/2736_right.jpg \n inflating: preprocessed_images/2737_right.jpg \n inflating: preprocessed_images/2738_left.jpg \n inflating: preprocessed_images/2738_right.jpg \n inflating: preprocessed_images/2739_right.jpg \n inflating: preprocessed_images/273_left.jpg \n inflating: preprocessed_images/273_right.jpg \n inflating: preprocessed_images/2740_left.jpg \n inflating: preprocessed_images/2740_right.jpg \n inflating: preprocessed_images/2742_left.jpg \n inflating: preprocessed_images/2742_right.jpg \n inflating: preprocessed_images/2743_left.jpg \n inflating: preprocessed_images/2743_right.jpg \n inflating: preprocessed_images/2744_left.jpg \n inflating: preprocessed_images/2744_right.jpg \n inflating: preprocessed_images/2745_left.jpg \n inflating: preprocessed_images/2745_right.jpg \n inflating: preprocessed_images/2746_left.jpg \n inflating: preprocessed_images/2746_right.jpg \n inflating: preprocessed_images/2747_left.jpg \n inflating: preprocessed_images/2748_left.jpg \n inflating: preprocessed_images/2748_right.jpg \n inflating: preprocessed_images/2749_left.jpg \n inflating: preprocessed_images/2749_right.jpg \n inflating: preprocessed_images/274_left.jpg \n inflating: preprocessed_images/274_right.jpg \n inflating: preprocessed_images/2750_left.jpg \n inflating: preprocessed_images/2750_right.jpg \n inflating: preprocessed_images/2751_right.jpg \n inflating: preprocessed_images/2752_right.jpg \n inflating: preprocessed_images/2753_left.jpg \n inflating: preprocessed_images/2753_right.jpg \n inflating: preprocessed_images/2754_left.jpg \n inflating: preprocessed_images/2754_right.jpg \n inflating: preprocessed_images/2755_left.jpg \n inflating: preprocessed_images/2755_right.jpg \n inflating: preprocessed_images/2756_right.jpg \n inflating: preprocessed_images/2757_left.jpg \n inflating: preprocessed_images/2758_left.jpg \n inflating: 
preprocessed_images/2758_right.jpg \n inflating: preprocessed_images/275_left.jpg \n inflating: preprocessed_images/275_right.jpg \n inflating: preprocessed_images/2760_left.jpg \n inflating: preprocessed_images/2760_right.jpg \n inflating: preprocessed_images/2761_left.jpg \n inflating: preprocessed_images/2761_right.jpg \n inflating: preprocessed_images/2762_left.jpg \n inflating: preprocessed_images/2762_right.jpg \n inflating: preprocessed_images/2763_left.jpg \n inflating: preprocessed_images/2763_right.jpg \n inflating: preprocessed_images/2764_left.jpg \n inflating: preprocessed_images/2764_right.jpg \n inflating: preprocessed_images/2765_left.jpg \n inflating: preprocessed_images/2765_right.jpg \n inflating: preprocessed_images/2766_left.jpg \n inflating: preprocessed_images/2766_right.jpg \n inflating: preprocessed_images/2767_left.jpg \n inflating: preprocessed_images/2767_right.jpg \n inflating: preprocessed_images/2768_left.jpg \n inflating: preprocessed_images/2768_right.jpg \n inflating: preprocessed_images/2769_left.jpg \n inflating: preprocessed_images/2769_right.jpg \n inflating: preprocessed_images/276_right.jpg \n inflating: preprocessed_images/2770_left.jpg \n inflating: preprocessed_images/2770_right.jpg \n inflating: preprocessed_images/2771_left.jpg \n inflating: preprocessed_images/2771_right.jpg \n inflating: preprocessed_images/2772_left.jpg \n inflating: preprocessed_images/2772_right.jpg \n inflating: preprocessed_images/2773_left.jpg \n inflating: preprocessed_images/2773_right.jpg \n inflating: preprocessed_images/2774_left.jpg \n inflating: preprocessed_images/2774_right.jpg \n inflating: preprocessed_images/2775_left.jpg \n inflating: preprocessed_images/2775_right.jpg \n inflating: preprocessed_images/2776_left.jpg \n inflating: preprocessed_images/2776_right.jpg \n inflating: preprocessed_images/2777_left.jpg \n inflating: preprocessed_images/2777_right.jpg \n inflating: preprocessed_images/2778_left.jpg \n inflating: preprocessed_images/2778_right.jpg \n inflating: preprocessed_images/2779_left.jpg \n inflating: preprocessed_images/2779_right.jpg \n inflating: preprocessed_images/277_left.jpg \n inflating: preprocessed_images/277_right.jpg \n inflating: preprocessed_images/2780_left.jpg \n inflating: preprocessed_images/2780_right.jpg \n inflating: preprocessed_images/2781_left.jpg \n inflating: preprocessed_images/2781_right.jpg \n inflating: preprocessed_images/2782_left.jpg \n inflating: preprocessed_images/2782_right.jpg \n inflating: preprocessed_images/2783_left.jpg \n inflating: preprocessed_images/2783_right.jpg \n inflating: preprocessed_images/2784_left.jpg \n inflating: preprocessed_images/2784_right.jpg \n inflating: preprocessed_images/2785_left.jpg \n inflating: preprocessed_images/2785_right.jpg \n inflating: preprocessed_images/2786_left.jpg \n inflating: preprocessed_images/2786_right.jpg \n inflating: preprocessed_images/2787_left.jpg \n inflating: preprocessed_images/2787_right.jpg \n inflating: preprocessed_images/2788_left.jpg \n inflating: preprocessed_images/2788_right.jpg \n inflating: preprocessed_images/2789_left.jpg \n inflating: preprocessed_images/2789_right.jpg \n inflating: preprocessed_images/278_left.jpg \n inflating: preprocessed_images/278_right.jpg \n inflating: preprocessed_images/2790_left.jpg \n inflating: preprocessed_images/2790_right.jpg \n inflating: preprocessed_images/2791_left.jpg \n inflating: preprocessed_images/2791_right.jpg \n inflating: preprocessed_images/2792_left.jpg \n inflating: 
preprocessed_images/2792_right.jpg \n inflating: preprocessed_images/2793_left.jpg \n inflating: preprocessed_images/2793_right.jpg \n inflating: preprocessed_images/2794_left.jpg \n inflating: preprocessed_images/2794_right.jpg \n inflating: preprocessed_images/2795_left.jpg \n inflating: preprocessed_images/2795_right.jpg \n inflating: preprocessed_images/2796_left.jpg \n inflating: preprocessed_images/2796_right.jpg \n inflating: preprocessed_images/2797_left.jpg \n inflating: preprocessed_images/2797_right.jpg \n inflating: preprocessed_images/2798_left.jpg \n inflating: preprocessed_images/2798_right.jpg \n inflating: preprocessed_images/2799_left.jpg \n inflating: preprocessed_images/2799_right.jpg \n inflating: preprocessed_images/279_left.jpg \n inflating: preprocessed_images/279_right.jpg \n inflating: preprocessed_images/27_left.jpg \n inflating: preprocessed_images/27_right.jpg \n inflating: preprocessed_images/2800_left.jpg \n inflating: preprocessed_images/2800_right.jpg \n inflating: preprocessed_images/2801_left.jpg \n inflating: preprocessed_images/2801_right.jpg \n inflating: preprocessed_images/2802_left.jpg \n inflating: preprocessed_images/2802_right.jpg \n inflating: preprocessed_images/2803_left.jpg \n inflating: preprocessed_images/2803_right.jpg \n inflating: preprocessed_images/2804_left.jpg \n inflating: preprocessed_images/2804_right.jpg \n inflating: preprocessed_images/2805_left.jpg \n inflating: preprocessed_images/2805_right.jpg \n inflating: preprocessed_images/2806_left.jpg \n inflating: preprocessed_images/2806_right.jpg \n inflating: preprocessed_images/2807_left.jpg \n inflating: preprocessed_images/2807_right.jpg \n inflating: preprocessed_images/2808_left.jpg \n inflating: preprocessed_images/2808_right.jpg \n inflating: preprocessed_images/2809_left.jpg \n inflating: preprocessed_images/2809_right.jpg \n inflating: preprocessed_images/280_right.jpg \n inflating: preprocessed_images/2810_left.jpg \n inflating: preprocessed_images/2810_right.jpg \n inflating: preprocessed_images/2811_left.jpg \n inflating: preprocessed_images/2811_right.jpg \n inflating: preprocessed_images/2812_left.jpg \n inflating: preprocessed_images/2812_right.jpg \n inflating: preprocessed_images/2813_left.jpg \n inflating: preprocessed_images/2813_right.jpg \n inflating: preprocessed_images/2814_left.jpg \n inflating: preprocessed_images/2814_right.jpg \n inflating: preprocessed_images/2815_left.jpg \n inflating: preprocessed_images/2815_right.jpg \n inflating: preprocessed_images/2816_left.jpg \n inflating: preprocessed_images/2816_right.jpg \n inflating: preprocessed_images/2817_left.jpg \n inflating: preprocessed_images/2817_right.jpg \n inflating: preprocessed_images/2818_left.jpg \n inflating: preprocessed_images/2818_right.jpg \n inflating: preprocessed_images/2819_left.jpg \n inflating: preprocessed_images/2819_right.jpg \n inflating: preprocessed_images/281_left.jpg \n inflating: preprocessed_images/281_right.jpg \n inflating: preprocessed_images/2820_left.jpg \n inflating: preprocessed_images/2820_right.jpg \n inflating: preprocessed_images/2821_left.jpg \n inflating: preprocessed_images/2821_right.jpg \n inflating: preprocessed_images/2822_left.jpg \n inflating: preprocessed_images/2822_right.jpg \n inflating: preprocessed_images/2823_left.jpg \n inflating: preprocessed_images/2823_right.jpg \n inflating: preprocessed_images/2824_left.jpg \n inflating: preprocessed_images/2824_right.jpg \n inflating: preprocessed_images/2825_left.jpg \n inflating: 
preprocessed_images/2825_right.jpg \n inflating: preprocessed_images/2826_left.jpg \n inflating: preprocessed_images/2826_right.jpg \n inflating: preprocessed_images/2827_left.jpg \n inflating: preprocessed_images/2827_right.jpg \n inflating: preprocessed_images/2828_left.jpg \n inflating: preprocessed_images/2828_right.jpg \n inflating: preprocessed_images/2829_left.jpg \n inflating: preprocessed_images/2829_right.jpg \n inflating: preprocessed_images/282_left.jpg \n inflating: preprocessed_images/282_right.jpg \n inflating: preprocessed_images/2830_left.jpg \n inflating: preprocessed_images/2830_right.jpg \n inflating: preprocessed_images/2831_left.jpg \n inflating: preprocessed_images/2832_left.jpg \n inflating: preprocessed_images/2832_right.jpg \n inflating: preprocessed_images/2833_left.jpg \n inflating: preprocessed_images/2833_right.jpg \n inflating: preprocessed_images/2834_left.jpg \n inflating: preprocessed_images/2834_right.jpg \n inflating: preprocessed_images/2835_left.jpg \n inflating: preprocessed_images/2835_right.jpg \n inflating: preprocessed_images/2836_left.jpg \n inflating: preprocessed_images/2836_right.jpg \n inflating: preprocessed_images/2837_left.jpg \n inflating: preprocessed_images/2837_right.jpg \n inflating: preprocessed_images/2838_left.jpg \n inflating: preprocessed_images/2838_right.jpg \n inflating: preprocessed_images/2839_left.jpg \n inflating: preprocessed_images/2839_right.jpg \n inflating: preprocessed_images/283_right.jpg \n inflating: preprocessed_images/2841_left.jpg \n inflating: preprocessed_images/2841_right.jpg \n inflating: preprocessed_images/2842_left.jpg \n inflating: preprocessed_images/2842_right.jpg \n inflating: preprocessed_images/2843_left.jpg \n inflating: preprocessed_images/2843_right.jpg \n inflating: preprocessed_images/2844_left.jpg \n inflating: preprocessed_images/2844_right.jpg \n inflating: preprocessed_images/2845_left.jpg \n inflating: preprocessed_images/2845_right.jpg \n inflating: preprocessed_images/2846_left.jpg \n inflating: preprocessed_images/2846_right.jpg \n inflating: preprocessed_images/2847_left.jpg \n inflating: preprocessed_images/2847_right.jpg \n inflating: preprocessed_images/2848_left.jpg \n inflating: preprocessed_images/2848_right.jpg \n inflating: preprocessed_images/2849_left.jpg \n inflating: preprocessed_images/2849_right.jpg \n inflating: preprocessed_images/284_left.jpg \n inflating: preprocessed_images/284_right.jpg \n inflating: preprocessed_images/2850_left.jpg \n inflating: preprocessed_images/2850_right.jpg \n inflating: preprocessed_images/2851_right.jpg \n inflating: preprocessed_images/2852_left.jpg \n inflating: preprocessed_images/2852_right.jpg \n inflating: preprocessed_images/2853_left.jpg \n inflating: preprocessed_images/2853_right.jpg \n inflating: preprocessed_images/2854_left.jpg \n inflating: preprocessed_images/2854_right.jpg \n inflating: preprocessed_images/2855_left.jpg \n inflating: preprocessed_images/2855_right.jpg \n inflating: preprocessed_images/2856_left.jpg \n inflating: preprocessed_images/2856_right.jpg \n inflating: preprocessed_images/2857_left.jpg \n inflating: preprocessed_images/2857_right.jpg \n inflating: preprocessed_images/2858_left.jpg \n inflating: preprocessed_images/2858_right.jpg \n inflating: preprocessed_images/2859_left.jpg \n inflating: preprocessed_images/2859_right.jpg \n inflating: preprocessed_images/285_left.jpg \n inflating: preprocessed_images/285_right.jpg \n inflating: preprocessed_images/2860_left.jpg \n inflating: 
preprocessed_images/2860_right.jpg \n inflating: preprocessed_images/2861_left.jpg \n inflating: preprocessed_images/2861_right.jpg \n inflating: preprocessed_images/2862_left.jpg \n inflating: preprocessed_images/2862_right.jpg \n inflating: preprocessed_images/2864_left.jpg \n inflating: preprocessed_images/2864_right.jpg \n inflating: preprocessed_images/2865_left.jpg \n inflating: preprocessed_images/2865_right.jpg \n inflating: preprocessed_images/2866_left.jpg \n inflating: preprocessed_images/2866_right.jpg \n inflating: preprocessed_images/2867_left.jpg \n inflating: preprocessed_images/2867_right.jpg \n inflating: preprocessed_images/2868_left.jpg \n inflating: preprocessed_images/2868_right.jpg \n inflating: preprocessed_images/2869_left.jpg \n inflating: preprocessed_images/2869_right.jpg \n inflating: preprocessed_images/286_left.jpg \n inflating: preprocessed_images/286_right.jpg \n inflating: preprocessed_images/2871_left.jpg \n inflating: preprocessed_images/2871_right.jpg \n inflating: preprocessed_images/2872_left.jpg \n inflating: preprocessed_images/2872_right.jpg \n inflating: preprocessed_images/2873_left.jpg \n inflating: preprocessed_images/2873_right.jpg \n inflating: preprocessed_images/2874_left.jpg \n inflating: preprocessed_images/2874_right.jpg \n inflating: preprocessed_images/2876_left.jpg \n inflating: preprocessed_images/2876_right.jpg \n inflating: preprocessed_images/2877_left.jpg \n inflating: preprocessed_images/2877_right.jpg \n inflating: preprocessed_images/2878_left.jpg \n inflating: preprocessed_images/2878_right.jpg \n inflating: preprocessed_images/2879_left.jpg \n inflating: preprocessed_images/2879_right.jpg \n inflating: preprocessed_images/287_left.jpg \n inflating: preprocessed_images/287_right.jpg \n inflating: preprocessed_images/2880_left.jpg \n inflating: preprocessed_images/2880_right.jpg \n inflating: preprocessed_images/2881_left.jpg \n inflating: preprocessed_images/2881_right.jpg \n inflating: preprocessed_images/2882_left.jpg \n inflating: preprocessed_images/2882_right.jpg \n inflating: preprocessed_images/2883_left.jpg \n inflating: preprocessed_images/2883_right.jpg \n inflating: preprocessed_images/2884_left.jpg \n inflating: preprocessed_images/2884_right.jpg \n inflating: preprocessed_images/2885_left.jpg \n inflating: preprocessed_images/2885_right.jpg \n inflating: preprocessed_images/2886_left.jpg \n inflating: preprocessed_images/2886_right.jpg \n inflating: preprocessed_images/2887_left.jpg \n inflating: preprocessed_images/2887_right.jpg \n inflating: preprocessed_images/2888_left.jpg \n inflating: preprocessed_images/2888_right.jpg \n inflating: preprocessed_images/2889_left.jpg \n inflating: preprocessed_images/2889_right.jpg \n inflating: preprocessed_images/288_left.jpg \n inflating: preprocessed_images/288_right.jpg \n inflating: preprocessed_images/2890_left.jpg \n inflating: preprocessed_images/2890_right.jpg \n inflating: preprocessed_images/2892_left.jpg \n inflating: preprocessed_images/2892_right.jpg \n inflating: preprocessed_images/2893_right.jpg \n inflating: preprocessed_images/2895_left.jpg \n inflating: preprocessed_images/2895_right.jpg \n inflating: preprocessed_images/2896_left.jpg \n inflating: preprocessed_images/2896_right.jpg \n inflating: preprocessed_images/2897_left.jpg \n inflating: preprocessed_images/2897_right.jpg \n inflating: preprocessed_images/2898_left.jpg \n inflating: preprocessed_images/2898_right.jpg \n inflating: preprocessed_images/2899_left.jpg \n inflating: 
preprocessed_images/2899_right.jpg \n inflating: preprocessed_images/289_left.jpg \n inflating: preprocessed_images/289_right.jpg \n inflating: preprocessed_images/28_left.jpg \n inflating: preprocessed_images/28_right.jpg \n inflating: preprocessed_images/2900_left.jpg \n inflating: preprocessed_images/2900_right.jpg \n inflating: preprocessed_images/2901_left.jpg \n inflating: preprocessed_images/2901_right.jpg \n inflating: preprocessed_images/2902_left.jpg \n inflating: preprocessed_images/2902_right.jpg \n inflating: preprocessed_images/2903_left.jpg \n inflating: preprocessed_images/2903_right.jpg \n inflating: preprocessed_images/2904_left.jpg \n inflating: preprocessed_images/2904_right.jpg \n inflating: preprocessed_images/2905_left.jpg \n inflating: preprocessed_images/2905_right.jpg \n inflating: preprocessed_images/2906_left.jpg \n inflating: preprocessed_images/2906_right.jpg \n inflating: preprocessed_images/2907_left.jpg \n inflating: preprocessed_images/2907_right.jpg \n inflating: preprocessed_images/2908_left.jpg \n inflating: preprocessed_images/2908_right.jpg \n inflating: preprocessed_images/2909_left.jpg \n inflating: preprocessed_images/2909_right.jpg \n inflating: preprocessed_images/290_left.jpg \n inflating: preprocessed_images/2910_left.jpg \n inflating: preprocessed_images/2910_right.jpg \n inflating: preprocessed_images/2911_left.jpg \n inflating: preprocessed_images/2911_right.jpg \n inflating: preprocessed_images/2912_left.jpg \n inflating: preprocessed_images/2912_right.jpg \n inflating: preprocessed_images/2913_left.jpg \n inflating: preprocessed_images/2913_right.jpg \n inflating: preprocessed_images/2914_left.jpg \n inflating: preprocessed_images/2914_right.jpg \n inflating: preprocessed_images/2915_left.jpg \n inflating: preprocessed_images/2915_right.jpg \n inflating: preprocessed_images/2916_left.jpg \n inflating: preprocessed_images/2916_right.jpg \n inflating: preprocessed_images/2917_left.jpg \n inflating: preprocessed_images/2917_right.jpg \n inflating: preprocessed_images/2918_left.jpg \n inflating: preprocessed_images/2918_right.jpg \n inflating: preprocessed_images/2919_left.jpg \n inflating: preprocessed_images/2919_right.jpg \n inflating: preprocessed_images/291_left.jpg \n inflating: preprocessed_images/291_right.jpg \n inflating: preprocessed_images/2920_left.jpg \n inflating: preprocessed_images/2920_right.jpg \n inflating: preprocessed_images/2921_left.jpg \n inflating: preprocessed_images/2921_right.jpg \n inflating: preprocessed_images/2923_left.jpg \n inflating: preprocessed_images/2923_right.jpg \n inflating: preprocessed_images/2924_left.jpg \n inflating: preprocessed_images/2924_right.jpg \n inflating: preprocessed_images/2925_left.jpg \n inflating: preprocessed_images/2925_right.jpg \n inflating: preprocessed_images/2926_left.jpg \n inflating: preprocessed_images/2926_right.jpg \n inflating: preprocessed_images/2927_left.jpg \n inflating: preprocessed_images/2927_right.jpg \n inflating: preprocessed_images/2929_left.jpg \n inflating: preprocessed_images/2929_right.jpg \n inflating: preprocessed_images/292_left.jpg \n inflating: preprocessed_images/292_right.jpg \n inflating: preprocessed_images/2930_right.jpg \n inflating: preprocessed_images/2931_left.jpg \n inflating: preprocessed_images/2931_right.jpg \n inflating: preprocessed_images/2932_left.jpg \n inflating: preprocessed_images/2932_right.jpg \n inflating: preprocessed_images/2933_left.jpg \n inflating: preprocessed_images/2933_right.jpg \n inflating: 
preprocessed_images/2934_left.jpg \n inflating: preprocessed_images/2934_right.jpg \n inflating: preprocessed_images/2935_left.jpg \n inflating: preprocessed_images/2935_right.jpg \n inflating: preprocessed_images/2936_left.jpg \n inflating: preprocessed_images/2936_right.jpg \n inflating: preprocessed_images/2937_left.jpg \n inflating: preprocessed_images/2937_right.jpg \n inflating: preprocessed_images/2938_left.jpg \n inflating: preprocessed_images/2938_right.jpg \n inflating: preprocessed_images/2939_left.jpg \n inflating: preprocessed_images/2939_right.jpg \n inflating: preprocessed_images/293_right.jpg \n inflating: preprocessed_images/2940_left.jpg \n inflating: preprocessed_images/2940_right.jpg \n inflating: preprocessed_images/2941_left.jpg \n inflating: preprocessed_images/2941_right.jpg \n inflating: preprocessed_images/2942_left.jpg \n inflating: preprocessed_images/2942_right.jpg \n inflating: preprocessed_images/2943_left.jpg \n inflating: preprocessed_images/2943_right.jpg \n inflating: preprocessed_images/2944_left.jpg \n inflating: preprocessed_images/2944_right.jpg \n inflating: preprocessed_images/2945_left.jpg \n inflating: preprocessed_images/2945_right.jpg \n inflating: preprocessed_images/2946_left.jpg \n inflating: preprocessed_images/2946_right.jpg \n inflating: preprocessed_images/2948_left.jpg \n inflating: preprocessed_images/2948_right.jpg \n inflating: preprocessed_images/2949_left.jpg \n inflating: preprocessed_images/2949_right.jpg \n inflating: preprocessed_images/294_left.jpg \n inflating: preprocessed_images/294_right.jpg \n inflating: preprocessed_images/2950_left.jpg \n inflating: preprocessed_images/2950_right.jpg \n inflating: preprocessed_images/2951_left.jpg \n inflating: preprocessed_images/2951_right.jpg \n inflating: preprocessed_images/2952_left.jpg \n inflating: preprocessed_images/2952_right.jpg \n inflating: preprocessed_images/2953_left.jpg \n inflating: preprocessed_images/2953_right.jpg \n inflating: preprocessed_images/2954_left.jpg \n inflating: preprocessed_images/2954_right.jpg \n inflating: preprocessed_images/2955_left.jpg \n inflating: preprocessed_images/2955_right.jpg \n inflating: preprocessed_images/2956_left.jpg \n inflating: preprocessed_images/2956_right.jpg \n inflating: preprocessed_images/2957_left.jpg \n inflating: preprocessed_images/2957_right.jpg \n inflating: preprocessed_images/2958_left.jpg \n inflating: preprocessed_images/2958_right.jpg \n inflating: preprocessed_images/2959_right.jpg \n inflating: preprocessed_images/295_left.jpg \n inflating: preprocessed_images/295_right.jpg \n inflating: preprocessed_images/2960_left.jpg \n inflating: preprocessed_images/2960_right.jpg \n inflating: preprocessed_images/2961_left.jpg \n inflating: preprocessed_images/2961_right.jpg \n inflating: preprocessed_images/2962_left.jpg \n inflating: preprocessed_images/2962_right.jpg \n inflating: preprocessed_images/2963_left.jpg \n inflating: preprocessed_images/2963_right.jpg \n inflating: preprocessed_images/2964_left.jpg \n inflating: preprocessed_images/2964_right.jpg \n inflating: preprocessed_images/2965_left.jpg \n inflating: preprocessed_images/2965_right.jpg \n inflating: preprocessed_images/2966_left.jpg \n inflating: preprocessed_images/2966_right.jpg \n inflating: preprocessed_images/2967_left.jpg \n inflating: preprocessed_images/2967_right.jpg \n inflating: preprocessed_images/2968_left.jpg \n inflating: preprocessed_images/2969_left.jpg \n inflating: preprocessed_images/2969_right.jpg \n inflating: 
preprocessed_images/296_left.jpg \n inflating: preprocessed_images/296_right.jpg \n inflating: preprocessed_images/2970_left.jpg \n inflating: preprocessed_images/2970_right.jpg \n inflating: preprocessed_images/2971_left.jpg \n inflating: preprocessed_images/2971_right.jpg \n inflating: preprocessed_images/2972_left.jpg \n inflating: preprocessed_images/2972_right.jpg \n inflating: preprocessed_images/2973_left.jpg \n inflating: preprocessed_images/2973_right.jpg \n inflating: preprocessed_images/2974_left.jpg \n inflating: preprocessed_images/2974_right.jpg \n inflating: preprocessed_images/2975_left.jpg \n inflating: preprocessed_images/2975_right.jpg \n inflating: preprocessed_images/2976_left.jpg \n inflating: preprocessed_images/2976_right.jpg \n inflating: preprocessed_images/2977_left.jpg \n inflating: preprocessed_images/2977_right.jpg \n inflating: preprocessed_images/2978_left.jpg \n inflating: preprocessed_images/2978_right.jpg \n inflating: preprocessed_images/2979_left.jpg \n inflating: preprocessed_images/2979_right.jpg \n inflating: preprocessed_images/297_left.jpg \n inflating: preprocessed_images/297_right.jpg \n inflating: preprocessed_images/2980_left.jpg \n inflating: preprocessed_images/2980_right.jpg \n inflating: preprocessed_images/2981_left.jpg \n inflating: preprocessed_images/2981_right.jpg \n inflating: preprocessed_images/2982_left.jpg \n inflating: preprocessed_images/2982_right.jpg \n inflating: preprocessed_images/2983_left.jpg \n inflating: preprocessed_images/2983_right.jpg \n inflating: preprocessed_images/2984_left.jpg \n inflating: preprocessed_images/2984_right.jpg \n inflating: preprocessed_images/2985_left.jpg \n inflating: preprocessed_images/2985_right.jpg \n inflating: preprocessed_images/2986_left.jpg \n inflating: preprocessed_images/2986_right.jpg \n inflating: preprocessed_images/2987_left.jpg \n inflating: preprocessed_images/2987_right.jpg \n inflating: preprocessed_images/2988_left.jpg \n inflating: preprocessed_images/2988_right.jpg \n inflating: preprocessed_images/2989_left.jpg \n inflating: preprocessed_images/2989_right.jpg \n inflating: preprocessed_images/298_left.jpg \n inflating: preprocessed_images/298_right.jpg \n inflating: preprocessed_images/2990_left.jpg \n inflating: preprocessed_images/2990_right.jpg \n inflating: preprocessed_images/2991_left.jpg \n inflating: preprocessed_images/2991_right.jpg \n inflating: preprocessed_images/2992_left.jpg \n inflating: preprocessed_images/2992_right.jpg \n inflating: preprocessed_images/2993_left.jpg \n inflating: preprocessed_images/2993_right.jpg \n inflating: preprocessed_images/2994_left.jpg \n inflating: preprocessed_images/2994_right.jpg \n inflating: preprocessed_images/2995_left.jpg \n inflating: preprocessed_images/2995_right.jpg \n inflating: preprocessed_images/2996_left.jpg \n inflating: preprocessed_images/2996_right.jpg \n inflating: preprocessed_images/2997_left.jpg \n inflating: preprocessed_images/2997_right.jpg \n inflating: preprocessed_images/2998_left.jpg \n inflating: preprocessed_images/2998_right.jpg \n inflating: preprocessed_images/2999_left.jpg \n inflating: preprocessed_images/2999_right.jpg \n inflating: preprocessed_images/299_left.jpg \n inflating: preprocessed_images/299_right.jpg \n inflating: preprocessed_images/29_left.jpg \n inflating: preprocessed_images/29_right.jpg \n inflating: preprocessed_images/2_right.jpg \n inflating: preprocessed_images/3000_left.jpg \n inflating: preprocessed_images/3000_right.jpg \n inflating: 
preprocessed_images/3001_left.jpg \n inflating: preprocessed_images/3001_right.jpg \n inflating: preprocessed_images/3002_left.jpg \n inflating: preprocessed_images/3002_right.jpg \n inflating: preprocessed_images/3003_left.jpg \n inflating: preprocessed_images/3003_right.jpg \n inflating: preprocessed_images/3004_left.jpg \n inflating: preprocessed_images/3004_right.jpg \n inflating: preprocessed_images/3005_left.jpg \n inflating: preprocessed_images/3005_right.jpg \n inflating: preprocessed_images/3007_left.jpg \n inflating: preprocessed_images/3007_right.jpg \n inflating: preprocessed_images/3008_left.jpg \n inflating: preprocessed_images/3008_right.jpg \n inflating: preprocessed_images/3009_left.jpg \n inflating: preprocessed_images/3009_right.jpg \n inflating: preprocessed_images/300_left.jpg \n inflating: preprocessed_images/300_right.jpg \n inflating: preprocessed_images/3010_left.jpg \n inflating: preprocessed_images/3010_right.jpg \n inflating: preprocessed_images/3011_left.jpg \n inflating: preprocessed_images/3011_right.jpg \n inflating: preprocessed_images/3012_left.jpg \n inflating: preprocessed_images/3012_right.jpg \n inflating: preprocessed_images/3013_left.jpg \n inflating: preprocessed_images/3013_right.jpg \n inflating: preprocessed_images/3014_left.jpg \n inflating: preprocessed_images/3014_right.jpg \n inflating: preprocessed_images/3015_left.jpg \n inflating: preprocessed_images/3015_right.jpg \n inflating: preprocessed_images/3016_left.jpg \n inflating: preprocessed_images/3016_right.jpg \n inflating: preprocessed_images/3017_left.jpg \n inflating: preprocessed_images/3017_right.jpg \n inflating: preprocessed_images/3018_left.jpg \n inflating: preprocessed_images/3018_right.jpg \n inflating: preprocessed_images/3019_left.jpg \n inflating: preprocessed_images/301_left.jpg \n inflating: preprocessed_images/301_right.jpg \n inflating: preprocessed_images/3020_left.jpg \n inflating: preprocessed_images/3020_right.jpg \n inflating: preprocessed_images/3021_left.jpg \n inflating: preprocessed_images/3023_left.jpg \n inflating: preprocessed_images/3023_right.jpg \n inflating: preprocessed_images/3025_left.jpg \n inflating: preprocessed_images/3025_right.jpg \n inflating: preprocessed_images/3026_left.jpg \n inflating: preprocessed_images/3026_right.jpg \n inflating: preprocessed_images/3027_left.jpg \n inflating: preprocessed_images/3027_right.jpg \n inflating: preprocessed_images/3028_left.jpg \n inflating: preprocessed_images/3028_right.jpg \n inflating: preprocessed_images/3029_left.jpg \n inflating: preprocessed_images/3029_right.jpg \n inflating: preprocessed_images/302_left.jpg \n inflating: preprocessed_images/302_right.jpg \n inflating: preprocessed_images/3030_left.jpg \n inflating: preprocessed_images/3030_right.jpg \n inflating: preprocessed_images/3033_left.jpg \n inflating: preprocessed_images/3033_right.jpg \n inflating: preprocessed_images/3034_left.jpg \n inflating: preprocessed_images/3034_right.jpg \n inflating: preprocessed_images/3035_left.jpg \n inflating: preprocessed_images/3035_right.jpg \n inflating: preprocessed_images/3036_left.jpg \n inflating: preprocessed_images/3036_right.jpg \n inflating: preprocessed_images/3037_right.jpg \n inflating: preprocessed_images/3038_left.jpg \n inflating: preprocessed_images/3038_right.jpg \n inflating: preprocessed_images/3039_right.jpg \n inflating: preprocessed_images/303_left.jpg \n inflating: preprocessed_images/303_right.jpg \n inflating: preprocessed_images/3040_left.jpg \n inflating: 
preprocessed_images/3040_right.jpg \n inflating: preprocessed_images/3041_left.jpg \n inflating: preprocessed_images/3041_right.jpg \n inflating: preprocessed_images/3042_left.jpg \n inflating: preprocessed_images/3042_right.jpg \n inflating: preprocessed_images/3043_left.jpg \n inflating: preprocessed_images/3043_right.jpg \n inflating: preprocessed_images/3044_left.jpg \n inflating: preprocessed_images/3044_right.jpg \n inflating: preprocessed_images/3045_left.jpg \n inflating: preprocessed_images/3045_right.jpg \n inflating: preprocessed_images/3046_right.jpg \n inflating: preprocessed_images/3047_left.jpg \n inflating: preprocessed_images/3047_right.jpg \n inflating: preprocessed_images/3048_left.jpg \n inflating: preprocessed_images/3048_right.jpg \n inflating: preprocessed_images/3049_left.jpg \n inflating: preprocessed_images/304_left.jpg \n inflating: preprocessed_images/304_right.jpg \n inflating: preprocessed_images/3050_left.jpg \n inflating: preprocessed_images/3050_right.jpg \n inflating: preprocessed_images/3051_left.jpg \n inflating: preprocessed_images/3051_right.jpg \n inflating: preprocessed_images/3052_left.jpg \n inflating: preprocessed_images/3052_right.jpg \n inflating: preprocessed_images/3054_left.jpg \n inflating: preprocessed_images/3054_right.jpg \n inflating: preprocessed_images/3056_left.jpg \n inflating: preprocessed_images/3056_right.jpg \n inflating: preprocessed_images/3057_left.jpg \n inflating: preprocessed_images/3057_right.jpg \n inflating: preprocessed_images/3058_left.jpg \n inflating: preprocessed_images/3058_right.jpg \n inflating: preprocessed_images/3059_left.jpg \n inflating: preprocessed_images/3059_right.jpg \n inflating: preprocessed_images/305_right.jpg \n inflating: preprocessed_images/3061_left.jpg \n inflating: preprocessed_images/3061_right.jpg \n inflating: preprocessed_images/3062_left.jpg \n inflating: preprocessed_images/3062_right.jpg \n inflating: preprocessed_images/3063_left.jpg \n inflating: preprocessed_images/3063_right.jpg \n inflating: preprocessed_images/3064_left.jpg \n inflating: preprocessed_images/3064_right.jpg \n inflating: preprocessed_images/3065_left.jpg \n inflating: preprocessed_images/3065_right.jpg \n inflating: preprocessed_images/3066_left.jpg \n inflating: preprocessed_images/3066_right.jpg \n inflating: preprocessed_images/3067_left.jpg \n inflating: preprocessed_images/3067_right.jpg \n inflating: preprocessed_images/3068_left.jpg \n inflating: preprocessed_images/3068_right.jpg \n inflating: preprocessed_images/3069_left.jpg \n inflating: preprocessed_images/3069_right.jpg \n inflating: preprocessed_images/306_left.jpg \n inflating: preprocessed_images/306_right.jpg \n inflating: preprocessed_images/3070_left.jpg \n inflating: preprocessed_images/3070_right.jpg \n inflating: preprocessed_images/3071_left.jpg \n inflating: preprocessed_images/3071_right.jpg \n inflating: preprocessed_images/3072_left.jpg \n inflating: preprocessed_images/3072_right.jpg \n inflating: preprocessed_images/3073_left.jpg \n inflating: preprocessed_images/3073_right.jpg \n inflating: preprocessed_images/3074_left.jpg \n inflating: preprocessed_images/3074_right.jpg \n inflating: preprocessed_images/3075_left.jpg \n inflating: preprocessed_images/3075_right.jpg \n inflating: preprocessed_images/3076_left.jpg \n inflating: preprocessed_images/3076_right.jpg \n inflating: preprocessed_images/3077_left.jpg \n inflating: preprocessed_images/3077_right.jpg \n inflating: preprocessed_images/3078_left.jpg \n inflating: 
  inflating: preprocessed_images/... (unzip output truncated: remaining *_left.jpg / *_right.jpg files extracted to preprocessed_images/)
preprocessed_images/4284_left.jpg \n inflating: preprocessed_images/4284_right.jpg \n inflating: preprocessed_images/4285_left.jpg \n inflating: preprocessed_images/4285_right.jpg \n inflating: preprocessed_images/4286_left.jpg \n inflating: preprocessed_images/4286_right.jpg \n inflating: preprocessed_images/4287_left.jpg \n inflating: preprocessed_images/4287_right.jpg \n inflating: preprocessed_images/4288_left.jpg \n inflating: preprocessed_images/4288_right.jpg \n inflating: preprocessed_images/4289_left.jpg \n inflating: preprocessed_images/4289_right.jpg \n inflating: preprocessed_images/428_left.jpg \n inflating: preprocessed_images/428_right.jpg \n inflating: preprocessed_images/4290_right.jpg \n inflating: preprocessed_images/4291_left.jpg \n inflating: preprocessed_images/4291_right.jpg \n inflating: preprocessed_images/4292_left.jpg \n inflating: preprocessed_images/4292_right.jpg \n inflating: preprocessed_images/4293_left.jpg \n inflating: preprocessed_images/4293_right.jpg \n inflating: preprocessed_images/4294_left.jpg \n inflating: preprocessed_images/4294_right.jpg \n inflating: preprocessed_images/4295_left.jpg \n inflating: preprocessed_images/4295_right.jpg \n inflating: preprocessed_images/4296_left.jpg \n inflating: preprocessed_images/4296_right.jpg \n inflating: preprocessed_images/4297_left.jpg \n inflating: preprocessed_images/4297_right.jpg \n inflating: preprocessed_images/4298_left.jpg \n inflating: preprocessed_images/4298_right.jpg \n inflating: preprocessed_images/4299_left.jpg \n inflating: preprocessed_images/4299_right.jpg \n inflating: preprocessed_images/429_left.jpg \n inflating: preprocessed_images/42_left.jpg \n inflating: preprocessed_images/42_right.jpg \n inflating: preprocessed_images/4300_left.jpg \n inflating: preprocessed_images/4300_right.jpg \n inflating: preprocessed_images/4301_left.jpg \n inflating: preprocessed_images/4301_right.jpg \n inflating: preprocessed_images/4302_left.jpg \n inflating: preprocessed_images/4302_right.jpg \n inflating: preprocessed_images/4303_left.jpg \n inflating: preprocessed_images/4303_right.jpg \n inflating: preprocessed_images/4304_left.jpg \n inflating: preprocessed_images/4304_right.jpg \n inflating: preprocessed_images/4305_left.jpg \n inflating: preprocessed_images/4305_right.jpg \n inflating: preprocessed_images/4306_left.jpg \n inflating: preprocessed_images/4306_right.jpg \n inflating: preprocessed_images/4307_left.jpg \n inflating: preprocessed_images/4307_right.jpg \n inflating: preprocessed_images/4308_left.jpg \n inflating: preprocessed_images/4308_right.jpg \n inflating: preprocessed_images/4309_left.jpg \n inflating: preprocessed_images/4309_right.jpg \n inflating: preprocessed_images/430_left.jpg \n inflating: preprocessed_images/430_right.jpg \n inflating: preprocessed_images/4310_left.jpg \n inflating: preprocessed_images/4310_right.jpg \n inflating: preprocessed_images/4311_left.jpg \n inflating: preprocessed_images/4311_right.jpg \n inflating: preprocessed_images/4312_left.jpg \n inflating: preprocessed_images/4312_right.jpg \n inflating: preprocessed_images/4313_left.jpg \n inflating: preprocessed_images/4313_right.jpg \n inflating: preprocessed_images/4314_left.jpg \n inflating: preprocessed_images/4314_right.jpg \n inflating: preprocessed_images/4315_left.jpg \n inflating: preprocessed_images/4315_right.jpg \n inflating: preprocessed_images/4316_left.jpg \n inflating: preprocessed_images/4316_right.jpg \n inflating: preprocessed_images/4317_left.jpg \n inflating: 
preprocessed_images/4317_right.jpg \n inflating: preprocessed_images/4318_left.jpg \n inflating: preprocessed_images/4318_right.jpg \n inflating: preprocessed_images/4319_left.jpg \n inflating: preprocessed_images/431_left.jpg \n inflating: preprocessed_images/431_right.jpg \n inflating: preprocessed_images/4320_left.jpg \n inflating: preprocessed_images/4320_right.jpg \n inflating: preprocessed_images/4321_left.jpg \n inflating: preprocessed_images/4321_right.jpg \n inflating: preprocessed_images/4322_left.jpg \n inflating: preprocessed_images/4322_right.jpg \n inflating: preprocessed_images/4323_left.jpg \n inflating: preprocessed_images/4323_right.jpg \n inflating: preprocessed_images/4324_left.jpg \n inflating: preprocessed_images/4324_right.jpg \n inflating: preprocessed_images/4325_left.jpg \n inflating: preprocessed_images/4325_right.jpg \n inflating: preprocessed_images/4326_left.jpg \n inflating: preprocessed_images/4326_right.jpg \n inflating: preprocessed_images/4327_left.jpg \n inflating: preprocessed_images/4327_right.jpg \n inflating: preprocessed_images/4328_left.jpg \n inflating: preprocessed_images/4328_right.jpg \n inflating: preprocessed_images/4329_left.jpg \n inflating: preprocessed_images/4329_right.jpg \n inflating: preprocessed_images/432_left.jpg \n inflating: preprocessed_images/4330_left.jpg \n inflating: preprocessed_images/4330_right.jpg \n inflating: preprocessed_images/4331_left.jpg \n inflating: preprocessed_images/4331_right.jpg \n inflating: preprocessed_images/4332_left.jpg \n inflating: preprocessed_images/4332_right.jpg \n inflating: preprocessed_images/4333_left.jpg \n inflating: preprocessed_images/4333_right.jpg \n inflating: preprocessed_images/4334_left.jpg \n inflating: preprocessed_images/4334_right.jpg \n inflating: preprocessed_images/4335_left.jpg \n inflating: preprocessed_images/4335_right.jpg \n inflating: preprocessed_images/4336_left.jpg \n inflating: preprocessed_images/4336_right.jpg \n inflating: preprocessed_images/4337_left.jpg \n inflating: preprocessed_images/4337_right.jpg \n inflating: preprocessed_images/4338_left.jpg \n inflating: preprocessed_images/4338_right.jpg \n inflating: preprocessed_images/4339_left.jpg \n inflating: preprocessed_images/4339_right.jpg \n inflating: preprocessed_images/433_left.jpg \n inflating: preprocessed_images/433_right.jpg \n inflating: preprocessed_images/4340_left.jpg \n inflating: preprocessed_images/4340_right.jpg \n inflating: preprocessed_images/4341_left.jpg \n inflating: preprocessed_images/4341_right.jpg \n inflating: preprocessed_images/4342_left.jpg \n inflating: preprocessed_images/4342_right.jpg \n inflating: preprocessed_images/4343_left.jpg \n inflating: preprocessed_images/4343_right.jpg \n inflating: preprocessed_images/4344_left.jpg \n inflating: preprocessed_images/4344_right.jpg \n inflating: preprocessed_images/4345_left.jpg \n inflating: preprocessed_images/4345_right.jpg \n inflating: preprocessed_images/4347_left.jpg \n inflating: preprocessed_images/4347_right.jpg \n inflating: preprocessed_images/4348_left.jpg \n inflating: preprocessed_images/4348_right.jpg \n inflating: preprocessed_images/4349_left.jpg \n inflating: preprocessed_images/4349_right.jpg \n inflating: preprocessed_images/434_left.jpg \n inflating: preprocessed_images/434_right.jpg \n inflating: preprocessed_images/4350_left.jpg \n inflating: preprocessed_images/4350_right.jpg \n inflating: preprocessed_images/4351_left.jpg \n inflating: preprocessed_images/4351_right.jpg \n inflating: 
preprocessed_images/4352_left.jpg \n inflating: preprocessed_images/4352_right.jpg \n inflating: preprocessed_images/4353_left.jpg \n inflating: preprocessed_images/4353_right.jpg \n inflating: preprocessed_images/4354_left.jpg \n inflating: preprocessed_images/4354_right.jpg \n inflating: preprocessed_images/4355_left.jpg \n inflating: preprocessed_images/4355_right.jpg \n inflating: preprocessed_images/4356_left.jpg \n inflating: preprocessed_images/4356_right.jpg \n inflating: preprocessed_images/4357_left.jpg \n inflating: preprocessed_images/4357_right.jpg \n inflating: preprocessed_images/4358_left.jpg \n inflating: preprocessed_images/4358_right.jpg \n inflating: preprocessed_images/4359_left.jpg \n inflating: preprocessed_images/4359_right.jpg \n inflating: preprocessed_images/435_left.jpg \n inflating: preprocessed_images/435_right.jpg \n inflating: preprocessed_images/4360_left.jpg \n inflating: preprocessed_images/4360_right.jpg \n inflating: preprocessed_images/4361_left.jpg \n inflating: preprocessed_images/4361_right.jpg \n inflating: preprocessed_images/4362_left.jpg \n inflating: preprocessed_images/4362_right.jpg \n inflating: preprocessed_images/4365_left.jpg \n inflating: preprocessed_images/4365_right.jpg \n inflating: preprocessed_images/4367_left.jpg \n inflating: preprocessed_images/4367_right.jpg \n inflating: preprocessed_images/4368_left.jpg \n inflating: preprocessed_images/4368_right.jpg \n inflating: preprocessed_images/4369_left.jpg \n inflating: preprocessed_images/4369_right.jpg \n inflating: preprocessed_images/436_left.jpg \n inflating: preprocessed_images/436_right.jpg \n inflating: preprocessed_images/4371_left.jpg \n inflating: preprocessed_images/4371_right.jpg \n inflating: preprocessed_images/4372_left.jpg \n inflating: preprocessed_images/4372_right.jpg \n inflating: preprocessed_images/4373_left.jpg \n inflating: preprocessed_images/4373_right.jpg \n inflating: preprocessed_images/4374_left.jpg \n inflating: preprocessed_images/4374_right.jpg \n inflating: preprocessed_images/4375_left.jpg \n inflating: preprocessed_images/4375_right.jpg \n inflating: preprocessed_images/4377_left.jpg \n inflating: preprocessed_images/4377_right.jpg \n inflating: preprocessed_images/4379_left.jpg \n inflating: preprocessed_images/4379_right.jpg \n inflating: preprocessed_images/437_left.jpg \n inflating: preprocessed_images/437_right.jpg \n inflating: preprocessed_images/4380_left.jpg \n inflating: preprocessed_images/4380_right.jpg \n inflating: preprocessed_images/4381_left.jpg \n inflating: preprocessed_images/4381_right.jpg \n inflating: preprocessed_images/4383_left.jpg \n inflating: preprocessed_images/4383_right.jpg \n inflating: preprocessed_images/4384_left.jpg \n inflating: preprocessed_images/4384_right.jpg \n inflating: preprocessed_images/4385_left.jpg \n inflating: preprocessed_images/4385_right.jpg \n inflating: preprocessed_images/4386_left.jpg \n inflating: preprocessed_images/4386_right.jpg \n inflating: preprocessed_images/4387_left.jpg \n inflating: preprocessed_images/4387_right.jpg \n inflating: preprocessed_images/4388_left.jpg \n inflating: preprocessed_images/4388_right.jpg \n inflating: preprocessed_images/4389_left.jpg \n inflating: preprocessed_images/4389_right.jpg \n inflating: preprocessed_images/438_right.jpg \n inflating: preprocessed_images/4391_left.jpg \n inflating: preprocessed_images/4391_right.jpg \n inflating: preprocessed_images/4392_left.jpg \n inflating: preprocessed_images/4392_right.jpg \n inflating: 
preprocessed_images/4393_left.jpg \n inflating: preprocessed_images/4393_right.jpg \n inflating: preprocessed_images/4394_left.jpg \n inflating: preprocessed_images/4395_left.jpg \n inflating: preprocessed_images/4395_right.jpg \n inflating: preprocessed_images/4396_left.jpg \n inflating: preprocessed_images/4396_right.jpg \n inflating: preprocessed_images/4397_left.jpg \n inflating: preprocessed_images/4397_right.jpg \n inflating: preprocessed_images/4398_left.jpg \n inflating: preprocessed_images/4398_right.jpg \n inflating: preprocessed_images/4399_left.jpg \n inflating: preprocessed_images/4399_right.jpg \n inflating: preprocessed_images/439_left.jpg \n inflating: preprocessed_images/439_right.jpg \n inflating: preprocessed_images/43_left.jpg \n inflating: preprocessed_images/43_right.jpg \n inflating: preprocessed_images/4400_left.jpg \n inflating: preprocessed_images/4400_right.jpg \n inflating: preprocessed_images/4401_left.jpg \n inflating: preprocessed_images/4401_right.jpg \n inflating: preprocessed_images/4402_left.jpg \n inflating: preprocessed_images/4402_right.jpg \n inflating: preprocessed_images/4403_left.jpg \n inflating: preprocessed_images/4403_right.jpg \n inflating: preprocessed_images/4404_left.jpg \n inflating: preprocessed_images/4404_right.jpg \n inflating: preprocessed_images/4406_left.jpg \n inflating: preprocessed_images/4406_right.jpg \n inflating: preprocessed_images/4407_left.jpg \n inflating: preprocessed_images/4407_right.jpg \n inflating: preprocessed_images/4408_left.jpg \n inflating: preprocessed_images/4408_right.jpg \n inflating: preprocessed_images/4409_left.jpg \n inflating: preprocessed_images/4409_right.jpg \n inflating: preprocessed_images/440_left.jpg \n inflating: preprocessed_images/4410_left.jpg \n inflating: preprocessed_images/4410_right.jpg \n inflating: preprocessed_images/4411_left.jpg \n inflating: preprocessed_images/4411_right.jpg \n inflating: preprocessed_images/4412_left.jpg \n inflating: preprocessed_images/4412_right.jpg \n inflating: preprocessed_images/4413_left.jpg \n inflating: preprocessed_images/4413_right.jpg \n inflating: preprocessed_images/4414_left.jpg \n inflating: preprocessed_images/4414_right.jpg \n inflating: preprocessed_images/4415_left.jpg \n inflating: preprocessed_images/4415_right.jpg \n inflating: preprocessed_images/4417_left.jpg \n inflating: preprocessed_images/4417_right.jpg \n inflating: preprocessed_images/4418_left.jpg \n inflating: preprocessed_images/4418_right.jpg \n inflating: preprocessed_images/4419_left.jpg \n inflating: preprocessed_images/4419_right.jpg \n inflating: preprocessed_images/441_right.jpg \n inflating: preprocessed_images/4420_left.jpg \n inflating: preprocessed_images/4420_right.jpg \n inflating: preprocessed_images/4421_left.jpg \n inflating: preprocessed_images/4421_right.jpg \n inflating: preprocessed_images/4422_left.jpg \n inflating: preprocessed_images/4422_right.jpg \n inflating: preprocessed_images/4424_left.jpg \n inflating: preprocessed_images/4424_right.jpg \n inflating: preprocessed_images/4425_left.jpg \n inflating: preprocessed_images/4425_right.jpg \n inflating: preprocessed_images/4426_left.jpg \n inflating: preprocessed_images/4426_right.jpg \n inflating: preprocessed_images/4427_left.jpg \n inflating: preprocessed_images/4428_left.jpg \n inflating: preprocessed_images/4428_right.jpg \n inflating: preprocessed_images/4429_left.jpg \n inflating: preprocessed_images/4429_right.jpg \n inflating: preprocessed_images/442_left.jpg \n inflating: 
preprocessed_images/442_right.jpg \n inflating: preprocessed_images/4430_left.jpg \n inflating: preprocessed_images/4430_right.jpg \n inflating: preprocessed_images/4431_left.jpg \n inflating: preprocessed_images/4431_right.jpg \n inflating: preprocessed_images/4432_left.jpg \n inflating: preprocessed_images/4432_right.jpg \n inflating: preprocessed_images/4433_left.jpg \n inflating: preprocessed_images/4433_right.jpg \n inflating: preprocessed_images/4436_left.jpg \n inflating: preprocessed_images/4436_right.jpg \n inflating: preprocessed_images/4437_left.jpg \n inflating: preprocessed_images/4437_right.jpg \n inflating: preprocessed_images/4438_left.jpg \n inflating: preprocessed_images/4438_right.jpg \n inflating: preprocessed_images/4439_left.jpg \n inflating: preprocessed_images/4439_right.jpg \n inflating: preprocessed_images/443_right.jpg \n inflating: preprocessed_images/4440_left.jpg \n inflating: preprocessed_images/4440_right.jpg \n inflating: preprocessed_images/4441_left.jpg \n inflating: preprocessed_images/4441_right.jpg \n inflating: preprocessed_images/4442_right.jpg \n inflating: preprocessed_images/4443_left.jpg \n inflating: preprocessed_images/4443_right.jpg \n inflating: preprocessed_images/4444_left.jpg \n inflating: preprocessed_images/4444_right.jpg \n inflating: preprocessed_images/4445_left.jpg \n inflating: preprocessed_images/4445_right.jpg \n inflating: preprocessed_images/4447_left.jpg \n inflating: preprocessed_images/4447_right.jpg \n inflating: preprocessed_images/4448_right.jpg \n inflating: preprocessed_images/4449_left.jpg \n inflating: preprocessed_images/4449_right.jpg \n inflating: preprocessed_images/444_left.jpg \n inflating: preprocessed_images/4450_left.jpg \n inflating: preprocessed_images/4450_right.jpg \n inflating: preprocessed_images/4451_left.jpg \n inflating: preprocessed_images/4451_right.jpg \n inflating: preprocessed_images/4452_left.jpg \n inflating: preprocessed_images/4452_right.jpg \n inflating: preprocessed_images/4453_left.jpg \n inflating: preprocessed_images/4453_right.jpg \n inflating: preprocessed_images/4455_left.jpg \n inflating: preprocessed_images/4455_right.jpg \n inflating: preprocessed_images/4456_left.jpg \n inflating: preprocessed_images/4456_right.jpg \n inflating: preprocessed_images/4458_left.jpg \n inflating: preprocessed_images/4458_right.jpg \n inflating: preprocessed_images/4459_left.jpg \n inflating: preprocessed_images/4459_right.jpg \n inflating: preprocessed_images/445_left.jpg \n inflating: preprocessed_images/445_right.jpg \n inflating: preprocessed_images/4460_right.jpg \n inflating: preprocessed_images/4461_left.jpg \n inflating: preprocessed_images/4461_right.jpg \n inflating: preprocessed_images/4462_left.jpg \n inflating: preprocessed_images/4462_right.jpg \n inflating: preprocessed_images/4464_left.jpg \n inflating: preprocessed_images/4464_right.jpg \n inflating: preprocessed_images/4465_left.jpg \n inflating: preprocessed_images/4465_right.jpg \n inflating: preprocessed_images/4466_left.jpg \n inflating: preprocessed_images/4466_right.jpg \n inflating: preprocessed_images/4467_left.jpg \n inflating: preprocessed_images/4467_right.jpg \n inflating: preprocessed_images/4468_left.jpg \n inflating: preprocessed_images/4468_right.jpg \n inflating: preprocessed_images/4469_left.jpg \n inflating: preprocessed_images/4469_right.jpg \n inflating: preprocessed_images/446_left.jpg \n inflating: preprocessed_images/446_right.jpg \n inflating: preprocessed_images/4470_left.jpg \n inflating: 
preprocessed_images/4470_right.jpg \n inflating: preprocessed_images/4471_left.jpg \n inflating: preprocessed_images/4471_right.jpg \n inflating: preprocessed_images/4472_left.jpg \n inflating: preprocessed_images/4472_right.jpg \n inflating: preprocessed_images/4473_left.jpg \n inflating: preprocessed_images/4473_right.jpg \n inflating: preprocessed_images/4474_left.jpg \n inflating: preprocessed_images/4474_right.jpg \n inflating: preprocessed_images/4475_left.jpg \n inflating: preprocessed_images/4475_right.jpg \n inflating: preprocessed_images/4476_left.jpg \n inflating: preprocessed_images/4476_right.jpg \n inflating: preprocessed_images/4477_left.jpg \n inflating: preprocessed_images/4477_right.jpg \n inflating: preprocessed_images/4478_left.jpg \n inflating: preprocessed_images/4478_right.jpg \n inflating: preprocessed_images/4479_left.jpg \n inflating: preprocessed_images/4479_right.jpg \n inflating: preprocessed_images/447_left.jpg \n inflating: preprocessed_images/447_right.jpg \n inflating: preprocessed_images/4480_left.jpg \n inflating: preprocessed_images/4480_right.jpg \n inflating: preprocessed_images/4481_left.jpg \n inflating: preprocessed_images/4481_right.jpg \n inflating: preprocessed_images/4484_left.jpg \n inflating: preprocessed_images/4484_right.jpg \n inflating: preprocessed_images/4486_left.jpg \n inflating: preprocessed_images/4486_right.jpg \n inflating: preprocessed_images/4487_left.jpg \n inflating: preprocessed_images/4487_right.jpg \n inflating: preprocessed_images/4488_left.jpg \n inflating: preprocessed_images/4488_right.jpg \n inflating: preprocessed_images/4489_left.jpg \n inflating: preprocessed_images/4489_right.jpg \n inflating: preprocessed_images/448_left.jpg \n inflating: preprocessed_images/448_right.jpg \n inflating: preprocessed_images/4491_left.jpg \n inflating: preprocessed_images/4491_right.jpg \n inflating: preprocessed_images/4492_left.jpg \n inflating: preprocessed_images/4492_right.jpg \n inflating: preprocessed_images/4493_left.jpg \n inflating: preprocessed_images/4493_right.jpg \n inflating: preprocessed_images/4494_left.jpg \n inflating: preprocessed_images/4494_right.jpg \n inflating: preprocessed_images/4495_left.jpg \n inflating: preprocessed_images/4495_right.jpg \n inflating: preprocessed_images/4496_left.jpg \n inflating: preprocessed_images/4496_right.jpg \n inflating: preprocessed_images/4497_left.jpg \n inflating: preprocessed_images/4497_right.jpg \n inflating: preprocessed_images/4498_left.jpg \n inflating: preprocessed_images/4498_right.jpg \n inflating: preprocessed_images/4499_left.jpg \n inflating: preprocessed_images/4499_right.jpg \n inflating: preprocessed_images/44_left.jpg \n inflating: preprocessed_images/44_right.jpg \n inflating: preprocessed_images/4500_left.jpg \n inflating: preprocessed_images/4500_right.jpg \n inflating: preprocessed_images/4502_left.jpg \n inflating: preprocessed_images/4502_right.jpg \n inflating: preprocessed_images/4504_left.jpg \n inflating: preprocessed_images/4504_right.jpg \n inflating: preprocessed_images/4506_left.jpg \n inflating: preprocessed_images/4506_right.jpg \n inflating: preprocessed_images/4507_left.jpg \n inflating: preprocessed_images/4507_right.jpg \n inflating: preprocessed_images/4508_left.jpg \n inflating: preprocessed_images/4508_right.jpg \n inflating: preprocessed_images/4509_left.jpg \n inflating: preprocessed_images/4509_right.jpg \n inflating: preprocessed_images/450_left.jpg \n inflating: preprocessed_images/450_right.jpg \n inflating: 
preprocessed_images/4511_left.jpg \n inflating: preprocessed_images/4511_right.jpg \n inflating: preprocessed_images/4512_left.jpg \n inflating: preprocessed_images/4512_right.jpg \n inflating: preprocessed_images/4513_left.jpg \n inflating: preprocessed_images/4513_right.jpg \n inflating: preprocessed_images/4515_left.jpg \n inflating: preprocessed_images/4515_right.jpg \n inflating: preprocessed_images/4516_left.jpg \n inflating: preprocessed_images/4516_right.jpg \n inflating: preprocessed_images/4519_left.jpg \n inflating: preprocessed_images/4519_right.jpg \n inflating: preprocessed_images/451_left.jpg \n inflating: preprocessed_images/451_right.jpg \n inflating: preprocessed_images/4520_left.jpg \n inflating: preprocessed_images/4520_right.jpg \n inflating: preprocessed_images/4522_right.jpg \n inflating: preprocessed_images/4523_left.jpg \n inflating: preprocessed_images/4523_right.jpg \n inflating: preprocessed_images/4524_left.jpg \n inflating: preprocessed_images/4524_right.jpg \n inflating: preprocessed_images/4525_left.jpg \n inflating: preprocessed_images/4525_right.jpg \n inflating: preprocessed_images/4527_left.jpg \n inflating: preprocessed_images/4527_right.jpg \n inflating: preprocessed_images/4528_left.jpg \n inflating: preprocessed_images/4528_right.jpg \n inflating: preprocessed_images/4529_left.jpg \n inflating: preprocessed_images/4529_right.jpg \n inflating: preprocessed_images/452_left.jpg \n inflating: preprocessed_images/452_right.jpg \n inflating: preprocessed_images/4530_left.jpg \n inflating: preprocessed_images/4530_right.jpg \n inflating: preprocessed_images/4531_left.jpg \n inflating: preprocessed_images/4531_right.jpg \n inflating: preprocessed_images/4532_left.jpg \n inflating: preprocessed_images/4532_right.jpg \n inflating: preprocessed_images/4533_left.jpg \n inflating: preprocessed_images/4533_right.jpg \n inflating: preprocessed_images/4534_left.jpg \n inflating: preprocessed_images/4534_right.jpg \n inflating: preprocessed_images/4535_left.jpg \n inflating: preprocessed_images/4535_right.jpg \n inflating: preprocessed_images/4536_left.jpg \n inflating: preprocessed_images/4536_right.jpg \n inflating: preprocessed_images/4538_left.jpg \n inflating: preprocessed_images/4538_right.jpg \n inflating: preprocessed_images/4539_left.jpg \n inflating: preprocessed_images/4539_right.jpg \n inflating: preprocessed_images/4540_left.jpg \n inflating: preprocessed_images/4540_right.jpg \n inflating: preprocessed_images/4541_left.jpg \n inflating: preprocessed_images/4541_right.jpg \n inflating: preprocessed_images/4542_left.jpg \n inflating: preprocessed_images/4542_right.jpg \n inflating: preprocessed_images/4543_left.jpg \n inflating: preprocessed_images/4543_right.jpg \n inflating: preprocessed_images/4544_left.jpg \n inflating: preprocessed_images/4544_right.jpg \n inflating: preprocessed_images/4545_left.jpg \n inflating: preprocessed_images/4545_right.jpg \n inflating: preprocessed_images/4546_left.jpg \n inflating: preprocessed_images/4546_right.jpg \n inflating: preprocessed_images/4547_left.jpg \n inflating: preprocessed_images/4547_right.jpg \n inflating: preprocessed_images/4548_left.jpg \n inflating: preprocessed_images/4548_right.jpg \n inflating: preprocessed_images/4549_left.jpg \n inflating: preprocessed_images/4549_right.jpg \n inflating: preprocessed_images/454_left.jpg \n inflating: preprocessed_images/454_right.jpg \n inflating: preprocessed_images/4550_left.jpg \n inflating: preprocessed_images/4550_right.jpg \n inflating: 
preprocessed_images/4551_left.jpg \n inflating: preprocessed_images/4552_left.jpg \n inflating: preprocessed_images/4552_right.jpg \n inflating: preprocessed_images/4553_left.jpg \n inflating: preprocessed_images/4553_right.jpg \n inflating: preprocessed_images/4555_left.jpg \n inflating: preprocessed_images/4555_right.jpg \n inflating: preprocessed_images/4556_left.jpg \n inflating: preprocessed_images/4556_right.jpg \n inflating: preprocessed_images/4557_left.jpg \n inflating: preprocessed_images/4557_right.jpg \n inflating: preprocessed_images/4558_left.jpg \n inflating: preprocessed_images/4558_right.jpg \n inflating: preprocessed_images/4559_left.jpg \n inflating: preprocessed_images/4559_right.jpg \n inflating: preprocessed_images/455_left.jpg \n inflating: preprocessed_images/455_right.jpg \n inflating: preprocessed_images/4560_left.jpg \n inflating: preprocessed_images/4560_right.jpg \n inflating: preprocessed_images/4561_left.jpg \n inflating: preprocessed_images/4561_right.jpg \n inflating: preprocessed_images/4562_left.jpg \n inflating: preprocessed_images/4562_right.jpg \n inflating: preprocessed_images/4563_left.jpg \n inflating: preprocessed_images/4563_right.jpg \n inflating: preprocessed_images/4564_left.jpg \n inflating: preprocessed_images/4564_right.jpg \n inflating: preprocessed_images/4565_left.jpg \n inflating: preprocessed_images/4565_right.jpg \n inflating: preprocessed_images/4566_left.jpg \n inflating: preprocessed_images/4566_right.jpg \n inflating: preprocessed_images/4567_left.jpg \n inflating: preprocessed_images/4567_right.jpg \n inflating: preprocessed_images/4568_left.jpg \n inflating: preprocessed_images/4568_right.jpg \n inflating: preprocessed_images/456_left.jpg \n inflating: preprocessed_images/456_right.jpg \n inflating: preprocessed_images/4570_left.jpg \n inflating: preprocessed_images/4570_right.jpg \n inflating: preprocessed_images/4571_left.jpg \n inflating: preprocessed_images/4571_right.jpg \n inflating: preprocessed_images/4572_left.jpg \n inflating: preprocessed_images/4572_right.jpg \n inflating: preprocessed_images/4573_left.jpg \n inflating: preprocessed_images/4573_right.jpg \n inflating: preprocessed_images/4574_left.jpg \n inflating: preprocessed_images/4574_right.jpg \n inflating: preprocessed_images/4575_left.jpg \n inflating: preprocessed_images/4575_right.jpg \n inflating: preprocessed_images/4576_left.jpg \n inflating: preprocessed_images/4576_right.jpg \n inflating: preprocessed_images/4577_left.jpg \n inflating: preprocessed_images/4577_right.jpg \n inflating: preprocessed_images/4578_left.jpg \n inflating: preprocessed_images/4578_right.jpg \n inflating: preprocessed_images/4579_left.jpg \n inflating: preprocessed_images/4579_right.jpg \n inflating: preprocessed_images/457_left.jpg \n inflating: preprocessed_images/457_right.jpg \n inflating: preprocessed_images/4580_right.jpg \n inflating: preprocessed_images/4581_left.jpg \n inflating: preprocessed_images/4581_right.jpg \n inflating: preprocessed_images/4582_left.jpg \n inflating: preprocessed_images/4582_right.jpg \n inflating: preprocessed_images/4583_left.jpg \n inflating: preprocessed_images/4583_right.jpg \n inflating: preprocessed_images/4584_left.jpg \n inflating: preprocessed_images/4584_right.jpg \n inflating: preprocessed_images/4585_left.jpg \n inflating: preprocessed_images/4585_right.jpg \n inflating: preprocessed_images/4586_left.jpg \n inflating: preprocessed_images/4586_right.jpg \n inflating: preprocessed_images/4587_left.jpg \n inflating: 
preprocessed_images/4587_right.jpg \n inflating: preprocessed_images/4588_left.jpg \n inflating: preprocessed_images/4588_right.jpg \n inflating: preprocessed_images/4589_left.jpg \n inflating: preprocessed_images/4589_right.jpg \n inflating: preprocessed_images/458_left.jpg \n inflating: preprocessed_images/458_right.jpg \n inflating: preprocessed_images/4590_left.jpg \n inflating: preprocessed_images/4590_right.jpg \n inflating: preprocessed_images/4591_left.jpg \n inflating: preprocessed_images/4591_right.jpg \n inflating: preprocessed_images/4592_left.jpg \n inflating: preprocessed_images/4592_right.jpg \n inflating: preprocessed_images/4593_left.jpg \n inflating: preprocessed_images/4593_right.jpg \n inflating: preprocessed_images/4594_left.jpg \n inflating: preprocessed_images/4594_right.jpg \n inflating: preprocessed_images/4595_left.jpg \n inflating: preprocessed_images/4595_right.jpg \n inflating: preprocessed_images/4596_left.jpg \n inflating: preprocessed_images/4596_right.jpg \n inflating: preprocessed_images/4597_left.jpg \n inflating: preprocessed_images/4597_right.jpg \n inflating: preprocessed_images/4599_left.jpg \n inflating: preprocessed_images/4599_right.jpg \n inflating: preprocessed_images/459_left.jpg \n inflating: preprocessed_images/45_left.jpg \n inflating: preprocessed_images/45_right.jpg \n inflating: preprocessed_images/4601_left.jpg \n inflating: preprocessed_images/4602_left.jpg \n inflating: preprocessed_images/4602_right.jpg \n inflating: preprocessed_images/4603_left.jpg \n inflating: preprocessed_images/4603_right.jpg \n inflating: preprocessed_images/4604_left.jpg \n inflating: preprocessed_images/4604_right.jpg \n inflating: preprocessed_images/4605_left.jpg \n inflating: preprocessed_images/4605_right.jpg \n inflating: preprocessed_images/4607_left.jpg \n inflating: preprocessed_images/4607_right.jpg \n inflating: preprocessed_images/4608_left.jpg \n inflating: preprocessed_images/4608_right.jpg \n inflating: preprocessed_images/4609_left.jpg \n inflating: preprocessed_images/4609_right.jpg \n inflating: preprocessed_images/460_left.jpg \n inflating: preprocessed_images/460_right.jpg \n inflating: preprocessed_images/4610_left.jpg \n inflating: preprocessed_images/4610_right.jpg \n inflating: preprocessed_images/4611_left.jpg \n inflating: preprocessed_images/4611_right.jpg \n inflating: preprocessed_images/4612_left.jpg \n inflating: preprocessed_images/4612_right.jpg \n inflating: preprocessed_images/4613_left.jpg \n inflating: preprocessed_images/4613_right.jpg \n inflating: preprocessed_images/4614_left.jpg \n inflating: preprocessed_images/4614_right.jpg \n inflating: preprocessed_images/4615_left.jpg \n inflating: preprocessed_images/4615_right.jpg \n inflating: preprocessed_images/4616_left.jpg \n inflating: preprocessed_images/4616_right.jpg \n inflating: preprocessed_images/4617_left.jpg \n inflating: preprocessed_images/4617_right.jpg \n inflating: preprocessed_images/4618_left.jpg \n inflating: preprocessed_images/4618_right.jpg \n inflating: preprocessed_images/4619_left.jpg \n inflating: preprocessed_images/4619_right.jpg \n inflating: preprocessed_images/461_left.jpg \n inflating: preprocessed_images/461_right.jpg \n inflating: preprocessed_images/4620_left.jpg \n inflating: preprocessed_images/4620_right.jpg \n inflating: preprocessed_images/4621_left.jpg \n inflating: preprocessed_images/4621_right.jpg \n inflating: preprocessed_images/4622_left.jpg \n inflating: preprocessed_images/4622_right.jpg \n inflating: 
preprocessed_images/4623_left.jpg \n inflating: preprocessed_images/4623_right.jpg \n inflating: preprocessed_images/4624_left.jpg \n inflating: preprocessed_images/4624_right.jpg \n inflating: preprocessed_images/4625_left.jpg \n inflating: preprocessed_images/4625_right.jpg \n inflating: preprocessed_images/4626_left.jpg \n inflating: preprocessed_images/4626_right.jpg \n inflating: preprocessed_images/4627_left.jpg \n inflating: preprocessed_images/4627_right.jpg \n inflating: preprocessed_images/4628_left.jpg \n inflating: preprocessed_images/4628_right.jpg \n inflating: preprocessed_images/4629_left.jpg \n inflating: preprocessed_images/4629_right.jpg \n inflating: preprocessed_images/462_left.jpg \n inflating: preprocessed_images/462_right.jpg \n inflating: preprocessed_images/4630_left.jpg \n inflating: preprocessed_images/4630_right.jpg \n inflating: preprocessed_images/4631_left.jpg \n inflating: preprocessed_images/4631_right.jpg \n inflating: preprocessed_images/4632_left.jpg \n inflating: preprocessed_images/4632_right.jpg \n inflating: preprocessed_images/4633_left.jpg \n inflating: preprocessed_images/4633_right.jpg \n inflating: preprocessed_images/4634_left.jpg \n inflating: preprocessed_images/4634_right.jpg \n inflating: preprocessed_images/4635_left.jpg \n inflating: preprocessed_images/4635_right.jpg \n inflating: preprocessed_images/4636_left.jpg \n inflating: preprocessed_images/4636_right.jpg \n inflating: preprocessed_images/4637_left.jpg \n inflating: preprocessed_images/4637_right.jpg \n inflating: preprocessed_images/4638_left.jpg \n inflating: preprocessed_images/4638_right.jpg \n inflating: preprocessed_images/4639_left.jpg \n inflating: preprocessed_images/4639_right.jpg \n inflating: preprocessed_images/463_left.jpg \n inflating: preprocessed_images/463_right.jpg \n inflating: preprocessed_images/4640_left.jpg \n inflating: preprocessed_images/4640_right.jpg \n inflating: preprocessed_images/4641_left.jpg \n inflating: preprocessed_images/4641_right.jpg \n inflating: preprocessed_images/4642_left.jpg \n inflating: preprocessed_images/4642_right.jpg \n inflating: preprocessed_images/4643_left.jpg \n inflating: preprocessed_images/4643_right.jpg \n inflating: preprocessed_images/4644_left.jpg \n inflating: preprocessed_images/4644_right.jpg \n inflating: preprocessed_images/4645_left.jpg \n inflating: preprocessed_images/4645_right.jpg \n inflating: preprocessed_images/4647_left.jpg \n inflating: preprocessed_images/4647_right.jpg \n inflating: preprocessed_images/464_left.jpg \n inflating: preprocessed_images/464_right.jpg \n inflating: preprocessed_images/4650_left.jpg \n inflating: preprocessed_images/4650_right.jpg \n inflating: preprocessed_images/4651_left.jpg \n inflating: preprocessed_images/4651_right.jpg \n inflating: preprocessed_images/4653_left.jpg \n inflating: preprocessed_images/4653_right.jpg \n inflating: preprocessed_images/4655_left.jpg \n inflating: preprocessed_images/4655_right.jpg \n inflating: preprocessed_images/4657_left.jpg \n inflating: preprocessed_images/4657_right.jpg \n inflating: preprocessed_images/4658_left.jpg \n inflating: preprocessed_images/4658_right.jpg \n inflating: preprocessed_images/4659_left.jpg \n inflating: preprocessed_images/465_left.jpg \n inflating: preprocessed_images/465_right.jpg \n inflating: preprocessed_images/4660_left.jpg \n inflating: preprocessed_images/4660_right.jpg \n inflating: preprocessed_images/4664_left.jpg \n inflating: preprocessed_images/4664_right.jpg \n inflating: 
preprocessed_images/4669_left.jpg \n inflating: preprocessed_images/4669_right.jpg \n inflating: preprocessed_images/466_left.jpg \n inflating: preprocessed_images/466_right.jpg \n inflating: preprocessed_images/4670_left.jpg \n inflating: preprocessed_images/4670_right.jpg \n inflating: preprocessed_images/4671_left.jpg \n inflating: preprocessed_images/4671_right.jpg \n inflating: preprocessed_images/4672_left.jpg \n inflating: preprocessed_images/4672_right.jpg \n inflating: preprocessed_images/4673_left.jpg \n inflating: preprocessed_images/4673_right.jpg \n inflating: preprocessed_images/4675_left.jpg \n inflating: preprocessed_images/4675_right.jpg \n inflating: preprocessed_images/4676_left.jpg \n inflating: preprocessed_images/4676_right.jpg \n inflating: preprocessed_images/4677_left.jpg \n inflating: preprocessed_images/4677_right.jpg \n inflating: preprocessed_images/4678_left.jpg \n inflating: preprocessed_images/4678_right.jpg \n inflating: preprocessed_images/4679_left.jpg \n inflating: preprocessed_images/4679_right.jpg \n inflating: preprocessed_images/467_left.jpg \n inflating: preprocessed_images/4682_left.jpg \n inflating: preprocessed_images/4682_right.jpg \n inflating: preprocessed_images/4683_left.jpg \n inflating: preprocessed_images/4683_right.jpg \n inflating: preprocessed_images/4686_left.jpg \n inflating: preprocessed_images/4686_right.jpg \n inflating: preprocessed_images/4688_left.jpg \n inflating: preprocessed_images/4688_right.jpg \n inflating: preprocessed_images/4689_left.jpg \n inflating: preprocessed_images/4689_right.jpg \n inflating: preprocessed_images/468_left.jpg \n inflating: preprocessed_images/468_right.jpg \n inflating: preprocessed_images/4690_left.jpg \n inflating: preprocessed_images/4690_right.jpg \n inflating: preprocessed_images/469_left.jpg \n inflating: preprocessed_images/469_right.jpg \n inflating: preprocessed_images/46_left.jpg \n inflating: preprocessed_images/46_right.jpg \n inflating: preprocessed_images/470_right.jpg \n inflating: preprocessed_images/471_left.jpg \n inflating: preprocessed_images/471_right.jpg \n inflating: preprocessed_images/473_left.jpg \n inflating: preprocessed_images/473_right.jpg \n inflating: preprocessed_images/475_left.jpg \n inflating: preprocessed_images/475_right.jpg \n inflating: preprocessed_images/476_left.jpg \n inflating: preprocessed_images/476_right.jpg \n inflating: preprocessed_images/477_left.jpg \n inflating: preprocessed_images/477_right.jpg \n inflating: preprocessed_images/4784_left.jpg \n inflating: preprocessed_images/4784_right.jpg \n inflating: preprocessed_images/478_right.jpg \n inflating: preprocessed_images/479_left.jpg \n inflating: preprocessed_images/479_right.jpg \n inflating: preprocessed_images/47_left.jpg \n inflating: preprocessed_images/47_right.jpg \n inflating: preprocessed_images/480_left.jpg \n inflating: preprocessed_images/480_right.jpg \n inflating: preprocessed_images/481_left.jpg \n inflating: preprocessed_images/481_right.jpg \n inflating: preprocessed_images/482_left.jpg \n inflating: preprocessed_images/482_right.jpg \n inflating: preprocessed_images/483_left.jpg \n inflating: preprocessed_images/483_right.jpg \n inflating: preprocessed_images/484_left.jpg \n inflating: preprocessed_images/484_right.jpg \n inflating: preprocessed_images/485_right.jpg \n inflating: preprocessed_images/486_left.jpg \n inflating: preprocessed_images/486_right.jpg \n inflating: preprocessed_images/487_left.jpg \n inflating: preprocessed_images/487_right.jpg \n inflating: 
preprocessed_images/488_left.jpg \n inflating: preprocessed_images/488_right.jpg \n inflating: preprocessed_images/489_left.jpg \n inflating: preprocessed_images/489_right.jpg \n inflating: preprocessed_images/48_left.jpg \n inflating: preprocessed_images/48_right.jpg \n inflating: preprocessed_images/491_left.jpg \n inflating: preprocessed_images/491_right.jpg \n inflating: preprocessed_images/492_left.jpg \n inflating: preprocessed_images/492_right.jpg \n inflating: preprocessed_images/493_left.jpg \n inflating: preprocessed_images/494_right.jpg \n inflating: preprocessed_images/495_left.jpg \n inflating: preprocessed_images/495_right.jpg \n inflating: preprocessed_images/496_left.jpg \n inflating: preprocessed_images/496_right.jpg \n inflating: preprocessed_images/497_right.jpg \n inflating: preprocessed_images/498_left.jpg \n inflating: preprocessed_images/498_right.jpg \n inflating: preprocessed_images/499_left.jpg \n inflating: preprocessed_images/499_right.jpg \n inflating: preprocessed_images/49_left.jpg \n inflating: preprocessed_images/49_right.jpg \n inflating: preprocessed_images/4_left.jpg \n inflating: preprocessed_images/4_right.jpg \n inflating: preprocessed_images/500_left.jpg \n inflating: preprocessed_images/500_right.jpg \n inflating: preprocessed_images/501_left.jpg \n inflating: preprocessed_images/501_right.jpg \n inflating: preprocessed_images/502_left.jpg \n inflating: preprocessed_images/502_right.jpg \n inflating: preprocessed_images/503_left.jpg \n inflating: preprocessed_images/503_right.jpg \n inflating: preprocessed_images/504_left.jpg \n inflating: preprocessed_images/504_right.jpg \n inflating: preprocessed_images/506_right.jpg \n inflating: preprocessed_images/507_left.jpg \n inflating: preprocessed_images/507_right.jpg \n inflating: preprocessed_images/508_left.jpg \n inflating: preprocessed_images/508_right.jpg \n inflating: preprocessed_images/509_left.jpg \n inflating: preprocessed_images/509_right.jpg \n inflating: preprocessed_images/50_right.jpg \n inflating: preprocessed_images/510_left.jpg \n inflating: preprocessed_images/510_right.jpg \n inflating: preprocessed_images/511_left.jpg \n inflating: preprocessed_images/511_right.jpg \n inflating: preprocessed_images/512_left.jpg \n inflating: preprocessed_images/512_right.jpg \n inflating: preprocessed_images/513_left.jpg \n inflating: preprocessed_images/513_right.jpg \n inflating: preprocessed_images/514_left.jpg \n inflating: preprocessed_images/514_right.jpg \n inflating: preprocessed_images/515_left.jpg \n inflating: preprocessed_images/515_right.jpg \n inflating: preprocessed_images/516_right.jpg \n inflating: preprocessed_images/517_left.jpg \n inflating: preprocessed_images/517_right.jpg \n inflating: preprocessed_images/518_right.jpg \n inflating: preprocessed_images/519_left.jpg \n inflating: preprocessed_images/519_right.jpg \n inflating: preprocessed_images/51_left.jpg \n inflating: preprocessed_images/51_right.jpg \n inflating: preprocessed_images/520_left.jpg \n inflating: preprocessed_images/520_right.jpg \n inflating: preprocessed_images/521_left.jpg \n inflating: preprocessed_images/521_right.jpg \n inflating: preprocessed_images/522_left.jpg \n inflating: preprocessed_images/522_right.jpg \n inflating: preprocessed_images/523_left.jpg \n inflating: preprocessed_images/523_right.jpg \n inflating: preprocessed_images/524_left.jpg \n inflating: preprocessed_images/524_right.jpg \n inflating: preprocessed_images/525_left.jpg \n inflating: preprocessed_images/525_right.jpg \n inflating: 
preprocessed_images/526_left.jpg \n inflating: preprocessed_images/526_right.jpg \n inflating: preprocessed_images/527_left.jpg \n inflating: preprocessed_images/527_right.jpg \n inflating: preprocessed_images/528_right.jpg \n inflating: preprocessed_images/529_left.jpg \n inflating: preprocessed_images/52_left.jpg \n inflating: preprocessed_images/52_right.jpg \n inflating: preprocessed_images/530_left.jpg \n inflating: preprocessed_images/530_right.jpg \n inflating: preprocessed_images/531_left.jpg \n inflating: preprocessed_images/531_right.jpg \n inflating: preprocessed_images/532_left.jpg \n inflating: preprocessed_images/532_right.jpg \n inflating: preprocessed_images/533_left.jpg \n inflating: preprocessed_images/533_right.jpg \n inflating: preprocessed_images/534_left.jpg \n inflating: preprocessed_images/534_right.jpg \n inflating: preprocessed_images/535_left.jpg \n inflating: preprocessed_images/535_right.jpg \n inflating: preprocessed_images/536_left.jpg \n inflating: preprocessed_images/537_left.jpg \n inflating: preprocessed_images/537_right.jpg \n inflating: preprocessed_images/538_left.jpg \n inflating: preprocessed_images/538_right.jpg \n inflating: preprocessed_images/539_left.jpg \n inflating: preprocessed_images/53_left.jpg \n inflating: preprocessed_images/53_right.jpg \n inflating: preprocessed_images/540_left.jpg \n inflating: preprocessed_images/540_right.jpg \n inflating: preprocessed_images/542_left.jpg \n inflating: preprocessed_images/542_right.jpg \n inflating: preprocessed_images/543_left.jpg \n inflating: preprocessed_images/543_right.jpg \n inflating: preprocessed_images/544_left.jpg \n inflating: preprocessed_images/544_right.jpg \n inflating: preprocessed_images/545_left.jpg \n inflating: preprocessed_images/545_right.jpg \n inflating: preprocessed_images/546_left.jpg \n inflating: preprocessed_images/546_right.jpg \n inflating: preprocessed_images/547_left.jpg \n inflating: preprocessed_images/547_right.jpg \n inflating: preprocessed_images/548_right.jpg \n inflating: preprocessed_images/549_left.jpg \n inflating: preprocessed_images/549_right.jpg \n inflating: preprocessed_images/54_left.jpg \n inflating: preprocessed_images/54_right.jpg \n inflating: preprocessed_images/550_left.jpg \n inflating: preprocessed_images/550_right.jpg \n inflating: preprocessed_images/551_left.jpg \n inflating: preprocessed_images/551_right.jpg \n inflating: preprocessed_images/552_left.jpg \n inflating: preprocessed_images/552_right.jpg \n inflating: preprocessed_images/553_left.jpg \n inflating: preprocessed_images/553_right.jpg \n inflating: preprocessed_images/554_left.jpg \n inflating: preprocessed_images/554_right.jpg \n inflating: preprocessed_images/555_left.jpg \n inflating: preprocessed_images/555_right.jpg \n inflating: preprocessed_images/556_left.jpg \n inflating: preprocessed_images/556_right.jpg \n inflating: preprocessed_images/557_left.jpg \n inflating: preprocessed_images/557_right.jpg \n inflating: preprocessed_images/558_left.jpg \n inflating: preprocessed_images/558_right.jpg \n inflating: preprocessed_images/559_left.jpg \n inflating: preprocessed_images/559_right.jpg \n inflating: preprocessed_images/55_left.jpg \n inflating: preprocessed_images/55_right.jpg \n inflating: preprocessed_images/560_left.jpg \n inflating: preprocessed_images/560_right.jpg \n inflating: preprocessed_images/561_left.jpg \n inflating: preprocessed_images/561_right.jpg \n inflating: preprocessed_images/562_left.jpg \n inflating: preprocessed_images/562_right.jpg \n inflating: 
preprocessed_images/563_left.jpg \n inflating: preprocessed_images/563_right.jpg \n inflating: preprocessed_images/564_left.jpg \n inflating: preprocessed_images/564_right.jpg \n inflating: preprocessed_images/565_left.jpg \n inflating: preprocessed_images/565_right.jpg \n inflating: preprocessed_images/566_left.jpg \n inflating: preprocessed_images/566_right.jpg \n inflating: preprocessed_images/567_left.jpg \n inflating: preprocessed_images/567_right.jpg \n inflating: preprocessed_images/569_left.jpg \n inflating: preprocessed_images/569_right.jpg \n inflating: preprocessed_images/56_left.jpg \n inflating: preprocessed_images/56_right.jpg \n inflating: preprocessed_images/570_left.jpg \n inflating: preprocessed_images/570_right.jpg \n inflating: preprocessed_images/571_left.jpg \n inflating: preprocessed_images/571_right.jpg \n inflating: preprocessed_images/572_left.jpg \n inflating: preprocessed_images/572_right.jpg \n inflating: preprocessed_images/573_left.jpg \n inflating: preprocessed_images/573_right.jpg \n inflating: preprocessed_images/574_left.jpg \n inflating: preprocessed_images/574_right.jpg \n inflating: preprocessed_images/575_left.jpg \n inflating: preprocessed_images/575_right.jpg \n inflating: preprocessed_images/576_left.jpg \n inflating: preprocessed_images/576_right.jpg \n inflating: preprocessed_images/578_left.jpg \n inflating: preprocessed_images/578_right.jpg \n inflating: preprocessed_images/579_left.jpg \n inflating: preprocessed_images/579_right.jpg \n inflating: preprocessed_images/580_left.jpg \n inflating: preprocessed_images/580_right.jpg \n inflating: preprocessed_images/582_left.jpg \n inflating: preprocessed_images/582_right.jpg \n inflating: preprocessed_images/583_left.jpg \n inflating: preprocessed_images/583_right.jpg \n inflating: preprocessed_images/584_left.jpg \n inflating: preprocessed_images/584_right.jpg \n inflating: preprocessed_images/585_left.jpg \n inflating: preprocessed_images/585_right.jpg \n inflating: preprocessed_images/587_left.jpg \n inflating: preprocessed_images/587_right.jpg \n inflating: preprocessed_images/588_left.jpg \n inflating: preprocessed_images/589_left.jpg \n inflating: preprocessed_images/589_right.jpg \n inflating: preprocessed_images/58_left.jpg \n inflating: preprocessed_images/58_right.jpg \n inflating: preprocessed_images/590_left.jpg \n inflating: preprocessed_images/590_right.jpg \n inflating: preprocessed_images/591_left.jpg \n inflating: preprocessed_images/591_right.jpg \n inflating: preprocessed_images/592_right.jpg \n inflating: preprocessed_images/593_right.jpg \n inflating: preprocessed_images/594_left.jpg \n inflating: preprocessed_images/594_right.jpg \n inflating: preprocessed_images/595_left.jpg \n inflating: preprocessed_images/596_left.jpg \n inflating: preprocessed_images/596_right.jpg \n inflating: preprocessed_images/597_left.jpg \n inflating: preprocessed_images/597_right.jpg \n inflating: preprocessed_images/598_left.jpg \n inflating: preprocessed_images/598_right.jpg \n inflating: preprocessed_images/599_left.jpg \n inflating: preprocessed_images/599_right.jpg \n inflating: preprocessed_images/59_left.jpg \n inflating: preprocessed_images/5_left.jpg \n inflating: preprocessed_images/5_right.jpg \n inflating: preprocessed_images/600_left.jpg \n inflating: preprocessed_images/601_left.jpg \n inflating: preprocessed_images/601_right.jpg \n inflating: preprocessed_images/602_left.jpg \n inflating: preprocessed_images/602_right.jpg \n inflating: preprocessed_images/603_left.jpg \n inflating: 
preprocessed_images/603_right.jpg \n inflating: ... [unzip log truncated: the remaining preprocessed_images/*.jpg files, 603_right.jpg through 9_right.jpg, were extracted the same way] \n"
]
],
[
[
"## Classfication",
"_____no_output_____"
],
[
"Import Statements",
"_____no_output_____"
]
],
[
[
"import numpy as np\r\nimport pandas as pd\r\nimport cv2\r\nimport random\r\nfrom tqdm import tqdm\r\nfrom sklearn.metrics import roc_curve, auc\r\nimport matplotlib.pyplot as plt\r\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nfrom itertools import cycle\r\nfrom sklearn import svm, datasets\r\nfrom sklearn.metrics import roc_curve, auc\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.preprocessing import label_binarize\r\nfrom sklearn.multiclass import OneVsRestClassifier\r\nfrom scipy import interp\r\nfrom sklearn.metrics import roc_auc_score\r\nimport os",
"_____no_output_____"
],
[
"df = pd.read_csv(\"/content/gdrive/My Drive/Kaggle/full_df.csv\")\r\ndf.head()",
"_____no_output_____"
],
[
"def has_myopia(text):\r\n if \"pathological myopia\" or \"myopia\" in text:\r\n return 1\r\n else:\r\n return 0\r\n\r\ndf[\"left_myopia\"] = df[\"Left-Diagnostic Keywords\"].apply(lambda x: has_myopia(x))\r\ndf[\"right_myopia\"] = df[\"Right-Diagnostic Keywords\"].apply(lambda x: has_myopia(x))\r\n\r\nleft_myopia = df.loc[(df.M == 1) & (df.left_myopia == 1)][\"Left-Fundus\"].values\r\nprint(left_myopia[:10])\r\n\r\nright_myopia = df.loc[(df.M == 1) & (df.right_myopia == 1)][\"Right-Fundus\"].values\r\nprint(right_myopia[:10])",
"['13_left.jpg' '16_left.jpg' '18_left.jpg' '35_left.jpg' '46_left.jpg'\n '54_left.jpg' '86_left.jpg' '106_left.jpg' '144_left.jpg' '145_left.jpg']\n['13_right.jpg' '16_right.jpg' '18_right.jpg' '35_right.jpg'\n '46_right.jpg' '54_right.jpg' '86_right.jpg' '106_right.jpg'\n '144_right.jpg' '145_right.jpg']\n"
],
[
"print(\"Left Eye Images having myopia: {}\".format(len(left_myopia)))\r\nprint(\"Right Eye Images having myopia: {}\".format(len(right_myopia)))",
"Left Eye Images having myopia: 306\nRight Eye Images having myopia: 306\n"
],
[
"left_normal = df.loc[(df.C ==0) & (df[\"Left-Diagnostic Keywords\"] == \"normal fundus\")][\"Left-Fundus\"].sample(300,random_state=42).values\r\nright_normal = df.loc[(df.C ==0) & (df[\"Right-Diagnostic Keywords\"] == \"normal fundus\")][\"Right-Fundus\"].sample(300,random_state=42).values\r\n\r\nprint(left_normal[:10])\r\nprint(right_normal[:10])",
"['3332_left.jpg' '4059_left.jpg' '69_left.jpg' '2415_left.jpg'\n '4176_left.jpg' '2711_left.jpg' '4614_left.jpg' '3174_left.jpg'\n '2862_left.jpg' '2424_left.jpg']\n['2964_right.jpg' '680_right.jpg' '500_right.jpg' '2368_right.jpg'\n '2820_right.jpg' '2769_right.jpg' '2696_right.jpg' '2890_right.jpg'\n '940_right.jpg' '2553_right.jpg']\n"
]
],
[
[
"Left and Right Images Together",
"_____no_output_____"
]
],
[
[
"myopia = np.concatenate((left_myopia,right_myopia),axis=0)\r\nnormal = np.concatenate((left_normal,right_normal),axis=0)",
"_____no_output_____"
],
[
"print(\"myopia: {}\".format(len(myopia)))\r\nprint(\"Normal: {}\".format(len(normal)))",
"myopia: 612\nNormal: 600\n"
],
[
"dataset_dir = \"/content/gdrive/MyDrive/Kaggle/preprocessed_images/\"\r\n\r\nimage_size = 224\r\n\r\nlabels = []\r\ndataset = []\r\n\r\ndef create_dataset(image_category,label):\r\n for img in tqdm(image_category):\r\n image_path = os.path.join(dataset_dir,img)\r\n try:\r\n image = cv2.imread(image_path,cv2.IMREAD_COLOR)\r\n image = cv2.resize(image,(image_size,image_size))\r\n except:\r\n continue\r\n \r\n dataset.append([np.array(image),np.array(label)])\r\n \r\n random.shuffle(dataset)\r\n return dataset",
"_____no_output_____"
],
[
"dataset = create_dataset(myopia,1)",
"100%|██████████| 612/612 [01:38<00:00, 6.19it/s]\n"
],
[
"len(dataset)",
"_____no_output_____"
],
[
"dataset = create_dataset(normal,0)",
"100%|██████████| 600/600 [03:02<00:00, 3.28it/s]\n"
],
[
"len(dataset)",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,7))\r\n\r\nfor i in range(10):\r\n sample = random.choice(range(len(dataset)))\r\n image = dataset[sample][0]\r\n category = dataset[sample][1]\r\n\r\n if category == 0:\r\n label = \"Normal\"\r\n else:\r\n label = \"Myopia\"\r\n\r\n plt.subplot(2,5,i+1)\r\n plt.imshow(image)\r\n plt.xlabel(label)\r\n\r\nplt.tight_layout()",
"_____no_output_____"
],
[
"x = np.array([i[0] for i in dataset]).reshape(-1,image_size,image_size,3)\r\ny = np.array([i[1] for i in dataset])",
"_____no_output_____"
],
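[
"# Hedged sketch (editor addition, not part of the original run): VGG16 was trained on\r\n# preprocess_input-transformed images, while this notebook feeds the frozen backbone raw\r\n# 0-255 pixels. One way to match the backbone's expected input distribution:\r\nfrom keras.applications.vgg16 import preprocess_input as vgg_preprocess\r\nx_vgg = vgg_preprocess(x.astype(\"float32\"))  # illustration only; the cells below keep the raw x",
"_____no_output_____"
],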
[
"x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.2)",
"_____no_output_____"
]
],
[
[
"**Keras Pretrained Models**",
"_____no_output_____"
]
],
[
[
"!kaggle datasets download -d gaborfodor/keras-pretrained-models",
"Downloading keras-pretrained-models.zip to /content/gdrive/My Drive/Kaggle\n100% 940M/943M [00:10<00:00, 51.8MB/s]\n100% 943M/943M [00:10<00:00, 96.3MB/s]\n"
],
[
"!unzip \\*.zip && rm *.zip",
"Archive: keras-pretrained-models.zip\nreplace Kuszma.JPG? [y]es, [n]o, [A]ll, [N]one, [r]ename: A\n inflating: Kuszma.JPG \n inflating: imagenet_class_index.json \n inflating: inception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5 \n inflating: inception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5 \n inflating: inception_v3_weights_tf_dim_ordering_tf_kernels.h5 \n inflating: inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \n inflating: resnet50_weights_tf_dim_ordering_tf_kernels.h5 \n inflating: resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5 \n inflating: vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5 \n inflating: xception_weights_tf_dim_ordering_tf_kernels.h5 \n inflating: xception_weights_tf_dim_ordering_tf_kernels_notop.h5 \n"
],
[
"!ls",
"full_df.csv\nimagenet_class_index.json\ninception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5\ninception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5\ninception_v3_weights_tf_dim_ordering_tf_kernels.h5\ninception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5\nkaggle.json\nKuszma.JPG\nODIR-5K\npreprocessed_images\nresnet50_weights_tf_dim_ordering_tf_kernels.h5\nresnet50_weights_tf_dim_ordering_tf_kernels_notop.h5\nvgg16_weights_tf_dim_ordering_tf_kernels_notop.h5\nxception_weights_tf_dim_ordering_tf_kernels.h5\nxception_weights_tf_dim_ordering_tf_kernels_notop.h5\n"
],
[
"pwd",
"_____no_output_____"
],
[
"from keras.applications.vgg16 import VGG16, preprocess_input\r\n\r\nvgg16_weight_path = '/content/gdrive/MyDrive/Kaggle/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'\r\n\r\nvgg = VGG16(\r\n weights = vgg16_weight_path,\r\n include_top = False, \r\n input_shape = (224,224,3)\r\n)",
"_____no_output_____"
],
[
"for layer in vgg.layers:\r\n layer.trainable = False",
"_____no_output_____"
]
],
[
[
"**Model**",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras import Sequential\r\nfrom keras import layers\r\nfrom tensorflow.keras.layers import Flatten ,Dense\r\n\r\nmodel = Sequential()\r\n\r\nmodel.add(vgg)\r\nmodel.add(Dense(256, activation='relu'))\r\nmodel.add(layers.Dropout(rate=0.5))\r\nmodel.add(Dense(128, activation='sigmoid'))\r\nmodel.add(layers.Dropout(rate=0.2))\r\nmodel.add(Dense(128, activation='relu'))\r\nmodel.add(layers.Dropout(0.1))\r\nmodel.add(Flatten())\r\nmodel.add(Dense(1,activation=\"sigmoid\"))",
"_____no_output_____"
]
],
[
[
"Model's Summary",
"_____no_output_____"
]
],
[
[
"model.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nvgg16 (Functional) (None, 7, 7, 512) 14714688 \n_________________________________________________________________\ndense (Dense) (None, 7, 7, 256) 131328 \n_________________________________________________________________\ndropout (Dropout) (None, 7, 7, 256) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 7, 7, 128) 32896 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 7, 7, 128) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 7, 7, 128) 16512 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 7, 7, 128) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 6272) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 1) 6273 \n=================================================================\nTotal params: 14,901,697\nTrainable params: 187,009\nNon-trainable params: 14,714,688\n_________________________________________________________________\n"
],
[
"model.compile(optimizer=\"adam\", loss=\"binary_crossentropy\", metrics=[\"accuracy\"])",
"_____no_output_____"
],
[
"history = model.fit(x_train, y_train,\r\n batch_size = 32,\r\n epochs = 30,\r\n validation_data = (x_test, y_test)\r\n )",
"Epoch 1/30\n30/30 [==============================] - 14s 166ms/step - loss: 0.7364 - accuracy: 0.5664 - val_loss: 0.3256 - val_accuracy: 0.8814\nEpoch 2/30\n30/30 [==============================] - 4s 135ms/step - loss: 0.3134 - accuracy: 0.8985 - val_loss: 0.3197 - val_accuracy: 0.8729\nEpoch 3/30\n30/30 [==============================] - 4s 136ms/step - loss: 0.1924 - accuracy: 0.9313 - val_loss: 0.2582 - val_accuracy: 0.9195\nEpoch 4/30\n30/30 [==============================] - 4s 137ms/step - loss: 0.1816 - accuracy: 0.9397 - val_loss: 0.2330 - val_accuracy: 0.9322\nEpoch 5/30\n30/30 [==============================] - 4s 138ms/step - loss: 0.1563 - accuracy: 0.9380 - val_loss: 0.2102 - val_accuracy: 0.9237\nEpoch 6/30\n30/30 [==============================] - 4s 138ms/step - loss: 0.1177 - accuracy: 0.9535 - val_loss: 0.2422 - val_accuracy: 0.9280\nEpoch 7/30\n30/30 [==============================] - 4s 139ms/step - loss: 0.1382 - accuracy: 0.9453 - val_loss: 0.1700 - val_accuracy: 0.9492\nEpoch 8/30\n30/30 [==============================] - 4s 138ms/step - loss: 0.0817 - accuracy: 0.9723 - val_loss: 0.1532 - val_accuracy: 0.9492\nEpoch 9/30\n30/30 [==============================] - 4s 139ms/step - loss: 0.0767 - accuracy: 0.9730 - val_loss: 0.1590 - val_accuracy: 0.9492\nEpoch 10/30\n30/30 [==============================] - 4s 139ms/step - loss: 0.0566 - accuracy: 0.9891 - val_loss: 0.1529 - val_accuracy: 0.9576\nEpoch 11/30\n30/30 [==============================] - 4s 140ms/step - loss: 0.0350 - accuracy: 0.9917 - val_loss: 0.2020 - val_accuracy: 0.9449\nEpoch 12/30\n30/30 [==============================] - 4s 141ms/step - loss: 0.0445 - accuracy: 0.9859 - val_loss: 0.1552 - val_accuracy: 0.9619\nEpoch 13/30\n30/30 [==============================] - 4s 142ms/step - loss: 0.0450 - accuracy: 0.9836 - val_loss: 0.2791 - val_accuracy: 0.9322\nEpoch 14/30\n30/30 [==============================] - 4s 143ms/step - loss: 0.1034 - accuracy: 0.9566 - val_loss: 0.1231 - val_accuracy: 0.9576\nEpoch 15/30\n30/30 [==============================] - 4s 144ms/step - loss: 0.0228 - accuracy: 0.9962 - val_loss: 0.1275 - val_accuracy: 0.9534\nEpoch 16/30\n30/30 [==============================] - 4s 144ms/step - loss: 0.0177 - accuracy: 0.9932 - val_loss: 0.1823 - val_accuracy: 0.9492\nEpoch 17/30\n30/30 [==============================] - 4s 145ms/step - loss: 0.0243 - accuracy: 0.9926 - val_loss: 0.1027 - val_accuracy: 0.9788\nEpoch 18/30\n30/30 [==============================] - 4s 146ms/step - loss: 0.0317 - accuracy: 0.9888 - val_loss: 0.2455 - val_accuracy: 0.9322\nEpoch 19/30\n30/30 [==============================] - 4s 147ms/step - loss: 0.0402 - accuracy: 0.9893 - val_loss: 0.1035 - val_accuracy: 0.9746\nEpoch 20/30\n30/30 [==============================] - 4s 147ms/step - loss: 0.0160 - accuracy: 0.9947 - val_loss: 0.0870 - val_accuracy: 0.9831\nEpoch 21/30\n30/30 [==============================] - 4s 146ms/step - loss: 0.0160 - accuracy: 0.9925 - val_loss: 0.1053 - val_accuracy: 0.9746\nEpoch 22/30\n30/30 [==============================] - 4s 146ms/step - loss: 0.0170 - accuracy: 0.9945 - val_loss: 0.1289 - val_accuracy: 0.9449\nEpoch 23/30\n30/30 [==============================] - 4s 147ms/step - loss: 0.0250 - accuracy: 0.9909 - val_loss: 0.1463 - val_accuracy: 0.9492\nEpoch 24/30\n30/30 [==============================] - 4s 148ms/step - loss: 0.0211 - accuracy: 0.9927 - val_loss: 0.1259 - val_accuracy: 0.9492\nEpoch 25/30\n30/30 [==============================] - 4s 148ms/step - loss: 
0.0319 - accuracy: 0.9856 - val_loss: 0.1308 - val_accuracy: 0.9492\nEpoch 26/30\n30/30 [==============================] - 4s 149ms/step - loss: 0.0256 - accuracy: 0.9900 - val_loss: 0.1539 - val_accuracy: 0.9492\nEpoch 27/30\n30/30 [==============================] - 5s 155ms/step - loss: 0.0222 - accuracy: 0.9894 - val_loss: 0.1294 - val_accuracy: 0.9703\nEpoch 28/30\n30/30 [==============================] - 4s 150ms/step - loss: 0.0220 - accuracy: 0.9923 - val_loss: 0.1254 - val_accuracy: 0.9619\nEpoch 29/30\n30/30 [==============================] - 4s 151ms/step - loss: 0.0234 - accuracy: 0.9936 - val_loss: 0.1234 - val_accuracy: 0.9534\nEpoch 30/30\n30/30 [==============================] - 5s 151ms/step - loss: 0.0317 - accuracy: 0.9874 - val_loss: 0.1012 - val_accuracy: 0.9746\n"
],
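[
"# Hedged sketch (editor addition): visualize the training curves recorded in `history` above;\r\n# matplotlib is already imported as plt.\r\nplt.figure(figsize=(10,4))\r\nplt.subplot(1,2,1)\r\nplt.plot(history.history['accuracy'], label='train')\r\nplt.plot(history.history['val_accuracy'], label='validation')\r\nplt.xlabel('epoch'); plt.ylabel('accuracy'); plt.legend()\r\nplt.subplot(1,2,2)\r\nplt.plot(history.history['loss'], label='train')\r\nplt.plot(history.history['val_loss'], label='validation')\r\nplt.xlabel('epoch'); plt.ylabel('loss'); plt.legend()\r\nplt.tight_layout()",
"_____no_output_____"
],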
[
"%cd /content/gdrive/MyDrive/Kaggle",
"/content/gdrive/MyDrive/Kaggle\n"
],
[
"model.save('fundus_model_MYO.h5')\r\nprint('saved')",
"saved\n"
],
[
"!ls",
"full_df.csv\nfundus_model_AMD.h5\nfundus_model_CAT.h5\nfundus_model_MYO.h5\nimagenet_class_index.json\ninception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5\ninception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5\ninception_v3_weights_tf_dim_ordering_tf_kernels.h5\ninception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5\nkaggle.json\nODIR-5K\npreprocessed_images\nresnet50_weights_tf_dim_ordering_tf_kernels.h5\nresnet50_weights_tf_dim_ordering_tf_kernels_notop.h5\nvgg16_weights_tf_dim_ordering_tf_kernels_notop.h5\nvgg.png\nxception_weights_tf_dim_ordering_tf_kernels.h5\nxception_weights_tf_dim_ordering_tf_kernels_notop.h5\n"
],
[
"from sklearn.metrics import confusion_matrix,classification_report,accuracy_score\r\ny_pred = model.predict_classes(x_test)",
"/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/sequential.py:450: UserWarning: `model.predict_classes()` is deprecated and will be removed after 2021-01-01. Please use instead:* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).* `(model.predict(x) > 0.5).astype(\"int32\")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation).\n warnings.warn('`model.predict_classes()` is deprecated and '\n"
],
[
"accuracy_score(y_test,y_pred)",
"_____no_output_____"
],
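[
"# Hedged sketch (editor addition): roc_curve and auc are imported at the top of this section\r\n# but never used; the continuous sigmoid scores give a fuller picture than hard labels.\r\ny_score = model.predict(x_test).ravel()\r\nfpr, tpr, _ = roc_curve(y_test, y_score)\r\nprint(\"AUC: {:.3f}\".format(auc(fpr, tpr)))\r\nplt.plot(fpr, tpr)\r\nplt.xlabel(\"False positive rate\")\r\nplt.ylabel(\"True positive rate\")",
"_____no_output_____"
],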
[
"print(classification_report(y_test,y_pred))",
" precision recall f1-score support\n\n 0 0.97 0.97 0.97 118\n 1 0.97 0.97 0.97 118\n\n accuracy 0.97 236\n macro avg 0.97 0.97 0.97 236\nweighted avg 0.97 0.97 0.97 236\n\n"
]
],
[
[
"## Predictions",
"_____no_output_____"
]
],
[
[
"# from IPython.display import Image, display\r\n\r\n# images = [\"/content/gdrive/MyDrive/Kaggle/preprocessed_images/560_right.jpg\",\r\n# \"/content/gdrive/MyDrive/Kaggle/preprocessed_images/1550_right.jpg\",\r\n# \"/content/gdrive/MyDrive/Kaggle/preprocessed_images/2330_right.jpg\",\r\n# \"/content/gdrive/MyDrive/Kaggle/preprocessed_images/0_left.jpg\",\r\n# \"/content/gdrive/MyDrive/Kaggle/preprocessed_images/179_right.jpg\"]\r\n\r\n# for image in images:\r\n# display(Image(image, width = 120, height = 120))\r\n# print()",
"_____no_output_____"
]
],
[
[
"Loaded Model",
"_____no_output_____"
]
],
[
[
"pwd",
"_____no_output_____"
],
[
"from tensorflow import keras\r\n\r\nmodel = keras.models.load_model('/content/gdrive/MyDrive/Kaggle/fundus_model_MYO.h5')\r\nprint('loaded')",
"loaded\n"
],
[
"model.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nvgg16 (Functional) (None, 7, 7, 512) 14714688 \n_________________________________________________________________\ndense (Dense) (None, 7, 7, 256) 131328 \n_________________________________________________________________\ndropout (Dropout) (None, 7, 7, 256) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 7, 7, 128) 32896 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 7, 7, 128) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 7, 7, 128) 16512 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 7, 7, 128) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 6272) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 1) 6273 \n=================================================================\nTotal params: 14,901,697\nTrainable params: 187,009\nNon-trainable params: 14,714,688\n_________________________________________________________________\n"
],
[
"from keras.utils.vis_utils import plot_model\r\n\r\nplot_model(model, to_file='vgg.png')",
"_____no_output_____"
],
[
"from keras.preprocessing.image import load_img\n \nimage = load_img(\"/content/gdrive/MyDrive/Kaggle/preprocessed_images/179_right.jpg\", target_size=(224, 224))",
"_____no_output_____"
],
[
"from keras.preprocessing.image import img_to_array\r\n# convert the image pixels to a numpy array\r\nimage = img_to_array(image)",
"_____no_output_____"
],
[
"# reshape data for the model\r\nimage = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))",
"_____no_output_____"
],
[
"from keras.applications.vgg16 import preprocess_input\r\n# prepare the image for the VGG model\r\nimage = preprocess_input(image)",
"_____no_output_____"
]
],
[
[
"Normal Fundus",
"_____no_output_____"
]
],
[
[
"def disease(predic):\r\n if predic > 0.75:\r\n return 'Pathological Myopia'\r\n return 'Normal'\r\n\r\npred = model.predict(image)\r\nstatus = disease(pred[0])\r\n\r\nprint(\"Situation: {}\".format(status))\r\nprint(\"Percentage: {}\".format(round(int(pred[0]), 1)))",
"Situation: Normal\nPercentage: 0\n"
]
],
[
[
"Myopic Fundus",
"_____no_output_____"
]
],
[
[
"def ready_image(img_path):\r\n image = load_img(img_path, target_size=(224, 224))\r\n image = img_to_array(image)\r\n image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))\r\n image = preprocess_input(image)\r\n return image\r\n\r\nimage = ready_image(\"/content/gdrive/MyDrive/Kaggle/preprocessed_images/13_right.jpg\")",
"_____no_output_____"
],
[
"pred = model.predict(image)\r\nstatus = disease(pred[0])\r\n\r\nprint(\"Situation: {}\".format(status))\r\nprint(\"Percentage: {}\".format(round(int(pred[0]), 1)))",
"Situation: Pathological Myopia\nPercentage: 0\n"
],
[
"image = ready_image(\"/content/gdrive/MyDrive/Kaggle/preprocessed_images/233_right.jpg\")",
"_____no_output_____"
],
[
"pred = model.predict(image)\r\nstatus = disease(pred[0])\r\n\r\nprint(\"Situation: {}\".format(status))\r\nprint(\"Percentage: {}\".format(round(int(pred[0]), 1)))",
"Situation: Pathological Myopia\nPercentage: 0\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
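"code",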
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
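"code",
"code",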
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d04f2e6242386a83b1774340135539c9f1b82d2e | 17,885 | ipynb | Jupyter Notebook | eksctl/dask-gateway-test.ipynb | salvis2/terraform-aws | a2d6c9c02c10740ba166fdfe08bf26712a650f9a | [
"MIT"
] | null | null | null | eksctl/dask-gateway-test.ipynb | salvis2/terraform-aws | a2d6c9c02c10740ba166fdfe08bf26712a650f9a | [
"MIT"
] | null | null | null | eksctl/dask-gateway-test.ipynb | salvis2/terraform-aws | a2d6c9c02c10740ba166fdfe08bf26712a650f9a | [
"MIT"
] | 2 | 2020-01-06T22:07:28.000Z | 2020-01-30T22:54:51.000Z | 60.627119 | 1,268 | 0.657031 | [
[
[
"from dask_gateway import Gateway\nimport os",
"_____no_output_____"
],
[
"# External IPs\ngateway = Gateway(\n \"http://ad7f4b0a2492a11eabd750e8c5de8801-1750344606.us-west-2.elb.amazonaws.com\",\n proxy_address=\"tls://ad7f57e7d492a11eabd750e8c5de8801-778017149.us-west-2.elb.amazonaws.com:8786\",\n auth='jupyterhub'\n)",
"_____no_output_____"
],
[
"# Internal IPs\ngateway = Gateway(\n \"http://10.100.90.71:80\",\n proxy_address=\"tls://10.100.210.56:8786\",\n auth='jupyterhub'\n)",
"_____no_output_____"
],
[
"gateway.list_clusters()",
"_____no_output_____"
],
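[
"# Hedged sketch (editor addition): a dask-gateway server can expose tunable cluster options\r\n# (image, worker cores/memory, ...); the available fields depend entirely on the server's\r\n# configuration, so treat this as illustrative rather than a recorded step.\r\noptions = gateway.cluster_options()\r\noptions  # renders an editable widget in Jupyter",
"_____no_output_____"
],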
[
"os.environ['JUPYTER_IMAGE']",
"_____no_output_____"
]
],
[
[
"Started 15:05. Ended ",
"_____no_output_____"
]
],
[
[
"cluster = gateway.new_cluster(image=os.environ['JUPYTER_IMAGE'])",
"_____no_output_____"
]
],
[
[
"## Error Messages\n\n### Cluster-Autoscaler pod Logs\n\nPod jhub/dask-gateway-salvis2-scheduler-9bef8d32f29b4619b6122eab446837c6 is unschedulable\n\nPod dask-gateway-salvis2-scheduler-9bef8d32f29b4619b6122eab446837c6 can't be scheduled on eksctl-jupyterhub-salvis-nodegroup-user-spot-NodeGroup-1MDSBX01QDJ20, predicate failed: PodToleratesNodeTaints predicate mismatch, reason: node(s) had taints that the pod didn't tolerate\n\nPod dask-gateway-salvis2-scheduler-9bef8d32f29b4619b6122eab446837c6 can't be scheduled on eksctl-jupyterhub-salvis-nodegroup-worker-spot-NodeGroup-1IHH8XDNZ0NT8, predicate failed: PodToleratesNodeTaints predicate mismatch, reason: node(s) had taints that the pod didn't tolerate\n\nEvent(v1.ObjectReference{Kind:\"Pod\", Namespace:\"jhub\", Name:\"dask-gateway-salvis2-scheduler-9bef8d32f29b4619b6122eab446837c6\", UID:\"7bd0dfca-492b-11ea-bd75-0e8c5de88014\", APIVersion:\"v1\", ResourceVersion:\"3963\", FieldPath:\"\"}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 node(s) had taints that the pod didn't tolerate, 1 max limit reached",
"_____no_output_____"
],
[
"### Scheduler-proxy-dask-gateway pod Logs\n\nLots of \"Extracting SNI: Error reading TLS record header: EOF\"\n\n### gateway-dask-gateway pod Logs\n\n[I 2020-02-06 22:18:07.146 DaskGateway] Starting cluster 0fb1652be05749e1b290fcdea95f7bf9 for user salvis2...\n\n[I 2020-02-06 22:18:07.172 DaskGateway] Cluster 0fb1652be05749e1b290fcdea95f7bf9 has started, waiting for connection\n\n[I 2020-02-06 22:18:27.158 DaskGateway] 200 GET /api/clusters/0fb1652be05749e1b290fcdea95f7bf9?wait (192.168.139.35) 20004.70ms\n\n[I 2020-02-06 22:18:47.680 DaskGateway] 200 GET /api/clusters/0fb1652be05749e1b290fcdea95f7bf9?wait (192.168.139.35) 20014.58ms\n\n[W 2020-02-06 22:19:07.153 DaskGateway] Cluster 0fb1652be05749e1b290fcdea95f7bf9 startup timed out after 60.0 seconds",
"_____no_output_____"
],
[
"## To Try\n\nMore timeout. Not specified in cluster creation. 10 min limit still hit timeout. Probably not a timeout problem.\n\nPod tolerations bad? Do I need to re-enable the one tag I took out?\n\nPod tolerations: nothing specified in dask-gateway-config.yml. \n\n[Incorrect address?](https://github.com/dask/dask-gateway/issues/163) No\n\nInternal IPs? No.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown"
] | [
[
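"code",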
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d04f2ffd5223fd19b10a9811152e35096ddbc841 | 13,910 | ipynb | Jupyter Notebook | examples/simple_molecule_examples.ipynb | felixmusil/ml_tools | 8731bd5628edcf50d03ea7fc99c570f428a08f7b | [
"MIT"
] | 1 | 2020-03-10T09:13:45.000Z | 2020-03-10T09:13:45.000Z | examples/simple_molecule_examples.ipynb | felixmusil/ml_tools | 8731bd5628edcf50d03ea7fc99c570f428a08f7b | [
"MIT"
] | null | null | null | examples/simple_molecule_examples.ipynb | felixmusil/ml_tools | 8731bd5628edcf50d03ea7fc99c570f428a08f7b | [
"MIT"
] | null | null | null | 24.065744 | 109 | 0.561898 | [
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import sys,os\nsys.path.insert(0,'../')",
"_____no_output_____"
],
[
"from ml_tools.descriptors import RawSoapInternal\nfrom ml_tools.models.KRR import KRR,TrainerCholesky,KRRFastCV\nfrom ml_tools.kernels import KernelPower,KernelSum\nfrom ml_tools.utils import get_mae,get_rmse,get_sup,get_spearman,get_score,load_pck,tqdm_cs\nfrom ml_tools.split import KFold,LCSplit,ShuffleSplit\nfrom ml_tools.compressor import FPSFilter",
"_____no_output_____"
],
[
"import numpy as np\nfrom ase.io import read,write\nfrom ase.visualize import view",
"_____no_output_____"
]
],
[
[
"# Build a kernel Matrix",
"_____no_output_____"
]
],
[
[
"# load the structures\nframes = read('data/dft-smiles_500.xyz',':')\nglobal_species = []\nfor frame in frames:\n global_species.extend(frame.get_atomic_numbers())\nglobal_species = np.unique(global_species)\n\n# split the structures in 2 sets\nframes_train = frames[:300]\nframes_test = frames[300:]",
"_____no_output_____"
],
[
"# set up the soap parameters\nsoap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,\n global_species=global_species,nocenters=[])\n\nrepresentation = RawSoapInternal(**soap_params)\n\n# set up the kernel parameters\nkernel = KernelSum(KernelPower(zeta = 2),chunk_shape=[100,100])\n",
"_____no_output_____"
],
[
"# compute the soap vectors\nrawsoaps = representation.transform(frames_train)\nX_train = dict(feature_matrix=rawsoaps,strides=representation.strides)\n\n# compute the soap vectors\nrawsoaps = representation.transform(frames_test)\nX_test = dict(feature_matrix=rawsoaps,strides=representation.strides)",
"_____no_output_____"
],
[
"# compute the square kernel matrix\nKmat = kernel.transform(X_train)",
"_____no_output_____"
],
[
"# compute a rectangular kernel matrix\nKmat_rect = kernel.transform(X_test,X_train)",
"_____no_output_____"
]
],
[
[
"# FPS selection of the samples",
"_____no_output_____"
]
],
[
[
"# load the structures\nframes = read('data/dft-smiles_500.xyz',':300')\nglobal_species = []\nfor frame in frames:\n global_species.extend(frame.get_atomic_numbers())\nglobal_species = np.unique(global_species)",
"_____no_output_____"
],
[
"# set up the soap parameters\nsoap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,\n global_species=global_species,nocenters=[])\n\nrepresentation = RawSoapInternal(**soap_params)\n\n# set up the kernel parameters\nkernel = KernelSum(KernelPower(zeta = 2),chunk_shape=[100,100])\n",
"_____no_output_____"
],
[
"# compute the soap vectors\nrawsoaps = representation.transform(frames)\nX = dict(feature_matrix=rawsoaps,strides=representation.strides)",
"_____no_output_____"
],
[
"# run the fps selection on the set and plot the minmax distance\nNselect = 250\ncompressor = FPSFilter(Nselect,kernel,act_on='sample',precompute_kernel=True,disable_pbar=True)\ncompressor.fit(X,dry_run=True)\ncompressor.plot()",
"_____no_output_____"
],
[
"# select the appropriate number of samples to select\ncompressor.Nselect = 250\n# and compress\nX_compressed = compressor.transform(X)",
"_____no_output_____"
],
[
"compressor.selected_ids[:compressor.Nselect]",
"_____no_output_____"
],
[
"X['feature_matrix'].shape",
"_____no_output_____"
],
[
"X_compressed['feature_matrix'].shape",
"_____no_output_____"
],
[
"X_compressed['strides'].shape",
"_____no_output_____"
]
],
[
[
"# FPS selection of the features",
"_____no_output_____"
]
],
[
[
"# load the structures\nframes = read('data/dft-smiles_500.xyz',':300')\nglobal_species = []\nfor frame in frames:\n global_species.extend(frame.get_atomic_numbers())\nglobal_species = np.unique(global_species)",
"_____no_output_____"
],
[
"# set up the soap parameters\nsoap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,\n global_species=global_species,nocenters=[])\n\nrepresentation = RawSoapInternal(**soap_params)\n\n# set up the kernel parameters\nkernel = KernelPower(zeta = 2)\n",
"_____no_output_____"
],
[
"# compute the soap vectors\nX = representation.transform(frames)",
"_____no_output_____"
],
[
"# run the fps selection on the set and plot the minmax distance\nNselect = 250\ncompressor = FPSFilter(Nselect,kernel,act_on='feature',precompute_kernel=True,disable_pbar=True)\ncompressor.fit(X,dry_run=True)\ncompressor.plot()",
"_____no_output_____"
],
[
"# select the appropriate number of samples to select\ncompressor.Nselect = 500\n# and compress\nX_compressed = compressor.transform(X)",
"_____no_output_____"
],
[
"compressor.selected_ids[:compressor.Nselect]",
"_____no_output_____"
]
],
[
[
"# get a cross validation score",
"_____no_output_____"
]
],
[
[
"# load the structures\nframes = read('data/dft-smiles_500.xyz',':')\nglobal_species = []\ny = []\nfor frame in frames:\n global_species.extend(frame.get_atomic_numbers())\n y.append(frame.info['dft_formation_energy_per_atom_in_eV'])\ny = np.array(y)\nglobal_species = np.unique(global_species)",
"_____no_output_____"
],
[
"# set up the soap parameters\nsoap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,\n global_species=global_species,nocenters=[])\n\nrepresentation = RawSoapInternal(**soap_params)\n\n# set up the kernel parameters\nkernel = KernelSum(KernelPower(zeta = 2),chunk_shape=[100,100])\n\n# set the splitting rational\ncv = KFold(n_splits=6,random_state=10,shuffle=True)\n# set up the regression model\njitter = 1e-8\nkrr = KRRFastCV(jitter, 1.,cv)",
"_____no_output_____"
],
[
"# compute the soap vectors\nrawsoaps = representation.transform(frames)\nX = dict(feature_matrix=rawsoaps,strides=representation.strides)\nrawsoaps.shape",
"_____no_output_____"
],
[
"# compute the kernel matrix for the dataset\nKmat = kernel.transform(X)\n# fit the model\nkrr.fit(Kmat,y)\n# get the predictions for each folds\ny_pred = krr.predict()\n# compute the CV score for the dataset\nget_score(y_pred,y)",
"_____no_output_____"
]
],
[
[
"# LC",
"_____no_output_____"
]
],
[
[
"# load the structures\nframes = read('data/dft-smiles_500.xyz',':')\nglobal_species = []\ny = []\nfor frame in frames:\n global_species.extend(frame.get_atomic_numbers())\n y.append(frame.info['dft_formation_energy_per_atom_in_eV'])\ny = np.array(y)\nglobal_species = np.unique(global_species)",
"_____no_output_____"
],
[
"# set up the soap parameters\nsoap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,\n global_species=global_species,nocenters=[])\n\nrepresentation = RawSoapInternal(**soap_params)\n\n# set up the kernel parameters\nkernel = KernelSum(KernelPower(zeta = 2),chunk_shape=[100,100])\n\n# set the splitting rational\ntrainer = TrainerCholesky(memory_efficient=True)\n# set up the regression model\njitter = 1e-8\nkrr = KRR(jitter,1.,trainer)\ntrain_sizes=[20,50,100]\nlc = LCSplit(ShuffleSplit, n_repeats=[20,20,20],train_sizes=train_sizes,test_size=100, random_state=10)",
"_____no_output_____"
],
[
"rawsoaps = representation.transform(frames)\nX = dict(feature_matrix=rawsoaps,strides=representation.strides)\nK = kernel.transform(X)",
"_____no_output_____"
],
[
"scores = {size:[] for size in train_sizes}\nfor train,test in tqdm_cs(lc.split(y),total=lc.n_splits):\n Ntrain = len(train)\n k_train = K[np.ix_(train,train)]\n y_train = y[train]\n k_test = K[np.ix_(test,train)]\n krr.fit(k_train,y_train)\n y_pred = krr.predict(k_test)\n scores[Ntrain].append(get_score(y_pred,y[test]))",
"_____no_output_____"
],
[
"sc_name = 'RMSE'\nNtrains = []\navg_scores = []\nfor Ntrain, score in scores.items():\n avg = 0\n for sc in score:\n avg += sc[sc_name]\n avg /= len(score)\n avg_scores.append(avg)\n Ntrains.append(Ntrain)",
"_____no_output_____"
],
[
"plt.plot(Ntrains,avg_scores,'--o')\nplt.xlabel('Number of training samples')\nplt.ylabel('Test {}'.format(sc_name))\nplt.xscale('log')\nplt.yscale('log')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04f3d72836644e9d5a43cfcda31fe806204f027 | 9,889 | ipynb | Jupyter Notebook | examples/hello-world-single-task.ipynb | moshewe/argo-python-dsl | 7f92144a50fa59179e42b6130bf7914a7bcc501e | [
"Apache-2.0"
] | 98 | 2020-03-19T16:15:40.000Z | 2022-03-25T13:16:37.000Z | examples/hello-world-single-task.ipynb | moshewe/argo-python-dsl | 7f92144a50fa59179e42b6130bf7914a7bcc501e | [
"Apache-2.0"
] | 27 | 2020-04-25T12:17:10.000Z | 2021-05-12T21:37:17.000Z | examples/hello-world-single-task.ipynb | moshewe/argo-python-dsl | 7f92144a50fa59179e42b6130bf7914a7bcc501e | [
"Apache-2.0"
] | 22 | 2020-04-25T12:14:42.000Z | 2022-01-28T01:26:42.000Z | 25.819843 | 169 | 0.452017 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"></ul></div>",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from argo.workflows.dsl import Workflow\nfrom argo.workflows.dsl import task\nfrom argo.workflows.dsl import template\n\nfrom argo.workflows.dsl.templates import V1Container\nfrom argo.workflows.dsl.templates import V1alpha1Template",
"_____no_output_____"
],
[
"import yaml\n\nfrom pprint import pprint\n\nfrom argo.workflows.dsl._utils import sanitize_for_serialization",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
]
],
[
[
"!sh -c '[ -f \"hello-world-single-task.yaml\" ] || curl -LO https://raw.githubusercontent.com/CermakM/argo-python-dsl/master/examples/hello-world-single-task.yaml'",
"_____no_output_____"
],
[
"from pathlib import Path\n\nmanifest = Path(\"./hello-world-single-task.yaml\").read_text()\nprint(manifest)",
"# @file: hello-world-single-task.yaml\napiVersion: argoproj.io/v1alpha1\nkind: Workflow\nmetadata:\n name: hello-world\n generateName: hello-world-\nspec:\n entrypoint: main\n templates:\n - name: main\n dag:\n tasks:\n - name: A\n template: whalesay\n\n # @task: [A]\n - name: whalesay\n container:\n name: whalesay\n image: docker/whalesay:latest\n command: [cowsay]\n args: [\"hello world\"]\nstatus: {}\n\n"
],
[
"class HelloWorld(Workflow):\n \n @task\n def A(self) -> V1alpha1Template:\n return self.whalesay()\n \n @template\n def whalesay(self) -> V1Container:\n container = V1Container(\n image=\"docker/whalesay:latest\",\n name=\"whalesay\",\n command=[\"cowsay\"],\n args=[\"hello world\"]\n )\n \n return container\n\nwf = HelloWorld()\nwf",
"_____no_output_____"
],
[
"print(wf.to_yaml())",
"api_version: argoproj.io/v1alpha1\nkind: Workflow\nmetadata:\n generate_name: hello-world-\n name: hello-world\nspec:\n entrypoint: main\n templates:\n - dag:\n tasks:\n - name: A\n template: whalesay\n name: main\n - container:\n args:\n - hello world\n command:\n - cowsay\n image: docker/whalesay:latest\n name: whalesay\n name: whalesay\nstatus: {}\n\n"
]
],
[
[
"---",
"_____no_output_____"
]
],
[
[
"pprint(sanitize_for_serialization(wf))",
"{'apiVersion': 'argoproj.io/v1alpha1',\n 'kind': 'Workflow',\n 'metadata': {'generateName': 'hello-world-', 'name': 'hello-world'},\n 'spec': {'entrypoint': 'main',\n 'templates': [{'dag': {'tasks': [{'name': 'A',\n 'template': 'whalesay'}]},\n 'name': 'main'},\n {'container': {'args': ['hello world'],\n 'command': ['cowsay'],\n 'image': 'docker/whalesay:latest',\n 'name': 'whalesay'},\n 'name': 'whalesay'}]},\n 'status': {}}\n"
],
[
"pprint(yaml.safe_load(manifest))",
"{'apiVersion': 'argoproj.io/v1alpha1',\n 'kind': 'Workflow',\n 'metadata': {'generateName': 'hello-world-', 'name': 'hello-world'},\n 'spec': {'entrypoint': 'main',\n 'templates': [{'dag': {'tasks': [{'name': 'A',\n 'template': 'whalesay'}]},\n 'name': 'main'},\n {'container': {'args': ['hello world'],\n 'command': ['cowsay'],\n 'image': 'docker/whalesay:latest',\n 'name': 'whalesay'},\n 'name': 'whalesay'}]},\n 'status': {}}\n"
],
[
"assert sanitize_for_serialization(wf) == yaml.safe_load(manifest), \"Manifests don't match.\"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d04f432cff1ad85f971250b0394596a5a2c95f7b | 9,192 | ipynb | Jupyter Notebook | CRCNS Data/Train_test_set_creation.ipynb | RohanParikh00/Neuron-Classification-HHMI-Janelia | e25f015291358fa4a07d407ab23eaf354edf2a9d | [
"BSD-3-Clause"
] | null | null | null | CRCNS Data/Train_test_set_creation.ipynb | RohanParikh00/Neuron-Classification-HHMI-Janelia | e25f015291358fa4a07d407ab23eaf354edf2a9d | [
"BSD-3-Clause"
] | null | null | null | CRCNS Data/Train_test_set_creation.ipynb | RohanParikh00/Neuron-Classification-HHMI-Janelia | e25f015291358fa4a07d407ab23eaf354edf2a9d | [
"BSD-3-Clause"
] | null | null | null | 39.114894 | 133 | 0.54047 | [
[
[
"import csv\nfrom numpy import genfromtxt\nimport numpy as np\nimport pandas as pd\nfrom random import random\nimport torch.optim as optim\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport math\nimport sklearn.linear_model\n\n# Function to check and remove NaNs from dataset\ndef dataChecker(arr):\n idxRow = -1\n for row in arr:\n idxRow = idxRow + 1\n for idx in range(len(row)):\n if math.isnan(arr[idxRow,idx]) == True:\n arr[idxRow, idx] = 0\n return arr\n\n# Find max value in the dataset and its index\ndef maxVal(arr):\n idxRow = -1\n maxVal = -100\n indexes = np.empty(2)\n for row in arr:\n idxRow = idxRow + 1\n for idx in range(len(row)):\n if ((arr[idxRow,idx] > maxVal) and (idx != 0 and idx != 4 and idx != 5 and idx != 6 and idx != 7 and idx != 8)):\n maxVal = arr[idxRow, idx]\n indexes[0] = idxRow\n indexes[1] = idx\n return indexes, maxVal\n\n# Find max value in the dataset and its index\ndef minVal(arr):\n idxRow = -1\n minVal = 100\n indexes = np.empty(2)\n for row in arr:\n idxRow = idxRow + 1\n for idx in range(len(row)):\n if ((arr[idxRow,idx] < minVal) and (idx != 0 and idx != 4 and idx != 5 and idx != 6 and idx != 7 and idx != 8)):\n minVal = arr[idxRow, idx]\n indexes[0] = idxRow\n indexes[1] = idx\n return indexes, minVal\n\n# Scale all values in the array that are the waveform or waveform-dependent to a range\ndef scaleVals(arrIn, arrOut, minAllowed, maxAllowed, minValue, maxValue):\n idxRow = -1\n for row in arrIn:\n idxRow = idxRow + 1\n for idx in range(len(row)):\n if(idx != 0 and idx != 4 and idx != 5 and idx != 6 and idx != 7 and idx != 8):\n scaled = (((maxAllowed - minAllowed) * (arrIn[idxRow,idx] - minValue)) / (maxValue - minValue)) + minAllowed\n arrOut[idxRow, idx] = scaled\n else:\n arrOut[idxRow, idx] = arrIn[idxRow,idx]\n return arrOut\n\n# Perform Recursive Feature Elimination to identify the 3 top features\ndef RFE(arr):\n #data = X, target = Y\n X = arr[:,1:9]\n Y = arr[:,0]\n\n #Feature extraction\n model = sklearn.linear_model.LogisticRegression() \n rfeFeatures = sklearn.feature_selection.RFE(model, 3)\n fit = rfeFeatures.fit(X,Y)\n return fit.ranking_\n\n# Number of waveforms for each neuron cell type \nvalsFS = 1438775\nvalsPT = 319484\nvalsIT = 126460\n\n# Number of rows in each array\nrows_FS = valsFS\nrows_PT = valsPT\nrows_IT = valsIT\n\n# Separation value to split up training:testing sets (67:33)\nsep_FS = 2 * rows_FS // 3\nsep_PT = 2 * rows_PT // 3\nsep_IT = 2 * rows_IT // 3\n\n# Create training sets\ncol = 38\ntrainArrSize = sep_FS\ntrain_set_FS = np.empty((trainArrSize,col))\ntrain_set_PT_attr = np.empty((trainArrSize,col))\ntrain_set_IT_attr = np.empty((trainArrSize,col))\n\n# Fill the training sets with the 66% that is already existent (prior to oversampling)\nfor indFS_init in range(sep_FS):\n train_set_FS[indFS_init, :] = FS[indFS_init,:]\n\nfor indPT_init in range(sep_PT):\n train_set_PT_attr[indPT_init, :] = PT[indPT_init,:]\n\nfor indIT_init in range(sep_IT):\n train_set_IT_attr[indIT_init, :] = IT[indIT_init,:]\n\n# Fill the test sets to completion\ntest_set_FS = np.empty((0,col)) \ntest_size_FS = valsFS - sep_FS\ntest_set_PT = np.zeros((0,col)) \ntest_size_PT = valsPT - sep_PT\ntest_set_IT = np.zeros((0,col)) \ntest_size_IT = valsIT - sep_IT\n\ntest_set_FS = np.append(test_set_FS, FS[sep_FS:valsFS, :], axis = 0)\ntest_set_PT = np.append(test_set_PT, PT[sep_PT:valsPT, :], axis = 0)\ntest_set_IT = np.append(test_set_IT, IT[sep_IT:valsIT, :], axis = 0)\n\n# Oversampling the minority with replacement\n\n# Determine how 
much to add to PT/IT and size of pre-oversampling array\nnumAdd_PT = sep_FS - sep_PT\nnumAdd_IT = sep_FS - sep_IT\ntrainPTArrSize = sep_PT\ntrainITArrSize = sep_IT\n\n# Randomize attribute-wise (_attr) for all features but the waveform,\n# which will be randomized as single unit\nfor indPT_2 in range(trainPTArrSize,numAdd_PT+trainPTArrSize):\n for attrPT in range(9):\n rand = int(random() * (sep_PT+1))\n train_set_PT_attr[indPT_2,attrPT] = train_set_PT_attr[rand, attrPT]\n rand = int(random() * (sep_PT+1))\n train_set_PT_attr[indPT_2, 9:] = train_set_PT_attr[rand, 9:]\n\nfor indIT_2 in range(trainITArrSize,numAdd_IT+trainITArrSize):\n for attrIT in range(9):\n rand = int(random() * (sep_IT+1))\n train_set_IT_attr[indIT_2,attrIT] = train_set_IT_attr[rand, attrIT]\n rand = int(random() * (sep_IT+1))\n train_set_IT_attr[indIT_2, 9:] = train_set_IT_attr[rand, 9:]\n\n# Randomly combine individual training and testing sets into master training and testing sets\ntrain_set_attr = np.empty((trainArrSize * 3, col))\ncountFS = 0\ncountPT = 0\ncountIT = 0\nindTrain= 0\nwhile indTrain < (trainArrSize * 3):\n rand = int(random() * 3 + 1)\n if rand == 1 and (countFS + 1 <= trainArrSize):\n train_set_attr[indTrain,:] = train_set_FS[countFS,:]\n countFS = countFS + 1\n indTrain = indTrain + 1\n elif rand == 2 and (countPT + 1 <= trainArrSize):\n train_set_attr[indTrain,:] = train_set_PT_attr[countPT,:]\n countPT = countPT + 1 \n indTrain = indTrain + 1\n elif rand == 3 and (countIT + 1 <= trainArrSize):\n train_set_attr[indTrain,:] = train_set_IT_attr[countIT,:]\n countIT = countIT + 1 \n indTrain = indTrain + 1\n\ntest_set = np.empty((test_size_FS + test_size_PT + test_size_IT, col))\ncountFS = 0\ncountPT = 0\ncountIT = 0 \nindTest = 0\nwhile indTest < (test_size_FS + test_size_PT + test_size_IT):\n rand = int(random() * 3 + 1)\n if rand == 1 and (countFS + 1 <= test_size_FS):\n test_set[indTest,:] = test_set_FS[countFS,:]\n countFS = countFS + 1\n indTest = indTest + 1\n elif rand == 2 and (countPT + 1 <= test_size_PT):\n test_set[indTest,:] = test_set_PT[countPT,:]\n countPT = countPT + 1 \n indTest = indTest + 1\n elif rand == 3 and (countIT + 1 <= test_size_IT):\n test_set[indTest,:] = test_set_IT[countIT,:]\n countIT = countIT + 1 \n indTest = indTest + 1\n\n# Remove NaNs in each array\ntrain_set_attr = dataChecker(train_set_attr)\ntest_set = dataChecker(test_set)\n\n# Scaling inputs to 0-1\n \ntrain_set_attr_scld = np.empty((2877549, 38))\ntest_set_scld = np.empty((628241, 38))\n\nminValue = -0.00098502\nmaxValue = 0.0011485\n\ntrain_set_attr_scld = scaleVals(train_set_attr, train_set_attr_scld, 0, 1, minValue, maxValue)\ntest_set_scld = scaleVals(test_set, test_set_scld, 0, 1, minValue, maxValue)\n\n# Save files as a .csv \nnp.savetxt('train_set_attr_scld.csv', train_set_attr_scld, delimiter = \",\")\nnp.savetxt('test_set_scld.csv', test_set_scld, delimiter = \",\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d04f4cb7a6aa383e3cb8d3665ae048e30c7b7976 | 339,101 | ipynb | Jupyter Notebook | ML_breast_cancer_detection_with_SVM_KNN.ipynb | psychty/jubilant-potato | 3bfcbb92e5b5a30cbc9e7f9479aaf47bb3f43a49 | [
"MIT"
] | null | null | null | ML_breast_cancer_detection_with_SVM_KNN.ipynb | psychty/jubilant-potato | 3bfcbb92e5b5a30cbc9e7f9479aaf47bb3f43a49 | [
"MIT"
] | null | null | null | ML_breast_cancer_detection_with_SVM_KNN.ipynb | psychty/jubilant-potato | 3bfcbb92e5b5a30cbc9e7f9479aaf47bb3f43a49 | [
"MIT"
] | null | null | null | 402.255042 | 264,273 | 0.916759 | [
[
[
"# How to detect breast cancer with a Support Vector Machine (SVM) and k-nearest neighbours clustering and compare results.",
"_____no_output_____"
],
[
"Load some packages ",
"_____no_output_____"
]
],
[
[
"import scipy\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt \r\nimport pandas as pd \r\nimport sklearn\r\n\r\nfrom sklearn import preprocessing\r\nfrom sklearn.model_selection import train_test_split # cross_validation is deprecated\r\nfrom sklearn.neighbors import KNeighborsClassifier\r\nfrom sklearn.svm import SVC\r\nfrom sklearn import model_selection\r\nfrom sklearn.metrics import classification_report, accuracy_score\r\nfrom pandas.plotting import scatter_matrix\r\n\r\n\r\nprint('NumPy must be 1.14 to run this, it is {}'.format(np.__version__))\r\nprint('Python should be version 2.7 or higher, it is {}'.format(sys.version))",
"NumPy must be 1.14 to run this, it is 1.20.3\nPython should be version 2.7 or higher, it is 3.9.5 (tags/v3.9.5:0a7dcbd, May 3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)]\n"
]
],
[
[
"Read in the dataset from thw UCI data repository.\r\n\r\nThis details a lot of information from cells, such as their size, clump thickness, shape etc. A pathologist would consider these to determine whether a cell had cancer. \r\n\r\nSpecifically, we use the read_csv command from pd (pandas) package and supply a url of the dataset and some column names. Then we display the table.",
"_____no_output_____"
]
],
[
[
"# Load Dataset\r\nurl = \"https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data\"\r\nnames = ['id', 'clump_thickness', 'uniform_cell_size', 'uniform_cell_shape',\r\n 'marginal_adhesion', 'single_epithelial_size', 'bare_nuclei',\r\n 'bland_chromatin', 'normal_nucleoli', 'mitoses', 'class']\r\ndf = pd.read_csv(url, names=names)\r\n\r\ndf.drop(['id'], 1, inplace = True) # We have removed the id field from the dataframe as we would not be running any models on it and we already know that each row represents a single cell.\r\n\r\ndisplay(df)",
"_____no_output_____"
]
],
[
[
"Get some summary statistics for each of our variables",
"_____no_output_____"
]
],
[
[
"df.describe()",
"_____no_output_____"
]
],
[
[
"The dataset has some missing values. you can use .isnull() to return booleen true false and then tabulate that using .describe to say how many occurrences of true or false there are.",
"_____no_output_____"
]
],
[
[
"df.isnull().describe()",
"_____no_output_____"
]
],
[
[
"If you have missing data, you can replace it.",
"_____no_output_____"
]
],
[
[
"df.replace('?', -9999, inplace = True)",
"_____no_output_____"
]
],
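An alternative to the sentinel value is to impute the missing entries. This is a minimal sketch under the assumption that median imputation suits this dataset; `df_imp` and the use of scikit-learn's `SimpleImputer` are illustrative additions, not part of the original notebook:

```python
# Hypothetical alternative: impute the '?' entries with each column's median
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df_imp = df.replace('?', np.nan)      # mark missing values as NaN
df_imp = df_imp.apply(pd.to_numeric)  # '?' forced some columns to object dtype
imputer = SimpleImputer(strategy='median')
df_imp = pd.DataFrame(imputer.fit_transform(df_imp), columns=df_imp.columns)
```

Note that in a stricter setup the imputer would be fit on the training split only, to avoid leakage into the test set.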
[
[
"Class contains information on whether the tumour is benign (class = 2) or malignant (class = 4).\r\n\r\nNext we plot a histogram of all variables to show the distrubition.",
"_____no_output_____"
]
],
[
[
"df.hist(figsize = (15,15))\r\nplt.show() # by using plt.show() you render just the plot itself, because python will always display only the last command.",
"_____no_output_____"
]
],
[
[
"Look at the relationship between variables with a scatter matrix.\r\n\r\nThere looks like a pretty strong linear relationship between unifrorm cell shape and uniform cell size.\r\n\r\nIf you look at the cells representing comparisons with class (our outcome variable), it appears that there are a range of values for each of the items.",
"_____no_output_____"
]
],
[
[
"scatter_matrix(df, figsize = (15,15))\r\nplt.show() # by using plt.show() you render just the plot itself, because python will always display only the last command.",
"_____no_output_____"
]
],
[
[
"### Models",
"_____no_output_____"
],
[
"Create training and testing datasets.\r\n\r\nWe need to keep some of the data back to validate the model, seeing how well it generalises to other data.\r\n\r\nx data will contain all the potential explanatory variables (called features I think in this context)\r\ny will contain the outcome data (called label in ML)",
"_____no_output_____"
]
],
[
[
"X_df = np.array(df.drop(['class'], 1)) # this will create a variable called X_df which is df except class\r\ny_df = np.array(df['class']) # this is just the class field\r\n\r\nX_train, X_test, y_train, y_test = train_test_split(X_df, y_df, test_size=0.2) # split the dataset into four, two with features, two with labels (and choose 20% of the data for testing (validation))",
"_____no_output_____"
]
],
[
[
"Add a seed to make the data reproducible (this will change the results a little each time we run the model)",
"_____no_output_____"
]
],
[
[
"seed = 8\r\nscoring = 'accuracy'",
"_____no_output_____"
]
],
[
[
"### Create training models",
"_____no_output_____"
],
[
"make an empty list then append",
"_____no_output_____"
]
],
[
[
"models = [] \r\nmodels.append(('KNN', KNeighborsClassifier(n_neighbors = 5))) # You can alter the number of neighbours\r\nmodels.append(('SVM', SVC()))\r\n\r\nresults = [] # also create lists for results and names. We use this to print out the results\r\nnames = []",
"_____no_output_____"
]
],
[
[
"Evaluate each model in turn",
"_____no_output_____"
]
],
[
[
"for name, model in models:\r\n kfold = model_selection.KFold(n_splits=10, random_state = seed, shuffle = True)\r\n cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)\r\n results.append(cv_results)\r\n names.append(name)\r\n msg = \"%s: %f (%f)\" % (name, cv_results.mean(), cv_results.std())\r\n print(msg)",
"KNN: 0.967825 (0.023671)\nSVM: 0.638539 (0.053601)\n"
]
],
[
[
"The KNN tries to cluster the data points into two groups, malignant and benign, whilst the SWM is looking for the optimal separating hyperplane (??) that can separate the data points into malignant and benign cells",
"_____no_output_____"
],
[
"## Making predictions",
"_____no_output_____"
]
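The notebook is truncated at this point. As a hedged sketch of what this final section would typically contain, the cell below fits each model on the training split and scores it on the held-out set, using only objects already defined above:

```python
# Fit each model on the training data and evaluate it on the test split
for name, model in models:
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)
    print(name)
    print(accuracy_score(y_test, predictions))
    print(classification_report(y_test, predictions))
```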
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d04f52e34aaac417b2505bb410a247a505f41edd | 234,877 | ipynb | Jupyter Notebook | tutorial_nbs/02_ROCKET_a_new_SOTA_classifier.ipynb | duyniem/tsai | 18ea05f6077fe011fb9e3f206311abe4c0f3105c | [
"Apache-2.0"
] | null | null | null | tutorial_nbs/02_ROCKET_a_new_SOTA_classifier.ipynb | duyniem/tsai | 18ea05f6077fe011fb9e3f206311abe4c0f3105c | [
"Apache-2.0"
] | null | null | null | tutorial_nbs/02_ROCKET_a_new_SOTA_classifier.ipynb | duyniem/tsai | 18ea05f6077fe011fb9e3f206311abe4c0f3105c | [
"Apache-2.0"
] | 1 | 2021-08-12T20:45:07.000Z | 2021-08-12T20:45:07.000Z | 145.254793 | 114,344 | 0.868663 | [
[
[
"<a href=\"https://colab.research.google.com/github/timeseriesAI/tsai/blob/master/tutorial_nbs/02_ROCKET_a_new_SOTA_classifier.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"created by Ignacio Oguiza - email: [email protected]",
"_____no_output_____"
],
[
"<img src=\"https://github.com/timeseriesAI/tsai/blob/master/tutorial_nbs/images/Rocket.svg?raw=1\" width=\"150\">",
"_____no_output_____"
],
[
"ROCKET (RandOm Convolutional KErnel Transform) is a new Time Series Classification (TSC) method that has just been released (Oct 29th, 2019), and has achieved **state-of-the-art performance on the UCR univariate time series classification datasets, surpassing HIVE-COTE (the previous state of the art since 2017) in accuracy, with exceptional speed compared to other traditional DL methods.** \n\nTo achieve these 2 things at once is **VERY IMPRESSIVE**. ROCKET is certainly a new TSC method you should try.\n\nAuthors:\nDempster, A., Petitjean, F., & Webb, G. I. (2019). ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. arXiv preprint arXiv:1910.13051.\n\n[paper](https://arxiv.org/pdf/1910.13051)\n\nThere are 2 main limitations to the original ROCKET method though:\n- Released code doesn't handle multivariate data\n- It doesn't run on a GPU, so it's slow when used with a large datasets\n\nIn this notebook you will learn: \n- how you can use the original ROCKET method\n- you will also learn about a new ROCKET version I have developed in Pytorch, that handles both **univariate and multivariate** data, and uses **GPU**\n- you will see how you can integrate the ROCKET features with fastai or other classifiers",
"_____no_output_____"
],
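To make the kernel idea concrete, here is a rough, simplified sketch of how a single random kernel turns a series into two features, the proportion of positive values (PPV) and the maximum. This is illustrative only: the real ROCKET also randomises kernel length, dilation, padding and bias, and `one_kernel_features` is a hypothetical helper, not part of the paper's code:

```python
import numpy as np

def one_kernel_features(x, kernel, bias=0.0):
    # convolve one series with one random kernel, then summarise the output
    conv = np.convolve(x, kernel, mode='valid') + bias
    ppv = (conv > 0).mean()   # proportion of positive values
    return ppv, conv.max()    # 2 features per kernel -> 10k kernels give 20k features

x = np.random.randn(150)      # toy univariate series
kernel = np.random.randn(9)   # random weights of length 9
print(one_kernel_features(x, kernel))
```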
[
"## Import libraries 📚",
"_____no_output_____"
]
],
[
[
"# ## NOTE: UNCOMMENT AND RUN THIS CELL IF YOU NEED TO INSTALL/ UPGRADE TSAI\n# stable = False # True: latest version from github, False: stable version in pip\n# if stable: \n# !pip install -Uqq tsai\n# else: \n# !pip install -Uqq git+https://github.com/timeseriesAI/tsai.git\n\n# ## NOTE: REMEMBER TO RESTART YOUR RUNTIME ONCE THE INSTALLATION IS FINISHED",
"\u001b[K |████████████████████████████████| 194kB 12.2MB/s \n\u001b[K |████████████████████████████████| 22.2MB 60.3MB/s \n\u001b[K |████████████████████████████████| 5.7MB 30.2MB/s \n\u001b[K |████████████████████████████████| 9.5MB 42.2MB/s \n\u001b[K |████████████████████████████████| 3.2MB 45.6MB/s \n\u001b[K |████████████████████████████████| 2.5MB 46.7MB/s \n\u001b[K |████████████████████████████████| 174kB 37.1MB/s \n\u001b[K |████████████████████████████████| 901kB 36.5MB/s \n\u001b[K |████████████████████████████████| 92kB 13.4MB/s \n\u001b[K |████████████████████████████████| 61kB 9.9MB/s \n\u001b[K |████████████████████████████████| 25.3MB 1.6MB/s \n\u001b[K |████████████████████████████████| 675kB 46.8MB/s \n\u001b[K |████████████████████████████████| 102kB 14.3MB/s \n\u001b[?25h Building wheel for tsai (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for contextvars (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
],
[
"from tsai.all import *\nprint('tsai :', tsai.__version__)\nprint('fastai :', fastai.__version__)\nprint('fastcore :', fastcore.__version__)\nprint('torch :', torch.__version__)",
"/usr/local/lib/python3.6/dist-packages/numba/np/ufunc/parallel.py:363: NumbaWarning: The TBB threading layer requires TBB version 2019.5 or later i.e., TBB_INTERFACE_VERSION >= 11005. Found TBB_INTERFACE_VERSION = 9107. The TBB threading layer is disabled.\n warnings.warn(problem)\n"
]
],
[
[
"## How to use the original ROCKET method? 🚀",
"_____no_output_____"
],
[
"ROCKET is applied in 2 phases:\n\n1. Generate features from each time series: ROCKET calculates 20k features from each time series, independently of the sequence length. \n2. Apply a classifier to those calculated features. Those features are then used by the classifier of your choice. In the original code they use 2 simple linear classifiers: RidgeClassifierCV and Logistic Regression, but you can use any classifier.",
"_____no_output_____"
],
[
"### 1️⃣ Generate features\n\nLet's first generate the features. We'll import data from a UCR Time Series dataset.\n\nThe original method requires the time series to be in a 2d array of shape (samples, len). Remember than only univariate sequences are allow in this original method.",
"_____no_output_____"
]
],
[
[
"X_train, y_train, X_valid, y_valid = get_UCR_data('OliveOil')\nseq_len = X_train.shape[-1]\nX_train = X_train[:, 0]\nX_valid = X_valid[:, 0]\nlabels = np.unique(y_train)\ntransform = {}\nfor i, l in enumerate(labels): transform[l] = i\ny_train = np.vectorize(transform.get)(y_train)\ny_valid = np.vectorize(transform.get)(y_valid)",
"_____no_output_____"
]
],
[
[
"Now we normalize the data to mean 0 and std 1 **'per sample'** (recommended by the authors), that is they set each sample to mean 0 and std 1).",
"_____no_output_____"
]
],
[
[
"X_train = (X_train - X_train.mean(axis = 1, keepdims = True)) / (X_train.std(axis = 1, keepdims = True) + 1e-8)\nX_valid = (X_valid - X_valid.mean(axis = 1, keepdims = True)) / (X_valid.std(axis = 1, keepdims = True) + 1e-8)\nX_train.mean(axis = 1, keepdims = True).shape",
"_____no_output_____"
]
],
[
[
"To generate the features, we first need to create the 10k random kernels that will be used to process the data.",
"_____no_output_____"
]
],
[
[
"kernels = generate_kernels(seq_len, 10000)",
"_____no_output_____"
]
],
[
[
"Now we apply those ramdom kernels to the data",
"_____no_output_____"
]
],
[
[
"X_train_tfm = apply_kernels(X_train, kernels)\nX_valid_tfm = apply_kernels(X_valid, kernels)",
"_____no_output_____"
]
],
[
[
"### 2️⃣ Apply a classifier\n\nSo now we have the features, and we are ready to apply a classifier. \n\nLet's use a simple, linear RidgeClassifierCV as they propose in the paper. We first instantiate it. \n\nNote:\nalphas: Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to C^-1 in other linear models such as LogisticRegression or LinearSVC.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import RidgeClassifierCV\nclassifier = RidgeClassifierCV(alphas=np.logspace(-3, 3, 7), normalize=True)",
"_____no_output_____"
],
[
" classifier.fit(X_train_tfm, y_train)",
"_____no_output_____"
],
[
"classifier.score(X_valid_tfm, y_valid)",
"_____no_output_____"
]
],
[
[
"☣️ **This is pretty impressive! It matches or exceeds the state-of-the-art performance without any fine tuning in <2 seconds!!!**",
"_____no_output_____"
]
],
[
[
"kernels = generate_kernels(seq_len, 10000)\nX_train_tfm = apply_kernels(X_train, kernels)\nX_valid_tfm = apply_kernels(X_valid, kernels)\nfrom sklearn.linear_model import RidgeClassifierCV\nclassifier = RidgeClassifierCV(alphas=np.logspace(-3, 3, 7), normalize=True)\nclassifier.fit(X_train_tfm, y_train)\nclassifier.score(X_valid_tfm, y_valid)",
"_____no_output_____"
]
],
[
[
"⚠️ Bear in mind that this process is not deterministic since there is randomness involved in the kernels. In thiis case, performance may vary between .9 to .933",
"_____no_output_____"
],
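If you need repeatable numbers, seeding NumPy's global generator before sampling the kernels should make a run deterministic. This assumes `generate_kernels` draws from `np.random`, which is an assumption about its internals:

```python
np.random.seed(0)  # assumption: generate_kernels uses NumPy's global RNG
kernels = generate_kernels(seq_len, 10000)
X_train_tfm = apply_kernels(X_train, kernels)
X_valid_tfm = apply_kernels(X_valid, kernels)
```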
[
"## How to use ROCKET with large and/ or multivariate datasets on GPU? - Recommended ⭐️",
"_____no_output_____"
],
[
"As stated before, the current ROCKET method doesn't support multivariate time series or GPU. This may be a drawback in some cases. \n\nTo overcome both limitations I've created a multivariate ROCKET on GPU in Pytorch. ",
"_____no_output_____"
],
[
"### 1️⃣ Generate features\n\nFirst you prepare the input data and normalize it per sample. The input to ROCKET Pytorch is a 3d tensor of shape (samples, vars, len), preferrable on gpu.",
"_____no_output_____"
],
[
"The way to use ROCKET in Pytorch is the following:\n\n* Create a dataset as you would normally do in `tsai`. \n* Create a TSDataLoaders with the following kwargs: \n * drop_last=False. In this way we get features for every input sample.\n * shuffle_train=False\n * batch_tfms=[TSStandardize(by_sample=True)] so that input is normalized by sample, as recommended by the authors\n",
"_____no_output_____"
]
],
[
[
"X, y, splits = get_UCR_data('HandMovementDirection', split_data=False)\ntfms = [None, [Categorize()]]\nbatch_tfms = [TSStandardize(by_sample=True)]\ndls = get_ts_dls(X, y, splits=splits, tfms=tfms, drop_last=False, shuffle_train=False, batch_tfms=batch_tfms, bs=10_000)",
"_____no_output_____"
]
],
[
[
"☣️☣️ You will be able to create a dls (TSDataLoaders) object with unusually large batch sizes. I've tested it with a large dataset and a batch size = 100_000 and it worked fine. This is because ROCKET is not a usual Deep Learning model. It just applies convolutions (kernels) one at a time to create the features.",
"_____no_output_____"
],
[
"Instantiate a rocket model with the desired n_kernels (authors use 10_000) and kernel sizes (7, 9 and 11 in the original paper). ",
"_____no_output_____"
]
],
[
[
"model = build_ts_model(ROCKET, dls=dls) # n_kernels=10_000, kss=[7, 9, 11] set by default, but you can pass other values as kwargs",
"_____no_output_____"
]
],
[
[
"Now generate rocket features for the entire train and valid datasets using the create_rocket_features convenience function `create_rocket_features`.",
"_____no_output_____"
],
[
"And we now transform the original data, creating 20k features per sample",
"_____no_output_____"
]
],
[
[
"X_train, y_train = create_rocket_features(dls.train, model)\nX_valid, y_valid = create_rocket_features(dls.valid, model)\nX_train.shape, X_valid.shape",
"_____no_output_____"
]
],
[
[
"### 2️⃣ Apply a classifier",
"_____no_output_____"
],
[
"Once you build the 20k features per sample, you can use them to train any classifier of your choice.",
"_____no_output_____"
],
[
"#### RidgeClassifierCV",
"_____no_output_____"
],
[
"And now you apply a classifier of your choice. \nWith RidgeClassifierCV in particular, there's no need to normalize the calculated features before passing them to the classifier, as it does it internally (if normalize is set to True as recommended by the authors).",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import RidgeClassifierCV\nridge = RidgeClassifierCV(alphas=np.logspace(-8, 8, 17), normalize=True)\nridge.fit(X_train, y_train)\nprint(f'alpha: {ridge.alpha_:.2E} train: {ridge.score(X_train, y_train):.5f} valid: {ridge.score(X_valid, y_valid):.5f}')",
"alpha: 1.00E+01 train: 1.00000 valid: 0.50000\n"
]
],
[
[
"This result is amazing!! The previous state of the art (Inceptiontime) was .37837",
"_____no_output_____"
],
[
"#### Logistic Regression",
"_____no_output_____"
],
[
"In the case of other classifiers (like Logistic Regression), the authors recommend a per-feature normalization.",
"_____no_output_____"
]
],
[
[
"eps = 1e-6\nCs = np.logspace(-5, 5, 11)\nfrom sklearn.linear_model import LogisticRegression\nbest_loss = np.inf\nfor i, C in enumerate(Cs):\n f_mean = X_train.mean(axis=0, keepdims=True)\n f_std = X_train.std(axis=0, keepdims=True) + eps # epsilon to avoid dividing by 0\n X_train_tfm2 = (X_train - f_mean) / f_std\n X_valid_tfm2 = (X_valid - f_mean) / f_std\n classifier = LogisticRegression(penalty='l2', C=C, n_jobs=-1)\n classifier.fit(X_train_tfm2, y_train)\n probas = classifier.predict_proba(X_train_tfm2)\n loss = nn.CrossEntropyLoss()(torch.tensor(probas), torch.tensor(y_train)).item()\n train_score = classifier.score(X_train_tfm2, y_train)\n val_score = classifier.score(X_valid_tfm2, y_valid)\n if loss < best_loss:\n best_eps = eps\n best_C = C\n best_loss = loss\n best_train_score = train_score\n best_val_score = val_score\n print('{:2} eps: {:.2E} C: {:.2E} loss: {:.5f} train_acc: {:.5f} valid_acc: {:.5f}'.format(\n i, eps, C, loss, train_score, val_score))\nprint('\\nBest result:')\nprint('eps: {:.2E} C: {:.2E} train_loss: {:.5f} train_acc: {:.5f} valid_acc: {:.5f}'.format(\n best_eps, best_C, best_loss, best_train_score, best_val_score))",
" 0 eps: 1.00E-06 C: 1.00E-05 loss: 1.35151 train_acc: 0.80000 valid_acc: 0.41892\n 1 eps: 1.00E-06 C: 1.00E-04 loss: 1.15433 train_acc: 1.00000 valid_acc: 0.45946\n 2 eps: 1.00E-06 C: 1.00E-03 loss: 0.85364 train_acc: 1.00000 valid_acc: 0.48649\n 3 eps: 1.00E-06 C: 1.00E-02 loss: 0.76183 train_acc: 1.00000 valid_acc: 0.48649\n 4 eps: 1.00E-06 C: 1.00E-01 loss: 0.74625 train_acc: 1.00000 valid_acc: 0.48649\n 5 eps: 1.00E-06 C: 1.00E+00 loss: 0.74401 train_acc: 1.00000 valid_acc: 0.48649\n 6 eps: 1.00E-06 C: 1.00E+01 loss: 0.74371 train_acc: 1.00000 valid_acc: 0.50000\n 7 eps: 1.00E-06 C: 1.00E+02 loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n 8 eps: 1.00E-06 C: 1.00E+03 loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n 9 eps: 1.00E-06 C: 1.00E+04 loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n10 eps: 1.00E-06 C: 1.00E+05 loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n\nBest result:\neps: 1.00E-06 C: 1.00E+05 train_loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n"
]
],
[
[
"☣️ Note: Epsilon has a large impact on the result. You can actually test several values to find the one that best fits your problem, but bear in mind you can only select C and epsilon based on train data!!! ",
"_____no_output_____"
],
[
"##### RandomSearch",
"_____no_output_____"
],
[
"One way to do this would be to perform a random search using several epsilon and C values",
"_____no_output_____"
]
],
[
[
"n_tests = 10\nepss = np.logspace(-8, 0, 9)\nCs = np.logspace(-5, 5, 11)\n\nfrom sklearn.linear_model import LogisticRegression\nbest_loss = np.inf\nfor i in range(n_tests):\n eps = np.random.choice(epss)\n C = np.random.choice(Cs)\n f_mean = X_train.mean(axis=0, keepdims=True)\n f_std = X_train.std(axis=0, keepdims=True) + eps # epsilon\n X_train_tfm2 = (X_train - f_mean) / f_std\n X_valid_tfm2 = (X_valid - f_mean) / f_std\n classifier = LogisticRegression(penalty='l2', C=C, n_jobs=-1)\n classifier.fit(X_train_tfm2, y_train)\n probas = classifier.predict_proba(X_train_tfm2)\n loss = nn.CrossEntropyLoss()(torch.tensor(probas), torch.tensor(y_train)).item()\n train_score = classifier.score(X_train_tfm2, y_train)\n val_score = classifier.score(X_valid_tfm2, y_valid)\n if loss < best_loss:\n best_eps = eps\n best_C = C\n best_loss = loss\n best_train_score = train_score\n best_val_score = val_score\n print('{:2} eps: {:.2E} C: {:.2E} loss: {:.5f} train_acc: {:.5f} valid_acc: {:.5f}'.format(\n i, eps, C, loss, train_score, val_score))\nprint('\\nBest result:')\nprint('eps: {:.2E} C: {:.2E} train_loss: {:.5f} train_acc: {:.5f} valid_acc: {:.5f}'.format(\n best_eps, best_C, best_loss, best_train_score, best_val_score))",
" 0 eps: 1.00E-03 C: 1.00E-03 loss: 0.85501 train_acc: 1.00000 valid_acc: 0.48649\n 1 eps: 1.00E-02 C: 1.00E-03 loss: 0.86484 train_acc: 1.00000 valid_acc: 0.47297\n 2 eps: 1.00E-06 C: 1.00E+03 loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n 3 eps: 1.00E-04 C: 1.00E-05 loss: 1.35157 train_acc: 0.80000 valid_acc: 0.41892\n 4 eps: 1.00E-07 C: 1.00E+00 loss: 0.74401 train_acc: 1.00000 valid_acc: 0.48649\n 5 eps: 1.00E-07 C: 1.00E-03 loss: 0.85364 train_acc: 1.00000 valid_acc: 0.48649\n 6 eps: 1.00E-01 C: 1.00E-05 loss: 1.36582 train_acc: 0.93125 valid_acc: 0.40541\n 7 eps: 1.00E-06 C: 1.00E+02 loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n 8 eps: 1.00E-07 C: 1.00E+05 loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n 9 eps: 1.00E-03 C: 1.00E+02 loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n\nBest result:\neps: 1.00E-07 C: 1.00E+05 train_loss: 0.74367 train_acc: 1.00000 valid_acc: 0.48649\n"
]
],
[
[
"In general, the original method may be a bit faster than the GPU method, but for larger datasets, there's a great benefit in using the GPU version.",
"_____no_output_____"
],
[
"In addition to this, I have also run the code on the TSC UCR multivariate datasets (all the ones that don't contain nan values), and the results are also very good, beating the previous state-of-the-art in this category as well by a large margin. For example, ROCKET reduces InceptionTime errors by 26% on average.",
"_____no_output_____"
],
[
"#### Fastai classifier head",
"_____no_output_____"
]
],
[
[
"X = concat(X_train, X_valid)\ny = concat(y_train, y_valid)\nsplits = get_predefined_splits(X_train, X_valid)",
"_____no_output_____"
],
[
"tfms = [None, [Categorize()]]\ndsets = TSDatasets(X, y, tfms=tfms, splits=splits)\ndls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, batch_tfms=[TSStandardize(by_var=True)])# per feature normalization\ndls.show_batch()",
"_____no_output_____"
],
[
"def lin_zero_init(layer):\n if isinstance(layer, nn.Linear):\n nn.init.constant_(layer.weight.data, 0.)\n if layer.bias is not None: nn.init.constant_(layer.bias.data, 0.)",
"_____no_output_____"
],
[
"model = create_mlp_head(dls.vars, dls.c, dls.len)\nmodel.apply(lin_zero_init)\nlearn = Learner(dls, model, metrics=accuracy, cbs=ShowGraph())\nlearn.fit_one_cycle(50, lr_max=1e-4)\nlearn.plot_metrics()",
"_____no_output_____"
]
],
[
[
"#### XGBoost",
"_____no_output_____"
]
],
[
[
"eps = 1e-6\n\n# normalize 'per feature'\nf_mean = X_train.mean(axis=0, keepdims=True)\nf_std = X_train.std(axis=0, keepdims=True) + eps\nX_train_norm = (X_train - f_mean) / f_std\nX_valid_norm = (X_valid - f_mean) / f_std\n\nimport xgboost as xgb\nclassifier = xgb.XGBClassifier(max_depth=3,\n learning_rate=0.1,\n n_estimators=100,\n verbosity=1,\n objective='binary:logistic',\n booster='gbtree',\n tree_method='auto',\n n_jobs=-1,\n gpu_id=default_device().index,\n gamma=0,\n min_child_weight=1,\n max_delta_step=0,\n subsample=.5,\n colsample_bytree=1,\n colsample_bylevel=1,\n colsample_bynode=1,\n reg_alpha=0,\n reg_lambda=1,\n scale_pos_weight=1,\n base_score=0.5,\n random_state=0,\n missing=None)\n\nclassifier.fit(X_train_norm, y_train)\npreds = classifier.predict(X_valid_norm)\n(preds == y_valid).mean()",
"_____no_output_____"
]
],
[
[
"## Conclusions",
"_____no_output_____"
],
[
"ROCKET is a great method for TSC that has established a new level of performance both in terms of accuracy and time. It does it by successfully applying an approach quite different from the traditional DL approaches. The method uses 10k random kernels to generate features that are then classified by linear classifiers (although you may use a classifier of your choice).\nThe original method has 2 limitations (lack of multivariate and lack of GPU support) that are overcome by the Pytorch implementation shared in this notebook.\n\nSo this is all the code you need to train a state-of-the-art model using rocket and GPU in `tsai`:\n\n```\nX, y, splits = get_UCR_data('HandMovementDirection', return_split=False)\ntfms = [None, [Categorize()]]\nbatch_tfms = [TSStandardize(by_sample=True)]\ndsets = TSDatasets(X, y, tfms=tfms, splits=splits)\ndls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=64, drop_last=False, shuffle_train=False, batch_tfms=[TSStandardize(by_sample=True)])\nmodel = create_model(ROCKET, dls=dls)\nX_train, y_train = create_rocket_features(dls.train, model)\nX_valid, y_valid = create_rocket_features(dls.valid, model)\nridge = RidgeClassifierCV(alphas=np.logspace(-8, 8, 17), normalize=True)\nridge.fit(X_train, y_train)\nprint(f'alpha: {ridge.alpha_:.2E} train: {ridge.score(X_train, y_train):.5f} valid: {ridge.score(X_valid, y_valid):.5f}')\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d04f5c44f971685720d357137de604862260d61d | 418,248 | ipynb | Jupyter Notebook | Brownian_motion.ipynb | CarSomma/Brownian-Process | 1b318090ac72e7c293f60b478cbc2121d4e6b14c | [
"MIT"
] | null | null | null | Brownian_motion.ipynb | CarSomma/Brownian-Process | 1b318090ac72e7c293f60b478cbc2121d4e6b14c | [
"MIT"
] | null | null | null | Brownian_motion.ipynb | CarSomma/Brownian-Process | 1b318090ac72e7c293f60b478cbc2121d4e6b14c | [
"MIT"
] | null | null | null | 308.670111 | 86,796 | 0.927856 | [
[
[
"# Brownian process in stock price dynamics\n\n",
"_____no_output_____"
],
[
"Brownian Moton:\n\n\n\n\nsource: https://en.wikipedia.org/wiki/Brownian_motion",
"_____no_output_____"
],
[
"\n\n\nA **random-walk** can be seen as a **motion** resulting from a succession of discrete **random steps**.\n\nThe random-walk after the i-th steps is:\n\\begin{equation}\n\\tag{1}\nX_{i} = X_{i-1} + \\epsilon_{i} \n\\end{equation}\n\nbeing $X_{i=0} = X_{0} = 0$ the starting point and $\\epsilon_{i}$ a random variable",
"_____no_output_____"
]
],
[
[
"# conda install -c anaconda pandas-datareader \n# pip install pandas-datareader",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()",
"_____no_output_____"
],
[
"# Possible steps\nsteps = [-1,1] # backward and forward of 1 units\n# Nr of steps n_steps\nn_steps = 100",
"_____no_output_____"
],
[
"# Initialise the random walk variable X\nX = np.zeros(n_steps) #<--- numpy array of (N=n_steps) zeros\n# Fill in X according to eq. 1\nfor i in range(1,n_steps):\n X[i]= X[i-1] + np.random.choice(steps)#<--- from 1 to fulfill Initial condition \n \n",
"_____no_output_____"
],
[
"# Faster alternative\ndef random_walk(steps,n_steps):\n random_steps = np.random.choice(steps,size=n_steps)\n X = random_steps.cumsum()\n X[0] = 0 # <--- initial position\n return X",
"_____no_output_____"
],
[
"for i in range(4):\n plt.plot(random_walk(steps,n_steps))",
"_____no_output_____"
]
],
[
[
"**If we repeat the experiment where does the man end up in average?**\n\n",
"_____no_output_____"
]
],
[
[
"# Repeat the random walk n_trials time \n# Record the last position for each trial \ndef monte_random_walk(n_steps,steps,n_trials):\n X_fin = np.zeros(n_trials)#<-- X_fin numpy array of (N=n_trial) zeros\n for i in range(n_trials):\n X_fin[i] =random_walk(steps,n_steps)[-1]\n return X_fin",
"_____no_output_____"
],
[
"n_trial = 20000\nsteps = [-1,1]\nn_steps = 100",
"_____no_output_____"
],
[
"X_fin = monte_random_walk(n_steps,steps,n_trial)",
"_____no_output_____"
],
[
"# Plot the distribution of X_fin\nwidth_bin = 4\nn_bins = int(np.ceil((np.max(X_fin)-np.min(X_fin))/width_bin))\n\nsns.distplot(X_fin,kde=True,bins=n_bins);\nplt.xlabel('Final position');",
"_____no_output_____"
],
[
"np.std(X_fin)",
"_____no_output_____"
]
],
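Since the steps are independent with mean 0 and unit variance, the variance of the final position grows linearly with the number of steps, so the standard deviation above should be close to $\sqrt{100}=10$. A quick check:

```python
# empirical vs theoretical spread of a +/-1 random walk after n_steps steps
print(np.std(X_fin), np.sqrt(n_steps))  # both should be close to 10
```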
[
[
"\n\n\n\n\n\nWe can see a Brownian process $B(t)$ as a **continuous Gaussian** random walk. \n\n**Gaussian & continuous**: we divide the observation time $t$ into $N$ small timestep $\\Delta t$, so that $t=N\\cdot\\Delta t$.\n\nFor any time $t_i=i\\cdot\\Delta t$, the change in $B$ is normally distributed:\n\n$$ B_{i+1}-B_i \\sim \\sqrt{\\Delta t}\\cdot N(0,1)$$\n\nTaking time step $\\Delta t$ smaller and smaller will make B a continuous random walk.",
"_____no_output_____"
]
],
[
[
"def brownian_motion(T,N,n_trials,random_seed = None):\n np.random.seed(random_seed)\n dt = T/N\n random_steps = np.sqrt(dt)*np.random.normal(loc = 0,scale = 1,size = (N,n_trials))\n random_steps[0,:] = 0\n X = np.cumsum(random_steps,axis=0)\n \n return X",
"_____no_output_____"
],
[
"T=7\nN=100\nn_trials=2000\nrandom_seed = 1\ndt=T/N\ndt",
"_____no_output_____"
],
[
"X= brownian_motion(T,N,n_trials,random_seed)",
"_____no_output_____"
],
[
"# Last step\nX_fin = X[-1,:]",
"_____no_output_____"
],
[
"plt.plot(X);",
"_____no_output_____"
],
[
"# Plot the distribution of X_fin\nwidth_bin = .51\nn_bins = int(np.ceil((np.max(X_fin)-np.min(X_fin))/width_bin))\n\nsns.distplot(X_fin,bins=n_bins);\n",
"_____no_output_____"
]
],
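By construction $B(T)\sim N(0,\sqrt{T})$, so the spread of the final positions should be close to $\sqrt{T}=\sqrt{7}\approx 2.65$. A quick sanity check:

```python
# empirical vs theoretical standard deviation of B(T)
print(X_fin.std(), np.sqrt(T))  # both should be near 2.65
```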
[
[
"### Connection to stock-price\n\nThe dynamics of stock-prices can be modeled by the following equation:\n\n\\begin{equation}\n\\tag{2}\n\\Delta S_{t} = \\mu S_{t} \\Delta t + \\sigma S_{t}\\Delta B_{t}\n\\end{equation}\n\nbeing:\n\n$S$ the stock price,\n\n$\\mu$ the drift coefficient (a.k.a the mean of returns),\n\n$\\sigma$ the diffusion coefficient (a.k.a the standard deviation of returns),\n\n$B$ the brownian motion.\n\nThe eq. (2) admits the following solution:\n\\begin{equation}\n\\tag{3}\nS(t) = S_{0} \\cdot e^{[(\\mu - \\sigma^2/2)\\cdot t + \\sigma \\cdot B_{t}] } \n\\end{equation}",
"_____no_output_____"
]
],
[
[
"def stock_price(N,S0,u,sigma,T,n_trials,random_seed = None):\n \"\"\"\n N: number of intervals\n S0: initial stock price\n u: mean of returns over some period\n sigma: volatility a.k.a. standard deviation of returns\n random_seed: seed for pseudorandom generator\n T: observation time\n m: number of brownian path\n \"\"\"\n dt = T/N \n t = np.arange(0.,T,dt)\n t=t[:,np.newaxis]\n drift = (u - (sigma/np.sqrt(2))**2)*t\n shock = sigma * brownian_motion(T,N,n_trials,random_seed = None)\n S = S0*np.exp(drift + shock)\n return t, S",
"_____no_output_____"
]
],
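A quick sanity check of eq. (3): under geometric Brownian motion the expected price is $E[S(t)] = S_0 e^{\mu t}$, so averaging many simulated paths should recover it. The parameter values below ($\mu=0.05$, $\sigma=0.2$, $S_0=1$) are arbitrary illustrative choices, and `t_chk`, `S_chk` are names added here:

```python
# Monte Carlo check that the mean simulated price matches S0 * exp(u * t)
t_chk, S_chk = stock_price(N=100, S0=1.0, u=0.05, sigma=0.2, T=1.0, n_trials=50_000)
t_end = t_chk[-1, 0]         # last simulated time (T - dt)
print(S_chk[-1].mean())      # Monte Carlo estimate of E[S(t_end)]
print(np.exp(0.05 * t_end))  # theoretical value S0 * exp(u * t_end)
```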
[
[
"### Scraping from Yahoo Finance",
"_____no_output_____"
]
],
[
[
"from pandas_datareader import data as scraper\nimport pandas as pd\n\nsymbol = 'FB' # 'FB'Facebook, 'FCA.MI' FIAT Crysler, 'AAPL' Apple\nstart_date = '2020-01-01'\nend_date = '2020-12-31'\ndf = scraper.DataReader(symbol, 'yahoo', start_date, end_date)\n\n",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"#close price\nclose_price = df['Close']\nclose_price.plot();\nplt.ylabel('Price $');",
"_____no_output_____"
],
[
"# Calculate the daily percentage return\ndaily_return= (close_price.pct_change() ) ",
"_____no_output_____"
],
[
"daily_return.plot(label='Daily Return')\n(close_price*.002).plot(label='Close Price');\nplt.legend();",
"_____no_output_____"
],
[
"# Plot the distribution of daily_return\nwidth_bin = .01\nn_bins = int(np.ceil((np.max(daily_return)-np.min(daily_return))/width_bin))\n\nsns.distplot(daily_return,bins=n_bins);\nplt.title(\"Daily returns on FB, 2020\");",
"_____no_output_____"
],
[
"# compute the return mu and the sigma\nmu = np.mean(daily_return)\nsigma = np.std(daily_return)\n\nprint(f'Mean of daily-returns μ: {round(mu,4)*100} %')\nprint('')\nprint(f'Volatility σ: {round(sigma,3)}')\n",
"Mean of daily-returns μ: 0.15 %\n\nVolatility σ: 0.029\n"
],
[
"# Parameters simulation\nN = 5000 # <--- lenght of each trials\nT=252 # <--- # days of a business year\nS0=close_price[0] # <--- Initial close-price\nn_trials=25500 # <--- # of trials\nT/N # <--- Δt about 0.05 ",
"_____no_output_____"
],
[
"# Extracting stock price pathways and time vector from the model\nt,model_S = stock_price(N,S0,mu,sigma,T,n_trials,random_seed = 42)\n#model_S.shape",
"_____no_output_____"
],
[
"# Define other two time range\nt2=np.arange(0,253,1)\n\n",
"_____no_output_____"
],
[
"# Plot simulated and actual stock-prizes\nplt.plot(t,model_S);\n#plt.plot(t3,close_price[-12:],linewidth=3,c='k');\nplt.plot(t2,close_price[:],linewidth=3,c='k');\nplt.xlabel('Days');\nplt.ylabel('Stock Price');",
"_____no_output_____"
],
[
"# Compute final predicted stock-price\nS_fin = model_S[-1,:]",
"_____no_output_____"
],
[
"# Calculate mean and std from S_fin\nmean = np.mean(S_fin)\nmedian=np.median(S_fin)\nstd_ = np.std(S_fin)\nmin_ = np.min(S_fin)\nmax_ = np.max(S_fin)\nprint('*******************')\nprint(f' * Statistics *')\nprint('*******************\\n')\nprint(f'Min: {round(min_)} $')\nprint(f'Max: {round(max_)} $')\nprint(f'Median: {round(median)} $')\nprint(f'Mean: {round(mean)} $')\nprint(f'Standard deviation: {round(std_)} $')",
"*******************\n * Statistics *\n*******************\n\nMin: 47.0 $\nMax: 1705.0 $\nMedian: 272.0 $\nMean: 303.0 $\nStandard deviation: 147.0 $\n"
],
[
"# Plot the simulated final stock-price\nsns.distplot(S_fin);\nplt.plot([median,median], [0, .02], 'k-.', lw=6,label='median')\nplt.plot([mean,mean], [0, .02], 'b-.', lw=2,label='mean')\nplt.plot([close_price[-1],close_price[-1]], [0, .02], 'g-', lw=2,label='actual prize')\nplt.ylim(top=0.004);\nplt.xlim(left=-100,right=1200)\nplt.legend();\nplt.title('Montecarlo Simulation on Facebook Stock-Price');\nplt.xlabel('Stock price $');",
"_____no_output_____"
],
[
"from scipy.stats import norm,lognorm,t",
"_____no_output_____"
],
[
"def lognorm_fit(data_,x_min,x_max,dx):\n # Fits the datas with a log-norm distribution\n params = lognorm.fit(data_)\n shape, mean, std = params\n \n # Generate a log-norm probability distribution function pdf\n x = np.arange(x_min,x_max,dx)\n lnd = lognorm(s=shape,loc=mean,scale=std)# <--- initialise the log-norm distribution\n lognormal_pdf =lnd.pdf(x) \n \n # Calculate the mode of distribution\n index_max = np.argmax(lognormal_pdf) #np.where(lognormal_pdf == np.max(lognormal_pdf))\n mode =x[index_max]\n return lnd,lognormal_pdf, mode,x\n ",
"_____no_output_____"
],
[
"x_min=0\nx_max=5000\ndx=.1",
"_____no_output_____"
],
[
"# Distribution and mode\nlnd_S,lognormal_pdf_S,mode_S,x = lognorm_fit(S_fin,x_min,x_max,dx)",
"_____no_output_____"
],
[
"# Plot the simulated final stock-price\nsns.distplot(S_fin);\nsns.lineplot(x,lognormal_pdf_S,label = 'log-normal')\nplt.plot([mode_S,mode_S],[0,.02],'r-.',label= 'mode')\nplt.plot([median,median], [0, .02], 'k-.', lw=6,label='median')\nplt.plot([mean,mean], [0, .02], 'b-.', lw=2,label='mean')\nplt.plot([close_price[-1],close_price[-1]], [0, .02], 'g-', lw=2,label='actual prize')\nplt.ylim(top=0.004);\nplt.xlim(left=-100,right=1200)\nplt.legend();\nplt.title('Montecarlo Simulation on Facebook Stock-Price');\nplt.xlabel('Stock price $');",
"_____no_output_____"
]
],
[
[
"what is the probability of having a loss after one year?",
"_____no_output_____"
]
],
[
[
"# Annual Return\nannual_return_pct = (S_fin -S0)/S0",
"_____no_output_____"
],
[
"# Calculate mean and std from S_fin\nmean_ar = np.mean(annual_return_pct)\nmedian_ar=np.median(annual_return_pct)\nstd_ar = np.std(annual_return_pct)\nmin_ar = np.min(annual_return_pct)\nmax_ar = np.max(annual_return_pct)\nprint('*******************')\nprint(f' * Statistics *')\nprint('*******************\\n')\nprint(f'Min: {round(min_ar,2)} %')\nprint(f'Max: {round(max_ar,2)} %')\nprint(f'Median: {round(median_ar,2)} %')\nprint(f'Mean: {round(mean_ar,2)} %')\nprint(f'Standard deviation: {round(std_ar,2)} %')",
"*******************\n * Statistics *\n*******************\n\nMin: -0.77 %\nMax: 7.13 %\nMedian: 0.3 %\nMean: 0.44 %\nStandard deviation: 0.7 %\n"
],
[
"# Plot distribution of simulated annual return\nsns.distplot(annual_return_pct);\n\nplt.ylim(top=0.8);\nplt.xlim(left=-3,right=6)\n\nplt.title('Montecarlo Simulation on Facebook Stock-Price');\nplt.xlabel('Annual Return % ');",
"_____no_output_____"
]
],
[
[
"Analysis of underlying distribution",
"_____no_output_____"
]
],
[
[
"x_min=-5\nx_max=6\ndx=.001",
"_____no_output_____"
],
[
"# Distribution and mode\nlnd_ar,lognormal_pdf_ar,mode_ar,x_ar = lognorm_fit(annual_return_pct,x_min,x_max,dx)",
"_____no_output_____"
],
[
"# Plot distribution of simulated annual return\nsns.distplot(annual_return_pct);\nsns.lineplot(x_ar,lognormal_pdf_ar,label = 'log-normal');\nplt.plot([mode_ar,mode_ar],[0,.9],'k-.',label= 'mode');\n\nplt.ylim(top=0.8);\nplt.xlim(left=-3,right=6)\nplt.legend();\nplt.text(x=2,y=.5,s=f'mode @ {round(mode_ar,3)}',)\nplt.title('Montecarlo Simulation on Facebook Stock-Price');\nplt.xlabel('Annual Return % ');",
"_____no_output_____"
],
[
"# Cumulative distribution Function CDF (probability of obtaining a value equal or smaller than the given value)\ncdf = lnd_ar.cdf(x_ar) # <--- cumulative ",
"_____no_output_____"
],
[
"# Plot CDF SF function\nsns.lineplot(x_ar,cdf,label='CDF');\nplt.plot([0,0], [0, 1], 'r-', lw=2,label='No returns');\nplt.legend();\nplt.xlabel('Annual Return %');\n",
"_____no_output_____"
],
[
"def get_prob(value_return1,value_return2=None):\n \n mask_1 = (x_ar<=value_return1)\n \n if value_return2==None:\n prob = round(np.max(cdf[mask_1])*100,2)\n \n else:\n mask_2 = (x_ar<=value_return2)\n area1 = np.max(cdf[mask_1])*100\n area2 = np.max(cdf[mask_2])*100\n prob = np.round(area2 - area1,2)\n return prob",
"_____no_output_____"
],
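[
"# Cross-check (an added sketch, not in the original notebook): the same\n# probabilities can be read directly from the fitted distribution's CDF,\n# without the grid masks used in get_prob above.\nprob_loss = lnd_ar.cdf(0) * 100\nprob_0_1 = (lnd_ar.cdf(1) - lnd_ar.cdf(0.1)) * 100\nprint(f'P(loss) = {round(prob_loss, 2)} %')\nprint(f'P(0.1 < return < 1) = {round(prob_0_1, 2)} %')",
"_____no_output_____"
],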
[
"print('**************************************')\nprint(' * Results *')\nprint('**************************************\\n')\nprint(' Return_1 Return_2 Probability\\n')\nprint(f'Loss {get_prob(-0.0001)} %')\nprint(f'Gain 0.1% 1% {get_prob(0.1,1)} % ')\nprint(f'Gain 1% 2% {get_prob(1,2)} % ')\n\n",
"**************************************\n * Results *\n**************************************\n\n Return_1 Return_2 Probability\n\nLoss 28.48 %\nGain 0.1% 1% 46.65 % \nGain 1% 2% 13.91 % \n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04f7a75c30a79354978b6ee021847147e30af9b | 51,595 | ipynb | Jupyter Notebook | notebooks/17-street-network-orientations.ipynb | baerbelblume/osmnx-examples | 60a43403b60387fb038fcf7fda547184adefc16c | [
"MIT"
] | null | null | null | notebooks/17-street-network-orientations.ipynb | baerbelblume/osmnx-examples | 60a43403b60387fb038fcf7fda547184adefc16c | [
"MIT"
] | null | null | null | notebooks/17-street-network-orientations.ipynb | baerbelblume/osmnx-examples | 60a43403b60387fb038fcf7fda547184adefc16c | [
"MIT"
] | null | null | null | 146.994302 | 38,760 | 0.855277 | [
[
[
"# City street network orientations\n\nCompare the spatial orientations of city street networks with OSMnx.\n\n - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)\n - [GitHub repo](https://github.com/gboeing/osmnx)\n - [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)\n - [Documentation](https://osmnx.readthedocs.io/en/stable/)\n - [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport osmnx as ox\nimport pandas as pd\n\nox.config(log_console=True, use_cache=True)\nweight_by_length = False\n\nox.__version__",
"_____no_output_____"
],
[
"# define the study sites as label : query\nplaces = {'Atlanta' : 'Atlanta, GA, USA',\n 'Boston' : 'Boston, MA, USA',\n 'Buffalo' : 'Buffalo, NY, USA',\n 'Charlotte' : 'Charlotte, NC, USA',\n 'Chicago' : 'Chicago, IL, USA',\n 'Cleveland' : 'Cleveland, OH, USA',\n 'Dallas' : 'Dallas, TX, USA',\n 'Houston' : 'Houston, TX, USA',\n 'Denver' : 'Denver, CO, USA',\n 'Detroit' : 'Detroit, MI, USA',\n 'Las Vegas' : 'Las Vegas, NV, USA',\n 'Los Angeles' : {'city':'Los Angeles', 'state':'CA', 'country':'USA'},\n 'Manhattan' : 'Manhattan, NYC, NY, USA',\n 'Miami' : 'Miami, FL, USA',\n 'Minneapolis' : 'Minneapolis, MN, USA',\n 'Orlando' : 'Orlando, FL, USA',\n 'Philadelphia' : 'Philadelphia, PA, USA',\n 'Phoenix' : 'Phoenix, AZ, USA',\n 'Portland' : 'Portland, OR, USA',\n 'Sacramento' : 'Sacramento, CA, USA',\n 'San Francisco' : {'city':'San Francisco', 'state':'CA', 'country':'USA'},\n 'Seattle' : 'Seattle, WA, USA',\n 'St Louis' : 'St. Louis, MO, USA',\n 'Tampa' : 'Tampa, FL, USA',\n 'Washington' : 'Washington, DC, USA'}",
"_____no_output_____"
],
[
"places = {'Accra' : 'Accra Metropolitan, Greater Accra Region, Ghana'}",
"_____no_output_____"
],
[
"# verify OSMnx geocodes each query to what you expect\ngdf = ox.gdf_from_places(places.values())\ngdf",
"_____no_output_____"
]
],
[
[
"## Get the street networks and their edge bearings",
"_____no_output_____"
]
],
[
[
"def reverse_bearing(x):\n return x + 180 if x < 180 else x - 180",
"_____no_output_____"
],
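[
"# Quick sanity check (added, not in the original notebook): reversing a bearing\n# should give the opposite compass direction, and reversing twice should give\n# back the original value.\nassert reverse_bearing(90) == 270\nassert reverse_bearing(270) == 90\nassert reverse_bearing(reverse_bearing(37.5)) == 37.5",
"_____no_output_____"
],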
[
"bearings = {}\nfor place in sorted(places.keys()):\n \n # get the graph\n query = places[place]\n G = ox.graph_from_place(query, network_type='drive')\n \n # calculate edge bearings\n Gu = ox.add_edge_bearings(ox.get_undirected(G))\n \n if weight_by_length:\n # weight bearings by length (meters)\n city_bearings = []\n for u, v, k, d in Gu.edges(keys=True, data=True):\n city_bearings.extend([d['bearing']] * int(d['length']))\n b = pd.Series(city_bearings)\n bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop='True')\n else:\n # don't weight bearings, just take one value per street segment\n b = pd.Series([d['bearing'] for u, v, k, d in Gu.edges(keys=True, data=True)])\n bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop='True')",
"_____no_output_____"
]
],
[
[
"## Visualize it",
"_____no_output_____"
]
],
[
[
"def count_and_merge(n, bearings):\n # make twice as many bins as desired, then merge them in pairs\n # prevents bin-edge effects around common values like 0° and 90°\n n = n * 2\n bins = np.arange(n + 1) * 360 / n\n count, _ = np.histogram(bearings, bins=bins)\n \n # move the last bin to the front, so eg 0.01° and 359.99° will be binned together\n count = np.roll(count, 1)\n return count[::2] + count[1::2]",
"_____no_output_____"
],
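[
"# Worked example (added for illustration, not in the original notebook):\n# bearings clustered around 0°/360° land in a single merged bin instead of\n# being split across the first and last bins.\nimport numpy as np\n\ndemo_bearings = np.array([359.5, 0.5, 1.0, 89.5, 90.5])\nprint(count_and_merge(4, demo_bearings))  # expected: [3 2 0 0] (north and east bins)",
"_____no_output_____"
],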
[
"# function to draw a polar histogram for a set of edge bearings\ndef polar_plot(ax, bearings, n=36, title=''):\n\n bins = np.arange(n + 1) * 360 / n\n count = count_and_merge(n, bearings)\n _, division = np.histogram(bearings, bins=bins)\n frequency = count / count.sum()\n division = division[0:-1]\n width = 2 * np.pi / n\n\n ax.set_theta_zero_location('N')\n ax.set_theta_direction('clockwise')\n\n x = division * np.pi / 180\n bars = ax.bar(x, height=frequency, width=width, align='center', bottom=0, zorder=2,\n color='#003366', edgecolor='k', linewidth=0.5, alpha=0.7)\n \n ax.set_ylim(top=frequency.max())\n \n title_font = {'family':'Century Gothic', 'size':24, 'weight':'bold'}\n xtick_font = {'family':'Century Gothic', 'size':10, 'weight':'bold', 'alpha':1.0, 'zorder':3}\n ytick_font = {'family':'Century Gothic', 'size': 9, 'weight':'bold', 'alpha':0.2, 'zorder':3}\n \n ax.set_title(title.upper(), y=1.05, fontdict=title_font)\n \n ax.set_yticks(np.linspace(0, max(ax.get_ylim()), 5))\n yticklabels = ['{:.2f}'.format(y) for y in ax.get_yticks()]\n yticklabels[0] = ''\n ax.set_yticklabels(labels=yticklabels, fontdict=ytick_font)\n \n xticklabels = ['N', '', 'E', '', 'S', '', 'W', '']\n ax.set_xticklabels(labels=xticklabels, fontdict=xtick_font)\n ax.tick_params(axis='x', which='major', pad=-2)",
"_____no_output_____"
],
[
"# create figure and axes\nn = len(places)\nncols = int(np.ceil(np.sqrt(n)))\nnrows = int(np.ceil(n / ncols))\nfigsize = (ncols * 5, nrows * 5)\nfig, axes = plt.subplots(nrows, ncols, figsize=figsize, subplot_kw={'projection':'polar'})\n\n# plot each city's polar histogram\nfor ax, place in zip(axes.flat, sorted(places.keys())):\n polar_plot(ax, bearings[place].dropna(), title=place)\n\n# add super title and save full image\nsuptitle_font = {'family':'Century Gothic', 'fontsize':60, 'fontweight':'normal', 'y':1.07}\nfig.suptitle('City Street Network Orientation', **suptitle_font)\nfig.tight_layout()\nfig.subplots_adjust(hspace=0.35)\nfig.savefig('images/street-orientations.png', dpi=120, bbox_inches='tight')\nplt.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d04f8291408ff03e7bafb67e47880a6005465c71 | 2,423 | ipynb | Jupyter Notebook | Final Project - News Headline Generation and Validation/.ipynb_checkpoints/app-checkpoint.ipynb | vighneshvnkt/deep-learning-assignments | 61ef33fbcd228e06a0aa3059f6a849d33234a3ab | [
"MIT"
] | 29 | 2018-08-09T03:00:19.000Z | 2021-11-08T09:31:03.000Z | Final Project - News Headline Generation and Validation/.ipynb_checkpoints/app-checkpoint.ipynb | vighneshvnkt/deep-learning-assignments | 61ef33fbcd228e06a0aa3059f6a849d33234a3ab | [
"MIT"
] | 2 | 2018-11-03T04:06:04.000Z | 2018-11-30T19:32:51.000Z | Final Project - News Headline Generation and Validation/.ipynb_checkpoints/app-checkpoint.ipynb | vighneshvnkt/deep-learning-assignments | 61ef33fbcd228e06a0aa3059f6a849d33234a3ab | [
"MIT"
] | 7 | 2019-01-18T16:33:31.000Z | 2021-09-11T13:27:14.000Z | 24.72449 | 110 | 0.503095 | [
[
[
"from flask import Flask, render_template, request, send_from_directory\napp = Flask(__name__)\n\ndef get_generated_title(gen_title):\n gen_title=\"\"\n gen_title=\"I am the generated title\"\n return(gen_title)\n\ndef get_cosine_similarity(cosine):\n cosine=\"\"\n cosine='50.89%'\n return(cosine)\n\[email protected]('/')\ndef landing_page():\n return ('Welcome!!!!')\n\[email protected]('/index/')\ndef index_page():\n return render_template('index.html')\n\[email protected]('/result',methods = ['POST', 'GET'])\ndef result_page():\n cosine=\"\"\n gen_title=\"\"\n if request.method == 'POST':\n result = request.form\n #cosine='99%'\n cosine = get_cosine_similarity(cosine)\n gen_title = get_generated_title(gen_title)\n return render_template(\"result.html\",result = result, cosine=cosine, gen_title = gen_title)\n\n\nif __name__ == '__main__':\n app.run()",
" * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)\n127.0.0.1 - - [22/Apr/2018 05:48:24] \"GET / HTTP/1.1\" 200 -\n127.0.0.1 - - [22/Apr/2018 05:48:28] \"GET /index/ HTTP/1.1\" 200 -\n127.0.0.1 - - [22/Apr/2018 05:48:35] \"POST /result HTTP/1.1\" 200 -\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d04f8ae49687c4338160b2431e8fac3e9092e023 | 8,884 | ipynb | Jupyter Notebook | Practicas/.ipynb_checkpoints/Practica 5 - Modelado de Robots-checkpoint.ipynb | robblack007/clase-dinamica-robot | f38cb358f2681e9c0dce979acbdcd81bf63bd59c | [
"MIT"
] | null | null | null | Practicas/.ipynb_checkpoints/Practica 5 - Modelado de Robots-checkpoint.ipynb | robblack007/clase-dinamica-robot | f38cb358f2681e9c0dce979acbdcd81bf63bd59c | [
"MIT"
] | 1 | 2016-01-26T18:33:11.000Z | 2016-05-30T23:58:07.000Z | Practicas/.ipynb_checkpoints/Practica 5 - Modelado de Robots-checkpoint.ipynb | robblack007/clase-dinamica-robot | f38cb358f2681e9c0dce979acbdcd81bf63bd59c | [
"MIT"
] | null | null | null | 27.251534 | 419 | 0.545475 | [
[
[
"# Modelado de Robots",
"_____no_output_____"
],
[
"Recordando la práctica anterior, tenemos que la ecuación diferencial que caracteriza a un sistema masa-resorte-amoritguador es:\n\n$$\nm \\ddot{x} + c \\dot{x} + k x = F\n$$\n\ny revisamos 3 maneras de obtener el comportamiento de ese sistema, sin embargo nos interesa saber el comportamiento de un sistema mas complejo, un robot; empezaremos con un pendulo simple, el cual tiene la siguiente ecuación de movimiento:\n\n$$\nm l^2 \\ddot{q} + m g l \\cos{q} = \\tau\n$$\n\nComo podemos ver, son similares en el sentido de que involucran una sola variable, sin embargo, en la segunda ecuación, nuestra variable esta involucrada adentro de una función no lineal ($\\cos{q}$), por lo que nuestra ecuación diferencial es no lineal, y por lo tanto _no_ podemos usar el formalismo de función de transferencia para resolverla; tenemos que usar la función ```odeint``` para poder resolverla.\n\nComo es de segundo grado, tenemos que dividir nuestra ecuación diferencial en dos mas simples, por lo tanto usaremos el siguiente truco:\n\n$$\n\\frac{d}{dt} q = \\dot{q}\n$$\n\nentonces, tenemos dos ecuaciones diferenciales, por lo que podemos resolver dos incognitas $q$ y $\\dot{q}$.\n\nUtilizando nuestros conocimientos de algebra lineal, podemos acomodar nuestro sistema de ecuaciones en una matriz, de tal manera que si antes teniamos que:\n\n$$\n\\begin{align}\n\\frac{d}{dt} q &= \\dot{q} \\\\\n\\frac{d}{dt} \\dot{q} &= \\ddot{q} = \\frac{\\tau - m g l \\cos{q}}{ml^2}\n\\end{align}\n$$\n\nPor lo que podemos ver que nuestro sistema de ecuaciones tiene un estado mas grande que antes; la ecuación diferencial que teniamos como no lineal, de segundo orden, podemos escribirla como no lineal, de primer orden siempre y cuando nuestro estado sea mas grande.\n\nDefinamos a lo que nos referimos con estado:\n\n$$\nx =\n\\begin{pmatrix}\nq \\\\\n\\dot{q}\n\\end{pmatrix}\n$$\n\ncon esta definición de estado, podemos escribir el sistema de ecuaciónes de arriba como:\n\n$$\n\\frac{d}{dt} x = \\dot{x} = \\frac{d}{dt}\n\\begin{pmatrix}\nq \\\\\n\\dot{q}\n\\end{pmatrix} =\n\\begin{pmatrix}\n\\dot{q} \\\\\n\\frac{\\tau - m g l \\cos{q}}{ml^2}\n\\end{pmatrix}\n$$\n\no bien $\\dot{x} = f(x)$, en donde $f(x)$ es una función vectorial, o bien, un vector de funciones:\n\n$$\nf(x) =\n\\begin{pmatrix}\n\\dot{q} \\\\\n\\frac{\\tau - m g l \\cos{q}}{ml^2}\n\\end{pmatrix}\n$$\n\nPor lo que ya estamos listos para simular este sistema mecánico, con la ayuda de ```odeint()```; empecemos importando laas librerias necesarias:",
"_____no_output_____"
]
],
[
[
"from scipy.integrate import odeint",
"_____no_output_____"
],
[
"from numpy import linspace",
"_____no_output_____"
]
],
[
[
"y definiendo una función que devuelva un arreglo con los valores de $f(x)$",
"_____no_output_____"
]
],
[
[
"def f(x, t):\n from numpy import cos\n q, q̇ = x\n τ = 0\n m = 1\n g = 9.81\n l = 1\n return [q̇, τ - m*g*l*cos(q)/(m*l**2)]",
"_____no_output_____"
]
],
[
[
"Vamos a simular desde el tiempo $0$, hasta $10$, y las condiciones iniciales del pendulo son $q=0$ y $\\dot{q} = 0$.",
"_____no_output_____"
]
],
[
[
"ts = linspace(0, 10, 100)\nx0 = [0, 0]",
"_____no_output_____"
]
],
[
[
"Utilizamos la función ```odeint``` para simular el comportamiento del pendulo, dandole la función que programamos con la dinámica de $f(x)$ y sacamos los valores de $q$ y $\\dot{q}$ que nos devolvió ```odeint``` envueltos en el estado $x$",
"_____no_output_____"
]
],
[
[
"xs = odeint(func = f, y0 = x0, t = ts)\nqs, q̇s = list(zip(*xs.tolist()))",
"_____no_output_____"
]
],
[
[
"En este punto ya tenemos nuestros datos de la simulación, tan solo queda graficarlos para interpretar los resultados:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom matplotlib.pyplot import style, plot, figure\nstyle.use(\"ggplot\")",
"_____no_output_____"
],
[
"fig1 = figure(figsize = (8, 8))\n\nax1 = fig1.gca()\n\nax1.plot(xs);",
"_____no_output_____"
],
[
"fig2 = figure(figsize = (8, 8))\n\nax2 = fig2.gca()\n\nax2.plot(qs)\nax2.plot(q̇s);",
"_____no_output_____"
]
],
[
[
"Pero las gráficas de trayectoria son aburridas, recordemos que podemos hacer una animación con matplotlib:",
"_____no_output_____"
]
],
[
[
"from matplotlib import animation\nfrom numpy import sin, cos, arange",
"_____no_output_____"
],
[
"# Se define el tamaño de la figura\nfig = figure(figsize=(8, 8))\n\n# Se define una sola grafica en la figura y se dan los limites de los ejes x y y\naxi = fig.add_subplot(111, autoscale_on=False, xlim=(-1.5, 1.5), ylim=(-2, 1))\n\n# Se utilizan graficas de linea para el eslabon del pendulo\nlinea, = axi.plot([], [], \"-o\", lw=2, color='gray')\n\ndef init():\n # Esta funcion se ejecuta una sola vez y sirve para inicializar el sistema\n linea.set_data([], [])\n return linea\n\ndef animate(i):\n # Esta funcion se ejecuta para cada cuadro del GIF\n \n # Se obtienen las coordenadas x y y para el eslabon\n xs, ys = [[0, cos(qs[i])], [0, sin(qs[i])]]\n linea.set_data(xs, ys)\n \n return linea\n\n# Se hace la animacion dandole el nombre de la figura definida al principio, la funcion que\n# se debe ejecutar para cada cuadro, el numero de cuadros que se debe de hacer, el periodo \n# de cada cuadro y la funcion inicial\nani = animation.FuncAnimation(fig, animate, arange(1, len(qs)), interval=25,\n blit=True, init_func=init)\n\n# Se guarda el GIF en el archivo indicado\nani.save('./imagenes/pendulo-simple.gif', writer='imagemagick');",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"# Problemas",
"_____no_output_____"
],
[
"1. Realiza una gráfica de trayectoria y una animación de un pendulo doble.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d04f9aec6cb4167dfce60dfadc14d8eab830394a | 204,666 | ipynb | Jupyter Notebook | source/prophet_Yang.ipynb | 2020NIA/roundabout_project | 51be03fef15af4e3de4df75af4d096cf5dc42894 | [
"MIT"
] | 1 | 2020-08-10T00:52:14.000Z | 2020-08-10T00:52:14.000Z | source/.ipynb_checkpoints/prophet_Yang-checkpoint.ipynb | 2020NIA/roundabout_project | 51be03fef15af4e3de4df75af4d096cf5dc42894 | [
"MIT"
] | null | null | null | source/.ipynb_checkpoints/prophet_Yang-checkpoint.ipynb | 2020NIA/roundabout_project | 51be03fef15af4e3de4df75af4d096cf5dc42894 | [
"MIT"
] | 1 | 2020-08-18T01:33:53.000Z | 2020-08-18T01:33:53.000Z | 154.348416 | 85,656 | 0.838493 | [
[
[
"from fbprophet import Prophet\nimport pandas as pd\nimport numpy as np\nimport time\n\ndf = pd.read_csv(\"/Users/yangdongjae/Desktop/2020/대외활동/2020년 공공 빅데이터 청년 인턴십/실무형 프로젝트/Data/Core_Data_교차로별 사고현황.csv\")",
"_____no_output_____"
],
[
"df['발생년월일시'] = df['발생년월일시'].astype(str)\ndf['발생년월일시'] = df['발생년월일시'].str[:-2]",
"_____no_output_____"
],
[
"df_sample = df[['발생년월일시' , '사망자수']]\ndf_sample = df_sample.rename({'발생년월일시':'ds' , '사망자수':'y'}, axis = 'columns')\n\ndf_sample",
"_____no_output_____"
],
[
"m = Prophet(changepoint_range = 0.9)\nm.fit(df_sample)",
"INFO:numexpr.utils:Note: NumExpr detected 12 cores but \"NUMEXPR_MAX_THREADS\" not set, so enforcing safe limit of 8.\nINFO:numexpr.utils:NumExpr defaulting to 8 threads.\nINFO:fbprophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.\n"
],
[
"future = m.make_future_dataframe(periods = 365)\nfuture.tail()",
"_____no_output_____"
],
[
"forecast = m.predict(future)\nforecast.tail()",
"_____no_output_____"
],
[
"forecast.tail()",
"_____no_output_____"
],
[
"forecast[['ds','yhat','yhat_lower','yhat_upper']].tail(60)",
"_____no_output_____"
],
[
"fig1 = m.plot(forecast)",
"_____no_output_____"
],
[
"fig2 = m.plot_components(forecast)",
"_____no_output_____"
],
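[
"# Hedged diagnostics sketch (added, not in the original notebook): fbprophet\n# ships a cross_validation helper that refits the model on rolling cutoffs.\n# The window sizes below are illustrative assumptions and should be tuned to\n# the span of the accident data; this step can be slow.\nfrom fbprophet.diagnostics import cross_validation, performance_metrics\n\ndf_cv = cross_validation(m, initial='730 days', period='180 days', horizon='365 days')\nperformance_metrics(df_cv).head()",
"_____no_output_____"
],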
[
"from fbprophet.plot import add_changepoints_to_plot\n\nfig = m.plot(forecast)\na = add_changepoints_to_plot(fig.gca(), m, forecast)",
"_____no_output_____"
],
[
"forecast = Prophet(interval_width = 0.95).fit(df_sample).predict(future)",
"INFO:fbprophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.\n"
],
[
"m = Prophet(mcmc_samples = 300)\nforecast = m.fit(df_sample).predict(future)\nfig = m.plot_components(forecast)",
"INFO:fbprophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04fac856ed11499098e55c5d7ac0f73f5e75da3 | 410,537 | ipynb | Jupyter Notebook | biorxiv/time_to_publication/3.0-mjt-kaplan-meier-plots.ipynb | danich1/annorxiver | 8fab17e1c3ebce7b9e3fc54ea64585b37d9b3825 | [
"CC0-1.0",
"BSD-3-Clause"
] | 4 | 2020-05-13T23:44:57.000Z | 2021-07-04T23:51:46.000Z | biorxiv/time_to_publication/3.0-mjt-kaplan-meier-plots.ipynb | danich1/annorxiver | 8fab17e1c3ebce7b9e3fc54ea64585b37d9b3825 | [
"CC0-1.0",
"BSD-3-Clause"
] | 23 | 2020-03-23T18:35:25.000Z | 2021-09-21T21:14:20.000Z | biorxiv/time_to_publication/3.0-mjt-kaplan-meier-plots.ipynb | danich1/annorxiver | 8fab17e1c3ebce7b9e3fc54ea64585b37d9b3825 | [
"CC0-1.0",
"BSD-3-Clause"
] | 3 | 2020-01-31T18:27:55.000Z | 2020-05-29T20:26:22.000Z | 195.960382 | 121,492 | 0.855073 | [
[
[
"import pandas as pd\nfrom lifelines import KaplanMeierFitter\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"preprints_df = pd.read_csv(\"output/biorxiv_article_metadata.tsv\", sep=\"\\t\",)",
"_____no_output_____"
],
[
"preprints_df[\"date_received\"] = pd.to_datetime(preprints_df[\"date_received\"])",
"_____no_output_____"
],
[
"xml_df = (\n preprints_df.sort_values(by=\"date_received\")\n .dropna(subset=[\"date_received\"])\n .groupby(\"doi\")\n .first()\n)",
"_____no_output_____"
],
[
"api_df = pd.read_csv(\"output/biorxiv_published_api_data.tsv\", sep=\"\\t\")",
"_____no_output_____"
],
[
"api_df[api_df[\"published_date\"].str.contains(\":\")]",
"_____no_output_____"
],
[
"index = api_df[api_df[\"published_date\"].str.contains(\":\")].index\napi_df.loc[index, \"published_date\"] = (\n api_df.loc[index, \"published_date\"].str.split(\":\").str[0]\n)",
"_____no_output_____"
],
[
"for col in [\"preprint_date\", \"published_date\"]:\n api_df[col] = pd.to_datetime(api_df[col])",
"_____no_output_____"
],
[
"api_df.set_index(\"biorxiv_doi\")",
"_____no_output_____"
],
[
"merged_df = pd.merge(\n xml_df,\n api_df.set_index(\"biorxiv_doi\"),\n left_index=True,\n right_index=True,\n how=\"outer\",\n)",
"_____no_output_____"
],
[
"merged_df",
"_____no_output_____"
],
[
"merged_df[\"document\"].isna().sum()",
"_____no_output_____"
],
[
"merged_df[\"published_doi\"].isna().sum()",
"_____no_output_____"
],
[
"len(merged_df)",
"_____no_output_____"
],
[
"# lets ignore papers we don't have xmls for\nmerged_df = pd.merge(\n xml_df,\n api_df.set_index(\"biorxiv_doi\"),\n left_index=True,\n right_index=True,\n how=\"left\",\n)",
"_____no_output_____"
],
[
"merged_df[\"published\"] = ~merged_df[\"published_doi\"].isna()",
"_____no_output_____"
],
[
"# I should change this to when the data was pulled, but I didn't record that for now :(\nmerged_df.loc[merged_df[\"published\"], \"observation_date\"] = merged_df.loc[\n merged_df[\"published\"], \"published_date\"\n]\nmerged_df.loc[~merged_df[\"published\"], \"observation_date\"] = pd.datetime.today()",
"/home/thielk/envs/misc/lib/python3.6/site-packages/ipykernel_launcher.py:5: FutureWarning: The pandas.datetime class is deprecated and will be removed from pandas in a future version. Import from datetime instead.\n \"\"\"\n"
],
[
"merged_df[\"observation_duration\"] = (\n merged_df[\"observation_date\"] - merged_df[\"date_received\"]\n)",
"_____no_output_____"
],
[
"(merged_df[\"observation_duration\"] < pd.Timedelta(0)).sum()",
"_____no_output_____"
],
[
"merged_df = merged_df[merged_df[\"observation_duration\"] > pd.Timedelta(0)]",
"_____no_output_____"
],
[
"ax = sns.distplot(\n merged_df[\"observation_duration\"].dt.total_seconds() / 60 / 60 / 24 / 365\n)",
"_____no_output_____"
],
[
"kmf = KaplanMeierFitter()",
"_____no_output_____"
],
[
"kmf.fit(\n merged_df[\"observation_duration\"].dt.total_seconds() / 60 / 60 / 24 / 365,\n event_observed=merged_df[\"published\"],\n)\nax = kmf.plot(label=\"all papers\", logx=True)\n_ = ax.set_ylabel(\"proportion of unpublished biorxiv papers\")\n_ = ax.set_xlabel(\"timeline (years)\")\n_ = ax.set_ylim(0, 1)",
"_____no_output_____"
],
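[
"# Added sketch (not in the original notebook): lifelines exposes the median\n# time-to-publication directly, assuming a version that provides the\n# median_survival_time_ attribute; with heavy censoring it can be inf.\nprint(kmf.median_survival_time_)  # in years, matching the fit above",
"_____no_output_____"
],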
[
"f = plt.figure(figsize=(10, 8))\n\nax = None\nfor category, cat_group in merged_df.groupby(\"category\"):\n kmf.fit(\n cat_group[\"observation_duration\"].dt.total_seconds() / 60 / 60 / 24 / 365,\n event_observed=cat_group[\"published\"],\n )\n ax = kmf.plot(label=category, ax=ax, ci_show=False, logx=True)\n\n# Shrink current axis by 20%\nbox = ax.get_position()\nax.set_position([box.x0, box.y0, box.width * 0.8, box.height])\n\n# Put a legend to the right of the current axis\n_ = ax.legend(loc=\"center left\", bbox_to_anchor=(1, 0.5), title=\"Biorxiv category\")\n\n_ = ax.set_ylabel(\"proportion of unpublished biorxiv papers\")\n_ = ax.set_xlabel(\"timeline (years)\")\n_ = ax.set_ylim(0, 1)",
"_____no_output_____"
],
[
"merged_df[\"doi_prefix\"] = merged_df[\"published_doi\"].str.split(\"/\").str[0]",
"_____no_output_____"
],
[
"%%time\nf = plt.figure(figsize=(10, 8))\n\nax = None\nfor category, cat_group in merged_df.groupby(\"doi_prefix\"):\n if len(cat_group) > 100:\n kmf.fit(\n cat_group[\"observation_duration\"].dt.total_seconds() / 60 / 60 / 24 / 365,\n event_observed=cat_group[\"published\"],\n )\n ax = kmf.plot(label=category, ax=ax, ci_show=False, logx=True)\n\n# Shrink current axis by 20%\nbox = ax.get_position()\nax.set_position([box.x0, box.y0, box.width * 0.8, box.height])\n\n# Put a legend to the right of the current axis\n_ = ax.legend(loc=\"center left\", bbox_to_anchor=(1, 0.5), title=\"DOI prefix\")\n\n_ = ax.set_ylabel(\"proportion of unpublished biorxiv papers\")\n_ = ax.set_xlabel(\"timeline (years)\")\n_ = ax.set_ylim(0, 1)",
"CPU times: user 1.62 s, sys: 12.5 ms, total: 1.63 s\nWall time: 1.63 s\n"
],
[
"%%time\ndoi_prefix_df = merged_df.groupby(\"doi_prefix\").apply(\n lambda cat_group: pd.Series(\n {\n \"count\": len(cat_group),\n \"80th_percentile\": kmf.fit(\n cat_group[\"observation_duration\"].dt.total_seconds() / 60 / 60 / 24,\n event_observed=cat_group[\"published\"],\n ).percentile(0.8),\n }\n )\n)",
"/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. 
To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n/home/thielk/envs/misc/lib/python3.6/site-packages/lifelines/fitters/__init__.py:277: ApproximationWarning: Approximating using `survival_function_`. To increase accuracy, try using or increasing the resolution of the timeline kwarg in `.fit(..., timeline=timeline)`.\n\n exceptions.ApproximationWarning,\n"
],
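[
"# Sketch (added note, not from the original run): the repeated lifelines\n# ApproximationWarning above can be silenced by matching on the message text,\n# without importing lifelines internals.\nimport warnings\nwarnings.filterwarnings('ignore', message='Approximating using')",
"_____no_output_____"
],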
[
"doi_prefix_df[doi_prefix_df[\"count\"] > 50].sort_values(\"80th_percentile\").head()",
"_____no_output_____"
]
],
[
[
"F1000 Research Ltd <== 10.12688\n\nMDPI AG <== 10.3390 - wikipedia notes questionable quality of peer-review",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d04fb7bfe35f0c30b65997bfebcec46a18d473cc | 1,041,767 | ipynb | Jupyter Notebook | scripts/ratios-lefse.ipynb | amnona/paper-metaanalysis | 15ae52be6321819096d98c70b64c8f7b6dfafcfd | [
"MIT"
] | null | null | null | scripts/ratios-lefse.ipynb | amnona/paper-metaanalysis | 15ae52be6321819096d98c70b64c8f7b6dfafcfd | [
"MIT"
] | 1 | 2021-05-23T12:18:23.000Z | 2021-06-01T05:56:09.000Z | scripts/ratios-lefse.ipynb | amnona/paper-metaanalysis | 15ae52be6321819096d98c70b64c8f7b6dfafcfd | [
"MIT"
] | 1 | 2021-06-15T08:47:23.000Z | 2021-06-15T08:47:23.000Z | 490.705134 | 82,708 | 0.940522 | [
[
[
"import calour as ca\nimport calour_utils as cu",
"/home/amnon/miniconda3/envs/calour/lib/python3.6/site-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n"
],
[
"import numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport glob\nimport os\nimport pandas as pd\nimport shutil",
"_____no_output_____"
],
[
"ca.set_log_level('INFO')",
"_____no_output_____"
],
[
"%matplotlib inline",
"_____no_output_____"
],
[
"pwd",
"_____no_output_____"
]
],
[
[
"# Load the data\n### Without the known blooming bacteria (from American Gut paper)",
"_____no_output_____"
]
],
[
[
"ca.set_log_level('ERROR')\nratios=ca.read_amplicon('../lefse_ratios/ratios.biom','../studies/index.csv',\n feature_metadata_file='../taxonomy/DB1-15_taxonomy_svs_numbers.tsv',normalize=None, min_reads=None)\nca.set_log_level('INFO')",
"_____no_output_____"
],
[
"ratios.sparse = False\nratios",
"_____no_output_____"
],
[
"np.sum(np.sum(ratios.data==0,axis=0)>30)",
"_____no_output_____"
],
[
"ratios.feature_metadata['keep']=(np.sum(ratios.data==0,axis=0)<=30)",
"_____no_output_____"
],
[
"ratios=ratios.filter_by_metadata('keep',[True],axis='f')",
"_____no_output_____"
]
],
[
[
"## Fix taxonomy and filter chloroplast/mitochondria",
"_____no_output_____"
]
],
[
[
"ratios.feature_metadata['taxonomy'] = ratios.feature_metadata.Taxon",
"_____no_output_____"
],
[
"ratios.feature_metadata['taxonomy'].fillna('NA',inplace=True)",
"_____no_output_____"
],
[
"ratios = ratios.filter_by_taxonomy(['chloroplast','cyanobacteria','mitochondria'],negate=True)",
"2022-01-05 19:08:47 INFO 928 features remain.\n"
],
[
"disease_colors = {}\ndisease_colors = {xx: (0,0,0) for xx in ratios.sample_metadata.disease.unique()}\ndisease_colors.update({'HIV': (1.00,0.93,0.35),'Autism': (0.50,0.99,0.52),'Bipolar': (1.00, 0.63, 0.00),\n 'IBD_Crohn disease': (0.72,0.11,0.11),'IBD_Ulcerative Colitis': (0.043,1,0.97),\n 'IBD_Inflammtory bowel disease': (0.90,0.59,0.043),\n 'Diabetes T2': (0.47,0.53,0.80),\n 'Depression': (0.48,0.12,0.64),\n 'Obesity': (0.25,0.32,0.71),\n 'Parkinson': (0.29,0.08,0.55),\n 'Schizophrenia': (0.88,0.75,0.91), \n 'Gastroenteritis': (0.94,0.33,0.31),\n 'Heart diseases': (0.33,0.43,1.00),\n 'Irritable bowel syndrom': (0.90,0.45,0.45),\n 'Alzheimer': (0.83, 0.83, 0.83), 'Anorexia': (0.83, 0.83, 0.83), 'Cancer': (0.83, 0.83, 0.83), 'Autoimmun diseases': (0.83, 0.83, 0.83), 'C.difficile infection': (0.83, 0.83, 0.83), \n 'Cancer': (0.83, 0.83, 0.83), 'Chronic fatigue syndrome': (0.83, 0.83, 0.83), 'Diabetes T1': (0.83, 0.83, 0.83), 'Gout': (0.83, 0.83, 0.83),\n 'Hepatitis B': (0.83, 0.83, 0.83), 'Hepatitis C': (0.83, 0.83, 0.83), 'Hypertension': (0.83, 0.83, 0.83), \n 'Lupus': (0.83, 0.83, 0.83), 'Pancreatitis': (0.83, 0.83, 0.83), 'Psoriasis': (0.83, 0.83, 0.83), 'Rheumatoid arthritis': (0.83, 0.83, 0.83), \n \n })",
"_____no_output_____"
]
],
[
[
"### creat a chart pie for diseases",
"_____no_output_____"
]
],
[
[
"ratios.sample_metadata['pie_disease']=ratios.sample_metadata.disease.copy()\nratios.sample_metadata.pie_disease.replace('Gout','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Irritable bowel syndrom','IBS',inplace=True)\nratios.sample_metadata.pie_disease.replace('Hepatitis B','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('IBD_Crohn disease','IBD',inplace=True)\nratios.sample_metadata.pie_disease.replace('IBD_Ulcerative Colitis','IBD',inplace=True)\nratios.sample_metadata.pie_disease.replace('IBD_Inflammtory bowel disease','IBD',inplace=True)\nratios.sample_metadata.pie_disease.replace('Alzheimer','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Anorexia','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Autoimmun diseases','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Cancer','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('C.difficile infection','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Diabetes T1','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Hypertension','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Chronic fatigue syndrome','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Gout','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('C.difficile infection','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Gout','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Lupus','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Pancreatitis','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Psoriasis','Other',inplace=True)\nratios.sample_metadata.pie_disease.replace('Rheumatoid arthritis','Other',inplace=True)\n\npass",
"_____no_output_____"
],
[
"disease_colors.update({'HIV': (1.00,0.93,0.35),'Autism': (0.50,0.99,0.52),\n 'Bipolar': (1.00, 0.63, 0.00),\n 'IBD': (0.72,0.11,0.11), \n 'Diabetes T2': (0.47,0.53,0.80),\n 'Depression': (0.48,0.12,0.64),\n 'Obesity': (0.25,0.32,0.71),\n 'Parkinson’s': (0.29,0.08,0.55),\n 'Schizophrenia': (0.88,0.75,0.91), \n 'Gastroenteritis': (0.94,0.33,0.31),\n 'Heart diseases': (0.33,0.43,1.00),\n 'IBS': (0.90,0.45,0.45), \n 'Other': (0.83, 0.83, 0.83)}) ",
"_____no_output_____"
],
[
"plt.figure()\npp=plt.pie(ratios.sample_metadata.pie_disease.value_counts(),textprops={'fontsize': 7}, labels=ratios.sample_metadata.pie_disease.unique(), labeldistance=0.5, rotatelabels=True)\nfor pie_wedge in pp[0]:\n pie_wedge.set_edgecolor('white')\n pie_wedge.set_facecolor(disease_colors[pie_wedge.get_label()])\n",
"_____no_output_____"
]
],
[
[
"### Prepare the colormap for the heatmaps\nWe want coolwarm, with white for exact 0s (which mean not present)",
"_____no_output_____"
]
],
[
[
"current_cmap = mpl.cm.get_cmap('coolwarm')\ncurrent_cmap.set_bad(color='red')\nncm = current_cmap(np.linspace(0,1,1000000))\nncm[500000]=(1,1,1,1)\nncm=mpl.colors.ListedColormap(ncm)",
"_____no_output_____"
]
],
[
[
"# Look at the data",
"_____no_output_____"
]
],
[
[
"ratios.feature_metadata",
"_____no_output_____"
],
[
"ratios.plot(gui='cli',norm=None,cmap=ncm ,clim=[-0.5,0.5], bad_color='w')",
"_____no_output_____"
],
[
"ratios.plot(gui='cli',norm=None,cmap=ncm ,clim=[-1,1], bad_color='w')",
"_____no_output_____"
],
[
"ratios=ratios.sort_abundance(key=np.mean)",
"_____no_output_____"
],
[
"ratios.plot(gui='cli',norm=None,cmap=ncm ,clim=[-1,1], bad_color='w')",
"_____no_output_____"
],
[
"# cu.splot(ratios,'disease',norm=None,cmap=ncm,clim=[-0.5,0.5],xticks_max=None)",
"_____no_output_____"
]
],
[
[
"\n# Plot all bacteria",
"_____no_output_____"
],
[
"## aggregate all samples by disease so CD/UC count as 1",
"_____no_output_____"
]
],
[
[
"ratios_agg=ratios.aggregate_by_metadata('disease',agg='mean')\nratios_agg",
"_____no_output_____"
],
[
"# cu.splot(ratios_agg,'disease',norm=None,cmap=ncm,clim=[-0.25,0.25],xticks_max=None)",
"_____no_output_____"
],
[
"ratios_agg.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-0.25,0.25],xticks_max=None)",
"_____no_output_____"
],
[
"ratios",
"_____no_output_____"
],
[
"np.sum(ratios_agg.data[:]>0)",
"_____no_output_____"
],
[
"np.sum(ratios_agg.data[:]<0)",
"_____no_output_____"
],
[
"np.sum(ratios_agg.data[:]==0)",
"_____no_output_____"
]
],
[
[
"## Sort by mean abundance over all disease\nWith 1 sample per disease (aggregation by mean)",
"_____no_output_____"
]
],
[
[
"ratios_agg=ratios_agg.sort_abundance(key=np.mean)",
"_____no_output_____"
],
[
"# cu.splot(ratios_agg,'disease',norm=None,cmap=ncm,clim=[-0.25,0.25],xticks_max=None)",
"_____no_output_____"
],
[
"allbact = ratios.filter_ids(ratios_agg.feature_metadata.index)\nallbact = allbact.sort_samples('disease')",
"_____no_output_____"
],
[
"allbact",
"_____no_output_____"
],
[
"f=allbact.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-1,1],xticks_max=None,xticklabel_len=None,\n xticklabel_kwargs={'size':8, 'rotation':90}, barx_fields=['disease'],barx_label=False,barx_colors=disease_colors)",
"_____no_output_____"
],
[
"f.save_figure('../figures/sup-heatmap-allbact-lefse.pdf')",
"_____no_output_____"
]
],
[
[
"# Plot the non-specific bacteria\nUsing the binomial sign test (only on experiments where the bacteria is present), with at least 4 experiments per bacteria. FDR=0.1\n\nThe test is done on 1 aggregated sample per disease to prevent bias by disease with many studies",
"_____no_output_____"
]
],
[
[
"np.random.seed(2020)\nnonspecific_agg=cu.get_sign_pvals(ratios_agg,alpha=0.25,min_present=4)",
"keeping 928 features with enough ratios\nfound 55 significant\n"
],
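[
"# A conceptual sketch of the binomial sign test used above -- this is NOT the\n# actual cu.get_sign_pvals implementation, just an illustration of the idea:\n# for each bacterium, count in how many diseases its mean ratio is positive\n# vs. negative (ignoring zeros) and test the counts against p=0.5.\nfrom scipy.stats import binom_test\n\ndef sign_test_sketch(exp, min_present=4):\n    pvals = {}\n    for i, fid in enumerate(exp.feature_metadata.index):\n        vals = exp.data[:, i]\n        vals = vals[vals != 0]\n        if len(vals) < min_present:\n            continue\n        pvals[fid] = binom_test(int(np.sum(vals > 0)), len(vals), 0.5)\n    return pvals",
"_____no_output_____"
],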
[
"nonspecific = ratios.filter_ids(nonspecific_agg.feature_metadata.index)\nnonspecific = nonspecific.sort_samples('disease')",
"_____no_output_____"
],
[
"nonspecific.feature_metadata = nonspecific.feature_metadata.join(nonspecific_agg.feature_metadata,lsuffix='',rsuffix='_agg')",
"_____no_output_____"
],
[
"nonspecific.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-0.5,0.5],xticks_max=None,xticklabel_len=None)",
"_____no_output_____"
],
[
"cu.splot(nonspecific,'disease',norm=None,cmap=ncm,clim=[-1,1],xticks_max=None,xticklabel_len=None)",
"_____no_output_____"
],
[
"f=nonspecific.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-1,1],xticks_max=None,xticklabel_len=None,\n xticklabel_kwargs={'size':8, 'rotation':90},barx_fields=['disease'],barx_label=False,barx_colors=disease_colors)\n",
"_____no_output_____"
],
[
"f.save_figure('../figures/sup-heatmap-nonspecific-lefse.pdf')",
"_____no_output_____"
]
],
[
[
"### Save the non-secific bacteria",
"_____no_output_____"
]
],
[
[
"nonspecific_agg.save('../lefse_ratios/nonspecific/nonspecific')",
"_____no_output_____"
],
[
"nonspecific_agg.save_fasta('../lefse_ratios/nonspecific/nonspecific.fa',header='seq')",
"_____no_output_____"
],
[
"nonspecific.save('../lefse_ratios/nonspecific/nonspecific_all',fmt='txt')",
"2022-01-05 19:09:04 WARNING .txt format does not support taxonomy information in save. Saving without taxonomy.\n"
]
],
[
[
"### Also save only the ones going up or down",
"_____no_output_____"
]
],
[
[
"nsup_ids=nonspecific_agg.feature_metadata[nonspecific_agg.feature_metadata.esize > 0]\nnsdown_ids=nonspecific_agg.feature_metadata[nonspecific_agg.feature_metadata.esize < 0]",
"_____no_output_____"
],
[
"len(nsup_ids)",
"_____no_output_____"
],
[
"len(nsdown_ids)",
"_____no_output_____"
],
[
"nsup = nonspecific.filter_ids(nsup_ids.index)\nnsup.save('../lefse_ratios/nonspecific/nonspecific-up')",
"_____no_output_____"
],
[
"nsdown = nonspecific.filter_ids(nsdown_ids.index)\nnsdown.save('../lefse_ratios/nonspecific/nonspecific-down')",
"_____no_output_____"
]
],
[
[
"## how many higher/lower in non-specific",
"_____no_output_____"
]
],
[
[
"np.sum(nonspecific_agg.feature_metadata.esize<0)",
"_____no_output_____"
],
[
"np.sum(nonspecific_agg.feature_metadata.esize>0)",
"_____no_output_____"
]
],
[
[
"## Get the enriched dbBact terms",
"_____no_output_____"
]
],
[
[
"nonspecific_agg.feature_metadata['_calour_stat'] = nonspecific_agg.feature_metadata['esize']\nnonspecific_agg.feature_metadata['_calour_direction'] = 'down'\nnonspecific_agg.feature_metadata.loc[nonspecific_agg.feature_metadata['esize']>0,'_calour_direction']='up'",
"_____no_output_____"
],
[
"f,dterms = nonspecific_agg.plot_diff_abundance_enrichment()",
"2022-01-05 19:09:04 INFO Getting dbBact annotations for 55 sequences, please wait...\n2022-01-05 19:09:06 INFO got 2322 annotations\n2022-01-05 19:09:06 INFO Got 9034 annotation-sequence pairs\n2022-01-05 19:09:06 INFO Added annotation data to experiment. Total 2322 annotations, 55 terms\n"
],
[
"f.figure.savefig('../figures/sup-nonspecific-dbbact-terms-lefse.pdf')",
"_____no_output_____"
]
],
[
[
"### Draw the dbbact term wordcloud for the non-specific bacteria",
"_____no_output_____"
]
],
[
[
"dbbact=ca.database._get_database_class('dbbact')",
"_____no_output_____"
],
[
"f=dbbact.draw_wordcloud(nonspecific)",
"2022-01-05 19:09:09 INFO Getting dbBact annotations for 55 sequences, please wait...\n2022-01-05 19:09:10 INFO got 2322 annotations\n2022-01-05 19:09:10 INFO Got 9034 annotation-sequence pairs\n2022-01-05 19:09:10 INFO Added annotation data to experiment. Total 2322 annotations, 55 terms\n"
],
[
"f.savefig('../figures/sup-wordcloud-nonspecific-lefse.pdf')",
"_____no_output_____"
],
[
"f=dbbact.draw_wordcloud(nsup)",
"2022-01-05 19:09:14 INFO Getting dbBact annotations for 14 sequences, please wait...\n2022-01-05 19:09:16 INFO got 1859 annotations\n2022-01-05 19:09:16 INFO Got 2978 annotation-sequence pairs\n2022-01-05 19:09:16 INFO Added annotation data to experiment. Total 1859 annotations, 14 terms\n"
],
[
"f.savefig('../figures/sup-wordcloud-nonspecific-up-lefse.pdf')",
"_____no_output_____"
],
[
"f=dbbact.draw_wordcloud(nsdown)",
"2022-01-05 19:09:20 INFO Getting dbBact annotations for 41 sequences, please wait...\n2022-01-05 19:09:21 INFO got 1228 annotations\n2022-01-05 19:09:21 INFO Got 6056 annotation-sequence pairs\n2022-01-05 19:09:21 INFO Added annotation data to experiment. Total 1228 annotations, 41 terms\n"
],
[
"f.savefig('../figures/sup-wordcloud-nonspecific-down-lefse.pdf')",
"_____no_output_____"
]
],
[
[
"# IBD specific",
"_____no_output_____"
]
],
[
[
"def nzdiff(data,labels):\n '''Calculate the mean difference between two groups without using 0s\n used for the calour.diff_abundance for only non-zero samples\n \n Parameters\n ----------\n data: np.array\n sample * feature(similar to calour Experiment.data)\n labels:::: np.array of 0s and 1s\n the label for each sample.\n \n Returns\n -------\n np.array\n for each feature, mean(group1:group1!=0)- mean(group2: group2!=0)\n '''\n data0=data[:,labels==0]\n data1=data[:,labels==1]\n res = np.zeros(data.shape[0])\n for i in range(data.shape[0]):\n m1=data1[i,:]\n m1=m1[m1!=0]\n if len(m1) == 0:\n continue\n m1=np.mean(m1)\n m0=data0[i,:]\n m0=m0[m0!=0]\n if len(m0) == 0:\n continue\n m0=np.mean(m0)\n res[i]= m1 - m0\n return res",
"_____no_output_____"
],
[
"def ratio_enrichment(exp, field, val1, val2=None, alpha=0.1, min_prev=3, random_seed=None, transform=None):\n '''Identify bacteria significantly enriched (i.e. ratios higher/lower) in samples with field=val1 vs. val2 (or all other samples if val2==None)\n Test is performed only on non-zero features present in at least min_prev samples in each group.\n \n Parameters\n ----------\n exp: calour.Experiment\n The experiment to test\n field: str\n Name of the field for identifying the 2 groups of samples\n val1: str or list of str\n Values of field for the first group of samples\n val2: str or list of str or None\n Values of field for the second group of samples. If None, use all samples not with val1\n alpha: float, optional\n the dsFDR threshold\n min_prev: int, optional\n use only bacteria present in at least min_prev samples (not 0) in each group\n random_seed: int, optional\n transform: str or None, optional\n the data transform (from ca.diff_abundance)\n '''\n # pre filter the data to keep only features present in enough samples in both groups\n e1 = exp.filter_samples(field, val1)\n e1.sparse=False\n e1.data[e1.data!=0] = 1\n e1 = e1.filter_sum_abundance(min_prev)\n if val2 is None:\n e2 = exp.filter_samples(field, val1, negate=True)\n else:\n e2 = exp.filter_samples(field, val2)\n e2.sparse=False\n e2.data[e2.data!=0] = 1\n e2 = e2.filter_sum_abundance(min_prev)\n # keep only features present in > min_prev samples in group1 and group2\n exp = exp.filter_ids(e1.feature_metadata.index)\n exp = exp.filter_ids(e2.feature_metadata.index)\n print('%d remaining after filtering for min_prev %d' % (len(exp.feature_metadata), min_prev))\n\n # find the features significantly different between group1 and group2\n # we use the nzdiff statist\n dd=exp.diff_abundance(field,val1,val2, transform=transform,alpha=alpha,method=nzdiff,random_seed=random_seed)\n return dd",
"_____no_output_____"
]
],
[
[
"### remove the biopsies studies",
"_____no_output_____"
]
],
[
[
"ratios_no_biop = ratios.filter_samples('_sample_id',['23', '29', '49', '52'],negate=True)\nratios_no_biop",
"_____no_output_____"
]
],
[
[
"# Calculate the specific bacteria\n## without the Gevers biopsies studies",
"_____no_output_____"
]
],
[
[
"def nice_taxonomy(exp):\n '''add nice taxonomy string (only phyla+genus+species if available) for heatmap\n \n Parameters\n ----------\n exp: calour.AmpliconExperiment\n with the taxonomy in 'Taxon' field\n \n Returns\n -------\n exp: calour.AmpliconExperiment, with added feature metadata field \"nice_tax\"\n '''\n nice_tax=[]\n for cidx,crow in exp.feature_metadata.iterrows():\n ctax = crow['Taxon']\n ctax=ctax.split(';')\n new_tax = ctax[1].split('_')[-1]+'|'\n if len(ctax) > 5:\n new_tax += ctax[5].split('_')[-1]\n if len(ctax) > 6:\n if len(ctax[6])>4:\n new_tax += '|'+ctax[6].split('_')[-1]\n else:\n new_tax += ctax[-1].split('_')[-1]\n nice_tax.append(new_tax)\n newexp = exp.copy()\n newexp.feature_metadata['nice_tax'] = nice_tax\n return newexp",
"_____no_output_____"
],
[
"np.random.seed(2020)\nspecific_no_biop=ratio_enrichment(ratios_no_biop, 'disease',['IBD_Crohn disease','IBD_Ulcerative Colitis'],\n alpha=0.1, min_prev=3,random_seed=2020, transform='rankdata')",
"2022-01-05 19:09:25 WARNING Do you forget to normalize your data? It is required before running this function\n2022-01-05 19:09:25 INFO After filtering, 868 remain.\n2022-01-05 19:09:25 WARNING Do you forget to normalize your data? It is required before running this function\n2022-01-05 19:09:25 INFO After filtering, 928 remain.\n2022-01-05 19:09:25 WARNING 60 ids were not in the experiment and were dropped.\n868 remaining after filtering for min_prev 3\n2022-01-05 19:09:25 WARNING Do you forget to normalize your data? It is required before running this function\n2022-01-05 19:09:25 INFO After filtering, 333 remain.\n2022-01-05 19:09:25 INFO 10 samples with value 1 (['IBD_Crohn disease', 'IBD_Ulcerative Colitis'])\n2022-01-05 19:09:34 INFO number of higher in IBD_Crohn disease,IBD_Ulcerative Colitis: 15. number of higher in NOT IBD_Crohn disease,IBD_Ulcerative Colitis : 1. total 16\n"
],
[
"specific_no_biop.save('../lefse_ratios/ibd_specific/ibd-no-biopsies-specific')",
"_____no_output_____"
],
[
"specific_no_biop.save_fasta('../lefse_ratios/ibd_specific/ibd-no-biopsies-specific')",
"_____no_output_____"
],
[
"specific_no_biop = specific_no_biop.sort_samples('disease')",
"_____no_output_____"
],
[
"specific_no_biop = nice_taxonomy(specific_no_biop)",
"_____no_output_____"
],
[
"f=specific_no_biop.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-1,1],\n xticks_max=None,xticklabel_len=None, xticklabel_kwargs={'size':5, 'rotation':90},\n feature_field='nice_tax', yticklabel_len=None, yticklabel_kwargs={'size':5}, barx_fields=['disease'],barx_label=False,barx_colors=disease_colors)",
"_____no_output_____"
],
[
"f.figure.savefig('../figures/sup-heatmap-specific-lefse.pdf')",
"_____no_output_____"
]
],
[
[
"### draw the wordcloud for the CD/UC specific bacteria",
"_____no_output_____"
]
],
[
[
"f=dbbact.draw_wordcloud(specific_no_biop)",
"2022-01-05 19:09:36 INFO Getting dbBact annotations for 16 sequences, please wait...\n2022-01-05 19:09:37 INFO got 1642 annotations\n2022-01-05 19:09:37 INFO Got 2880 annotation-sequence pairs\n2022-01-05 19:09:37 INFO Added annotation data to experiment. Total 1642 annotations, 16 terms\n"
],
[
"f.savefig('../figures/sup-wordcloud-specific-lefse.pdf')",
"_____no_output_____"
]
],
[
[
"# Venn comparison to main analysis",
"_____no_output_____"
]
],
[
[
"import matplotlib_venn",
"_____no_output_____"
],
[
"ns_norarefaction_down = pd.read_csv('../ratios/nonspecific/nonspecific-down_feature.txt',sep='\\t')\nns_lefse_down = pd.read_csv('../lefse_ratios/nonspecific/nonspecific-down_feature.txt',sep='\\t')\n\nns_norarefaction_up = pd.read_csv('../ratios/nonspecific/nonspecific-up_feature.txt',sep='\\t')\nns_lefse_up = pd.read_csv('../lefse_ratios/nonspecific/nonspecific-up_feature.txt',sep='\\t')",
"_____no_output_____"
],
[
"f=plt.figure()\nmatplotlib_venn.venn3([set(ns_norarefaction_up['_feature_id'].values),set(ns_lefse_up['_feature_id'].values),set(ns_lefse_down['_feature_id'].values)],set_labels=['NR up','LEFSE up','LEFSE down'])\nf.savefig('../figures/sup-fig-venn-lefse-up.pdf')",
"_____no_output_____"
],
[
"f=plt.figure()\nmatplotlib_venn.venn3([set(ns_norarefaction_down['_feature_id'].values),set(ns_lefse_up['_feature_id'].values),set(ns_lefse_down['_feature_id'].values)],set_labels=['NR up','LEFSE up','LEFSE down'])\nf.savefig('../figures/sup-fig-venn-lefse-down.pdf')",
"_____no_output_____"
],
[
"spec_norarefaction = pd.read_csv('../ratios/ibd_specific/ibd-no-biopsies-specific_feature.txt',sep='\\t')\nspec_lefse = pd.read_csv('../lefse_ratios/ibd_specific/ibd-no-biopsies-specific_feature.txt',sep='\\t')\n",
"_____no_output_____"
],
[
"f=plt.figure()\nmatplotlib_venn.venn2([set(spec_norarefaction['_feature_id'].values),set(spec_lefse['_feature_id'].values)],set_labels=['NRMS','LEFSE'])\n# f.savefig('../figures/sup-fig-venn-lefse-down.pdf')",
"_____no_output_____"
],
[
"ib=set(spec_norarefaction['_feature_id'].values).intersection(set(spec_lefse['_feature_id'].values))",
"_____no_output_____"
],
[
"print([spec_norarefaction[spec_norarefaction['_feature_id']==x]['SV_number'].values for x in ib])",
"[array(['SV14256'], dtype=object), array(['SV12509'], dtype=object), array(['SV13476'], dtype=object), array(['SV13324'], dtype=object), array(['SV13969'], dtype=object), array(['SV09299'], dtype=object), array(['SV13351'], dtype=object)]\n"
],
[
"spec_norarefaction.iloc[0]['Taxon']",
"_____no_output_____"
]
],
[
[
"# compare lefse to nrmd using all lefse features and direction of change",
"_____no_output_____"
]
],
[
[
"nrmd_up=pd.read_csv('../ratios/nonspecific/nonspecific-up_feature.txt',sep='\\t',index_col=0)\nnrmd_down=pd.read_csv('../ratios/nonspecific/nonspecific-down_feature.txt',sep='\\t',index_col=0)",
"_____no_output_____"
],
[
"all_lefse = pd.read_csv('../lefse_ratios/all_lefse_ratios.txt',sep='\\t',index_col=0)",
"_____no_output_____"
],
[
"up_dir=all_lefse.filter(nrmd_up.index,axis='index')\ndown_dir=all_lefse.filter(nrmd_down.index,axis='index')",
"_____no_output_____"
],
[
"print('in NRMD up (%d), %d (LEFSE>0), %d (LEFSE<0)'% (len(nrmd_up),np.sum(np.mean(up_dir, axis=1)>0),np.sum(np.mean(up_dir, axis=1)<0)))",
"in NRMD up (31), 1 (LEFSE>0), 30 (LEFSE<0)\n"
],
[
"print('in NRMD down (%d), %d (LEFSE>0), %d (LEFSE<0)'% (len(nrmd_down),np.sum(np.mean(down_dir, axis=1)>0),np.sum(np.mean(down_dir, axis=1)<0)))",
"in NRMD down (97), 95 (LEFSE>0), 2 (LEFSE<0)\n"
],
[
"smd=pd.read_csv('../studies/index.csv',sep='\\t',index_col=0)\nsmd.index=smd.index.astype(str)",
"_____no_output_____"
],
[
"xx=ca.AmpliconExperiment.from_pandas(up_dir.transpose())",
"_____no_output_____"
],
[
"xx.sample_metadata=xx.sample_metadata.merge(smd,how='left',left_index=True,right_index=True)\nxx=xx.sort_samples('disease')",
"_____no_output_____"
],
[
"f=xx.plot(sample_field='disease', clim=[-1,1],norm=None,cmap=ncm, xticks_max=None,xticklabel_len=None,\n xticklabel_kwargs={'size':8, 'rotation':90},barx_fields=['disease'],barx_label=False,barx_colors=disease_colors,bad_color='w')",
"_____no_output_____"
],
[
"f.save_figure('../figures/sup-lefse-dir-for-nrmd-up.pdf')",
"_____no_output_____"
],
[
"xx=ca.AmpliconExperiment.from_pandas(down_dir.transpose())",
"_____no_output_____"
],
[
"xx.sample_metadata=xx.sample_metadata.merge(smd,how='left',left_index=True,right_index=True)\nxx=xx.sort_samples('disease')",
"_____no_output_____"
],
[
"f=xx.plot(sample_field='disease', clim=[-1,1],norm=None,cmap=ncm, xticks_max=None,xticklabel_len=None,\n xticklabel_kwargs={'size':8, 'rotation':90},barx_fields=['disease'],barx_label=False,barx_colors=disease_colors, bad_color='w')",
"_____no_output_____"
],
[
"f.save_figure('../figures/sup-lefse-dir-for-nrmd-down.pdf')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04fc7ccee03285356516dc959c89e196027e776 | 213,910 | ipynb | Jupyter Notebook | examples/use_with_numpyro.ipynb | hriebl/blackjax | bde4477f00d194ad2ffaf54bfeba9c73de1f78e4 | [
"Apache-2.0"
] | 1 | 2022-02-23T20:32:31.000Z | 2022-02-23T20:32:31.000Z | examples/use_with_numpyro.ipynb | hriebl/blackjax | bde4477f00d194ad2ffaf54bfeba9c73de1f78e4 | [
"Apache-2.0"
] | null | null | null | examples/use_with_numpyro.ipynb | hriebl/blackjax | bde4477f00d194ad2ffaf54bfeba9c73de1f78e4 | [
"Apache-2.0"
] | null | null | null | 402.086466 | 164,980 | 0.937983 | [
[
[
"# Use BlackJAX with Numpyro",
"_____no_output_____"
],
[
"BlackJAX can take any log-probability function as long as it is compatible with JAX's JIT. In this notebook we show how we can use Numpyro as a modeling language and BlackJAX as an inference library.\n\nWe reproduce the Eight Schools example from the [Numpyro documentation](https://github.com/pyro-ppl/numpyro) (all credit for the model goes to the Numpyro team). For this notebook to run you will need to install Numpyro:\n\n```bash\npip install numpyro\n```",
"_____no_output_____"
]
],
[
[
"import jax\nimport numpy as np\nimport numpyro\nimport numpyro.distributions as dist\nfrom numpyro.infer.reparam import TransformReparam\nfrom numpyro.infer.util import initialize_model\n\nimport blackjax",
"_____no_output_____"
],
[
"num_warmup = 1000\n\n# We can use this notebook for simple benchmarking by setting\n# below to True and run from Terminal.\n# $ipython examples/use_with_numpyro.ipynb\nRUN_BENCHMARK = False\n\nif RUN_BENCHMARK:\n num_sample = 5_000_000\n print(f\"Benchmark with {num_warmup} warmup steps and {num_sample} sampling steps.\")\nelse:\n num_sample = 10_000",
"_____no_output_____"
]
],
[
[
"## Data",
"_____no_output_____"
]
],
[
[
"# Data of the Eight Schools Model\nJ = 8\ny = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])\nsigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"We use the non-centered version of the model described towards the end of the README on Numpyro's repository:",
"_____no_output_____"
]
],
[
[
"# Eight Schools example - Non-centered Reparametrization\ndef eight_schools_noncentered(J, sigma, y=None):\n mu = numpyro.sample(\"mu\", dist.Normal(0, 5))\n tau = numpyro.sample(\"tau\", dist.HalfCauchy(5))\n with numpyro.plate(\"J\", J):\n with numpyro.handlers.reparam(config={\"theta\": TransformReparam()}):\n theta = numpyro.sample(\n \"theta\",\n dist.TransformedDistribution(\n dist.Normal(0.0, 1.0), dist.transforms.AffineTransform(mu, tau)\n ),\n )\n numpyro.sample(\"obs\", dist.Normal(theta, sigma), obs=y)",
"_____no_output_____"
]
],
[
[
"We need to translate the model into a log-probability function that will be used by BlackJAX to perform inference. For that we use the `initialize_model` function in Numpyro's internals. We will also use the initial position it returns:",
"_____no_output_____"
]
],
[
[
"rng_key = jax.random.PRNGKey(0)\n\ninit_params, potential_fn_gen, *_ = initialize_model(\n rng_key,\n eight_schools_noncentered,\n model_args=(J, sigma, y),\n dynamic_args=True,\n)",
"_____no_output_____"
]
],
[
[
"Now we create the potential using the `potential_fn_gen` provided by Numpyro and initialize the NUTS state with BlackJAX:",
"_____no_output_____"
]
],
[
[
"if RUN_BENCHMARK:\n print(\"\\nBlackjax:\")\n print(\"-> Running warmup.\")",
"_____no_output_____"
]
],
[
[
"We now run the window adaptation in BlackJAX:",
"_____no_output_____"
]
],
[
[
"%%time\n\ninitial_position = init_params.z\nlogprob = lambda position: -potential_fn_gen(J, sigma, y)(position)\n\nadapt = blackjax.window_adaptation(\n blackjax.nuts, logprob, num_warmup, target_acceptance_rate=0.8\n)\nlast_state, kernel, _ = adapt.run(rng_key, initial_position)",
"CPU times: user 2.43 s, sys: 7.96 ms, total: 2.44 s\nWall time: 2.42 s\n"
]
],
[
[
"Let us now perform inference using the previously computed step size and inverse mass matrix. We also time the sampling to give you an idea of how fast BlackJAX can be on simple models:",
"_____no_output_____"
]
],
[
[
"if RUN_BENCHMARK:\n print(\"-> Running sampling.\")",
"_____no_output_____"
],
[
"%%time\n\n\ndef inference_loop(rng_key, kernel, initial_state, num_samples):\n @jax.jit\n def one_step(state, rng_key):\n state, info = kernel(rng_key, state)\n return state, (state, info)\n\n keys = jax.random.split(rng_key, num_samples)\n _, (states, infos) = jax.lax.scan(one_step, initial_state, keys)\n\n return states, (\n infos.acceptance_probability,\n infos.is_divergent,\n infos.integration_steps,\n )\n\n\n# Sample from the posterior distribution\nstates, infos = inference_loop(rng_key, kernel, last_state, num_sample)\n_ = states.position[\"mu\"].block_until_ready()",
"CPU times: user 2.25 s, sys: 30.2 ms, total: 2.28 s\nWall time: 2.26 s\n"
]
],
[
[
"Let us compute the average acceptance probability and check the number of divergences (to make sure that the model sampled correctly, and that the sampling time is not a result of a majority of divergent transitions):",
"_____no_output_____"
]
],
[
[
"acceptance_rate = np.mean(infos[0])\nnum_divergent = np.mean(infos[1])\n\nprint(f\"\\nAcceptance rate: {acceptance_rate:.2f}\")\nprint(f\"{100*num_divergent:.2f}% divergent transitions\")",
"\nAcceptance rate: 0.89\n0.02% divergent transitions\n"
]
],
[
[
"Let us now plot the distribution of the parameters. Note that since we use a transformed variable, Numpyro does not output the school treatment effect directly:",
"_____no_output_____"
]
],
[
[
"if not RUN_BENCHMARK:\n import seaborn as sns\n from matplotlib import pyplot as plt\n\n samples = states.position\n\n fig, axes = plt.subplots(ncols=2)\n fig.set_size_inches(12, 5)\n sns.kdeplot(samples[\"mu\"], ax=axes[0])\n sns.kdeplot(samples[\"tau\"], ax=axes[1])\n axes[0].set_xlabel(\"mu\")\n axes[1].set_xlabel(\"tau\")\n fig.tight_layout()",
"_____no_output_____"
],
[
"if not RUN_BENCHMARK:\n fig, axes = plt.subplots(8, 2, sharex=\"col\", sharey=\"col\")\n fig.set_size_inches(12, 10)\n for i in range(J):\n axes[i][0].plot(samples[\"theta_base\"][:, i])\n axes[i][0].title.set_text(f\"School {i} relative treatment effect chain\")\n sns.kdeplot(samples[\"theta_base\"][:, i], ax=axes[i][1], shade=True)\n axes[i][1].title.set_text(f\"School {i} relative treatment effect distribution\")\n axes[J - 1][0].set_xlabel(\"Iteration\")\n axes[J - 1][1].set_xlabel(\"School effect\")\n fig.tight_layout()\n plt.show()",
"_____no_output_____"
],
[
"if not RUN_BENCHMARK:\n for i in range(J):\n print(\n f\"Relative treatment effect for school {i}: {np.mean(samples['theta_base'][:, i]):.2f}\"\n )",
"Relative treatment effect for school 0: 0.34\nRelative treatment effect for school 1: 0.11\nRelative treatment effect for school 2: -0.09\nRelative treatment effect for school 3: 0.07\nRelative treatment effect for school 4: -0.16\nRelative treatment effect for school 5: -0.07\nRelative treatment effect for school 6: 0.35\nRelative treatment effect for school 7: 0.07\n"
]
],
[
[
"## Compare sampling time with Numpyro\n\nWe compare the time it took BlackJAX to do the warmup for 1,000 iterations and then taking 100,000 samples with Numpyro's:",
"_____no_output_____"
]
],
[
[
"from numpyro.infer import MCMC, NUTS",
"_____no_output_____"
],
[
"if RUN_BENCHMARK:\n print(\"\\nNumpyro:\")\n print(\"-> Running warmup+sampling.\")",
"_____no_output_____"
],
[
"%%time\n\nnuts_kernel = NUTS(eight_schools_noncentered, target_accept_prob=0.8)\nmcmc = MCMC(\n nuts_kernel, num_warmup=num_warmup, num_samples=num_sample, progress_bar=False\n)\n\nrng_key = jax.random.PRNGKey(0)\nmcmc.run(rng_key, J, sigma, y=y, extra_fields=(\"num_steps\", \"accept_prob\"))\nsamples = mcmc.get_samples()\n_ = samples[\"mu\"].block_until_ready()",
"CPU times: user 2.43 s, sys: 30.8 ms, total: 2.46 s\nWall time: 2.44 s\n"
],
[
"print(f\"\\nAcceptance rate: {mcmc.get_extra_fields()['accept_prob'].mean():.2f}\")\nprint(f\"{100*mcmc.get_extra_fields()['diverging'].mean():.2f}% divergent transitions\")",
"\nAcceptance rate: 0.89\n0.00% divergent transitions\n"
],
[
"print(f\"\\nBlackjax average {infos[2].mean():.2f} leapfrog per iteration.\")\nprint(\n f\"Numpyro average {mcmc.get_extra_fields()['num_steps'].mean():.2f} leapfrog per iteration.\"\n)",
"\nBlackjax average 7.11 leapfrog per iteration.\nNumpyro average 8.91 leapfrog per iteration.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d04fceeabfbfc259aa7883690434abf232a43e95 | 7,498 | ipynb | Jupyter Notebook | programacioi/chapters/Tema4/Tema4_Exercicis_subprogrames.ipynb | bmalcover/material_programacio | e3a14c5a19f1ac5cb0c75237338886b41d7ceeaf | [
"MIT"
] | null | null | null | programacioi/chapters/Tema4/Tema4_Exercicis_subprogrames.ipynb | bmalcover/material_programacio | e3a14c5a19f1ac5cb0c75237338886b41d7ceeaf | [
"MIT"
] | null | null | null | programacioi/chapters/Tema4/Tema4_Exercicis_subprogrames.ipynb | bmalcover/material_programacio | e3a14c5a19f1ac5cb0c75237338886b41d7ceeaf | [
"MIT"
] | null | null | null | 47.157233 | 136 | 0.64844 | [
[
[
"## Exercicis del Tema 4\n\n### Subprogrames\n\nEs recomanable fer tots els exercicis en el mateix fitxer Python. Un cop heu realitzat la funció o subprograma\ncorresponent heu de comprovar el seu correcte funcionament.\n\n1.Subprograma que rep dos enters, els suma i retorna el resultat.\n\n2.Procediment que rep dos enters, els suma i mostra el valor resultant.\n\n3.Subprograma que rep dos nombres i torna el més gran.\n\n4.Subprograma que rep tres nombres i torna el més gran.\n\n4.1Realitzar un subprograma que rep tres nombres i torna el gran fent ús del subprograma del punt 3.\n\n5.Subprograma que rep tres nombres i mostra per pantalla si hi ha almanco dos nombres iguals.\n\n6.Subprograma que rep un nombre i retorna el seu valor absolut.\n\n7.Subprograma que rep un caràcter i retorna cert si el caràcter és una vocal.\n\n8.Subprograma que rep dos sencers i retorna cert si el primer és divisor del segon.\n\n9.Subprograma anomenat llegir_int que retorna un enter llegit del teclat.\n\n10.Subprograma que llegeix dos enters del teclat i retorna cert si el primer és divisor del segon.\n\n11.Subprograma que rep un enter i retorna un valor booleà segons si aquest és un nombre primer o no.\n\n12.Subprograma que rep un enter i retorna el menor nombre primer major que aquest número.\n\n```\n Per exemple\n - El nombre 7 tornaria 11.\n - El nombre 14 tornaria 17.\n - 22 tornaria 23.\n```\n\n13.Subprograma que rep un caràcter de l'abecedari en llengua anglesa o un digit entre el 0 i el 9, i el retorna en\nmajúscules.\n```\n Per exemple\n El caràcter a retorna el caràcter A.\n El caràcter B retorna el caràcter B.\n El caràcter 9 retorna el caràcter 9.\n```\n\n14.Subprograma que rep els dos costats d'un triangle rectangle i torna la hipotenusa.\n\n15.Realitzar un subprograma que rep dos enters i torna el mcd de tots dos.\n\n16.Realitzar el subprograma anomenat _word2num_ que llegeix caràcters numèrics ( '1', '2', '3'…, '0') com si d'una\nparaula es tracta i ens retorna un nombre enter. Podeu usar el punt o el enter com a final de seqüència.\n```\n Si llegim la seqüència de caràcters '1234' ha de tornar el nombre 1234.\n```\n\n17.Realitzar un subprograma que rep dos sencers, que representen una fracció. La funció hauria de reduir la fracció\nals termes més petits possibles i després mostrar per pantalla tant el numerador com el denominador de la fracció\nreduïda.\n```\n Exemple:\n Si els paràmetres passats a la funció són 6 i 63, llavors el resultat és 2 i 21.\n```\n\n18.Realitzar el subprograma anomenat _sumaseq_ que rep un enter en el rang 1, 9 i un nombre de repeticions. El\nprograma ha\nde realitzar la següent operació:\n```\n Si rep 8 i 5 repeticions ha de retornar el resultat de la següent suma 8 + 88 + 888 +8888 + 88888\n Si rep 5 i 2 repeticions ha de retornar: 5 + 55\n Si rep 1 i 8 repeticions ha de retornar: 1 + 11 +111+ 1111 + 11111 + 111111 + 1111111 + 11111111\n```\n\n19.Realitzar un programa que rep 3 nombres (d, m, a) que representen una data (dia, mes i any). El programa ha de\ntornar el dia següent a aquesta data. Heu de tenir en compte els dies de cada mes i els anys de traspàs.\n\n### Seqüències\n\nA continuació teniu una llista de problemes relacionats amb seqüències de text. Abans de fer aquests exercicis s'ha\nd'entendre bé el material del tema 4 i dominar els exercicis de subprogrames. 
Es recomana crear un document per cada\nun dels problemes.\n\n1.\tComptar el nombre de paraules parells (nombre de lletres parells) i el nombre de paraules senars(nombre de lletres senars).\n2.\tComptar les paraules que tenen a almanco una 'a'.\n3.\tComptar les paraules que tenen a almanco una vocal.\n4.\tComptar les paraules que comencen per 'sa'. **Per exemple:** savis són els que en saben. 2 paraules comencen per sa.\n5.\tCrear un programa que compti les paraules que tenen més vocals que consonants.\n6.\tCrear un programa que ens digui si una seqüència de paraules és un abecegrama. Un abecegrama és una frase les\nparaules es disposen en ordre alfabètic; és a dir, la primera paraula de la frase comença per a; la segona, per b; la\n tercera, per c ...\n**Per exemple:**\nahir brollava calor d emocions fum gelat hui immens jardi karma latent malejant nu onades peregrines que\nrestrenyen salobre temps un venerable wagneria xiprer yep zingar.\n\n7.\tFer un programa que ens digui quantes lletres té la paraula més llarga d'una seqüència acabada en punt.\n8.\tFer un programa que ens informi de quantes paraules contenen la lletra 'j' sense que aquesta no sigui ni la\nprimera ni la darrera de les seves lletres en una seqüència de text acabada en '.'.\n\n\n### Seqüències numèriques\n\nA continuació teniu un conjunt de problemes relacionats amb seqüències numèriques. Es recomana crear un fitxer per\ncada un dels problemes.\n\n1.Realitza un programa que genera 50 nombres aleatoris amb valors entre el 0 i el 10 i ens mostra quantes vegades\ntenim 2 nombres consecutius que son iguals. També volem saber quins han estat aquests nombres.\n\n1.1.Fes una variant del programa anterior que mostra les vegades on el nombre previ és múltiple del nombre actual i\nquines són aquestes parelles.\n\n2.En el rang (0, 2223). Quants nombres tenen almanco dues vegades seguides el nombre 2. Per exemple: 22, 221 ...\n\n3.Els nombres primers bessons són aquelles parelles de nombres primers que difereixen en 2. És a dir, dos nombres p i q\n(amb p < q) són primers bessons si q = p + 2. Excepte pel cas del 2 i el 3. Quins són els primers bessons majors al\nvalor 150? Solució: (179, 181) Quants primers bessons hi ha entre 100 i 1000? Solució: 27 Quins són?\n\n4.Quants nombres de cinc dígits comencen per 4, acaben en 5 i les seves xifres sumen 18?\n\n5.Realitza un programa que genera 100 nombres aleatoris amb valors entre el 100 i el 1000. Ha de mostrar aquells que\ntenen les seves xifres en ordre estrictament decreixent.",
"_____no_output_____"
]
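,
[
"A minimal sketch for exercise 1 of the subprograms list (the function name `suma` is just an illustrative choice):\n\n```python\ndef suma(a, b):\n    # receives two integers, adds them and returns the result\n    return a + b\n\nprint(suma(2, 3))  # 5\n```",
"_____no_output_____"
]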
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
d04fd8f0488e48c7cf71c3477c3b52ef543d37ec | 16,205 | ipynb | Jupyter Notebook | homework03/homework03_part3_gan_basic.ipynb | VendettaPrime/Practical_DL | d673eda35dfb645011745cc2d71f5c4450a573ff | [
"MIT"
] | null | null | null | homework03/homework03_part3_gan_basic.ipynb | VendettaPrime/Practical_DL | d673eda35dfb645011745cc2d71f5c4450a573ff | [
"MIT"
] | null | null | null | homework03/homework03_part3_gan_basic.ipynb | VendettaPrime/Practical_DL | d673eda35dfb645011745cc2d71f5c4450a573ff | [
"MIT"
] | null | null | null | 30.120818 | 692 | 0.523727 | [
[
[
"The visualization used for this homework is based on Alexandr Verinov's code. ",
"_____no_output_____"
],
[
"# Generative models",
"_____no_output_____"
],
[
"In this homework we will try several criterions for learning an implicit model. Almost everything is written for you, and you only need to implement the objective for the game and play around with the model. \n\n**0)** Read the code\n\n**1)** Implement objective for a vanilla [Generative Adversarial Networks](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) (GAN). The hyperparameters are already set in the code. The model will converge if you implement the objective (1) right. \n\n**2)** Note the discussion in the paper, that the objective for $G$ can be of two kinds: $min_G log(1 - D)$ and $min_G - log(D)$. Implement the second objective and ensure model converges. Most likely, in this example you will not notice the difference, but people usually use the second objective, it really matters in more complicated scenarios.\n\n**3 & 4)** Implement [Wasserstein GAN](https://arxiv.org/abs/1701.07875) ([WGAN](https://arxiv.org/abs/1704.00028)) and WGAN-GP. To make the discriminator have Lipschitz property you need to clip discriminator's weights to $[-0.01, 0.01]$ range (WGAN) or use gradient penalty (WGAN-GP). You will need to make few modifications to the code: 1) remove sigmoids from discriminator 2) add weight clipping clipping / gradient penaly. 3) change objective. See [implementation 1](https://github.com/martinarjovsky/WassersteinGAN/) / [implementation 2](https://github.com/caogang/wgan-gp). They also use different optimizer. The default hyperparameters may not work, spend time to tune them.\n\n**5) Bonus: same thing without GANs** Implement maximum mean discrepancy estimator (MMD). MMD is discrepancy measure between distributions. In our case we use it to calculate discrepancy between real and fake data. You need to implement RBF kernel $k(x,x')=\\exp \\left(-{\\frac {1}{2\\sigma ^{2}}}||x-x'||^{2}\\right)$ and an MMD estimator (see eq.8 from https://arxiv.org/pdf/1505.03906.pdf). MMD is then used instead of discriminator.",
"_____no_output_____"
]
],
[
[
"#!L\n\"\"\" \n Please, implement everything in one notebook, using if statements to switch between the tasks\n\"\"\"\nTASK = 1 # 2, 3, 4, 5",
"_____no_output_____"
]
],
[
[
"# Imports",
"_____no_output_____"
]
],
[
[
"#!L\nimport numpy as np\nimport time\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnp.random.seed(12345)\nlims=(-5, 5)",
"_____no_output_____"
]
],
[
[
"# Define sampler from real data and Z ",
"_____no_output_____"
]
],
[
[
"#!L\nfrom scipy.stats import rv_discrete\n\nMEANS = np.array(\n [[-1,-3],\n [1,3],\n [-2,0],\n ])\nCOVS = np.array(\n [[[1,0.8],[0.8,1]],\n [[1,-0.5],[-0.5,1]],\n [[1,0],[0,1]],\n ])\nPROBS = np.array([\n 0.2,\n 0.5,\n 0.3\n ])\nassert len(MEANS) == len(COVS) == len(PROBS), \"number of components mismatch\"\nCOMPONENTS = len(MEANS)\n\ncomps_dist = rv_discrete(values=(range(COMPONENTS), PROBS))\n\ndef sample_true(N):\n comps = comps_dist.rvs(size=N)\n conds = np.arange(COMPONENTS)[:,None] == comps[None,:]\n arr = np.array([np.random.multivariate_normal(MEANS[c], COVS[c], size=N)\n for c in range(COMPONENTS)])\n return np.select(conds[:,:,None], arr).astype(np.float32)\n\nNOISE_DIM = 20\ndef sample_noise(N):\n return np.random.normal(size=(N,NOISE_DIM)).astype(np.float32)",
"_____no_output_____"
]
],
[
[
"# Visualization functions",
"_____no_output_____"
]
],
[
[
"#!L\ndef vis_data(data):\n \"\"\"\n Visualizes data as histogram\n \"\"\"\n hist = np.histogram2d(data[:, 1], data[:, 0], bins=100, range=[lims, lims])\n plt.pcolormesh(hist[1], hist[2], hist[0], alpha=0.5)\n\nfixed_noise = sample_noise(1000)\ndef vis_g():\n \"\"\"\n Visualizes generator's samples as circles\n \"\"\"\n data = generator(Variable(torch.Tensor(fixed_noise))).data.numpy()\n if np.isnan(data).any():\n return\n \n plt.scatter(data[:,0], data[:,1], alpha=0.2, c='b')\n plt.xlim(lims)\n plt.ylim(lims)\n \ndef vis_d():\n \"\"\"\n Visualizes discriminator's gradient on grid\n \"\"\"\n X, Y = np.meshgrid(np.linspace(lims[0], lims[1], 30), np.linspace(lims[0], lims[1], 30))\n X = X.flatten()\n Y = Y.flatten()\n grid = Variable(torch.Tensor(np.vstack([X, Y]).T), requires_grad=True)\n data_gen = generator(Variable(torch.Tensor(fixed_noise)))\n loss = d_loss(discriminator(data_gen), discriminator(grid))\n loss.backward()\n grads = - grid.grad.data.numpy()\n plt.quiver(X, Y, grads[:, 0], grads[:, 1], color='black',alpha=0.9)",
"_____no_output_____"
]
],
[
[
"# Define architectures",
"_____no_output_____"
],
[
"After you've passed task 1 you can play with architectures.",
"_____no_output_____"
],
[
"#### Generator",
"_____no_output_____"
]
],
[
[
"#!L\nclass Generator(nn.Module):\n def __init__(self, noise_dim, out_dim, hidden_dim=100):\n super(Generator, self).__init__()\n \n self.fc1 = nn.Linear(noise_dim, hidden_dim)\n nn.init.xavier_normal_(self.fc1.weight)\n nn.init.constant_(self.fc1.bias, 0.0)\n \n self.fc2 = nn.Linear(hidden_dim, hidden_dim)\n nn.init.xavier_normal_(self.fc2.weight)\n nn.init.constant_(self.fc2.bias, 0.0)\n \n self.fc3 = nn.Linear(hidden_dim, out_dim)\n nn.init.xavier_normal_(self.fc3.weight)\n nn.init.constant_(self.fc3.bias, 0.0)\n\n def forward(self, z):\n \"\"\"\n Generator takes a vector of noise and produces sample\n \"\"\"\n h1 = F.tanh(self.fc1(z))\n h2 = F.leaky_relu(self.fc2(h1))\n y_gen = self.fc3(h2)\n return y_gen",
"_____no_output_____"
]
],
[
[
"#### Discriminator",
"_____no_output_____"
]
],
[
[
"#!L\nclass Discriminator(nn.Module):\n def __init__(self, in_dim, hidden_dim=100):\n super(Discriminator, self).__init__()\n \n self.fc1 = nn.Linear(in_dim, hidden_dim)\n nn.init.xavier_normal_(self.fc1.weight)\n nn.init.constant_(self.fc1.bias, 0.0)\n \n self.fc2 = nn.Linear(hidden_dim, hidden_dim)\n nn.init.xavier_normal_(self.fc2.weight)\n nn.init.constant_(self.fc2.bias, 0.0)\n \n self.fc3 = nn.Linear(hidden_dim, hidden_dim)\n nn.init.xavier_normal_(self.fc3.weight)\n nn.init.constant_(self.fc3.bias, 0.0)\n \n self.fc4 = nn.Linear(hidden_dim, 1)\n nn.init.xavier_normal_(self.fc4.weight)\n nn.init.constant_(self.fc4.bias, 0.0)\n\n def forward(self, x):\n h1 = F.tanh(self.fc1(x))\n h2 = F.leaky_relu(self.fc2(h1))\n h3 = F.leaky_relu(self.fc3(h2))\n score = torch.sigmoid(self.fc4(h3))\n return score",
"_____no_output_____"
]
],
[
[
"# Define updates and losses",
"_____no_output_____"
]
],
[
[
"#!L\ngenerator = Generator(NOISE_DIM, out_dim = 2)\ndiscriminator = Discriminator(in_dim = 2)\n\nlr = 0.001\n\ng_optimizer = optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))\nd_optimizer = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))",
"_____no_output_____"
]
],
[
[
"Notice we are using ADAM optimizer with `beta1=0.5` for both discriminator and discriminator. This is a common practice and works well. Motivation: models should be flexible and adapt itself rapidly to the distributions. \n\nYou can try different optimizers and parameters.",
"_____no_output_____"
]
],
[
[
"#!L\n################################\n# IMPLEMENT HERE\n# Define the g_loss and d_loss here\n# these are the only lines of code you need to change to implement GAN game\n\ndef g_loss():\n # if TASK == 1: \n # do something\n \n return # TODO\ndef d_loss():\n # if TASK == 1: \n # do something\n\n return # TODO\n################################",
"_____no_output_____"
]
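,
[
"#!L\n# A hedged sketch for TASK 3/4 (WGAN / WGAN-GP) -- these helpers are an\n# illustrative assumption, not part of the original scaffold. For WGAN also\n# remove the sigmoid from the discriminator and switch to the critic losses\n# d: mean(D(fake)) - mean(D(real)), g: -mean(D(fake)).\n\ndef clip_weights(model, c=0.01):\n    # WGAN (task 3): clamp every critic weight to [-c, c] after each D update\n    for p in model.parameters():\n        p.data.clamp_(-c, c)\n\ndef gradient_penalty(critic, real_data, fake_data, lambda_gp=10.0):\n    # WGAN-GP (task 4): penalize the critic's gradient norm on points\n    # interpolated between real and fake samples\n    alpha = torch.rand(real_data.size(0), 1)\n    interp = (alpha * real_data + (1 - alpha) * fake_data).detach().requires_grad_(True)\n    d_interp = critic(interp)\n    grads = torch.autograd.grad(outputs=d_interp, inputs=interp,\n                                grad_outputs=torch.ones_like(d_interp),\n                                create_graph=True)[0]\n    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()",
"_____no_output_____"
]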
],
[
[
"# Get real data",
"_____no_output_____"
]
],
[
[
"#!L\ndata = sample_true(100000)\ndef iterate_minibatches(X, batchsize, y=None):\n perm = np.random.permutation(X.shape[0])\n \n for start in range(0, X.shape[0], batchsize):\n end = min(start + batchsize, X.shape[0])\n if y is None:\n yield X[perm[start:end]]\n else:\n yield X[perm[start:end]], y[perm[start:end]]",
"_____no_output_____"
],
[
"#!L\nplt.rcParams['figure.figsize'] = (12, 12)\nvis_data(data)\nvis_g()\nvis_d()",
"_____no_output_____"
]
],
[
[
"**Legend**:\n- Blue dots are generated samples. \n- Colored histogram at the back shows density of real data. \n- And with arrows we show gradients of the discriminator -- they are the directions that discriminator pushes generator's samples. ",
"_____no_output_____"
],
[
"# Train the model",
"_____no_output_____"
]
],
[
[
"#!L\nfrom IPython import display\n\nplt.xlim(lims)\nplt.ylim(lims)\n\nnum_epochs = 100\nbatch_size = 64\n\n# ===========================\n# IMPORTANT PARAMETER:\n# Number of D updates per G update\n# ===========================\nk_d, k_g = 4, 1\n\naccs = []\n\ntry:\n for epoch in range(num_epochs):\n for input_data in iterate_minibatches(data, batch_size):\n \n # Optimize D\n for _ in range(k_d):\n # Sample noise\n noise = Variable(torch.Tensor(sample_noise(len(input_data))))\n \n # Do an update\n inp_data = Variable(torch.Tensor(input_data))\n data_gen = generator(noise)\n loss = d_loss(discriminator(data_gen), discriminator(inp_data))\n d_optimizer.zero_grad()\n loss.backward()\n d_optimizer.step()\n \n # Optimize G\n for _ in range(k_g):\n # Sample noise\n noise = Variable(torch.Tensor(sample_noise(len(input_data))))\n \n # Do an update\n data_gen = generator(noise)\n loss = g_loss(discriminator(data_gen))\n g_optimizer.zero_grad()\n loss.backward()\n g_optimizer.step()\n \n # Visualize\n plt.clf()\n vis_data(data); vis_g(); vis_d()\n display.clear_output(wait=True)\n display.display(plt.gcf())\n\n \nexcept KeyboardInterrupt:\n pass",
"_____no_output_____"
]
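,
[
"#!L\n# A hedged sketch for TASK 5 (MMD) -- one possible estimator, not part of the\n# original scaffold. With MMD there is no discriminator: minimize\n# mmd(real_batch, generated_batch) with respect to the generator only.\n\ndef rbf_kernel(x, y, sigma=1.0):\n    # k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)), computed pairwise\n    dists = torch.cdist(x, y) ** 2\n    return torch.exp(-dists / (2 * sigma ** 2))\n\ndef mmd(x, y, sigma=1.0):\n    # biased MMD^2 estimate (cf. eq. 8 of https://arxiv.org/pdf/1505.03906.pdf)\n    return (rbf_kernel(x, x, sigma).mean()\n            + rbf_kernel(y, y, sigma).mean()\n            - 2 * rbf_kernel(x, y, sigma).mean())",
"_____no_output_____"
]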
],
[
[
"# Describe your findings here",
"_____no_output_____"
],
[
"A ya tomat. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0500657e6d231a34648339f790e834dcab01822 | 3,419 | ipynb | Jupyter Notebook | nlp-labs/Day_09/QnA_Model/QnA_Handson.ipynb | skymind-talent/nlp-traininglabs | 05b39277133f911856073ca5b839248629e2a742 | [
"Apache-2.0"
] | 2 | 2021-09-13T08:21:05.000Z | 2022-01-13T14:07:51.000Z | nlp-labs/Day_09/QnA_Model/QnA_Handson.ipynb | skymind-talent/nlp-traininglabs | 05b39277133f911856073ca5b839248629e2a742 | [
"Apache-2.0"
] | null | null | null | nlp-labs/Day_09/QnA_Model/QnA_Handson.ipynb | skymind-talent/nlp-traininglabs | 05b39277133f911856073ca5b839248629e2a742 | [
"Apache-2.0"
] | 8 | 2021-09-13T08:21:09.000Z | 2021-09-23T06:35:30.000Z | 27.796748 | 118 | 0.575899 | [
[
[
"> **Copyright (c) 2020 Skymind Holdings Berhad**<br><br>\n> **Copyright (c) 2021 Skymind Education Group Sdn. Bhd.**<br>\n<br>\nLicensed under the Apache License, Version 2.0 (the \\\"License\\\");\n<br>you may not use this file except in compliance with the License.\n<br>You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0/\n<br>\n<br>Unless required by applicable law or agreed to in writing, software\n<br>distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\n<br>WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n<br>See the License for the specific language governing permissions and\n<br>limitations under the License.\n<br>\n<br>\n**SPDX-License-Identifier: Apache-2.0**\n<br>",
"_____no_output_____"
],
[
"INSTRUCTION: Follow the steps in the commented line for each section and run the code.",
"_____no_output_____"
]
],
[
[
"\"\"\"\ninstall torch(PyTorch) and transformers\nto install them type in your terminal:\npip install torch\npip install transformers\n\"\"\"",
"_____no_output_____"
],
[
"# import the necessary library\nfrom transformers import pipeline\n\n# write your context (where model seeks the answer for the question)\ncontext = \"\"\"\nYou can add your own context here. Try to write something or copy from other source.\n\"\"\"\n\n# write your own question\nquestion = \"\"",
"_____no_output_____"
],
[
"# initialize your model\n\"\"\"\nThis is a pretrained model that we can get from huggingface\nThere are more models that we can find there: https://huggingface.co/\nGo to this web page and import a model and a tokenizer by putting the model and tokenizer into the parameters\n\"\"\"\n# uncomment this code below\n\n# question_answering = pipeline('question-answering', model= , tokenizer=)",
"_____no_output_____"
],
[
"# test the model (uncomment the code below)\n\n# result = question_answering(question=question, context=context)\n# print(\"Answer:\", result['answer'])\n# print(\"Score:\", result['score'])",
"_____no_output_____"
]
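,
[
"# One possible choice of pretrained QA model from the Hugging Face hub\n# (an illustrative assumption -- any question-answering checkpoint works):\n\n# question_answering = pipeline('question-answering',\n#                               model='distilbert-base-cased-distilled-squad',\n#                               tokenizer='distilbert-base-cased-distilled-squad')",
"_____no_output_____"
]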
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d05011bc889f28c3171fba598b018a059ee2dafe | 23,878 | ipynb | Jupyter Notebook | Python/AbsoluteAndOtherAlgorithms/8ProstateGE/AEFS_64.ipynb | xinxingwu-uk/UFS | e7f0d430be8ff2984c740da63c16699d73163a19 | [
"MIT"
] | 2 | 2021-11-20T12:35:31.000Z | 2022-02-22T07:49:36.000Z | Python/AbsoluteAndOtherAlgorithms/8ProstateGE/AEFS_64.ipynb | xinxingwu-uk/UFS | e7f0d430be8ff2984c740da63c16699d73163a19 | [
"MIT"
] | null | null | null | Python/AbsoluteAndOtherAlgorithms/8ProstateGE/AEFS_64.ipynb | xinxingwu-uk/UFS | e7f0d430be8ff2984c740da63c16699d73163a19 | [
"MIT"
] | null | null | null | 46.727984 | 513 | 0.612614 | [
[
[
"# 1. Import libraries",
"_____no_output_____"
]
],
[
[
"#----------------------------Reproducible----------------------------------------------------------------------------------------\nimport numpy as np\nimport tensorflow as tf\nimport random as rn\nimport os\n\nseed=0\nos.environ['PYTHONHASHSEED'] = str(seed)\n\nnp.random.seed(seed)\nrn.seed(seed)\n#session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)\nsession_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)\n\nfrom keras import backend as K\n\n#tf.set_random_seed(seed)\ntf.compat.v1.set_random_seed(seed)\n#sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)\nsess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)\n\nK.set_session(sess)\n#----------------------------Reproducible----------------------------------------------------------------------------------------\n\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\n#--------------------------------------------------------------------------------------------------------------------------------\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n%matplotlib inline\nmatplotlib.style.use('ggplot')\n\nimport random\nimport scipy.sparse as sparse\nimport scipy.io\n\nfrom keras.utils import to_categorical\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nfrom skfeature.function.similarity_based import lap_score\nfrom skfeature.utility import construct_W\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.linear_model import LinearRegression\nimport time\nimport pandas as pd",
"/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) 
/ '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\nUsing TensorFlow backend.\n"
],
[
"def mse_check(train, val):\n LR = LinearRegression(n_jobs = -1)\n LR.fit(train[0], train[1])\n MSELR = ((LR.predict(val[0]) - val[1]) ** 2).mean()\n return MSELR\n\ndef next_batch(samples, labels, num):\n # Return a total of `num` random samples and labels.\n idx = np.random.choice(len(samples), num)\n\n return samples[idx], labels[idx]\n\ndef standard_single_hidden_layer_autoencoder(X, units, O):\n reg_alpha = 1e-3\n D = X.shape[1]\n weights = tf.get_variable(\"weights\", [D, units])\n biases = tf.get_variable(\"biases\", [units])\n X = tf.matmul(X, weights) + biases\n X = tf.layers.dense(X, O, kernel_regularizer = tf.contrib.layers.l2_regularizer(reg_alpha))\n return X, weights\n\ndef aefs_subset_selector(train, K, epoch_num=1000, alpha=0.1):\n D = train[0].shape[1]\n O = train[1].shape[1]\n learning_rate = 0.001\n \n tf.reset_default_graph()\n \n X = tf.placeholder(tf.float32, (None, D))\n TY = tf.placeholder(tf.float32, (None, O))\n Y, weights = standard_single_hidden_layer_autoencoder(X, K, O)\n \n loss = tf.reduce_mean(tf.square(TY - Y)) + alpha * tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(weights), axis=1)), axis=0) + tf.losses.get_total_loss()\n train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)\n \n init = tf.global_variables_initializer()\n \n batch_size = 8\n batch_per_epoch = train[0].shape[0] // batch_size\n \n costs = []\n \n session_config = tf.ConfigProto()\n session_config.gpu_options.allow_growth = False\n \n with tf.Session(config = session_config) as sess:\n sess.run(init)\n for ep in range(epoch_num):\n cost = 0\n for batch_n in range(batch_per_epoch):\n imgs, yimgs = next_batch(train[0], train[1], batch_size)\n _, c, p = sess.run([train_op, loss, weights], feed_dict = {X: imgs, TY: yimgs})\n cost += c / batch_per_epoch\n costs.append(cost)\n \n return list(np.argmax(np.abs(p), axis=0)), costs\n\ndef AEFS(train, test, K, debug = True):\n x_train, x_val, y_train, y_val = train_test_split(train[0], train[1], test_size = 0.1)\n print(\"y_train.shape\",y_train.shape)\n bindices = []\n bmse = 1e100\n for alpha in [1e-3, 1e-1, 1e1, 1e3]:\n print(\"alpha\",alpha)\n indices, _ = aefs_subset_selector(train, K)\n mse = mse_check((train[0][:, indices], train[1]), (x_val[:, indices], y_val))\n if bmse > mse:\n bmse = mse\n bindices = indices\n if debug:\n print(bindices, bmse)\n return train[0][:, bindices], test[0][:, bindices]",
"_____no_output_____"
],
[
"#--------------------------------------------------------------------------------------------------------------------------------\ndef ETree(p_train_feature,p_train_label,p_test_feature,p_test_label,p_seed):\n clf = ExtraTreesClassifier(n_estimators=50, random_state=p_seed)\n \n # Training\n clf.fit(p_train_feature, p_train_label)\n \n # Training accuracy\n print('Training accuracy:',clf.score(p_train_feature, np.array(p_train_label)))\n print('Training accuracy:',accuracy_score(np.array(p_train_label),clf.predict(p_train_feature)))\n #print('Training accuracy:',np.sum(clf.predict(p_train_feature)==np.array(p_train_label))/p_train_label.shape[0])\n\n # Testing accuracy\n print('Testing accuracy:',clf.score(p_test_feature, np.array(p_test_label)))\n print('Testing accuracy:',accuracy_score(np.array(p_test_label),clf.predict(p_test_feature)))\n #print('Testing accuracy:',np.sum(clf.predict(p_test_feature)==np.array(p_test_label))/p_test_label.shape[0])",
"_____no_output_____"
],
[
"#--------------------------------------------------------------------------------------------------------------------------------\ndef write_to_csv(p_data,p_path):\n dataframe = pd.DataFrame(p_data)\n dataframe.to_csv(p_path, mode='a',header=False,index=False,sep=',')",
"_____no_output_____"
]
],
[
[
"# 2. Loading data",
"_____no_output_____"
]
],
[
[
"data_path=\"./Dataset/Prostate_GE.mat\"\nData = scipy.io.loadmat(data_path)\n\ndata_arr=Data['X']\nlabel_arr=Data['Y'][:, 0]-1\n\nData=MinMaxScaler(feature_range=(0,1)).fit_transform(data_arr)\n\nC_train_x,C_test_x,C_train_y,C_test_y= train_test_split(Data,label_arr,test_size=0.2,random_state=seed)\n\nprint('Shape of C_train_x: ' + str(C_train_x.shape)) \nprint('Shape of C_train_y: ' + str(C_train_y.shape)) \nprint('Shape of C_test_x: ' + str(C_test_x.shape)) \nprint('Shape of C_test_y: ' + str(C_test_y.shape)) ",
"Shape of C_train_x: (81, 5966)\nShape of C_train_y: (81,)\nShape of C_test_x: (21, 5966)\nShape of C_test_y: (21,)\n"
],
[
"key_feture_number=64",
"_____no_output_____"
]
],
[
[
"# 3. Model",
"_____no_output_____"
]
],
[
[
"train=(C_train_x,C_train_x)\ntest=(C_test_x,C_test_x)\n\nstart = time.clock()\n\nC_train_selected_x, C_test_selected_x = AEFS((train[0], train[0]), (test[0], test[0]), key_feture_number)\n\ntime_cost=time.clock() - start\n\nwrite_to_csv(np.array([time_cost]),\"./log/AEFS_time\"+str(key_feture_number)+\".csv\")",
"y_train.shape (72, 5966)\nalpha 0.001\nWARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nCall initializer instance with the dtype argument instead of passing it to the constructor\n"
]
],
[
[
"# 4. Classifying",
"_____no_output_____"
],
[
"### Extra Trees",
"_____no_output_____"
]
],
[
[
"train_feature=C_train_x\ntrain_label=C_train_y\ntest_feature=C_test_x\ntest_label=C_test_y\n\nprint('Shape of train_feature: ' + str(train_feature.shape)) \nprint('Shape of train_label: ' + str(train_label.shape)) \nprint('Shape of test_feature: ' + str(test_feature.shape)) \nprint('Shape of test_label: ' + str(test_label.shape)) \n\np_seed=seed\nETree(train_feature,train_label,test_feature,test_label,p_seed)",
"Shape of train_feature: (81, 5966)\nShape of train_label: (81,)\nShape of test_feature: (21, 5966)\nShape of test_label: (21,)\nTraining accuracy: 1.0\nTraining accuracy: 1.0\nTesting accuracy: 0.9523809523809523\nTesting accuracy: 0.9523809523809523\n"
],
[
"train_feature=C_train_selected_x\ntrain_label=C_train_y\n\ntest_feature=C_test_selected_x\ntest_label=C_test_y\n\nprint('Shape of train_feature: ' + str(train_feature.shape)) \nprint('Shape of train_label: ' + str(train_label.shape)) \nprint('Shape of test_feature: ' + str(test_feature.shape)) \nprint('Shape of test_label: ' + str(test_label.shape)) \n\np_seed=seed\nETree(train_feature,train_label,test_feature,test_label,p_seed)",
"Shape of train_feature: (81, 64)\nShape of train_label: (81,)\nShape of test_feature: (21, 64)\nShape of test_label: (21,)\nTraining accuracy: 1.0\nTraining accuracy: 1.0\nTesting accuracy: 0.8571428571428571\nTesting accuracy: 0.8571428571428571\n"
]
],
[
[
"# 6. Reconstruction loss",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression\n\ndef mse_check(train, test):\n LR = LinearRegression(n_jobs = -1)\n LR.fit(train[0], train[1])\n MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean()\n return MSELR",
"_____no_output_____"
],
[
"train_feature_tuple=(C_train_selected_x,C_train_x)\ntest_feature_tuple=(C_test_selected_x,C_test_x)\n\nreconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple)\nprint(reconstruction_loss)",
"0.27951798104721787\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d050152b8504af6b4a599f62d4433278fd2e0439 | 5,525 | ipynb | Jupyter Notebook | 1. Extending the dataset with data from other sources/X - Deprecated - Connecting the regions through BAMS information.ipynb | marenpg/jupyter_basal_ganglia | ab4bb2034f559ecdad3a507edd752c290670c2df | [
"CC-BY-4.0"
] | null | null | null | 1. Extending the dataset with data from other sources/X - Deprecated - Connecting the regions through BAMS information.ipynb | marenpg/jupyter_basal_ganglia | ab4bb2034f559ecdad3a507edd752c290670c2df | [
"CC-BY-4.0"
] | null | null | null | 1. Extending the dataset with data from other sources/X - Deprecated - Connecting the regions through BAMS information.ipynb | marenpg/jupyter_basal_ganglia | ab4bb2034f559ecdad3a507edd752c290670c2df | [
"CC-BY-4.0"
] | null | null | null | 31.392045 | 144 | 0.549321 | [
[
[
"# Deprecated - Connecting Brain region through BAMS information\n\nThis script connects brain regions through BAMS conenctivity informtation.\nHowever, at this level the connectivity information has no reference to the original, and that is not ok. Thereby do **not** use this.",
"_____no_output_____"
]
],
[
[
"### DEPRECATED\n\nimport pandas as pd\nimport re \nimport itertools\nfrom difflib import SequenceMatcher\n\nroot = \"Data/csvs/basal_ganglia/regions\"\nsim_csv_loc = \"/region_similarity.csv\"\n\n\ndef similar(a, b):\n return SequenceMatcher(None, a, b).ratio()\n\n\n## Prepare regions and regions_other csvs\ndf_all_regions = pd.read_csv(root + \"/all_regions.csv\", dtype=\"object\")\n\ndf = pd.DataFrame(columns = [\"ID1\", \"Region_name_1\", \"ID2\", \"Region_name_2\", \"Sim\"])\n\n# Put region names and ID into tuple list \nsubset = df_all_regions[[\"ID\", \"Region_name\"]]\nregion_name_tuples = [tuple(x) for x in subset.to_numpy()]\n\n# Find all combinations of region_names and look at similarity in name\nfor a, b in itertools.combinations(region_name_tuples, 2):\n id1, reg1 = a\n id2, reg2 = b\n sim_score = similar(reg1, reg2)\n \n if(sim_score > 0.7):\n a_row = pd.Series([id1, reg1, id2, reg2, sim_score], index = [\"ID1\", \"Region_name_1\", \"ID2\", \"Region_name_2\", \"Sim\"])\n df = df.append(a_row, ignore_index=True)\n\n\n# Store similarities\ndf_sorted = df.sort_values('Sim')\ndf_sorted.to_csv(root + sim_csv_loc, encoding='utf-8')\n\nprint(\"Similarities stored in\", sim_csv_loc)",
"Similarities stored in /region_similarity.csv\n"
],
[
"def get_count_of_type(label, session):\n q = \"MATCH (n:%s) RETURN count(n)\" % label\n res = session.run(q)\n print(\"Added\", res.value()[0], \"nodes of type\", label)\n \ndef get_count_of_relationship(label, session):\n q = \"MATCH ()-[r:%s]-() RETURN count(*)\" %label\n res = session.run(q)\n print(\"Added\", res.value()[0], \"relationships of type\", label)\n\ndef get_csv_path(csv_file):\n path_all_csv = os.path.realpath(\"Data/csvs/basal_ganglia/regions\")\n return os.path.join(path_all_csv, csv_file).replace(\"\\\\\",\"/\")\n\n",
"_____no_output_____"
],
[
"## Then find the regions that correspond to each other and stor that in a new CSV file\n\n# Add relation to all areas that define positions\npositioning = [\"caudal\", \"rostral\", \"ventral\", \"dorsal\"]\narea_describing = [\"internal\", \"compact\", \"core\", \"shell\"]\n\ndf_sims = pd.read_csv(root + sim_csv_loc, converters = {\"Sims\": float})\n\n# ALl with score above 0.95 are the same\n # Also the same: Substantia innominata, basal\",103,\"Substantia innominata, basal part\" 0.91\n \ndf_equals = df_sims.loc[df_sims['Sim'] > 0.95]\ndf_sorted.to_csv(root + \"/regions_equal.csv\", encoding='utf-8')\n\nfrom neo4j import GraphDatabase, basic_auth\nfrom dotenv import load_dotenv\nimport os\n\nload_dotenv()\n\nneo4jUser = os.getenv(\"NEO4J_USER\")\nneo4jPwd = os.getenv(\"NEO4J_PASSWORD\")\n\ndriver = GraphDatabase.driver(\"bolt://localhost:7687\",auth=basic_auth(neo4jUser, neo4jPwd))\n\n# Relationship EQUALS between equal BrainRegion nodes\ncsv_file_path = \"file:///%s\" % get_csv_path(\"regions_equal.csv\")\nquery=\"\"\"\n LOAD CSV WITH HEADERS FROM \"%s\" AS row\n MATCH (a:BrainRegion { id: row.ID1})\n MATCH (c:BrainRegion { id: row.ID2 })\n MERGE (a)-[:EQUALS]->(c)\n \"\"\" % csv_file_path\n\nwith driver.session() as session:\n session.run(query)\n get_count_of_relationship(\"EQUALS\", session)\n\n## TODO add rel for belongs-to/part of \n",
"Added 6124 relationships of type EQUALS\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d050169bb493413e80df17f397f9e7e92c3e20f1 | 257,965 | ipynb | Jupyter Notebook | Projeto_final.ipynb | andrade-lcs/letscode_python | b699bff9debc4df9e40dca858c8792e7d3a572fb | [
"MIT"
] | null | null | null | Projeto_final.ipynb | andrade-lcs/letscode_python | b699bff9debc4df9e40dca858c8792e7d3a572fb | [
"MIT"
] | null | null | null | Projeto_final.ipynb | andrade-lcs/letscode_python | b699bff9debc4df9e40dca858c8792e7d3a572fb | [
"MIT"
] | null | null | null | 129.435524 | 124,690 | 0.732537 | [
[
[
"import requests as r\n\nurl = 'https://api.covid19api.com/dayone/country/brazil'\nresp = r.get(url)\n\nresp.status_code",
"_____no_output_____"
],
[
"raw_data = resp.json()",
"_____no_output_____"
],
[
"raw_data[0]",
"_____no_output_____"
],
[
"final_data = []\nfor data in raw_data:\n final_data.append([data['Confirmed'], data['Deaths'], data['Recovered'], data['Active'], data['Date']])",
"_____no_output_____"
],
[
"final_data.insert(0, ['Confirmed', 'Deaths', 'Recovered', 'Active', 'Date'])",
"_____no_output_____"
],
[
"final_data",
"_____no_output_____"
],
[
"Confirmed = 0\nDeaths = 1\nRecovered = 2\nActive = 3\nDate = 4",
"_____no_output_____"
],
[
"for i in range(1, len(final_data)):\n final_data[i][Date] = final_data[i][Date][:10]\nfinal_data",
"_____no_output_____"
],
[
"import datetime as dt",
"_____no_output_____"
],
[
"print(dt.time(12, 6, 21, 7)) #h:min:seg:milseg\nprint(dt.date(2021, 7, 8)) #ano-mes-dia\nprint(dt.datetime(2021, 7, 8, 12, 6, 21, 7)) #ano-mes-dia h:min:seg:milseg",
"12:06:21.000007\n2021-07-08\n2021-07-08 12:06:21.000007\n"
],
[
"import csv",
"_____no_output_____"
],
[
"with open('brasil-covid.csv', 'w', encoding='utf-8') as file:\n writer = csv.writer(file)\n writer.writerows(final_data)",
"_____no_output_____"
],
[
"for i in range(1, len(final_data)):\n final_data[i][Date] = dt.datetime.strptime(final_data[i][Date], '%Y-%m-%d')\nfinal_data",
"_____no_output_____"
],
[
"def get_datasets(y, labels):\n if type(y[0]) == list:\n datasets = []\n for i in range(len(y)):\n datasets.append({\n 'labels' : labels[i],\n 'data' : y[i]\n })\n return datasets\n else:\n return [\n {\n 'labels' : labes[0],\n 'data' : y\n }\n ]",
"_____no_output_____"
],
[
"def set_title(title=''):\n if title != '':\n display = 'true'\n else:\n display = 'false'\n return {\n 'title' : title,\n 'display' : display \n }",
"_____no_output_____"
],
[
"def creater_chart(x, y, labels, kind='bar', title=''):\n datasets = get_datasets(y, labels)\n options = set_title(title)\n chart = {\n 'type' : kind,\n 'data' : {\n 'labels' : x,\n 'datasets' : datasets\n },\n 'options' : options\n }\n return chart",
"_____no_output_____"
],
[
"def get_api_chart(chart):\n url_base = 'https://quickchart.io/chart'\n resp = r.get(f'{url_base}?c={str(chart)}')\n return resp.content",
"_____no_output_____"
],
[
"def save_image(path, content):\n with open(path, 'wb') as image:\n image.write(content)",
"_____no_output_____"
],
[
"from PIL import Image\nfrom IPython.display import display",
"_____no_output_____"
],
[
"def display_image(path):\n img_pil = Image.open(path)\n display(img_pil)",
"_____no_output_____"
],
[
"y_data_1 = []\nfor obs in final_data[1::10]:\n y_data_1.append(obs[Confirmed])\n\ny_data_2 = []\nfor obs in final_data[1::10]:\n y_data_2.append(obs[Recovered])\n\nlabels = ['Confirmed', 'Recovered']\n\nx = []\nfor obs in final_data[1::10]:\n x.append(obs[Date].strftime('%d/%m/%Y'))\n\nchart = creater_chart(x, [y_data_1, y_data_2], labels, title='Gráfico:ConfirmadosxRecperados')\nchart_content = get_api_chart(chart)\nsave_image('grafico.png', chart_content)\ndisplay_image('grafico.png')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d05021f44a4263de3b5d2a1dd9039615bf90575b | 175,280 | ipynb | Jupyter Notebook | tile-coding/Tile_Coding.ipynb | kw90/deep-reinforcement-learning | f7c70515d96689e10a622ea7450b5f288076d62c | [
"MIT"
] | null | null | null | tile-coding/Tile_Coding.ipynb | kw90/deep-reinforcement-learning | f7c70515d96689e10a622ea7450b5f288076d62c | [
"MIT"
] | null | null | null | tile-coding/Tile_Coding.ipynb | kw90/deep-reinforcement-learning | f7c70515d96689e10a622ea7450b5f288076d62c | [
"MIT"
] | null | null | null | 215.862069 | 65,052 | 0.881647 | [
[
[
"# Tile Coding\n---\n\nTile coding is an innovative way of discretizing a continuous space that enables better generalization compared to a single grid-based approach. The fundamental idea is to create several overlapping grids or _tilings_; then for any given sample value, you need only check which tiles it lies in. You can then encode the original continuous value by a vector of integer indices or bits that identifies each activated tile.\n\n### 1. Import the Necessary Packages",
"_____no_output_____"
]
],
[
[
"# Import common libraries\nimport sys\nimport gym\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Set plotting options\n%matplotlib inline\nplt.style.use('ggplot')\nnp.set_printoptions(precision=3, linewidth=120)",
"_____no_output_____"
]
],
[
[
"### 2. Specify the Environment, and Explore the State and Action Spaces\n\nWe'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's begin with an environment that has a continuous state space, but a discrete action space.",
"_____no_output_____"
]
],
[
[
"# Create an environment\nenv = gym.make('Acrobot-v1')\nenv.seed(505);\n\n# Explore state (observation) space\nprint(\"State space:\", env.observation_space)\nprint(\"- low:\", env.observation_space.low)\nprint(\"- high:\", env.observation_space.high)\n\n# Explore action space\nprint(\"Action space:\", env.action_space)",
"\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\nState space: Box(6,)\n- low: [ -1. -1. -1. -1. -12.566 -28.274]\n- high: [ 1. 1. 1. 1. 12.566 28.274]\nAction space: Discrete(3)\n"
]
],
[
[
"Note that the state space is multi-dimensional, with most dimensions ranging from -1 to 1 (positions of the two joints), while the final two dimensions have a larger range. How do we discretize such a space using tiles?\n\n### 3. Tiling\n\nLet's first design a way to create a single tiling for a given state space. This is very similar to a uniform grid! The only difference is that you should include an offset for each dimension that shifts the split points.\n\nFor instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, `bins = (10, 10)`, and `offsets = (-0.1, 0.5)`, then return a list of 2 NumPy arrays (2 dimensions) each containing the following split points (9 split points per dimension):\n\n```\n[array([-0.9, -0.7, -0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 0.7]),\n array([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5, 4.5])]\n```\n\nNotice how the split points for the first dimension are offset by `-0.1`, and for the second dimension are offset by `+0.5`. This might mean that some of our tiles, especially along the perimeter, are partially outside the valid state space, but that is unavoidable and harmless.",
"_____no_output_____"
]
],
[
[
"def float_range(start: float, stop: float, step_size: float):\n count: int = 0\n while True:\n temp = start + count * step_size\n if step_size > 0 and temp >= stop:\n break\n if step_size < 0 and temp <= stop:\n break\n yield temp\n count += 1",
"_____no_output_____"
],
[
"def create_tiling_grid(low, high, bins=(10, 10), offsets=(0.0, 0.0)):\n \"\"\"Define a uniformly-spaced grid that can be used for tile-coding a space.\n \n Parameters\n ----------\n low : array_like\n Lower bounds for each dimension of the continuous space.\n high : array_like\n Upper bounds for each dimension of the continuous space.\n bins : tuple\n Number of bins or tiles along each corresponding dimension.\n offsets : tuple\n Split points for each dimension should be offset by these values.\n \n Returns\n -------\n grid : list of array_like\n A list of arrays containing split points for each dimension.\n \"\"\"\n \n tiling_grid_d = []\n for d in range(0, len(bins)):\n low_bound_d = low[d]\n high_bound_d = high[d]\n range_d = abs(high_bound_d - low_bound_d)\n step_size_d = range_d / bins[d]\n offset_d = offsets[d]\n raw_tiling_grid_d = [x for x in \\\n float_range(low_bound_d + step_size_d + offset_d, \\\n high_bound_d, step_size_d)]\n tiling_grid_d.append(raw_tiling_grid_d[:(bins[d]-1)])\n\n return tiling_grid_d\n\nlow = [-1.0, -5.0]\nhigh = [1.0, 5.0]\ncreate_tiling_grid(low, high, bins=(10, 10), offsets=(-0.1, 0.5)) # [test]",
"_____no_output_____"
]
],
[
[
"You can now use this function to define a set of tilings that are a little offset from each other.",
"_____no_output_____"
]
],
[
[
"def create_tilings(low, high, tiling_specs):\n \"\"\"Define multiple tilings using the provided specifications.\n\n Parameters\n ----------\n low : array_like\n Lower bounds for each dimension of the continuous space.\n high : array_like\n Upper bounds for each dimension of the continuous space.\n tiling_specs : list of tuples\n A sequence of (bins, offsets) to be passed to create_tiling_grid().\n\n Returns\n -------\n tilings : list\n A list of tilings (grids), each produced by create_tiling_grid().\n \"\"\"\n return [create_tiling_grid(low, high, bins, offset) for bins, offset in tiling_specs]\n\n\n# Tiling specs: [(<bins>, <offsets>), ...]\ntiling_specs = [((10, 10), (-0.066, -0.33)),\n ((10, 10), (0.0, 0.0)),\n ((10, 10), (0.066, 0.33))]\ntilings = create_tilings(low, high, tiling_specs)",
"_____no_output_____"
]
],
[
[
"It may be hard to gauge whether you are getting desired results or not. So let's try to visualize these tilings.",
"_____no_output_____"
]
],
[
[
"from matplotlib.lines import Line2D\n\ndef visualize_tilings(tilings):\n \"\"\"Plot each tiling as a grid.\"\"\"\n prop_cycle = plt.rcParams['axes.prop_cycle']\n colors = prop_cycle.by_key()['color']\n linestyles = ['-', '--', ':']\n legend_lines = []\n\n fig, ax = plt.subplots(figsize=(10, 10))\n for i, grid in enumerate(tilings):\n for x in grid[0]:\n l = ax.axvline(x=x, color=colors[i % len(colors)], linestyle=linestyles[i % len(linestyles)], label=i)\n for y in grid[1]:\n l = ax.axhline(y=y, color=colors[i % len(colors)], linestyle=linestyles[i % len(linestyles)])\n legend_lines.append(l)\n ax.grid('off')\n ax.legend(legend_lines, [\"Tiling #{}\".format(t) for t in range(len(legend_lines))], facecolor='white', framealpha=0.9)\n ax.set_title(\"Tilings\")\n return ax # return Axis object to draw on later, if needed\n\n\nvisualize_tilings(tilings);",
"_____no_output_____"
]
],
[
[
"Great! Now that we have a way to generate these tilings, we can next write our encoding function that will convert any given continuous state value to a discrete vector.\n\n### 4. Tile Encoding\n\nImplement the following to produce a vector that contains the indices for each tile that the input state value belongs to. The shape of the vector can be the same as the arrangment of tiles you have, or it can be ultimately flattened for convenience.\n\nYou can use the same `discretize()` function here from grid-based discretization, and simply call it for each tiling.",
"_____no_output_____"
]
],
[
[
"def discretize(sample, grid):\n \"\"\"Discretize a sample as per given grid.\n \n Parameters\n ----------\n sample : array_like\n A single sample from the (original) continuous space.\n grid : list of array_like\n A list of arrays containing split points for each dimension.\n \n Returns\n -------\n discretized_sample : array_like\n A sequence of integers with the same number of dimensions as sample.\n \"\"\"\n digitized_d = ()\n for dimension in range(0, len(sample)):\n digitized_d = digitized_d + (int(np.digitize(sample[dimension],\n grid[dimension],\n right=False)),)\n return digitized_d\n\n\ndef tile_encode(sample, tilings, flatten=False):\n \"\"\"Encode given sample using tile-coding.\n \n Parameters\n ----------\n sample : array_like\n A single sample from the (original) continuous space.\n tilings : list\n A list of tilings (grids), each produced by create_tiling_grid().\n flatten : bool\n If true, flatten the resulting binary arrays into a single long vector.\n\n Returns\n -------\n encoded_sample : list or array_like\n A list of binary vectors, one for each tiling, or flattened into one.\n \"\"\"\n encoded_tiles = [discretize(sample, tiling) for tiling in tilings]\n if flatten:\n return np.concatenate(encoded_tiles)\n else:\n return encoded_tiles\n\n\n# Test with some sample values\nsamples = [(-1.2 , -5.1 ),\n (-0.75, 3.25),\n (-0.5 , 0.0 ),\n ( 0.25, -1.9 ),\n ( 0.15, -1.75),\n ( 0.75, 2.5 ),\n ( 0.7 , -3.7 ),\n ( 1.0 , 5.0 )]\nencoded_samples = [tile_encode(sample, tilings) for sample in samples]\nprint(\"\\nSamples:\", repr(samples), sep=\"\\n\")\nprint(\"\\nEncoded samples:\", repr(encoded_samples), sep=\"\\n\")",
"\nSamples:\n[(-1.2, -5.1), (-0.75, 3.25), (-0.5, 0.0), (0.25, -1.9), (0.15, -1.75), (0.75, 2.5), (0.7, -3.7), (1.0, 5.0)]\n\nEncoded samples:\n[[(0, 0), (0, 0), (0, 0)], [(1, 8), (1, 8), (0, 7)], [(2, 5), (2, 5), (2, 4)], [(6, 3), (6, 3), (5, 2)], [(6, 3), (5, 3), (5, 2)], [(9, 7), (8, 7), (8, 7)], [(8, 1), (8, 1), (8, 0)], [(9, 9), (9, 9), (9, 9)]]\n"
]
],
[
[
"Note that we did not flatten the encoding above, which is why each sample's representation is a pair of indices for each tiling. This makes it easy to visualize it using the tilings.",
"_____no_output_____"
]
],
[
[
"from matplotlib.patches import Rectangle\n\ndef visualize_encoded_samples(samples, encoded_samples, tilings, low=None, high=None):\n \"\"\"Visualize samples by activating the respective tiles.\"\"\"\n samples = np.array(samples) # for ease of indexing\n\n # Show tiling grids\n ax = visualize_tilings(tilings)\n \n # If bounds (low, high) are specified, use them to set axis limits\n if low is not None and high is not None:\n ax.set_xlim(low[0], high[0])\n ax.set_ylim(low[1], high[1])\n else:\n # Pre-render (invisible) samples to automatically set reasonable axis limits, and use them as (low, high)\n ax.plot(samples[:, 0], samples[:, 1], 'o', alpha=0.0)\n low = [ax.get_xlim()[0], ax.get_ylim()[0]]\n high = [ax.get_xlim()[1], ax.get_ylim()[1]]\n\n # Map each encoded sample (which is really a list of indices) to the corresponding tiles it belongs to\n tilings_extended = [np.hstack((np.array([low]).T, grid, np.array([high]).T)) for grid in tilings] # add low and high ends\n tile_centers = [(grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 for grid_extended in tilings_extended] # compute center of each tile\n tile_toplefts = [grid_extended[:, :-1] for grid_extended in tilings_extended] # compute topleft of each tile\n tile_bottomrights = [grid_extended[:, 1:] for grid_extended in tilings_extended] # compute bottomright of each tile\n\n prop_cycle = plt.rcParams['axes.prop_cycle']\n colors = prop_cycle.by_key()['color']\n for sample, encoded_sample in zip(samples, encoded_samples):\n for i, tile in enumerate(encoded_sample):\n # Shade the entire tile with a rectangle\n topleft = tile_toplefts[i][0][tile[0]], tile_toplefts[i][1][tile[1]]\n bottomright = tile_bottomrights[i][0][tile[0]], tile_bottomrights[i][1][tile[1]]\n ax.add_patch(Rectangle(topleft, bottomright[0] - topleft[0], bottomright[1] - topleft[1],\n color=colors[i], alpha=0.33))\n\n # In case sample is outside tile bounds, it may not have been highlighted properly\n if any(sample < topleft) or any(sample > bottomright):\n # So plot a point in the center of the tile and draw a connecting line\n cx, cy = tile_centers[i][0][tile[0]], tile_centers[i][1][tile[1]]\n ax.add_line(Line2D([sample[0], cx], [sample[1], cy], color=colors[i]))\n ax.plot(cx, cy, 's', color=colors[i])\n \n # Finally, plot original samples\n ax.plot(samples[:, 0], samples[:, 1], 'o', color='r')\n\n ax.margins(x=0, y=0) # remove unnecessary margins\n ax.set_title(\"Tile-encoded samples\")\n return ax\n\nvisualize_encoded_samples(samples, encoded_samples, tilings);",
"_____no_output_____"
]
],
[
[
"Inspect the results and make sure you understand how the corresponding tiles are being chosen. Note that some samples may have one or more tiles in common.\n\n### 5. Q-Table with Tile Coding\n\nThe next step is to design a special Q-table that is able to utilize this tile coding scheme. It should have the same kind of interface as a regular table, i.e. given a `<state, action>` pair, it should return a `<value>`. Similarly, it should also allow you to update the `<value>` for a given `<state, action>` pair (note that this should update all the tiles that `<state>` belongs to).\n\nThe `<state>` supplied here is assumed to be from the original continuous state space, and `<action>` is discrete (and integer index). The Q-table should internally convert the `<state>` to its tile-coded representation when required.",
"_____no_output_____"
]
],
[
[
"class QTable:\n \"\"\"Simple Q-table.\"\"\"\n\n def __init__(self, state_size, action_size):\n \"\"\"Initialize Q-table.\n \n Parameters\n ----------\n state_size : tuple\n Number of discrete values along each dimension of state space.\n action_size : int\n Number of discrete actions in action space.\n \"\"\"\n self.state_size = state_size\n self.action_size = action_size\n\n self.q_table = np.zeros(shape=(self.state_size + (self.action_size,)))\n # TODO: Create Q-table, initialize all Q-values to zero\n # Note: If state_size = (9, 9), action_size = 2, q_table.shape should be (9, 9, 2)\n \n print(\"QTable(): size =\", self.q_table.shape)\n\n\nclass TiledQTable:\n \"\"\"Composite Q-table with an internal tile coding scheme.\"\"\"\n \n def __init__(self, low, high, tiling_specs, action_size):\n \"\"\"Create tilings and initialize internal Q-table(s).\n \n Parameters\n ----------\n low : array_like\n Lower bounds for each dimension of state space.\n high : array_like\n Upper bounds for each dimension of state space.\n tiling_specs : list of tuples\n A sequence of (bins, offsets) to be passed to create_tilings() along with low, high.\n action_size : int\n Number of discrete actions in action space.\n \"\"\"\n self.tilings = create_tilings(low, high, tiling_specs)\n self.state_sizes = [tuple(len(splits)+1 for splits in tiling_grid) for tiling_grid in self.tilings]\n self.action_size = action_size\n self.q_tables = [QTable(state_size, self.action_size) for state_size in self.state_sizes]\n print(\"TiledQTable(): no. of internal tables = \", len(self.q_tables))\n \n def get(self, state, action):\n \"\"\"Get Q-value for given <state, action> pair.\n \n Parameters\n ----------\n state : array_like\n Vector representing the state in the original continuous space.\n action : int\n Index of desired action.\n \n Returns\n -------\n value : float\n Q-value of given <state, action> pair, averaged from all internal Q-tables.\n \"\"\"\n # TODO: Encode state to get tile indices\n state_encoding = tile_encode(state, self.tilings)\n # TODO: Retrieve q-value for each tiling, and return their average\n action_value: float = 0.0\n for i, tile_q_table in enumerate(self.q_tables):\n action_value += tile_q_table.q_table[tuple(state_encoding[i] + (action,))]\n return action_value / len(self.q_tables)\n\n def update(self, state, action, value, alpha=0.1):\n \"\"\"Soft-update Q-value for given <state, action> pair to value.\n \n Instead of overwriting Q(state, action) with value, perform soft-update:\n Q(state, action) = alpha * value + (1.0 - alpha) * Q(state, action)\n \n Parameters\n ----------\n state : array_like\n Vector representing the state in the original continuous space.\n action : int\n Index of desired action.\n value : float\n Desired Q-value for <state, action> pair.\n alpha : float\n Update factor to perform soft-update, in [0.0, 1.0] range.\n \"\"\"\n # TODO: Encode state to get tile indices\n state_encoding = tile_encode(state, self.tilings)\n # TODO: Update q-value for each tiling by update factor alpha\n for i, tile_q_table in enumerate(self.q_tables):\n q_table_value = tile_q_table.q_table[tuple(state_encoding[i] + (action,))]\n new_value = alpha * value + (1.0 - alpha) * q_table_value\n tile_q_table.q_table[tuple(state_encoding[i] + (action,))] = new_value\n\n# Test with a sample Q-table\ntq = TiledQTable(low, high, tiling_specs, 2)\ns1 = 3; s2 = 4; a = 0; q = 1.0\nprint(\"[GET] Q({}, {}) = {}\".format(samples[s1], a, tq.get(samples[s1], a))) # check value at sample = s1, action = 
a\nprint(\"[UPDATE] Q({}, {}) = {}\".format(samples[s2], a, q)); tq.update(samples[s2], a, q) # update value for sample with some common tile(s)\nprint(\"[GET] Q({}, {}) = {}\".format(samples[s1], a, tq.get(samples[s1], a))) # check value again, should be slightly updated",
"QTable(): size = (10, 10, 2)\nQTable(): size = (10, 10, 2)\nQTable(): size = (10, 10, 2)\nTiledQTable(): no. of internal tables = 3\n[GET] Q((0.25, -1.9), 0) = 0.0\n[UPDATE] Q((0.15, -1.75), 0) = 1.0\n[GET] Q((0.25, -1.9), 0) = 0.06666666666666667\n"
]
],
[
[
"If you update the q-value for a particular state (say, `(0.25, -1.91)`) and action (say, `0`), then you should notice the q-value of a nearby state (e.g. `(0.15, -1.75)` and same action) has changed as well! This is how tile-coding is able to generalize values across the state space better than a single uniform grid.",
"_____no_output_____"
],
[
"### 6. Implement a Q-Learning Agent using Tile-Coding\n\nNow it's your turn to apply this discretization technique to design and test a complete learning agent! ",
"_____no_output_____"
]
],
[
[
"class QLearningAgentTileCoding:\n \"\"\"Q-Learning agent that can act on a continuous state space by discretizing it.\"\"\"\n\n def __init__(self, env, tiled_q_table, alpha=0.02, gamma=0.99,\n epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=123):\n \"\"\"Initialize variables, create grid for discretization.\"\"\"\n # Environment info\n self.env = env\n self.state_size = tiled_q_tables.state_sizes\n self.action_size = self.env.action_space.n # 1-dimensional discrete action space\n self.seed = np.random.seed(seed)\n print(\"Environment:\", self.env)\n print(\"State space size:\", self.state_size)\n print(\"Action space size:\", self.action_size)\n \n # Learning parameters\n self.alpha = alpha # learning rate\n self.gamma = gamma # discount factor\n self.epsilon = self.initial_epsilon = epsilon # initial exploration rate\n self.epsilon_decay_rate = epsilon_decay_rate # how quickly should we decrease epsilon\n self.min_epsilon = min_epsilon\n \n # Create Q-table\n self.tiled_q_table = tiled_q_table\n\n def reset_episode(self, state):\n \"\"\"Reset variables for a new episode.\"\"\"\n # Gradually decrease exploration rate\n self.epsilon *= self.epsilon_decay_rate\n self.epsilon = max(self.epsilon, self.min_epsilon)\n\n # Decide initial action\n self.last_state = state\n Q_state = [self.tiled_q_table.get(state, action) for action in range(self.action_size)]\n self.last_action = np.argmax(Q_state)\n return self.last_action\n \n def reset_exploration(self, epsilon=None):\n \"\"\"Reset exploration rate used when training.\"\"\"\n self.epsilon = epsilon if epsilon is not None else self.initial_epsilon\n\n def act(self, state, reward=None, done=None, mode='train'):\n \"\"\"Pick next action and update internal Q table (when mode != 'test').\"\"\"\n Q_state = [self.tiled_q_table.get(state, action) for action in range(self.action_size)]\n if mode == 'test':\n # Test mode: Simply produce an action\n action = np.argmax(Q_state)\n else:\n # Train mode (default): Update Q table, pick next action\n # Note: We update the Q table entry for the *last* (state, action) pair with current state, reward\n action_value = reward + self.gamma * max(Q_state)\n self.tiled_q_table.update(self.last_state, self.last_action, action_value, self.alpha)\n \n # Exploration vs. exploitation\n do_exploration = np.random.uniform(0, 1) < self.epsilon\n if do_exploration:\n # Pick a random action\n action = np.random.randint(0, self.action_size)\n else:\n # Pick the best action from Q table\n action = np.argmax(Q_state)\n\n # Roll over current state, action for next step\n self.last_state = state\n self.last_action = action\n return action",
"_____no_output_____"
],
[
"n_bins = 10\nobs_space = env.observation_space\nn_actions = env.action_space.n\nobs_space_shape = env.observation_space.shape[0]\nbins = tuple([n_bins]*obs_space_shape)\noffset_positions = (obs_space.high - obs_space.low)/(3*n_bins)\ntiling_specifications = [(bins, -offset_positions),\n (bins, tuple([0.0] * obs_space_shape)),\n (bins, +offset_positions)]\ntiled_q_tables = TiledQTable(obs_space.low,\n obs_space.high,\n tiling_specifications,\n n_actions)\nagent = QLearningAgentTileCoding(env=env,\n tiled_q_table=tiled_q_tables)\n\n\nprint(f'''Observation Space Shape: {obs_space_shape}''')\nprint(f'''Bins: {bins}''')\nprint(f'''Offsets: {offset_positions}''')\nprint(f'''Tilings: {tiling_specifications}''')",
"QTable(): size = (10, 10, 10, 10, 10, 10, 3)\nQTable(): size = (10, 10, 10, 10, 10, 10, 3)\nQTable(): size = (10, 10, 10, 10, 10, 10, 3)\nTiledQTable(): no. of internal tables = 3\nEnvironment: <TimeLimit<AcrobotEnv<Acrobot-v1>>>\nState space size: [(10, 10, 10, 10, 10, 10), (10, 10, 10, 10, 10, 10), (10, 10, 10, 10, 10, 10)]\nAction space size: 3\nObservation Space Shape: 6\nBins: (10, 10, 10, 10, 10, 10)\nOffsets: [ 0.067 0.067 0.067 0.067 0.838 1.885]\nTilings: [((10, 10, 10, 10, 10, 10), array([-0.067, -0.067, -0.067, -0.067, -0.838, -1.885], dtype=float32)), ((10, 10, 10, 10, 10, 10), (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)), ((10, 10, 10, 10, 10, 10), array([ 0.067, 0.067, 0.067, 0.067, 0.838, 1.885], dtype=float32))]\n"
],
[
"def run(agent, env, num_episodes=10000, mode='train'):\n \"\"\"Run agent in given reinforcement learning environment and return scores.\"\"\"\n scores = []\n max_avg_score = -np.inf\n for i_episode in range(1, num_episodes+1):\n # Initialize episode\n state = env.reset()\n action = agent.reset_episode(state)\n total_reward = 0\n done = False\n\n # Roll out steps until done\n while not done:\n state, reward, done, info = env.step(action)\n total_reward += reward\n action = agent.act(state, reward, done, mode)\n\n # Save final score\n scores.append(total_reward)\n\n # Print episode stats\n if mode == 'train':\n if len(scores) > 100:\n avg_score = np.mean(scores[-100:])\n if avg_score > max_avg_score:\n max_avg_score = avg_score\n if i_episode % 100 == 0:\n print(\"\\rEpisode {}/{} | Max Average Score: {}\".format(i_episode, num_episodes, max_avg_score), end=\"\")\n sys.stdout.flush()\n return scores\n\nscores = run(agent, env)",
"Episode 10000/10000 | Max Average Score: -329.66"
],
[
"import pandas as pd\n\ndef plot_scores(scores, rolling_window=100):\n \"\"\"Plot scores and optional rolling mean using specified window.\"\"\"\n plt.plot(scores); plt.title(\"Scores\");\n rolling_mean = pd.Series(scores).rolling(rolling_window).mean()\n plt.plot(rolling_mean);\n return rolling_mean\n\nrolling_mean = plot_scores(scores)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
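"code",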
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
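"code",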
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0503bbcff31f9bf930357b939c4a3f3da4dd8ac | 24,966 | ipynb | Jupyter Notebook | aulas/aula2.ipynb | artuguen28/Do_Zero_Ao_DS | e1b369b29d4ab6c291c25080d8508fde37e042bf | [
"MIT"
] | null | null | null | aulas/aula2.ipynb | artuguen28/Do_Zero_Ao_DS | e1b369b29d4ab6c291c25080d8508fde37e042bf | [
"MIT"
] | null | null | null | aulas/aula2.ipynb | artuguen28/Do_Zero_Ao_DS | e1b369b29d4ab6c291c25080d8508fde37e042bf | [
"MIT"
] | null | null | null | 31.966709 | 118 | 0.326524 | [
[
[
"#### Importando biblioteca Pandas",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"#### Carregando o dataset na variável data",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('datasets\\kc_house_data.csv')\n",
"_____no_output_____"
]
],
[
[
"Selecionando pelos nomes",
"_____no_output_____"
]
],
[
[
"print(data[['id', 'date', 'price']])",
"_____no_output_____"
]
],
[
[
"Selecionando pelos índices",
"_____no_output_____"
]
],
[
[
"print(data.iloc[0:10, 1:4])",
" date price bedrooms\n0 20141013T000000 221900.0 3\n1 20141209T000000 538000.0 3\n2 20150225T000000 180000.0 2\n3 20141209T000000 604000.0 4\n4 20150218T000000 510000.0 3\n5 20140512T000000 1225000.0 4\n6 20140627T000000 257500.0 3\n7 20150115T000000 291850.0 3\n8 20150415T000000 229500.0 3\n9 20150312T000000 323000.0 3\n"
]
],
[
[
"#### Respondendo as perguntas de negócio",
"_____no_output_____"
],
[
"Data do imóvel mais antigo",
"_____no_output_____"
]
],
[
[
"data['date'] = pd.to_datetime(data['date'])\ndata.sort_values('date', ascending=True)",
"_____no_output_____"
]
],
[
[
"Determinar o maior numero de andares e contar quantos temos por andar",
"_____no_output_____"
]
],
[
[
"data['floors'].unique()\n\nprint(data.loc[data['floors'] == 3.5].shape)",
"(8, 21)\n"
]
],
[
[
"Criando classificação",
"_____no_output_____"
]
],
[
[
"data['level'] = 'standard'\n\ndata.loc[data['price'] > 540000, 'level'] = 'high_level'\ndata.loc[data['price'] < 540000, 'level'] = 'low_level'\n\ndata.head()\n",
"_____no_output_____"
]
],
[
[
"Relatório ordenado pelo preço",
"_____no_output_____"
]
],
[
[
"report = data[['id', 'date', 'price', 'bedrooms', 'sqft_lot', 'level']].sort_values('price', ascending=False)\nreport.to_csv('datasets/report_aula02.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
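"code",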
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0503cc55ac01ab5da43e74f431d6aedf516aafa | 29,363 | ipynb | Jupyter Notebook | ModelTraining/VoxNetTrain.ipynb | matthewwicker/IterativeSalienceOcclusion | 71603d952c8e95eec47ffd2819d73d2fa41c264f | [
"BSD-3-Clause"
] | 12 | 2019-04-01T15:58:57.000Z | 2021-03-19T08:50:13.000Z | ModelTraining/VoxNetTrain.ipynb | matthewwicker/IterativeSalienceOcclusion | 71603d952c8e95eec47ffd2819d73d2fa41c264f | [
"BSD-3-Clause"
] | null | null | null | ModelTraining/VoxNetTrain.ipynb | matthewwicker/IterativeSalienceOcclusion | 71603d952c8e95eec47ffd2819d73d2fa41c264f | [
"BSD-3-Clause"
] | 1 | 2019-12-09T06:32:32.000Z | 2019-12-09T06:32:32.000Z | 73.591479 | 5,790 | 0.662364 | [
[
[
"import h5py\nimport numpy as np\n\nfiles = ['../Data/ModelNet40_train/ply_data_train0.h5',\n '../Data/ModelNet40_train/ply_data_train1.h5',\n '../Data/ModelNet40_train/ply_data_train2.h5',\n '../Data/ModelNet40_train/ply_data_train3.h5',\n '../Data/ModelNet40_train/ply_data_train4.h5']\n#files = ['../Data/ModelNet10_train/modelnet10_train.h5']\nd = []\nl = []\n\nfor i in range(len(files)):\n fh5 = h5py.File(files[0], 'r')\n data = fh5['data'][:]\n label = fh5['label'][:]\n fh5.close()\n if(i != 0):\n d = np.append(d, data, axis=0)\n l = np.append(l, label, axis=0)\n else:\n d = data\n l = label\n\nprint d.shape\nprint l.shape",
"(10240, 2048, 3)\n(10240, 1)\n"
],
[
"import matplotlib.pyplot as plt\nplt.hist(l, bins=100)\nplt.show()",
"_____no_output_____"
],
[
"from keras.utils import to_categorical\nY_train = to_categorical(l)\nclasses = Y_train.shape[1]\nprint Y_train.shape\nprint \"Loaded dataset with %s classes\"%(classes)",
"Using TensorFlow backend.\n"
],
[
"from tqdm import trange\n# now we need to voxelize that point cloud...\ndef voxelize(dim, data):\n # uncomment below if you have not already normalized your object to [0,1]^3\n #m = max(x.min(), x.max(), key=abs)\n #data /= m # This puts the data in [0,1]\n data *= (dim/2) # This puts the data in [0,dim]\n data += (dim/2) \n data = np.asarray([[int(i[0]), int(i[1]), int(i[2])] for i in data])\n data = np.unique(data, axis=1)\n retval = np.zeros((dim, dim, dim))\n for i in data:\n retval[i[0]][i[1]][i[2]] = 1\n retval = np.asarray([retval])\n return retval\n\nX_train = [voxelize(32, i) for i in d]",
"_____no_output_____"
],
[
"X_train = np.asarray(X_train)\nX_train = np.reshape(X_train, (-1, 32, 32, 32, 1))\nprint X_train.shape",
"(10240, 32, 32, 32, 1)\n"
],
[
"files = ['../Data/ModelNet40_test/ply_data_test0.h5',\n '../Data/ModelNet40_test/ply_data_test1.h5']\n\nd = []\nl = []\n\nfor i in range(len(files)):\n fh5 = h5py.File(files[0], 'r')\n data = fh5['data'][:]\n label = fh5['label'][:]\n fh5.close()\n if(i != 0):\n d = np.append(d, data, axis=0)\n l = np.append(l, label, axis=0)\n else:\n d = data\n l = label\n\nprint d.shape\nprint l.shape\n\nY_test = to_categorical(l)\nX_test = [voxelize(32, i) for i in d]\nX_test = np.asarray(X_test)\nX_test = np.reshape(X_test, (-1, 32, 32, 32, 1))",
"(4096, 2048, 3)\n(4096, 1)\n"
],
[
"import keras\nfrom keras import backend as K\nfrom keras.models import Sequential\nfrom keras.layers import Convolution3D, MaxPooling3D\nfrom keras.layers import Conv3D\nfrom keras.layers.core import Activation, Dense, Dropout, Flatten\nfrom keras.layers.advanced_activations import LeakyReLU\nfrom keras.regularizers import l2\nfrom keras.callbacks import LearningRateScheduler, ModelCheckpoint\nfrom keras.optimizers import SGD\nimport random\nimport numpy as np\n\nnum_classes = classes\n\n# Defining VoxNet in Keras 2\nmodel = Sequential()\nmodel.add(Conv3D(input_shape=(32, 32, 32, 1), filters=32, \n kernel_size=(5,5,5), strides=(2, 2, 2)))\nmodel.add(Activation(LeakyReLU(alpha=0.1)))\nmodel.add(Dropout(rate=0.3))\nmodel.add(Conv3D(filters=32, kernel_size=(3,3,3)))\nmodel.add(Activation(LeakyReLU(alpha=0.1)))\nmodel.add(MaxPooling3D(pool_size=(2, 2, 2), strides=None))\nmodel.add(Dropout(rate=0.4))\nmodel.add(Flatten())\nmodel.add(Dense(units=128, activation='relu'))\nmodel.add(Dropout(rate=0.5))\nmodel.add(Dense(units=num_classes, kernel_initializer='normal', activation='relu'))\nmodel.add(Activation(\"softmax\"))\nmodel.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=[\"accuracy\"])\nmodel.summary()\n",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv3d_1 (Conv3D) (None, 14, 14, 14, 32) 4032 \n_________________________________________________________________\nactivation_1 (Activation) (None, 14, 14, 14, 32) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 14, 14, 14, 32) 0 \n_________________________________________________________________\nconv3d_2 (Conv3D) (None, 12, 12, 12, 32) 27680 \n_________________________________________________________________\nactivation_2 (Activation) (None, 12, 12, 12, 32) 0 \n_________________________________________________________________\nmax_pooling3d_1 (MaxPooling3 (None, 6, 6, 6, 32) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 6, 6, 6, 32) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 6912) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 128) 884864 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 40) 5160 \n_________________________________________________________________\nactivation_3 (Activation) (None, 40) 0 \n=================================================================\nTotal params: 921,736\nTrainable params: 921,736\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"history = model.fit(x=X_train, y=Y_train, batch_size=16, \n epochs=25, verbose=1, validation_data=(X_test, Y_test))",
"Train on 10240 samples, validate on 4096 samples\nEpoch 1/25\n10240/10240 [==============================] - 5s - loss: 0.2134 - acc: 0.9272 - val_loss: 1.9852 - val_acc: 0.6299\nEpoch 2/25\n10240/10240 [==============================] - 5s - loss: 0.1999 - acc: 0.9334 - val_loss: 1.9680 - val_acc: 0.6299\nEpoch 3/25\n10240/10240 [==============================] - 5s - loss: 0.1952 - acc: 0.9311 - val_loss: 2.0854 - val_acc: 0.6343\nEpoch 4/25\n10240/10240 [==============================] - 5s - loss: 0.1826 - acc: 0.9374 - val_loss: 2.0867 - val_acc: 0.6377\nEpoch 5/25\n10240/10240 [==============================] - 5s - loss: 0.1834 - acc: 0.9357 - val_loss: 2.1404 - val_acc: 0.6294\nEpoch 6/25\n10240/10240 [==============================] - 5s - loss: 0.1698 - acc: 0.9421 - val_loss: 2.1168 - val_acc: 0.6392\nEpoch 7/25\n10240/10240 [==============================] - 5s - loss: 0.1543 - acc: 0.9451 - val_loss: 2.1695 - val_acc: 0.6357\nEpoch 8/25\n10240/10240 [==============================] - 5s - loss: 0.1586 - acc: 0.9460 - val_loss: 2.1634 - val_acc: 0.6348\nEpoch 9/25\n10240/10240 [==============================] - 5s - loss: 0.1500 - acc: 0.9474 - val_loss: 2.1505 - val_acc: 0.6440\nEpoch 10/25\n10240/10240 [==============================] - 5s - loss: 0.1389 - acc: 0.9530 - val_loss: 2.1863 - val_acc: 0.6396\nEpoch 11/25\n10240/10240 [==============================] - 5s - loss: 0.1360 - acc: 0.9529 - val_loss: 2.2220 - val_acc: 0.6304\nEpoch 12/25\n10240/10240 [==============================] - 5s - loss: 0.1304 - acc: 0.9522 - val_loss: 2.1790 - val_acc: 0.6450\nEpoch 13/25\n10240/10240 [==============================] - 5s - loss: 0.1224 - acc: 0.9565 - val_loss: 2.2488 - val_acc: 0.6372\nEpoch 14/25\n10240/10240 [==============================] - 5s - loss: 0.1149 - acc: 0.9598 - val_loss: 2.2520 - val_acc: 0.6382\nEpoch 15/25\n10240/10240 [==============================] - 5s - loss: 0.1160 - acc: 0.9604 - val_loss: 2.3152 - val_acc: 0.6445\nEpoch 16/25\n10240/10240 [==============================] - 5s - loss: 0.1134 - acc: 0.9600 - val_loss: 2.3379 - val_acc: 0.6372\nEpoch 17/25\n10240/10240 [==============================] - 5s - loss: 0.1132 - acc: 0.9615 - val_loss: 2.3237 - val_acc: 0.6450\nEpoch 18/25\n10240/10240 [==============================] - 5s - loss: 0.1027 - acc: 0.9672 - val_loss: 2.3943 - val_acc: 0.6323\nEpoch 19/25\n10240/10240 [==============================] - 5s - loss: 0.0970 - acc: 0.9653 - val_loss: 2.4038 - val_acc: 0.6377\nEpoch 20/25\n10240/10240 [==============================] - 5s - loss: 0.1048 - acc: 0.9608 - val_loss: 2.3850 - val_acc: 0.6338\nEpoch 21/25\n 2528/10240 [======>.......................] - ETA: 3s - loss: 0.0887 - acc: 0.9719"
],
[
"# serialize model to JSON\nfrom keras.models import model_from_json\nimport os\n#model_json = model.to_json()\n#with open(\"voxnet40.json\", \"w\") as json_file:\n# json_file.write(model_json)\n# serialize weights to HDF5\nmodel.save_weights(\"VoxNet-ModelNet40.h5\")\nprint(\"Saved model to disk\")",
"Saved model to disk\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0506197d0c9c4b7e6a29dabda627ab7f4a3b204 | 11,641 | ipynb | Jupyter Notebook | 01.Algorithm/algorithm.ipynb | HenryPaik1/Study | deaa1df746587c3dc7fa3b7b73107035a704131b | [
"MIT"
] | 2 | 2018-11-02T14:57:12.000Z | 2018-11-06T14:36:22.000Z | 01.Algorithm/algorithm.ipynb | HenryPaik1/study | deaa1df746587c3dc7fa3b7b73107035a704131b | [
"MIT"
] | null | null | null | 01.Algorithm/algorithm.ipynb | HenryPaik1/study | deaa1df746587c3dc7fa3b7b73107035a704131b | [
"MIT"
] | null | null | null | 23.375502 | 78 | 0.393781 | [
[
[
"# Sorting\n### 1. Bubble: $O(n^2)$\nrepeatedly swapping the adjacent elements if they are in wrong order\n### 2. Selection: $O(n^2)$\nfind largest number and place it in the correct order\n### 3. Insertion: $O(n^2)$\n### 4. Shell: $O(n^2)$\n### 5. Merge: $O(n \\log n)$\n### 6. Quick: $O(n \\log n)$\nit is important to select proper pivot\n### 7. Counting: $O(n)$\n### 8. Radix: $O(n)$\n### 9. Bucket: $O(n)$",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"# Bubble",
"_____no_output_____"
]
],
[
[
"def bubble(arr):\n n = len(arr)\n for i in range(n):\n # (n-1)-(i): 뒤에서부터 i+1 번째 idx\n # 0번째 -> 커서가 n-1까지 움직임\n # 1번째 -> 커서가 n-1-1\n for j in range(0, (n-1)-i):\n print(j)\n if arr[j] > arr[j+1]:\n arr[j], arr[j+1] = arr[j+1], arr[j]",
"_____no_output_____"
],
[
"def bubble(arr):\n n = len(arr)\n for i in range(n):\n for j in range(0, n-1-(i+1))",
"_____no_output_____"
],
[
"arr = [64, 34, 25, 12, 22, 11, 90]\nbubble(arr)\narr",
"0\n1\n2\n3\n4\n5\n0\n1\n2\n3\n4\n0\n1\n2\n3\n0\n1\n2\n0\n1\n0\n"
],
[
"def bubble2(arr):\n n = len(arr)\n for i in range(n):\n swapped = False\n for j in range(0, n-1-i):\n if arr[j] > arr[j+1]:\n arr[j], arr[j+1] = arr[j+1], arr[j]\n # 정렬 안된 부분이 있음\n swapped = True \n if swapped == False:\n break",
"_____no_output_____"
],
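[
"# (Added example, not in the original notebook.) Quick check of bubble2:\n# on an already-sorted list no swap happens on the first pass, so the\n# early-exit break fires after a single scan.\narr = [11, 12, 22, 25, 34, 64, 90]\nbubble2(arr)\nprint(arr)",
"_____no_output_____"
],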
[
"def b(arr):\n n = len(arr)\n for i in range(n):\n swapped = False\n for j in range(0, n-1-i):\n if arr[j] > arr[j+1]:\n swapped = True\n arr[j], arr[j+1] = arr[j+1], arr[j]\n if swapped == False:\n return\n ",
"_____no_output_____"
]
],
[
[
"# Selection Sorting",
"_____no_output_____"
]
],
[
[
"def Selection(arr):\n    n = len(arr)\n    for i in range(n-1, 0, -1):\n        positionOfMax = 0\n        for loc in range(1, i+1):\n            if arr[loc] > arr[positionOfMax]:\n                positionOfMax = loc\n        \n        # swap the largest remaining element into position i\n        # (the original swapped arr[i] with arr[loc]; loc == i after the\n        # loop, so that swap was a no-op and the array stayed unsorted)\n        arr[i], arr[positionOfMax] = arr[positionOfMax], arr[i]\n\n# test code\narr = [54,26,93,17,77,31,44,55,20]\nSelection(arr)\nprint(arr)\n    ",
"[17, 20, 26, 31, 44, 54, 55, 77, 93]\n"
]
],
[
[
"# Quick",
"_____no_output_____"
]
],
[
[
"# partition: cur sweeps from low up to high, growing the <= pivot region \ndef partition(arr, low, high):\n    i = low - 1\n    pivot = arr[high]\n    \n    for cur in range(low, high):\n        print(cur, i)\n        if arr[cur] <= pivot:\n            i += 1\n            arr[i], arr[cur] = arr[cur], arr[i]\n    \n    arr[i+1], arr[high] = arr[high], arr[i+1]\n    return i+1\n\ndef QuickSort(arr, low, high):\n    if low < high:\n        pi = partition(arr, low, high)\n        # left half\n        QuickSort(arr, low, pi-1)\n        # right half\n        QuickSort(arr, pi+1, high)\n    \n# test code\narr = [10, 7, 8, 9, 1, 5]\nn = len(arr)\nQuickSort(arr, 0, n-1)\nfor i in range(n):\n    print(arr[i])",
"0 -1\n1 -1\n2 -1\n3 -1\n4 -1\n2 1\n3 1\n4 1\n3 2\n4 2\n4 3\n1\n5\n7\n8\n9\n10\n"
]
],
[
[
"# Quick2",
"_____no_output_____"
]
],
[
[
"def partition(arr, start, end):\n    pivot = arr[start]  # fixed typo: was 'povot', which raised a NameError below\n    i = start + 1\n    j = end - 1\n    \n    while True:\n        # i: traverse from the beginning\n        # j: traverse from the end\n        \n        # if arr[i] (left side of pivot) is smaller than pivot, then pass\n        while (i <= j and arr[i] <= pivot):\n            i += 1\n        # if arr[j] (right side of pivot) is larger than pivot, then pass\n        while (i <= j and arr[j] >= pivot):\n            j -= 1 \n        \n        if i <= j:\n            arr[i], arr[j] = arr[j], arr[i]\n            print(start)\n        # when i and j cross, swap the pivot (at the front) with the rightmost\n        # element of the <= pivot region and return the pivot's final index\n        else:\n            arr[start], arr[j] = arr[j], arr[start]\n            return j\n\ndef quicksort(arr, start, end):\n    if end - start > 1:\n        # p: pivot location\n        p = partition(arr, start, end)\n        quicksort(arr, start=start, end=p)\n        quicksort(arr, start=p+1, end=end)",
"_____no_output_____"
]
],
[
[
"# Counting Sort\n- reference: https://www.geeksforgeeks.org/radix-sort/\n- count_arr: counts how many of each value 0,1,2,...,n appears in arr\n- iterate over the values 0, 1, ..., n\n- write each value back into arr as many times as it was counted",
"_____no_output_____"
]
],
[
[
"# The key is building the count array,\n# then writing each value back as many times as it was counted\ndef counting_sort(arr, max_val):\n    count_arr = [0 for _ in range(max_val)]\n    for num in arr:\n        count_arr[num] += 1\n    \n    i = 0\n    for num in range(max_val):\n        iter_n = count_arr[num]\n        for _ in range(iter_n):\n            arr[i] = num\n            i += 1\n    return arr\n\n# test code\narr = [5,1,5,1,1,2,4,3,4,3,2]\nmax_val = 6\ncounting_sort(arr, max_val)",
"_____no_output_____"
]
],
[
[
"# Radix Sort",
"_____no_output_____"
],
[
"## Key idea\n- take `number //` the power of ten for the desired `digit` (1st digit: 1, 2nd digit: 10, ...), then `% 10`\n- `// 10^(digit-1)`: makes the digit you want become the last digit\n - e.g. to bring 9, the 3rd digit from the end of 25948, to the last position: 25948 // 10^(3-1) = 259\n- `%10`: keeps only that last digit",
"_____no_output_____"
]
],
[
[
"4378 // 10**(4-1) % 10",
"_____no_output_____"
],
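[
"# (Added example, not in the original notebook.) The same trick in a loop:\n# peel off the digits of 4378 from least to most significant (8, 7, 3, 4).\nnum, exp = 4378, 1\nwhile num // exp > 0:\n    print(num // exp % 10)\n    exp *= 10",
"_____no_output_____"
],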
[
"import numpy as np  # needed for np.log10 below\n\ndef SortingByDigit(arr, exp):\n    n = len(arr)\n    output = [0 for _ in range(n)]\n    count = [0 for _ in range(10)]\n\n    for num in arr:\n        last_digit = num // exp % 10\n        count[last_digit] += 1\n\n    i = 1\n    while i < 10:  # cumulative counts over the 10 digit buckets (the original used the undefined name 'max_')\n        count[i] += count[i-1]\n        i += 1\n    print('digit:', np.log10(exp)+1)\n    print(count)\n\n    # why iterate backwards? it keeps the pass stable, so the order produced\n    # by earlier (lower-digit) passes is preserved when sorting higher digits\n    i = n-1\n    while i >= 0: \n        last_digit = (arr[i] // exp) % 10\n        idx_by_cum = count[last_digit]\n        output[idx_by_cum - 1] = arr[i] \n        count[last_digit] -= 1\n        i -= 1\n    print(count)\n    # update arr\n    i = 0\n    for i in range(0,len(arr)): \n        arr[i] = output[i] \n#     arr = [i for i in output]\n    print(arr)\n    print()\n\ndef radixSort(arr):\n    max_ = max(arr)\n    exp = 1\n    while (max_ // exp) > 0:\n        print(max_, exp)\n        SortingByDigit(arr, exp)\n        exp *= 10\n\n# test code\narr = [170, 5145, 3145, 2145, 802, 24] \nradixSort(arr)",
"5145 1\ndigit: 1.0\n[1, 1, 2, 2, 3, 6, 6, 6, 6, 6]\n[0, 1, 1, 2, 2, 3, 6, 6, 6, 6]\n[170, 802, 24, 5145, 3145, 2145]\n\n5145 10\ndigit: 2.0\n[1, 1, 2, 2, 5, 5, 5, 6, 6, 6]\n[0, 1, 1, 2, 2, 5, 5, 5, 6, 6]\n[802, 24, 5145, 3145, 2145, 170]\n\n5145 100\ndigit: 3.0\n[1, 5, 5, 5, 5, 5, 5, 5, 6, 6]\n[0, 1, 5, 5, 5, 5, 5, 5, 5, 6]\n[24, 5145, 3145, 2145, 170, 802]\n\n5145 1000\ndigit: 4.0\n[3, 3, 4, 5, 5, 6, 6, 6, 6, 6]\n[0, 3, 3, 4, 5, 5, 6, 6, 6, 6]\n[24, 170, 802, 2145, 3145, 5145]\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
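"code",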
"code",
"code"
]
] |
d0508544f552be7580201fcc6ba530dd03cdd2a4 | 6,970 | ipynb | Jupyter Notebook | Airtable/Airtable_Get_data.ipynb | techthiyanes/awesome-notebooks | 10ab4da1b94dfa101e908356a649609b0b17561a | [
"BSD-3-Clause"
] | null | null | null | Airtable/Airtable_Get_data.ipynb | techthiyanes/awesome-notebooks | 10ab4da1b94dfa101e908356a649609b0b17561a | [
"BSD-3-Clause"
] | null | null | null | Airtable/Airtable_Get_data.ipynb | techthiyanes/awesome-notebooks | 10ab4da1b94dfa101e908356a649609b0b17561a | [
"BSD-3-Clause"
] | null | null | null | 25.253623 | 280 | 0.588522 | [
[
[
"<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>",
"_____no_output_____"
],
[
"# Airtable - Get data\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Airtable/Airtable_Get_data.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>",
"_____no_output_____"
],
[
"**Tags:** #airtable #database #productivity #spreadsheet #naas_drivers #operations #snippet #dataframe",
"_____no_output_____"
],
[
"**Author:** [Jeremy Ravenel](https://www.linkedin.com/in/ACoAAAJHE7sB5OxuKHuzguZ9L6lfDHqw--cdnJg/)",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"### Import library",
"_____no_output_____"
]
],
[
[
"from naas_drivers import airtable",
"_____no_output_____"
]
],
[
[
"### Variables",
"_____no_output_____"
]
],
[
[
"API_KEY = 'API_KEY'\nBASE_KEY = 'BASE_KEY'\nTABLE_NAME = 'TABLE_NAME'",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"### Connect to airtable and get data",
"_____no_output_____"
]
],
[
[
"df = airtable.connect(API_KEY,\n                     BASE_KEY, \n                     TABLE_NAME).get(view='All opportunities',\n                                     maxRecords=20)",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Display result",
"_____no_output_____"
]
],
[
[
"df",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d05094d07fec2b0005d180ebdd3076b9c50e056d | 152,406 | ipynb | Jupyter Notebook | notebooks/pyfocs_ex3_finalcheck.ipynb | klapo/btmm_process | 46a245bf5bbb8c4cbbcc48ab1e05e2a742286c89 | [
"MIT"
] | 3 | 2019-02-11T21:47:59.000Z | 2019-08-22T11:41:26.000Z | notebooks/pyfocs_ex3_finalcheck.ipynb | klapo/btmm_process | 46a245bf5bbb8c4cbbcc48ab1e05e2a742286c89 | [
"MIT"
] | 4 | 2018-09-20T09:12:36.000Z | 2019-09-24T13:00:24.000Z | notebooks/pyfocs_ex3_finalcheck.ipynb | klapo/btmm_process | 46a245bf5bbb8c4cbbcc48ab1e05e2a742286c89 | [
"MIT"
] | null | null | null | 312.307377 | 92,772 | 0.921073 | [
[
[
"# Physically labeled data: pyfocs single-ended examples\n\nFinally, after all of that (probably confusing) work we can map the data to physical coordinates.",
"_____no_output_____"
]
],
[
[
"import xarray as xr\nimport pyfocs\nimport os",
"/Users/karllapo/anaconda3/lib/python3.7/typing.py:847: FutureWarning: xarray subclass DataStore should explicitly define __slots__\n super().__init_subclass__(*args, **kwargs)\n"
]
],
[
[
"# 1. Load data\n\n## 1.1 Configuration files\n\nAs in the previous example we will load and prepare the configuration files. This time we will load all the configuration files.\n\nPhysically labeled data is triggered by setting the below flag within the configuration file.\n\n```python\nfinal_flag = True\n```",
"_____no_output_____"
]
],
[
[
"dir_example = os.path.join('../tests/data/')\n\n# Grab a configuration file for the twisted pair pvc fiber and for the stainless steel fiber\nconfig_names = [\n 'example_configuration_steelfiber.yml',\n 'example_twistedpair_bothwls.yml',\n 'example_twistedpair_p1wls.yml',\n 'example_twistedpair_p2wls.yml',\n]\n\ncfg_fname = os.path.join(dir_example, config_names[0])\ncfg_ss, lib_ss = pyfocs.check.config(cfg_fname, ignore_flags=True)\n\ncfg_fname = os.path.join(dir_example, config_names[1])\ncfg_both, lib_both = pyfocs.check.config(cfg_fname, ignore_flags=True)\n\ncfg_fname = os.path.join(dir_example, config_names[2])\ncfg_p1, lib_p1 = pyfocs.check.config(cfg_fname, ignore_flags=True)\n\ncfg_fname = os.path.join(dir_example, config_names[3])\ncfg_p2, lib_p2 = pyfocs.check.config(cfg_fname, ignore_flags=True)\n",
"_____no_output_____"
]
],
[
[
"## 1.2 Data\n\n- For the combined unheated dataset we use twisted pair p1, since it is closer to the DTS device in LAF space, yielding a less noisy signal; p2 is loaded as well for the bias comparison in section 3.2.\n- Additionally, we will load the paired heated-unheated stainless steel fiber that has been interpolated to a common spatial index.\n",
"_____no_output_____"
]
],
[
[
"ds_p1 = xr.open_dataset(os.path.join(dir_example, 'multifiledemo', 'final', 'multifiledemo_final_20190722-0000_p1-wls_unheated.nc'))\nds_p2 = xr.open_dataset(os.path.join(dir_example, 'multifiledemo', 'final', 'multifiledemo_final_20190722-0000_p2-wls_unheated.nc'))\nds_cold = xr.open_dataset(os.path.join(dir_example, 'multifiledemo', 'final', 'multifiledemo_final_20190722-0000_ss-wls_unheated.nc'))\nds_heat = xr.open_dataset(os.path.join(dir_example, 'multifiledemo', 'final', 'multifiledemo_final_20190722-0000_ss-wls_heated.nc'))\n\nprint('=================')\nprint('Unheated fibers - Twisted PVC fiber, pair 1')\nprint(ds_p1)\nprint('')\n\nprint('=================')\nprint('Unheated fibers - Twisted PVC fiber, pair 2')\nprint(ds_p2)\nprint('')\n\nprint('=================')\nprint('Unheated fibers - stainless steel')\nprint(ds_cold)\nprint('')\n\nprint('=================')\nprint('Heated fibers - stainless steel')\nprint(ds_heat)\nprint('')\n",
"=================\nUnheated fibers - Twisted PVC fiber, pair 1\n<xarray.Dataset>\nDimensions: (time: 60, xyz: 1612)\nCoordinates:\n * time (time) datetime64[ns] 2019-07-22T00:00:05 ... 2019-07-22T00:05:00\n LAF (xyz) float64 ...\n unheated (xyz) object ...\n x (xyz) float64 ...\n y (xyz) float64 ...\n z (xyz) float64 ...\nDimensions without coordinates: xyz\nData variables:\n cal_temp (time, xyz) float64 ...\nAttributes:\n dt: 5s\n dLAF: 0.254\n unheated: IR_NE1_p1;IR_NE1_p2;IR_NE2_p1;IR_NE2_p2;IR_NW_p1;IR_NW_p2;IR_S...\n\n=================\nUnheated fibers - Twisted PVC fiber, pair 2\n<xarray.Dataset>\nDimensions: (time: 60, xyz: 1612)\nCoordinates:\n * time (time) datetime64[ns] 2019-07-22T00:00:05 ... 2019-07-22T00:05:00\n LAF (xyz) float64 ...\n unheated (xyz) object ...\n x (xyz) float64 ...\n y (xyz) float64 ...\n z (xyz) float64 ...\nDimensions without coordinates: xyz\nData variables:\n cal_temp (time, xyz) float64 ...\nAttributes:\n dt: 5s\n dLAF: 0.254\n unheated: IR_NE1_p1;IR_NE1_p2;IR_NE2_p1;IR_NE2_p2;IR_NW_p1;IR_NW_p2;IR_S...\n\n=================\nUnheated fibers - stainless steel\n<xarray.Dataset>\nDimensions: (time: 60, xyz: 2377)\nCoordinates:\n * time (time) datetime64[ns] 2019-07-22T00:00:05 ... 2019-07-22T00:05:00\n LAF (xyz) float64 ...\n unheated (xyz) object ...\n x (xyz) float64 ...\n y (xyz) float64 ...\n z (xyz) float64 ...\nDimensions without coordinates: xyz\nData variables:\n cal_temp (time, xyz) float64 ...\nAttributes:\n dt: 5s\n dLAF: 0.254\n\n=================\nHeated fibers - stainless steel\n<xarray.Dataset>\nDimensions: (time: 60, xyz: 2377)\nCoordinates:\n * time (time) datetime64[ns] 2019-07-22T00:00:05 ... 2019-07-22T00:05:00\n LAF (xyz) float64 ...\n heated (xyz) object ...\n x (xyz) float64 ...\n y (xyz) float64 ...\n z (xyz) float64 ...\nDimensions without coordinates: xyz\nData variables:\n cal_temp (time, xyz) float64 ...\nAttributes:\n dt: 5s\n dLAF: 0.254\n\n"
]
],
[
[
"Here we see that all datasets now have `x`, `y`, and `z` coordinates which are labeled using the `xyz` multiindex. Other quantities have been dropped.\n\nThe netcdf files are also now labeled differently. Channel information has been excluded and there is now a label on the location type at the end of the file name.",
"_____no_output_____"
],
[
"# 2. Calculate wind speed\n\n## 2.1 Construct the power variable\n\nHere I will construct a data variable of power. The details on what is happening here are not important besides `power` is a data variable with dimensions of LAF. The wind speed code can accept `power` as a DataArray with dimensions shared with `cal_temp` or as a single float.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n\npower_loc = {\n '1': [1892.5, 2063.5],\n '2': [2063.5, 2205.5],\n '3': [2207.0, 2361.],\n '4': [2361., 2524.]}\n\npower_vals = {\n '1': 6.1,\n '2': 6.4,\n '3': 4.7,\n '4': 5.4,}\n\nds_heat['power'] = ('LAF', np.zeros_like(ds_heat.LAF))\n\nfor p in power_vals:\n laf_mask = ((ds_heat.LAF > power_loc[p][0]) & (ds_heat.LAF < power_loc[p][1]))\n ds_heat['power'] = xr.where(laf_mask, np.ones_like(ds_heat.LAF.values) * power_vals[p], ds_heat.power.values)",
"_____no_output_____"
]
],
[
[
"## 2.2 Calculate wind speed",
"_____no_output_____"
]
],
[
[
"wind_speed = pyfocs.wind_speed.calculate(ds_heat.cal_temp, ds_cold.cal_temp, ds_heat.power)\n",
"Converted air temperature from Celsius to Kelvin.\nConverted air temperature from Celsius to Kelvin.\nConverted air temperature from Celsius to Kelvin.\nConverted air temperature from Celsius to Kelvin.\n"
]
],
[
[
"## 2.3 Split up wind speed based\n\nWind speed is most efficiently measured in the direction orthogonal to the fiber. Since we have fibers that are orthogonal to each other that means we effectively measured wind in two different directions. We represent that here by combining sections that are parallel to each other.",
"_____no_output_____"
]
],
[
[
"cross_valley_components = ['OR_SE', 'OR_NW']\nlogic = [wind_speed.unheated == l for l in cross_valley_components]\nlogic = xr.concat(logic, dim='locations').any(dim='locations')\nwind_speed_cross_valley = wind_speed.where(logic, drop=True)\n\nalong_valley_components = ['OR_SW2', 'OR_SW1', 'OR_NE1', 'OR_NE2']\nlogic = [wind_speed.unheated == l for l in along_valley_components]\nlogic = xr.concat(logic, dim='locations').any(dim='locations')\nwind_speed_along_valley = wind_speed.where(logic, drop=True)",
"_____no_output_____"
]
],
[
[
"## 2.4 Create a Dataset that contains all unheated data",
"_____no_output_____"
]
],
[
[
"unheated = xr.concat([ds_cold, ds_p1], dim='xyz', coords='different')",
"_____no_output_____"
]
],
[
[
"# 3. Plot your Fiber Optic Distributed Sensing data\n\n## 3.1 Wind speed and temperature",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\n\nfig = plt.figure(figsize=(12, 6),)\n\nspec = fig.add_gridspec(ncols=4,\n nrows=2,\n width_ratios=[1, 0.08, 0.04, 0.08],\n hspace=0.18, wspace=0.25,\n )\nax_ew_cbar = fig.add_subplot(spec[0, 3])\nax_ns_cbar = fig.add_subplot(spec[1, 3])\nax_t_cbar = fig.add_subplot(spec[:, 1])\nax_temp = fig.add_subplot(spec[:, 0])\n\nim = ax_temp.scatter(unheated.x, unheated.y, s=10,\n c=unheated.mean(dim='time').cal_temp.values,\n cmap='viridis', vmin=8.5, vmax=10)\nax_temp.set_ylabel('Relative Northing (m)')\nax_temp.set_xlabel('Relative Easting (m)')\nplt.colorbar(im, cax=ax_t_cbar, extend='both')\nax_t_cbar.set_ylabel('Temperature (C)')\nax_temp.set_title('a) LOVE19 Outer Array', loc='left')\n\nim = ax_temp.scatter(wind_speed_along_valley.x * 1.1,\n wind_speed_along_valley.y * 1.1,\n s=10,\n c=wind_speed_along_valley.mean(dim='time').values,\n cmap='Oranges', vmin=0.5, vmax=4)\nplt.colorbar(im, cax=ax_ew_cbar, extend='max')\nax_ew_cbar.set_ylabel('Along valley wind (m/s)')\n\nim = ax_temp.scatter(wind_speed_cross_valley.x * 1.1,\n wind_speed_cross_valley.y * 1.1,\n s=10,\n c=wind_speed_cross_valley.mean(dim='time').values,\n cmap='Blues', vmin=0.5, vmax=4)\nplt.colorbar(im, cax=ax_ns_cbar, extend='max')\nax_ns_cbar.set_ylabel('Cross valley wind (m/s)')",
"_____no_output_____"
]
],
[
[
"## 3.2 Biases in space",
"_____no_output_____"
]
],
[
[
"ds_p2 = ds_p2.interp_like(ds_p1)\n\nfig = plt.figure(figsize=(8, 6),)\n\nspec = fig.add_gridspec(ncols=2,\n nrows=1,\n width_ratios=[1, 0.1],\n hspace=0.18, wspace=0.25,\n )\nax_t_cbar = fig.add_subplot(spec[:, 1])\nax_temp = fig.add_subplot(spec[:, 0])\n\nim = ax_temp.scatter(\n ds_p1.x,\n ds_p1.y,\n s=10,\n c=(ds_p1.cal_temp - ds_p2.cal_temp).mean(dim='time').values,\n cmap='RdBu', vmin=-0.5, vmax=0.5)\nax_temp.set_ylabel('Relative Northing (m)')\nax_temp.set_xlabel('Relative Easting (m)')\nplt.colorbar(im, cax=ax_t_cbar, extend='both')\nax_t_cbar.set_ylabel('p1 - p2 (K)')\nax_temp.set_title('LOVE19 Twisted PVC Fiber Bias', loc='left')",
"_____no_output_____"
]
],
[
[
"Here we can see that the reference sections are a bit misleading. While they evaluate to effectively zero bias, there are substantial biases between what should be replicate measurements. We have found this to be typical of DTS observations. The cause and correction are a subject of on-going research, but we highlight this as a final word of caution on DTS. The method is exceptionally powerful but is very far from a push-button operation. It requires a substantial investment in time for all steps: setting up the fiber takes much longer than other instruments, preparing the dataset is a long process even with the tools provided by pyfocs, and it is still a new technique that is subject to uncertainties that are not even known to the community.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d05095a41428b0c63420fc44cbf5c4bdcaf1ac23 | 28,558 | ipynb | Jupyter Notebook | introduction_to_amazon_algorithms/seq2seq_translation_en-de/SageMaker-Seq2Seq-Translation-English-German.ipynb | karim7262/amazon-sagemaker-examples | f86a97fba33cd04e98ab9ccb1d6e974b73a924e4 | [
"Apache-2.0"
] | 1 | 2018-03-13T09:46:17.000Z | 2018-03-13T09:46:17.000Z | 40_AWS_SageMaker/introduction_to_amazon_algorithms/seq2seq_translation_en-de/SageMaker-Seq2Seq-Translation-English-German.ipynb | donwany/PipeLine.AI | 523ad468dd13fa953d9d41c3c0150b5fdcc2586c | [
"Apache-2.0"
] | null | null | null | 40_AWS_SageMaker/introduction_to_amazon_algorithms/seq2seq_translation_en-de/SageMaker-Seq2Seq-Translation-English-German.ipynb | donwany/PipeLine.AI | 523ad468dd13fa953d9d41c3c0150b5fdcc2586c | [
"Apache-2.0"
] | 2 | 2018-07-24T12:33:48.000Z | 2018-07-24T13:30:44.000Z | 33.836493 | 557 | 0.57476 | [
[
[
"# Machine Translation English-German Example Using SageMaker Seq2Seq\n\n1. [Introduction](#Introduction)\n2. [Setup](#Setup)\n3. [Download dataset and preprocess](#Download-dataset-and-preprocess)\n3. [Training the Machine Translation model](#Training-the-Machine-Translation-model)\n4. [Inference](#Inference)",
"_____no_output_____"
],
[
"## Introduction\n\nWelcome to our Machine Translation end-to-end example! In this demo, we will train a English-German translation model and will test the predictions on a few examples.\n\nSageMaker Seq2Seq algorithm is built on top of [Sockeye](https://github.com/awslabs/sockeye), a sequence-to-sequence framework for Neural Machine Translation based on MXNet. SageMaker Seq2Seq implements state-of-the-art encoder-decoder architectures which can also be used for tasks like Abstractive Summarization in addition to Machine Translation.\n\nTo get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.",
"_____no_output_____"
],
[
"## Setup\n\nLet's start by specifying:\n- The S3 bucket and prefix that you want to use for training and model data. **This should be within the same region as the Notebook Instance, training, and hosting.**\n- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp in the cell below with a the appropriate full IAM role arn string(s).",
"_____no_output_____"
]
],
[
[
"# S3 bucket and prefix\nbucket = '<your_s3_bucket_name_here>'\nprefix = 'sagemaker/<your_s3_prefix_here>' # E.g.'sagemaker/seq2seq/eng-german'",
"_____no_output_____"
],
[
"import boto3\nimport re\nfrom sagemaker import get_execution_role\n\nrole = get_execution_role()",
"_____no_output_____"
]
],
[
[
"Next, we'll import the Python libraries we'll need for the remainder of the exercise.",
"_____no_output_____"
]
],
[
[
"from time import gmtime, strftime\nimport time\nimport numpy as np\nimport os\nimport json\n\n# For plotting attention matrix later on\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Download dataset and preprocess",
"_____no_output_____"
],
[
"In this notebook, we will train a English to German translation model on a dataset from the\n[Conference on Machine Translation (WMT) 2017](http://www.statmt.org/wmt17/).",
"_____no_output_____"
]
],
[
[
"%%bash\nwget http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/corpus.tc.de.gz & \\\nwget http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/corpus.tc.en.gz & wait\ngunzip corpus.tc.de.gz & \\\ngunzip corpus.tc.en.gz & wait\nmkdir validation\ncurl http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/dev.tgz | tar xvzf - -C validation",
"_____no_output_____"
]
],
[
[
"Please note that it is a common practice to split words into subwords using Byte Pair Encoding (BPE). Please refer to [this](https://github.com/awslabs/sockeye/tree/master/tutorials/wmt) tutorial if you are interested in performing BPE.",
"_____no_output_____"
],
[
"Since training on the whole dataset might take several hours/days, for this demo, let us train on the **first 10,000 lines only**. Don't run the next cell if you want to train on the complete dataset.",
"_____no_output_____"
]
],
[
[
"!head -n 10000 corpus.tc.en > corpus.tc.en.small\n!head -n 10000 corpus.tc.de > corpus.tc.de.small",
"_____no_output_____"
]
],
[
[
"Now, let's use the preprocessing script `create_vocab_proto.py` (provided with this notebook) to create vocabulary mappings (strings to integers) and convert these files to x-recordio-protobuf as required for training by SageMaker Seq2Seq. \nUncomment the cell below and run it to check the arguments this script expects.",
"_____no_output_____"
]
],
[
[
"%%bash\n# python3 create_vocab_proto.py -h",
"_____no_output_____"
]
],
[
[
"The cell below does the preprocessing. If you are using the complete dataset, the script might take around 10-15 min on an m4.xlarge notebook instance. Remove \".small\" from the file names for training on full datasets.",
"_____no_output_____"
]
],
[
[
"%%time\n%%bash\npython3 create_vocab_proto.py \\\n --train-source corpus.tc.en.small \\\n --train-target corpus.tc.de.small \\\n --val-source validation/newstest2014.tc.en \\\n --val-target validation/newstest2014.tc.de",
"_____no_output_____"
]
],
[
[
"The script will output 4 files, namely:\n- train.rec : Contains source and target sentences for training in protobuf format\n- val.rec : Contains source and target sentences for validation in protobuf format\n- vocab.src.json : Vocabulary mapping (string to int) for source language (English in this example)\n- vocab.trg.json : Vocabulary mapping (string to int) for target language (German in this example)\n\nLet's upload the pre-processed dataset and vocabularies to S3",
"_____no_output_____"
]
],
[
[
"def upload_to_s3(bucket, prefix, channel, file):\n s3 = boto3.resource('s3')\n data = open(file, \"rb\")\n key = prefix + \"/\" + channel + '/' + file\n s3.Bucket(bucket).put_object(Key=key, Body=data)\n\nupload_to_s3(bucket, prefix, 'train', 'train.rec')\nupload_to_s3(bucket, prefix, 'validation', 'val.rec')\nupload_to_s3(bucket, prefix, 'vocab', 'vocab.src.json')\nupload_to_s3(bucket, prefix, 'vocab', 'vocab.trg.json')",
"_____no_output_____"
],
[
"region_name = boto3.Session().region_name",
"_____no_output_____"
],
[
"containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/seq2seq:latest',\n 'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/seq2seq:latest',\n 'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/seq2seq:latest',\n 'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/seq2seq:latest'}\ncontainer = containers[region_name]\nprint('Using SageMaker Seq2Seq container: {} ({})'.format(container, region_name))",
"_____no_output_____"
]
],
[
[
"## Training the Machine Translation model",
"_____no_output_____"
]
],
[
[
"job_name = 'seq2seq-en-de-p2-xlarge-' + strftime(\"%Y-%m-%d-%H\", gmtime())\nprint(\"Training job\", job_name)\n\ncreate_training_params = \\\n{\n \"AlgorithmSpecification\": {\n \"TrainingImage\": container,\n \"TrainingInputMode\": \"File\"\n },\n \"RoleArn\": role,\n \"OutputDataConfig\": {\n \"S3OutputPath\": \"s3://{}/{}/\".format(bucket, prefix)\n },\n \"ResourceConfig\": {\n # Seq2Seq does not support multiple machines. Currently, it only supports single machine, multiple GPUs\n \"InstanceCount\": 1,\n \"InstanceType\": \"ml.p2.xlarge\", # We suggest one of [\"ml.p2.16xlarge\", \"ml.p2.8xlarge\", \"ml.p2.xlarge\"]\n \"VolumeSizeInGB\": 50\n },\n \"TrainingJobName\": job_name,\n \"HyperParameters\": {\n # Please refer to the documentation for complete list of parameters\n \"max_seq_len_source\": \"60\",\n \"max_seq_len_target\": \"60\",\n \"optimized_metric\": \"bleu\",\n \"batch_size\": \"64\", # Please use a larger batch size (256 or 512) if using ml.p2.8xlarge or ml.p2.16xlarge\n \"checkpoint_frequency_num_batches\": \"1000\",\n \"rnn_num_hidden\": \"512\",\n \"num_layers_encoder\": \"1\",\n \"num_layers_decoder\": \"1\",\n \"num_embed_source\": \"512\",\n \"num_embed_target\": \"512\",\n \"checkpoint_threshold\": \"3\",\n \"max_num_batches\": \"2100\"\n # Training will stop after 2100 iterations/batches.\n # This is just for demo purposes. Remove the above parameter if you want a better model.\n },\n \"StoppingCondition\": {\n \"MaxRuntimeInSeconds\": 48 * 3600\n },\n \"InputDataConfig\": [\n {\n \"ChannelName\": \"train\",\n \"DataSource\": {\n \"S3DataSource\": {\n \"S3DataType\": \"S3Prefix\",\n \"S3Uri\": \"s3://{}/{}/train/\".format(bucket, prefix),\n \"S3DataDistributionType\": \"FullyReplicated\"\n }\n },\n },\n {\n \"ChannelName\": \"vocab\",\n \"DataSource\": {\n \"S3DataSource\": {\n \"S3DataType\": \"S3Prefix\",\n \"S3Uri\": \"s3://{}/{}/vocab/\".format(bucket, prefix),\n \"S3DataDistributionType\": \"FullyReplicated\"\n }\n },\n },\n {\n \"ChannelName\": \"validation\",\n \"DataSource\": {\n \"S3DataSource\": {\n \"S3DataType\": \"S3Prefix\",\n \"S3Uri\": \"s3://{}/{}/validation/\".format(bucket, prefix),\n \"S3DataDistributionType\": \"FullyReplicated\"\n }\n },\n }\n ]\n}\n\nsagemaker_client = boto3.Session().client(service_name='sagemaker')\nsagemaker_client.create_training_job(**create_training_params)\n\nstatus = sagemaker_client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']\nprint(status)",
"_____no_output_____"
],
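[
"# (Added example, not in the original notebook; a minimal sketch.)\n# Instead of re-running the status cell below by hand, boto3 can block\n# until the training job finishes or is stopped:\n# sagemaker_client.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=job_name)",
"_____no_output_____"
],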
[
"status = sagemaker_client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']\nprint(status)\n# if the job failed, determine why\nif status == 'Failed':\n    message = sagemaker_client.describe_training_job(TrainingJobName=job_name)['FailureReason']  # was 'sage', which is not defined until later in the notebook\n    print('Training failed with the following error: {}'.format(message))\n    raise Exception('Training job failed')",
"_____no_output_____"
]
],
[
[
"> Now wait for the training job to complete and proceed to the next step after you see model artifacts in your S3 bucket.",
"_____no_output_____"
],
[
"You can jump to [Use a pretrained model](#Use-a-pretrained-model) as training might take some time.",
"_____no_output_____"
],
[
"## Inference\n\nA trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means translating sentence(s) from English to German.\nThis section involves several steps,\n- Create model - Create a model using the artifact (model.tar.gz) produced by training\n- Create Endpoint Configuration - Create a configuration defining an endpoint, using the above model\n- Create Endpoint - Use the configuration to create an inference endpoint.\n- Perform Inference - Perform inference on some input data using the endpoint.\n\n### Create model\nWe now create a SageMaker Model from the training output. Using the model, we can then create an Endpoint Configuration.",
"_____no_output_____"
]
],
[
[
"use_pretrained_model = False",
"_____no_output_____"
]
],
[
[
"### Use a pretrained model\n#### Please uncomment and run the cell below if you want to use a pretrained model, as training might take several hours/days to complete.",
"_____no_output_____"
]
],
[
[
"# use_pretrained_model = True\n# model_name = \"pretrained-en-de-model\"\n# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/model.tar.gz > model.tar.gz\n# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/vocab.src.json > vocab.src.json\n# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/vocab.trg.json > vocab.trg.json\n# upload_to_s3(bucket, prefix, 'pretrained_model', 'model.tar.gz')\n# model_data = \"s3://{}/{}/pretrained_model/model.tar.gz\".format(bucket, prefix)",
"_____no_output_____"
],
[
"%%time\n\nsage = boto3.client('sagemaker')\n\nif not use_pretrained_model:\n info = sage.describe_training_job(TrainingJobName=job_name)\n model_name=job_name\n model_data = info['ModelArtifacts']['S3ModelArtifacts']\n\nprint(model_name)\nprint(model_data)\n\nprimary_container = {\n 'Image': container,\n 'ModelDataUrl': model_data\n}\n\ncreate_model_response = sage.create_model(\n ModelName = model_name,\n ExecutionRoleArn = role,\n PrimaryContainer = primary_container)\n\nprint(create_model_response['ModelArn'])",
"_____no_output_____"
]
],
[
[
"### Create endpoint configuration\nUse the model to create an endpoint configuration. The endpoint configuration also contains information about the type and number of EC2 instances to use when hosting the model.\n\nSince SageMaker Seq2Seq is based on Neural Nets, we could use an ml.p2.xlarge (GPU) instance, but for this example we will use a free tier eligible ml.m4.xlarge.",
"_____no_output_____"
]
],
[
[
"from time import gmtime, strftime\n\nendpoint_config_name = 'Seq2SeqEndpointConfig-' + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\nprint(endpoint_config_name)\ncreate_endpoint_config_response = sage.create_endpoint_config(\n EndpointConfigName = endpoint_config_name,\n ProductionVariants=[{\n 'InstanceType':'ml.m4.xlarge',\n 'InitialInstanceCount':1,\n 'ModelName':model_name,\n 'VariantName':'AllTraffic'}])\n\nprint(\"Endpoint Config Arn: \" + create_endpoint_config_response['EndpointConfigArn'])",
"_____no_output_____"
]
],
[
[
"### Create endpoint\nLastly, we create the endpoint that serves up model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 10-15 minutes to complete.",
"_____no_output_____"
]
],
[
[
"%%time\nimport time\n\nendpoint_name = 'Seq2SeqEndpoint-' + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\nprint(endpoint_name)\ncreate_endpoint_response = sage.create_endpoint(\n EndpointName=endpoint_name,\n EndpointConfigName=endpoint_config_name)\nprint(create_endpoint_response['EndpointArn'])\n\nresp = sage.describe_endpoint(EndpointName=endpoint_name)\nstatus = resp['EndpointStatus']\nprint(\"Status: \" + status)\n\n# wait until the status has changed\nsage.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)\n\n# print the status of the endpoint\nendpoint_response = sage.describe_endpoint(EndpointName=endpoint_name)\nstatus = endpoint_response['EndpointStatus']\nprint('Endpoint creation ended with EndpointStatus = {}'.format(status))\n\nif status != 'InService':\n raise Exception('Endpoint creation failed.')",
"_____no_output_____"
]
],
[
[
"If you see the message,\n> Endpoint creation ended with EndpointStatus = InService\n\nthen congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the \"Endpoints\" tab in the AWS SageMaker console. \n\nWe will finally create a runtime object from which we can invoke the endpoint.",
"_____no_output_____"
]
],
[
[
"runtime = boto3.client(service_name='runtime.sagemaker') ",
"_____no_output_____"
]
],
[
[
"# Perform Inference",
"_____no_output_____"
],
[
"### Using JSON format for inference (Suggested for a single or small number of data instances)",
"_____no_output_____"
],
[
"#### Note that you don't have to convert string to text using the vocabulary mapping for inference using JSON mode",
"_____no_output_____"
]
],
[
[
"sentences = [\"you are so good !\",\n \"can you drive a car ?\",\n \"i want to watch a movie .\"\n ]\n\npayload = {\"instances\" : []}\nfor sent in sentences:\n payload[\"instances\"].append({\"data\" : sent})\n\nresponse = runtime.invoke_endpoint(EndpointName=endpoint_name, \n ContentType='application/json', \n Body=json.dumps(payload))\n\nresponse = response[\"Body\"].read().decode(\"utf-8\")\nresponse = json.loads(response)\nprint(response)",
"_____no_output_____"
]
],
[
[
"### Retrieving the Attention Matrix",
"_____no_output_____"
],
[
"Passing `\"attention_matrix\":\"true\"` in `configuration` of the data instance will return the attention matrix.",
"_____no_output_____"
]
],
[
[
"sentence = 'can you drive a car ?'\n\npayload = {\"instances\" : [{\n \"data\" : sentence,\n \"configuration\" : {\"attention_matrix\":\"true\"}\n }\n ]}\n\nresponse = runtime.invoke_endpoint(EndpointName=endpoint_name, \n ContentType='application/json', \n Body=json.dumps(payload))\n\nresponse = response[\"Body\"].read().decode(\"utf-8\")\nresponse = json.loads(response)['predictions'][0]\n\nsource = sentence\ntarget = response[\"target\"]\nattention_matrix = np.array(response[\"matrix\"])\n\nprint(\"Source: %s \\nTarget: %s\" % (source, target))",
"_____no_output_____"
],
[
"# Define a function for plotting the attentioan matrix\ndef plot_matrix(attention_matrix, target, source):\n source_tokens = source.split()\n target_tokens = target.split()\n assert attention_matrix.shape[0] == len(target_tokens)\n plt.imshow(attention_matrix.transpose(), interpolation=\"nearest\", cmap=\"Greys\")\n plt.xlabel(\"target\")\n plt.ylabel(\"source\")\n plt.gca().set_xticks([i for i in range(0, len(target_tokens))])\n plt.gca().set_yticks([i for i in range(0, len(source_tokens))])\n plt.gca().set_xticklabels(target_tokens)\n plt.gca().set_yticklabels(source_tokens)\n plt.tight_layout()",
"_____no_output_____"
],
[
"plot_matrix(attention_matrix, target, source)",
"_____no_output_____"
]
],
[
[
"### Using Protobuf format for inference (Suggested for efficient bulk inference)",
"_____no_output_____"
],
[
"Reading the vocabulary mappings as this mode of inference accepts list of integers and returns list of integers.",
"_____no_output_____"
]
],
[
[
"import io\nimport tempfile\nfrom record_pb2 import Record\nfrom create_vocab_proto import vocab_from_json, reverse_vocab, write_recordio, list_to_record_bytes, read_next\n\nsource = vocab_from_json(\"vocab.src.json\")\ntarget = vocab_from_json(\"vocab.trg.json\")\n\nsource_rev = reverse_vocab(source)\ntarget_rev = reverse_vocab(target)",
"_____no_output_____"
],
[
"sentences = [\"this is so cool\",\n \"i am having dinner .\",\n \"i am sitting in an aeroplane .\",\n \"come let us go for a long drive .\"]",
"_____no_output_____"
]
],
[
[
"Converting the string to integers, followed by protobuf encoding:",
"_____no_output_____"
]
],
[
[
"# Convert strings to integers using source vocab mapping. Out-of-vocabulary strings are mapped to 1 - the mapping for <unk>\nsentences = [[source.get(token, 1) for token in sentence.split()] for sentence in sentences]\nf = io.BytesIO()\nfor sentence in sentences:\n record = list_to_record_bytes(sentence, [])\n write_recordio(f, record)",
"_____no_output_____"
],
[
"response = runtime.invoke_endpoint(EndpointName=endpoint_name, \n ContentType='application/x-recordio-protobuf', \n Body=f.getvalue())\n\nresponse = response[\"Body\"].read()",
"_____no_output_____"
]
],
[
[
"Now, parse the protobuf response and convert list of integers back to strings",
"_____no_output_____"
]
],
[
[
"def _parse_proto_response(received_bytes):\n output_file = tempfile.NamedTemporaryFile()\n output_file.write(received_bytes)\n output_file.flush()\n target_sentences = []\n with open(output_file.name, 'rb') as datum:\n next_record = True\n while next_record:\n next_record = read_next(datum)\n if next_record:\n rec = Record()\n rec.ParseFromString(next_record)\n target = list(rec.features[\"target\"].int32_tensor.values)\n target_sentences.append(target)\n else:\n break\n return target_sentences",
"_____no_output_____"
],
[
"targets = _parse_proto_response(response)\nresp = [\" \".join([target_rev.get(token, \"<unk>\") for token in sentence]) for\n sentence in targets]\nprint(resp)",
"_____no_output_____"
]
],
[
[
"# Stop / Close the Endpoint (Optional)\n\nFinally, we should delete the endpoint before we close the notebook.",
"_____no_output_____"
]
],
[
[
"sage.delete_endpoint(EndpointName=endpoint_name)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
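"code",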
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0509a8a4240af1f976d5cc020152daa475602ef | 11,607 | ipynb | Jupyter Notebook | B_DS2.ipynb | eunyul24/eunyul24.github.io | 6629eb2488eb90a269870c2f511f047d6c5d7c75 | [
"MIT"
] | null | null | null | B_DS2.ipynb | eunyul24/eunyul24.github.io | 6629eb2488eb90a269870c2f511f047d6c5d7c75 | [
"MIT"
] | null | null | null | B_DS2.ipynb | eunyul24/eunyul24.github.io | 6629eb2488eb90a269870c2f511f047d6c5d7c75 | [
"MIT"
] | null | null | null | 31.975207 | 483 | 0.433015 | [
[
[
"<a href=\"https://colab.research.google.com/github/eunyul24/eunyul24.github.io/blob/master/B_DS2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n"
],
[
"import numpy as np\nimport csv",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"header = []\nuserId = []\nmovieId = []\nratings = []\ntest = []\nrownum = -1\n\nwith open('/content/drive/My Drive/Colab Notebooks/ml-20m/ratings.csv','r') as f:\n data = csv.reader(f)\n for row in data:\n rownum += 1\n if rownum == 0:\n header = row\n continue\n if int(row[3]) < 1388502017: \n userId.append(int(row[0]))\n movieId.append(int(row[1]))\n ratings.append(float(row[2]))\n else: test.append([int(row[0]), int(row[1]), float(row[2]), int(row[3])])\n \nprint(len(userId))\nprint(len(test))",
"19152913\n847350\n"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"userIdx = dict()\nfor i, uid in enumerate(np.unique(userId)):\n userIdx[uid] = i\n\nmovieIdx = dict()\nfor i, mid in enumerate(np.unique(movieId)):\n movieIdx[mid] = i\n\nX = np.zeros((len(ratings),2), dtype=int)\nfor i in range(len(userId)):\n X[i] = [userIdx[userId[i]], movieIdx[movieId[i]]]",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"class MatrixFactorization():\n    def __init__(self, ratings, X, k = 10, learning_rate = 0.01, reg_param = 0.1, epochs = 20):\n        \"\"\"\n        param R: ratings\n        param X: userId, movieId\n        param k: latent parameter\n        param learning_rate: alpha on weight update\n        param reg_param: beta on weight update\n        param epochs: training epochs\n        \"\"\"\n\n        self.ratings = ratings\n        self.X = X\n        self.num_users = len(np.unique(X[:, 0]))\n        self.num_movies = len(np.unique(X[:, 1]))\n        self.k = k\n        self.learning_rate = learning_rate\n        self.reg_param = reg_param\n        self.epochs = epochs\n\n    def fit(self):\n        \"\"\"\n        training Matrix Factorization : Update matrix latent weight and bias\n        \n        return: training_process\n        \"\"\"\n\n        # init latent features\n        self.P = np.random.normal(size=(self.num_users, self.k))\n        self.Q = np.random.normal(size=(self.num_movies, self.k))\n\n        # init biases\n        self.b = np.mean(self.ratings)\n        self.b_P = np.zeros(self.num_users)\n        self.b_Q = np.zeros(self.num_movies)\n\n        # train while epochs\n        self.training_process = []\n        for epoch in range(self.epochs):\n            for i,rating in enumerate(self.ratings):\n                self.gradient_descent(self.X[i, 0], self.X[i, 1], rating)\n            rmse = self.rmse()\n            self.training_process.append((epoch,rmse))\n            \n            # print status\n            if (epoch + 1) % 10 == 0:\n                print(\"Iteration: %d ; RMSE = %.4f\" % (epoch + 1, rmse))\n        \n        return self.training_process\n    \n\n    def rmse(self):\n        \"\"\"\n        compute root mean square error\n        \n        return: rmse cost\n        \"\"\"\n        \n        error = 0\n        # use the instance's ratings (the original accidentally read the\n        # global 'ratings'); note this returns the root of the *sum* of\n        # squared errors, so its scale grows with the number of ratings\n        for i,rating in enumerate(self.ratings):\n            error += pow(rating - self.get_prediction(self.X[i, 0], self.X[i, 1]), 2)\n        return np.sqrt(error)\n\n\n    def gradient_descent(self, i, j, rating):\n        \"\"\"\n        gradient descent function\n\n        param i: user index of matrix\n        param j: item index of matrix\n        param rating: rating of (i,j)\n        \"\"\"\n\n        # get error\n        prediction = self.get_prediction(i, j)\n        error = rating - prediction\n\n        # update biases\n        self.b_P[i] += self.learning_rate * (error - self.reg_param * self.b_P[i])\n        self.b_Q[j] += self.learning_rate * (error - self.reg_param * self.b_Q[j])\n\n        # update latent feature\n        self.P[i, :] += self.learning_rate * (error * self.Q[j, :] - self.reg_param * self.P[i, :])\n        self.Q[j, :] += self.learning_rate * (error * self.P[i, :] - self.reg_param * self.Q[j, :])\n\n\n    def get_prediction(self, i, j):\n        \"\"\"\n        get predicted rating: user_i, item_j\n        \n        return: prediction of r_ij\n        \"\"\"\n        return self.b + self.b_P[i] + self.b_Q[j] + self.P[i, :].dot(self.Q[j, :].T)",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"MF = MatrixFactorization(ratings, X)\ntraining_process = MF.fit()\n\nprint(\"train RMSE:\", MF.rmse())",
"train RMSE: {} 3781.8531457378017\n"
],
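[
"# (Added example, not in the original notebook.) Predict a single\n# user/movie rating from the trained factors; the ids are assumed to\n# appear in the training split so they exist in the index mappings.\nuid, mid = userId[0], movieId[0]\nprint(MF.get_prediction(userIdx[uid], movieIdx[mid]))",
"_____no_output_____"
],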
[
"f = open('/content/drive/My Drive/Colab Notebooks/ml-20m/B_results_DS2.csv', 'w', encoding='utf-8')\nheader[2] = 'predected rating'\nwr = csv.writer(f)\nwr.writerow(header)\n\nerror = 0\n\nfor uId, mId, rating, time in test:\n if uId in userIdx.keys() and mId in movieIdx.keys():\n predicted = MF.get_prediction(userIdx[uId], movieIdx[mId])\n elif not uId in userIdx.keys() and mId in movieIdx.keys():\n predicted = np.mean([ratings[i] for i in np.where(X[:, 1] == movieIdx[mId])[0]])\n elif uId in userIdx.keys() and not mId in movieIdx.keys():\n predicted = np.mean([ratings[i] for i in np.where(X[:, 0] == userIdx[uId])[0]])\n else:\n predicted = np.mean(ratings)\n\n error += pow(rating - predicted, 2)\n \n wr.writerow([uId, mId, predicted,time])\n\nf.close()\nprint(\"test RMSE:\", np.sqrt(error))",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
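"code",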
"code",
"code",
"code"
]
] |
d0509fff779e0e660759905c64e7cc03d5899af7 | 332,318 | ipynb | Jupyter Notebook | Exercises - Qasim/Python. Pandas, Viz/Capstone Project 1/911 Calls - o .ipynb | k21k/Python-Notes | fe59a52664e911c5cdbc19f77e94f1f892dcaa0b | [
"BSD-2-Clause"
] | null | null | null | Exercises - Qasim/Python. Pandas, Viz/Capstone Project 1/911 Calls - o .ipynb | k21k/Python-Notes | fe59a52664e911c5cdbc19f77e94f1f892dcaa0b | [
"BSD-2-Clause"
] | null | null | null | Exercises - Qasim/Python. Pandas, Viz/Capstone Project 1/911 Calls - o .ipynb | k21k/Python-Notes | fe59a52664e911c5cdbc19f77e94f1f892dcaa0b | [
"BSD-2-Clause"
] | null | null | null | 148.7547 | 39,136 | 0.857564 | [
[
[
"# 911 Calls Capstone Project - Solutions",
"_____no_output_____"
],
[
"For this capstone project we will be analyzing some 911 call data from [Kaggle](https://www.kaggle.com/mchirico/montcoalert). The data contains the following fields:\n\n* lat : String variable, Latitude\n* lng: String variable, Longitude\n* desc: String variable, Description of the Emergency Call\n* zip: String variable, Zipcode\n* title: String variable, Title\n* timeStamp: String variable, YYYY-MM-DD HH:MM:SS\n* twp: String variable, Township\n* addr: String variable, Address\n* e: String variable, Dummy variable (always 1)\n\nJust go along with this notebook and try to complete the instructions or answer the questions in bold using your Python and Data Science skills!",
"_____no_output_____"
],
[
"___\n* Import numpy and Pandas",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns",
"_____no_output_____"
]
],
[
[
"* Read in the csv file as a dataframe called df. (The visualization libraries and %matplotlib inline were already set up in the imports above.)",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('911.csv')",
"_____no_output_____"
]
],
[
[
"* Check the dtypes, info(), and head() of this df",
"_____no_output_____"
]
],
[
[
"df.dtypes",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 99492 entries, 0 to 99491\nData columns (total 9 columns):\nlat 99492 non-null float64\nlng 99492 non-null float64\ndesc 99492 non-null object\nzip 86637 non-null float64\ntitle 99492 non-null object\ntimeStamp 99492 non-null object\ntwp 99449 non-null object\naddr 98973 non-null object\ne 99492 non-null int64\ndtypes: float64(3), int64(1), object(5)\nmemory usage: 6.8+ MB\n"
],
[
"df.head(3)",
"_____no_output_____"
]
],
[
[
"# Short Questions\n* What are the bottom 5 zipcodes for 911 calls?",
"_____no_output_____"
]
],
[
[
"df['zip'].value_counts().tail(5)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"* What are the top 5 townships (twp) for 911 calls?",
"_____no_output_____"
]
],
[
[
"df['twp'].value_counts().head(5)",
"_____no_output_____"
]
],
[
[
"* Take a look at the 'title' column, how many unique title codes are there?",
"_____no_output_____"
]
],
[
[
"df['title'].nunique()",
"_____no_output_____"
]
],
[
[
"# Adding New Features\n* In the titles column there are \"Reasons/Departments\" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called \"Reason\" that contains this string value.\n\n* *For example, if the title column value is EMS: BACK PAINS/INJURY , the Reason column value would be EMS.*",
"_____no_output_____"
]
],
[
[
"df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])\ndf.head()",
"_____no_output_____"
]
],
[
[
"* Most common Reason for a 911 call based off of this new column?",
"_____no_output_____"
]
],
[
[
"# df3 = df2.value_counts()\n# df3.columns= 'count'",
"_____no_output_____"
],
[
"df['Reason'].value_counts()",
"_____no_output_____"
],
[
"sns.countplot(x='Reason',data=df,palette='viridis')",
"_____no_output_____"
]
],
[
[
"___\n* Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column?",
"_____no_output_____"
]
],
[
[
"# Convert it to DateTime object\ndf['timeStamp'] = pd.to_datetime(df['timeStamp'])",
"_____no_output_____"
],
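[
"# (Added example, not in the original notebook.) Verify the conversion:\n# each entry should now be a pandas Timestamp instead of a string.\ntype(df['timeStamp'].iloc[0])",
"_____no_output_____"
],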
[
"df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)\ndf['Month'] = df['timeStamp'].apply(lambda time: time.month)\ndf['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)",
"_____no_output_____"
],
[
"# map Day of week column according to the days in a week\ndmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}\ndf['Day of Week'] = df['Day of Week'].map(dmap)",
"_____no_output_____"
],
[
"sns.countplot(x='Day of Week',data=df,hue='Reason',palette='viridis')\n\n# To relocate the legend\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)",
"_____no_output_____"
],
[
"sns.countplot(x='Month',data=df,hue='Reason',palette='viridis')\n\n# To relocate the legend\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)",
"_____no_output_____"
]
],
[
[
"* You should have noticed it was missing some Months, let's see if we can maybe fill in this information by plotting the information in another way, possibly a simple line plot that fills in the missing months, in order to do this, we'll need to do some work with pandas...",
"_____no_output_____"
],
[
"* Now create a groupby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame.",
"_____no_output_____"
]
],
[
[
"byMonth = df.groupby('Month').count()\nbyMonth.head()",
"_____no_output_____"
],
[
"# Simple line plot of any column of byMonth\nbyMonth['twp'].plot()",
"_____no_output_____"
],
[
"# Now see if you can use seaborn's lmplot() to create a linear fit\n# on the number of calls per month. Keep in mind you\n# may need to reset the index to a column.\nsns.lmplot(x='Month',y='twp',data=byMonth.reset_index())",
"_____no_output_____"
],
[
"# Create a new column Date in the df\ndf['Date']=df['timeStamp'].apply(lambda t: t.date())",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"* Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.",
"_____no_output_____"
]
],
[
[
"# use .plot()\ndf.groupby('Date').count()['twp'].plot()\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"* Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call",
"_____no_output_____"
]
],
[
[
"# Traffic\ndf[df['Reason']=='Traffic'].groupby('Date').count()['twp'].plot()\nplt.title('Traffic')\nplt.tight_layout()",
"_____no_output_____"
],
[
"# Fire\ndf[df['Reason']=='Fire'].groupby('Date').count()['twp'].plot()\nplt.title('Fire')\nplt.tight_layout()",
"_____no_output_____"
],
[
"# EMS\ndf[df['Reason']=='EMS'].groupby('Date').count()['twp'].plot()\nplt.title('EMS')\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"* Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an [unstack](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html) method.",
"_____no_output_____"
]
],
[
[
"dayHour = df.groupby(by=['Day of Week','Hour']).count()['Reason'].unstack()\ndayHour.head()",
"_____no_output_____"
],
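[
"# (Added example, not in the original notebook.) An equivalent restructuring\n# with pivot_table; 'e' is the dummy column that is always 1, so counting it\n# gives the number of calls per (day, hour) pair.\ndf.pivot_table(index='Day of Week', columns='Hour', values='e', aggfunc='count').head()",
"_____no_output_____"
],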
[
"dayHour.head()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,6))\nsns.heatmap(dayHour)",
"_____no_output_____"
],
[
"sns.clustermap(dayHour)",
"_____no_output_____"
]
],
[
[
"* Now repeat these same plots and operations, for a DataFrame that shows the Month as the column.",
"_____no_output_____"
]
],
[
[
"dayMonth = df.groupby(by=['Day of Week','Month']).count()['Reason'].unstack()\ndayMonth.head()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,6))\nsns.heatmap(dayMonth)",
"_____no_output_____"
],
[
"sns.clustermap(dayMonth)",
"_____no_output_____"
]
],
[
[
"# Excellent job! \nKeep exploring data however you see fit",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
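"code",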
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
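"code",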
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
d050abea73e867ef35b586140e4761e69d99aa4c | 245,096 | ipynb | Jupyter Notebook | notebooks/sandbox-grow.ipynb | MarineLasbleis/GrowYourIC | 27cedf0b06cfb2790a60d0734bd0f883ad113a56 | [
"MIT"
] | 1 | 2018-10-10T14:37:34.000Z | 2018-10-10T14:37:34.000Z | notebooks/sandbox-grow.ipynb | syzeng-duduxi/GrowYourIC | 27cedf0b06cfb2790a60d0734bd0f883ad113a56 | [
"MIT"
] | 1 | 2020-11-03T18:14:26.000Z | 2020-11-13T10:00:50.000Z | notebooks/sandbox-grow.ipynb | syzeng-duduxi/GrowYourIC | 27cedf0b06cfb2790a60d0734bd0f883ad113a56 | [
"MIT"
] | 2 | 2018-10-10T14:37:35.000Z | 2022-03-22T10:13:34.000Z | 346.670438 | 110,492 | 0.911618 | [
[
[
"# Let's Grow your Own Inner Core!",
"_____no_output_____"
],
[
"### Choose a model in the list: \n - geodyn_trg.TranslationGrowthRotation()\n - geodyn_static.Hemispheres()\n\n### Choose a proxy type:\n - age\n - position\n - phi\n - theta\n - growth rate\n\n### set the parameters for the model : geodynModel.set_parameters(parameters)\n### set the units : geodynModel.define_units()\n\n### Choose a data set:\n - data.SeismicFromFile(filename) # Lauren's data set\n - data.RandomData(numbers_of_points)\n - data.PerfectSamplingEquator(numbers_of_points)\n organized on a cartesian grid. numbers_of_points is the number of points along the x or y axis. The total number of points is numbers_of_points**2*pi/4\n - as a special plot function to show streamlines: plot_c_vec(self,modelgeodyn)\n - data.PerfectSamplingEquatorRadial(Nr, Ntheta)\n same than below, but organized on a polar grid, not a cartesian grid.\n\n\n### Extract the info:\n - calculate the proxy value for all points of the data set: geodyn.evaluate_proxy(data_set, geodynModel)\n - extract the positions as numpy arrays: extract_rtp or extract_xyz\n - calculate other variables: positions.angular_distance_to_point(t,p, t_point, p_point)",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\n# import statements\nimport numpy as np\nimport matplotlib.pyplot as plt #for figures\nfrom mpl_toolkits.basemap import Basemap #to render maps\nimport math\nimport json #to write dict with parameters\n\nfrom GrowYourIC import positions, geodyn, geodyn_trg, geodyn_static, plot_data, data\n\nplt.rcParams['figure.figsize'] = (8.0, 3.0) #size of figures\ncm = plt.cm.get_cmap('viridis')\ncm2 = plt.cm.get_cmap('winter')",
"/Users/marine/.python-eggs/GrowYourIC-0.5-py3.5.egg-tmp/GrowYourIC/data/CM2008_data.mat\n"
]
],
[
[
"## Define the geodynamical model",
"_____no_output_____"
],
[
"Un-comment one of the model",
"_____no_output_____"
]
],
[
[
"## un-comment one of them\ngeodynModel = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper\n# geodynModel = geodyn_static.Hemispheres() #this is a static model, only hemispheres. ",
"_____no_output_____"
]
],
[
[
"Change the values of the parameters to get the model you want (here, parameters for .TranslationGrowthRotation())",
"_____no_output_____"
]
],
[
[
"age_ic_dim = 1e9 #in years\nrICB_dim = 1221. #in km\nv_g_dim = rICB_dim/age_ic_dim # in km/years #growth rate\nprint(\"Growth rate is {:.2e} km/years\".format(v_g_dim))\nv_g_dim_seconds = v_g_dim*1e3/(np.pi*1e7)\n\ntranslation_velocity_dim = 0.8*v_g_dim_seconds#4e-10 #0.8*v_g_dim_seconds#4e-10 #m.s, value for today's Earth with Q_cmb = 10TW (see Alboussiere et al. 2010)\ntime_translation = rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)\nmaxAge = 2.*time_translation/1e6\nprint(\"The translation recycles the inner core material in {0:.2e} million years\".format(maxAge))\nprint(\"Translation velocity is {0:.2e} km/years\".format(translation_velocity_dim*np.pi*1e7/1e3))\n\nunits = None #we give them already dimensionless parameters. \nrICB = 1.\nage_ic = 1.\nomega = 0.#0.5*np.pi/200e6*age_ic_dim#0.5*np.pi #0. #0.5*np.pi/200e6*age_ic_dim# 0.#0.5*np.pi#0.#0.5*np.pi/200e6*age_ic_dim #0. #-0.5*np.pi # Rotation rates has to be in ]-np.pi, np.pi[\nprint(\"Rotation rate is {:.2e}\".format(omega))\nvelocity_amplitude = translation_velocity_dim*age_ic_dim*np.pi*1e7/rICB_dim/1e3\nvelocity_center = [0., 100.]#center of the eastern hemisphere\nvelocity = geodyn_trg.translation_velocity(velocity_center, velocity_amplitude)\nexponent_growth = 1.#0.1#1\n\nprint(v_g_dim, velocity_amplitude, omega/age_ic_dim*180/np.pi*1e6)",
"Growth rate is 1.22e-06 km/years\nThe translation recycles the inner core material in 2.50e+03 million years\nTranslation velocity is 9.77e-07 km/years\nRotation rate is 0.00e+00\n1.221e-06 0.7999999999999999 0.0\n"
]
],
[
[
"Define a proxy type, and a proxy name (to be used in the figures to annotate the axes)\n\nYou can re-define it later if you want (or define another proxy_type2 if needed)",
"_____no_output_____"
]
],
[
[
"proxy_type = \"age\"#\"growth rate\"\nproxy_name = \"age (Myears)\" #growth rate (km/Myears)\"\nproxy_lim = [0, maxAge] #or None\n#proxy_lim = None\n\nfig_name = \"figures/test_\" #to name the figures\n\nprint(rICB, age_ic, velocity_amplitude, omega, exponent_growth, proxy_type)\nprint(velocity)",
"1.0 1.0 0.7999999999999999 0.0 1.0 age\n[ -1.38918542e-01 7.87846202e-01 4.89858720e-17]\n"
]
],
[
[
"### Parameters for the geodynamical model\n\nThis will input the different parameters in the model.",
"_____no_output_____"
]
],
[
[
"parameters = dict({'units': units,\n 'rICB': rICB, \n 'tau_ic':age_ic,\n 'vt': velocity,\n 'exponent_growth': exponent_growth,\n 'omega': omega,\n 'proxy_type': proxy_type})\ngeodynModel.set_parameters(parameters)\ngeodynModel.define_units()\n\nparam = parameters\nparam['vt'] = parameters['vt'].tolist() #for json serialization\n# write file with parameters, readable with json, byt also human-readable\nwith open(fig_name+'parameters.json', 'w') as f:\n json.dump(param, f)\n \nprint(parameters)",
"{'exponent_growth': 1.0, 'vt': [-0.13891854213354424, 0.7878462024097663, 4.8985871965894125e-17], 'proxy_type': 'age', 'omega': 0.0, 'tau_ic': 1.0, 'units': None, 'rICB': 1.0}\n"
]
],
[
[
"## Different data set and visualisations",
"_____no_output_____"
],
[
"### Perfect sampling at the equator (to visualise the flow lines)\n\nYou can add more points to get a better precision.",
"_____no_output_____"
]
],
[
[
"npoints = 10 #number of points in the x direction for the data set. \ndata_set = data.PerfectSamplingEquator(npoints, rICB = 1.)\ndata_set.method = \"bt_point\"\nproxy = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=\"age\", verbose = False)\ndata_set.plot_c_vec(geodynModel, proxy=proxy, cm=cm, nameproxy=\"age (Myears)\")\nplt.savefig(fig_name+\"equatorial_plot.pdf\", bbox_inches='tight')",
"===\n== Evaluate value of proxy for all points of the data set \n= Geodynamic model is Translation, Rotation and Growth\n= Proxy is age\n= Data set is Perfect sampling in the equatorial plane\n= Proxy is evaluated for bt_point\n= Number of points to examine: 60\n===\n"
]
],
[
[
"### Perfect sampling in the first 100km (to visualise the depth evolution)",
"_____no_output_____"
]
],
[
[
"data_meshgrid = data.Equator_upperpart(10,10)\ndata_meshgrid.method = \"bt_point\"\nproxy_meshgrid = geodyn.evaluate_proxy(data_meshgrid, geodynModel, proxy_type=proxy_type, verbose = False)\n#r, t, p = data_meshgrid.extract_rtp(\"bottom_turning_point\")\n\nfig3, ax3 = plt.subplots(figsize=(8, 2))\nX, Y, Z = data_meshgrid.mesh_RPProxy(proxy_meshgrid)\nsc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm)\nsc2 = ax3.contour(sc, levels=sc.levels[::15], colors = \"k\")\nax3.set_ylim(-0, 120)\nfig3.gca().invert_yaxis()\nax3.set_xlim(-180,180)\ncbar = fig3.colorbar(sc)\n#cbar.set_clim(0, maxAge)\ncbar.set_label(proxy_name)\nax3.set_xlabel(\"longitude\")\nax3.set_ylabel(\"depth below ICB (km)\")\n\nplt.savefig(fig_name+\"meshgrid.pdf\", bbox_inches='tight')",
"===\n== Evaluate value of proxy for all points of the data set \n= Geodynamic model is Translation, Rotation and Growth\n= Proxy is age\n= Data set is Meshgrid at the equator between 0 and 120km depth\n= Proxy is evaluated for bt_point\n= Number of points to examine: 100\n===\n"
],
[
"npoints = 20 #number of points in the x direction for the data set. \ndata_set = data.PerfectSamplingSurface(npoints, rICB = 1., depth=0.01)\ndata_set.method = \"bt_point\"\nproxy_surface = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose = False)\n#r, t, p = data_set.extract_rtp(\"bottom_turning_point\")\nX, Y, Z = data_set.mesh_TPProxy(proxy_surface)\n\n## map\nm, fig = plot_data.setting_map()\n\n\ny, x = m(Y, X)\nsc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none')\nplt.title(\"Dataset: {},\\n geodynamic model: {}\".format(data_set.name, geodynModel.name))\ncbar = plt.colorbar(sc)\ncbar.set_label(proxy_name)\nfig.savefig(fig_name+\"map_surface.pdf\", bbox_inches='tight')",
"===\n== Evaluate value of proxy for all points of the data set \n= Geodynamic model is Translation, Rotation and Growth\n= Proxy is age\n= Data set is Perfect sampling at the surface\n= Proxy is evaluated for bt_point\n= Number of points to examine: 400\n===\n"
]
],
[
[
"### Random data set, in the first 100km - bottom turning point only\n\n#### Calculate the data",
"_____no_output_____"
]
],
[
[
"# random data set\ndata_set_random = data.RandomData(300)\ndata_set_random.method = \"bt_point\"\n\nproxy_random = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type=proxy_type, verbose=False)\ndata_path = \"../GrowYourIC/data/\"\ngeodynModel.data_path = data_path\n\nif proxy_type == \"age\":\n# ## domain size and Vp\n proxy_random_size = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type=\"domain_size\", verbose=False)\n proxy_random_dV = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type=\"dV_V\", verbose=False)",
"_____no_output_____"
],
[
"r, t, p = data_set_random.extract_rtp(\"bottom_turning_point\")\ndist = positions.angular_distance_to_point(t, p, *velocity_center)\n\n## map\nm, fig = plot_data.setting_map() \nx, y = m(p, t)\nsc = m.scatter(x, y, c=proxy_random,s=8, zorder=10, cmap=cm, edgecolors='none')\nplt.title(\"Dataset: {},\\n geodynamic model: {}\".format(data_set_random.name, geodynModel.name))\ncbar = plt.colorbar(sc)\ncbar.set_label(proxy_name)\nfig.savefig(fig_name+data_set_random.shortname+\"_map.pdf\", bbox_inches='tight')\n\n## phi and distance plots\nfig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))\nsc1 = ax[0,0].scatter(p, proxy_random, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)\nphi = np.linspace(-180,180, 50)\n#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)\n#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)\nax[0,0].set_xlabel(\"longitude\")\nax[0,0].set_ylabel(proxy_name)\nif proxy_lim is not None:\n ax[0,0].set_ylim(proxy_lim)\nsc2 = ax[0,1].scatter(dist, proxy_random, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)\nax[0,1].set_xlabel(\"angular distance to ({}, {})\".format(*velocity_center))\nphi = np.linspace(-90,90, 100)\nif proxy_type == \"age\":\n analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)\n ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)\n analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)\n ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)\nax[0,1].set_xlim([0,180])\nax[0,0].set_xlim([-180,180])\ncbar = fig.colorbar(sc1)\ncbar.set_label(\"longitude: abs(theta)\")\nif proxy_lim is not None:\n ax[0,1].set_ylim(proxy_lim)\n## figure with domain size and Vp\nif proxy_type == \"age\":\n sc3 = ax[1,0].scatter(dist, proxy_random_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)\n ax[1,0].set_xlabel(\"angular distance to ({}, {})\".format(*velocity_center))\n ax[1,0].set_ylabel(\"domain size (m)\")\n ax[1,0].set_xlim([0,180])\n ax[1,0].set_ylim([0, 2500.000])\n sc4 = ax[1,1].scatter(dist, proxy_random_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)\n ax[1,1].set_xlabel(\"angular distance to ({}, {})\".format(*velocity_center))\n ax[1,1].set_ylabel(\"dV/V\")\n ax[1,1].set_xlim([0,180])\n ax[1,1].set_ylim([-0.017, -0.002])\nfig.savefig(fig_name +data_set_random.shortname+ '_long_dist.pdf', bbox_inches='tight')\n\nfig, ax = plt.subplots(figsize=(8, 2))\nsc=ax.scatter(p,rICB_dim*(1.-r), c=proxy_random, s=10,cmap=cm, linewidth=0)\nax.set_ylim(-0,120)\nfig.gca().invert_yaxis()\nax.set_xlim(-180,180)\ncbar = fig.colorbar(sc)\nif proxy_lim is not None:\n cbar.set_clim(0, maxAge)\nax.set_xlabel(\"longitude\")\nax.set_ylabel(\"depth below ICB (km)\")\ncbar.set_label(proxy_name)\n\nfig.savefig(fig_name+data_set_random.shortname+\"_depth.pdf\", bbox_inches='tight')",
"_____no_output_____"
]
],
[
[
"### Real Data set from Waszek paper",
"_____no_output_____"
]
],
[
[
"## real data set\ndata_set = data.SeismicFromFile(\"../GrowYourIC/data/WD11.dat\")\ndata_set.method = \"bt_point\"\n \nproxy2 = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose=False)\n\nif proxy_type == \"age\":\n## domain size and DV/V\n proxy_size = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=\"domain_size\", verbose=False)\n proxy_dV = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=\"dV_V\", verbose=False)",
"_____no_output_____"
],
[
"r, t, p = data_set.extract_rtp(\"bottom_turning_point\")\ndist = positions.angular_distance_to_point(t, p, *velocity_center)\n\n## map\nm, fig = plot_data.setting_map() \nx, y = m(p, t)\nsc = m.scatter(x, y, c=proxy2,s=8, zorder=10, cmap=cm, edgecolors='none')\nplt.title(\"Dataset: {},\\n geodynamic model: {}\".format(data_set.name, geodynModel.name))\ncbar = plt.colorbar(sc)\ncbar.set_label(proxy_name)\nfig.savefig(fig_name+data_set.shortname+\"_map.pdf\", bbox_inches='tight')\n\n## phi and distance plots\nfig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))\nsc1 = ax[0,0].scatter(p, proxy2, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)\nphi = np.linspace(-180,180, 50)\n#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)\n#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)\nax[0,0].set_xlabel(\"longitude\")\nax[0,0].set_ylabel(proxy_name)\nif proxy_lim is not None:\n ax[0,0].set_ylim(proxy_lim)\nsc2 = ax[0,1].scatter(dist, proxy2, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)\nax[0,1].set_xlabel(\"angular distance to ({}, {})\".format(*velocity_center))\nphi = np.linspace(-90,90, 100)\nif proxy_type == \"age\":\n analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)\n ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)\n analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)\n ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)\nax[0,1].set_xlim([0,180])\nax[0,0].set_xlim([-180,180])\ncbar = fig.colorbar(sc1)\ncbar.set_label(\"longitude: abs(theta)\")\nif proxy_lim is not None:\n ax[0,1].set_ylim(proxy_lim)\n## figure with domain size and Vp\nif proxy_type == \"age\":\n sc3 = ax[1,0].scatter(dist, proxy_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)\n ax[1,0].set_xlabel(\"angular distance to ({}, {})\".format(*velocity_center))\n ax[1,0].set_ylabel(\"domain size (m)\")\n ax[1,0].set_xlim([0,180])\n ax[1,0].set_ylim([0, 2500.000])\n sc4 = ax[1,1].scatter(dist, proxy_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)\n ax[1,1].set_xlabel(\"angular distance to ({}, {})\".format(*velocity_center))\n ax[1,1].set_ylabel(\"dV/V\")\n ax[1,1].set_xlim([0,180])\n ax[1,1].set_ylim([-0.017, -0.002])\nfig.savefig(fig_name + data_set.shortname+'_long_dist.pdf', bbox_inches='tight')\n\nfig, ax = plt.subplots(figsize=(8, 2))\nsc=ax.scatter(p,rICB_dim*(1.-r), c=proxy2, s=10,cmap=cm, linewidth=0)\nax.set_ylim(-0,120)\nfig.gca().invert_yaxis()\nax.set_xlim(-180,180)\ncbar = fig.colorbar(sc)\nif proxy_lim is not None:\n cbar.set_clim(0, maxAge)\nax.set_xlabel(\"longitude\")\nax.set_ylabel(\"depth below ICB (km)\")\ncbar.set_label(proxy_name)\n\nfig.savefig(fig_name+data_set.shortname+\"_depth.pdf\", bbox_inches='tight')",
"_____no_output_____"
]
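,
[
"## A hedged sketch, not from the original run: the overview at the top lists data.PerfectSamplingEquatorRadial(Nr, Ntheta),\n## which is never exercised in this notebook. Assuming the constructor signature given there and the same\n## evaluate_proxy workflow as the other data sets:\ndata_polar = data.PerfectSamplingEquatorRadial(10, 40)\ndata_polar.method = \"bt_point\"\nproxy_polar = geodyn.evaluate_proxy(data_polar, geodynModel, proxy_type=proxy_type, verbose=False)\nr_pol, t_pol, p_pol = data_polar.extract_rtp(\"bottom_turning_point\")",
"_____no_output_____"
]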
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d050b808aac5049e466b9c4a4104c0535f1c306f | 39,552 | ipynb | Jupyter Notebook | Running_screamingfrog_SEO_spider_in_Colab_notebook.ipynb | danzerzine/seospider-colab | 22757d087a574a9736abb4db9e43ddac73bd3c42 | [
"MIT"
] | 1 | 2021-11-19T15:55:45.000Z | 2021-11-19T15:55:45.000Z | Running_screamingfrog_SEO_spider_in_Colab_notebook.ipynb | danzerzine/seospider-colab | 22757d087a574a9736abb4db9e43ddac73bd3c42 | [
"MIT"
] | null | null | null | Running_screamingfrog_SEO_spider_in_Colab_notebook.ipynb | danzerzine/seospider-colab | 22757d087a574a9736abb4db9e43ddac73bd3c42 | [
"MIT"
] | 1 | 2021-11-19T16:49:28.000Z | 2021-11-19T16:49:28.000Z | 50.772786 | 12,201 | 0.646642 | [
[
[
"<a href=\"https://colab.research.google.com/github/danzerzine/seospider-colab/blob/main/Running_screamingfrog_SEO_spider_in_Colab_notebook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Запуск SEO бота Screaming Frog SEO spider в облаке через Google Colab",
"_____no_output_____"
],
[
"-------------",
"_____no_output_____"
],
[
"> *Protip: под задачу для крупного сайта лучше всего подходят High RAM (25GB) инстансы без GPU/TPU, доступные в PRO подписке*\n",
"_____no_output_____"
],
[
"###Косметическое улучшение: добавляем перенос строки для длинных однострочных команд \n\n",
"_____no_output_____"
]
],
[
[
"from IPython.display import HTML, display\n\ndef set_css():\n display(HTML('''\n <style>\n pre {\n white-space: pre-wrap;\n }\n </style>\n '''))\nget_ipython().events.register('pre_run_cell', set_css)",
"_____no_output_____"
]
],
[
[
"###Подключаем Google Drive в котором хранятся конфиги бота и куда будут сохраняться результаты обхода \n",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"_____no_output_____"
]
],
[
[
"###Узнаем внешний IP инстанса \nчтобы затем ручками добавить его в исключения файерволла cloudflare -- иначе очень быстро упремся в rate limit и нам начнут показывать страницу с проверкой на человекообразность",
"_____no_output_____"
]
],
[
[
"!wget -qO- http://ipecho.net/plain | xargs echo && wget -qO - icanhazip.com",
"_____no_output_____"
]
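,
[
"#@title Optional: whitelist this IP in Cloudflare (a hedged sketch; zone_id, api_token and instance_ip are placeholders, not from the original notebook)\n# Automates the manual firewall step described above, using Cloudflare's IP Access Rules endpoint.\nimport requests\nzone_id = \"\" #@param {type:\"string\"}\napi_token = \"\" #@param {type:\"string\"}\ninstance_ip = \"\" #@param {type:\"string\"}\nresp = requests.post(\n    \"https://api.cloudflare.com/client/v4/zones/{}/firewall/access_rules/rules\".format(zone_id),\n    headers={\"Authorization\": \"Bearer \" + api_token},\n    json={\"mode\": \"whitelist\", \"configuration\": {\"target\": \"ip\", \"value\": instance_ip}, \"notes\": \"colab seo spider\"})\nprint(resp.json())",
"_____no_output_____"
]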
],
[
[
"###Устанавливаем последнюю версию seo spider, делаем мелкие дела по хозяйству\n* Обновляем установленные linux пакеты \n* Копируем настройки с десктопной версии SEO spider в локальную папку инстанса (это нужно чтобы передать токены авторизации к google search console, GA и так далее) ",
"_____no_output_____"
]
],
[
[
"#@title Settings directory on GDrive { vertical-output: true, display-mode: \"both\" }\nsettings_path = \"\" #@param {type:\"string\"}\n!wget https://download.screamingfrog.co.uk/products/seo-spider/screamingfrogseospider_16.3_all.deb\n!apt-get install screamingfrogseospider_16.3_all.deb\n!sudo apt-get update && sudo apt-get upgrade -y\n!mkdir -p ~/.ScreamingFrogSEOSpider\n!cp -r $settings_path/* ~/.ScreamingFrogSEOSpider",
"_____no_output_____"
]
],
[
[
"### Запускаем bash скрипт для донастройки инстанса и бота \nОн добавит виртуальный дисплей для вывода из JAVA, переключит бота в режим сохранения результатов на диске вместо RAM и т.д.",
"_____no_output_____"
]
],
[
[
"!wget https://raw.githubusercontent.com/fili/screaming-frog-on-google-compute-engine/master/gce-sf.sh -O install.sh && chmod +x install.sh && source ./install.sh",
"_____no_output_____"
]
],
[
[
"###Делаем симлинк скрытой папки с временными файлами и настройками бота\nна случай если придется что-то редактировать или вынимать оттуда наживую, иначе ее не будет видно в браузере файлов слева",
"_____no_output_____"
]
],
[
[
"!ln -s ~/.ScreamingFrogSEOSpider ~/ScreamingFrogSEOSpider",
"_____no_output_____"
]
],
[
[
"###Даем команду боту в headless режиме \nпрописываем все нужные флаги для экспорта, настроек, отчетов, выгрузок и так далее",
"_____no_output_____"
]
],
[
[
"#@title Crawl settings { vertical-output: true }\nurl_start = \"\" #@param {type:\"string\"}\nuse_gcs = \"\" #@param [\"\", \"--use-google-search-console \\\"account \\\"\"] {allow-input: true}\nconfig_path = \"\" #@param {type:\"string\"}\noutput_folder = \"\" #@param {type:\"string\"}\n\n!screamingfrogseospider --crawl \"$url_start\" $use_gcs --headless --config \"$config_path\" --output-folder \"$output_folder\" --timestamped-output --save-crawl --export-tabs \"Internal:All,Response Codes:All,Response Codes:Blocked by Robots.txt,Response Codes:Blocked Resource,Response Codes:No Response,Response Codes:Redirection (3xx),Response Codes:Redirection (JavaScript),Response Codes:Redirection (Meta Refresh),Response Codes:Client Error (4xx),Response Codes:Server Error (5xx),Page Titles:All,Page Titles:Missing,Page Titles:Duplicate,Page Titles:Over X Characters,Page Titles:Below X Characters,Page Titles:Over X Pixels,Page Titles:Below X Pixels,Page Titles:Same as H1,Page Titles:Multiple,Meta Description:All,Meta Description:Missing,Meta Description:Duplicate,Meta Description:Over X Characters,Meta Description:Below X Characters,Meta Description:Over X Pixels,Meta Description:Below X Pixels,Meta Description:Multiple,Meta Keywords:All,Meta Keywords:Missing,Meta Keywords:Duplicate,Meta Keywords:Multiple,Canonicals:All,Canonicals:Contains Canonical,Canonicals:Self Referencing,Canonicals:Canonicalised,Canonicals:Missing,Canonicals:Multiple,Canonicals:Non-Indexable Canonical,Directives:All,Directives:Index,Directives:Noindex,Directives:Follow,Directives:Nofollow,Directives:None,Directives:NoArchive,Directives:NoSnippet,Directives:Max-Snippet,Directives:Max-Image-Preview,Directives:Max-Video-Preview,Directives:NoODP,Directives:NoYDIR,Directives:NoImageIndex,Directives:NoTranslate,Directives:Unavailable_After,Directives:Refresh,AMP:All,AMP:Non-200 Response,AMP:Missing Non-AMP Return Link,AMP:Missing Canonical to Non-AMP,AMP:Non-Indexable Canonical,AMP:Indexable,AMP:Non-Indexable,AMP:Missing <html amp> Tag,AMP:Missing/Invalid <!doctype html> Tag,AMP:Missing <head> Tag,AMP:Missing <body> Tag,AMP:Missing Canonical,AMP:Missing/Invalid <meta charset> Tag,AMP:Missing/Invalid <meta viewport> Tag,AMP:Missing/Invalid AMP Script,AMP:Missing/Invalid AMP Boilerplate,AMP:Contains Disallowed HTML,AMP:Other Validation Errors,Structured Data:All,Structured Data:Contains Structured Data,Structured Data:Missing,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:Parse Errors,Structured Data:Microdata URLs,Structured Data:JSON-LD URLs,Structured Data:RDFa URLs,Sitemaps:All,Sitemaps:URLs in Sitemap,Sitemaps:URLs not in Sitemap,Sitemaps:Orphan URLs,Sitemaps:Non-Indexable URLs in Sitemap,Sitemaps:URLs in Multiple Sitemaps,Sitemaps:XML Sitemap with over 50k URLs,Sitemaps:XML Sitemap over 50MB\" --bulk-export \"Canonicals:Contains Canonical Inlinks,Canonicals:Self Referencing Inlinks,Canonicals:Canonicalised Inlinks,Canonicals:Missing Inlinks,Canonicals:Multiple Inlinks,Canonicals:Non-Indexable Canonical Inlinks,AMP:All Inlinks,AMP:Non-200 Response Inlinks,AMP:Missing Non-AMP Return Link Inlinks,AMP:Missing Canonical to Non-AMP Inlinks,AMP:Non-Indexable Canonical Inlinks,AMP:Indexable Inlinks,AMP:Non-Indexable Inlinks,Structured Data:Contains Structured Data,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:JSON-LD URLs,Structured Data:Microdata URLs,Structured Data:RDFa URLs,Sitemaps:URLs in Sitemap Inlinks,Sitemaps:Orphan URLs 
Inlinks,Sitemaps:Non-Indexable URLs in Sitemap Inlinks,Sitemaps:URLs in Multiple Sitemaps Inlinks\" --save-report \"Crawl Overview,Redirects:All Redirects,Redirects:Redirect Chains,Redirects:Redirect & Canonical Chains,Canonicals:Canonical Chains,Canonicals:Non-Indexable Canonicals,Pagination:Non-200 Pagination URLs,Pagination:Unlinked Pagination URLs,Hreflang:All hreflang URLs,Hreflang:Non-200 hreflang URLs,Hreflang:Unlinked hreflang URLs,Hreflang:Missing Return Links,Hreflang:Inconsistent Language & Region Return Links,Hreflang:Non Canonical Return Links,Hreflang:Noindex Return Links,Insecure Content,SERP Summary,Orphan Pages,Structured Data:Validation Errors & Warnings Summary,Structured Data:Validation Errors & Warnings,Structured Data:Google Rich Results Features Summary,Structured Data:Google Rich Results Features,HTTP Headers:HTTP Header Summary,Cookies:Cookie Summary\" --export-format xlsx --export-custom-summary \"Site Crawled,Date,Time,Total URLs Encountered,Total URLs Crawled,Total Internal blocked by robots.txt,Total External blocked by robots.txt,URLs Displayed,Total Internal URLs,Total External URLs,Total Internal Indexable URLs,Total Internal Non-Indexable URLs,JavaScript:All,JavaScript:Uses Old AJAX Crawling Scheme URLs,JavaScript:Uses Old AJAX Crawling Scheme Meta Fragment Tag,JavaScript:Page Title Only in Rendered HTML,JavaScript:Page Title Updated by JavaScript,JavaScript:H1 Only in Rendered HTML,JavaScript:H1 Updated by JavaScript,JavaScript:Meta Description Only in Rendered HTML,JavaScript:Meta Description Updated by JavaScript,JavaScript:Canonical Only in Rendered HTML,JavaScript:Canonical Mismatch,JavaScript:Noindex Only in Original HTML,JavaScript:Nofollow Only in Original HTML,JavaScript:Contains JavaScript Links,JavaScript:Contains JavaScript Content,JavaScript:Pages with Blocked Resources,H1:All,H1:Missing,H1:Duplicate,H1:Over X Characters,H1:Multiple,H2:All,H2:Missing,H2:Duplicate,H2:Over X Characters,H2:Multiple,Internal:All,Internal:HTML,Internal:JavaScript,Internal:CSS,Internal:Images,Internal:PDF,Internal:Flash,Internal:Other,Internal:Unknown,External:All,External:HTML,External:JavaScript,External:CSS,External:Images,External:PDF,External:Flash,External:Other,External:Unknown,AMP:All,AMP:Non-200 Response,AMP:Missing Non-AMP Return Link,AMP:Missing Canonical to Non-AMP,AMP:Non-Indexable Canonical,AMP:Indexable,AMP:Non-Indexable,AMP:Missing <html amp> Tag,AMP:Missing/Invalid <!doctype html> Tag,AMP:Missing <head> Tag,AMP:Missing <body> Tag,AMP:Missing Canonical,AMP:Missing/Invalid <meta charset> Tag,AMP:Missing/Invalid <meta viewport> Tag,AMP:Missing/Invalid AMP Script,AMP:Missing/Invalid AMP Boilerplate,AMP:Contains Disallowed HTML,AMP:Other Validation Errors,Canonicals:All,Canonicals:Contains Canonical,Canonicals:Self Referencing,Canonicals:Canonicalised,Canonicals:Missing,Canonicals:Multiple,Canonicals:Non-Indexable Canonical,Content:All,Content:Spelling Errors,Content:Grammar Errors,Content:Near Duplicates,Content:Exact Duplicates,Content:Low Content Pages,Custom Extraction:All,Custom Search:All,Directives:All,Directives:Index,Directives:Noindex,Directives:Follow,Directives:Nofollow,Directives:None,Directives:NoArchive,Directives:NoSnippet,Directives:Max-Snippet,Directives:Max-Image-Preview,Directives:Max-Video-Preview,Directives:NoODP,Directives:NoYDIR,Directives:NoImageIndex,Directives:NoTranslate,Directives:Unavailable_After,Directives:Refresh,Analytics:All,Analytics:Sessions Above 0,Analytics:Bounce Rate Above 70%,Analytics:No GA 
Data,Analytics:Non-Indexable with GA Data,Analytics:Orphan URLs,Search Console:All,Search Console:Clicks Above 0,Search Console:No GSC Data,Search Console:Non-Indexable with GSC Data,Search Console:Orphan URLs,Hreflang:All,Hreflang:Contains hreflang,Hreflang:Non-200 hreflang URLs,Hreflang:Unlinked hreflang URLs,Hreflang:Missing Return Links,Hreflang:Inconsistent Language & Region Return Links,Hreflang:Non-Canonical Return Links,Hreflang:Noindex Return Links,Hreflang:Incorrect Language & Region Codes,Hreflang:Multiple Entries,Hreflang:Missing Self Reference,Hreflang:Not Using Canonical,Hreflang:Missing X-Default,Hreflang:Missing,Images:All,Images:Over X KB,Images:Missing Alt Text,Images:Missing Alt Attribute,Images:Alt Text Over X Characters,Link Metrics:All,Meta Description:All,Meta Description:Missing,Meta Description:Duplicate,Meta Description:Over X Characters,Meta Description:Below X Characters,Meta Description:Over X Pixels,Meta Description:Below X Pixels,Meta Description:Multiple,Meta Keywords:All,Meta Keywords:Missing,Meta Keywords:Duplicate,Meta Keywords:Multiple,PageSpeed:All,PageSpeed:Eliminate Render-Blocking Resources,PageSpeed:Defer Offscreen Images,PageSpeed:Efficiently Encode Images,PageSpeed:Properly Size Images,PageSpeed:Minify CSS,PageSpeed:Minify JavaScript,PageSpeed:Reduce Unused CSS,PageSpeed:Reduce Unused JavaScript,PageSpeed:Serve Images in Next-Gen Formats,PageSpeed:Enable Text Compression,PageSpeed:Preconnect to Required Origins,PageSpeed:Reduce Server Response Times (TTFB),PageSpeed:Avoid Multiple Page Redirects,PageSpeed:Preload Key Requests,PageSpeed:Use Video Formats for Animated Content,PageSpeed:Avoid Excessive DOM Size,PageSpeed:Reduce JavaScript Execution Time,PageSpeed:Serve Static Assets with an Efficient Cache Policy,PageSpeed:Minimize Main-Thread Work,PageSpeed:Ensure Text Remains Visible During Webfont Load,PageSpeed:Image Elements Do Not Have Explicit Width & Height,PageSpeed:Avoid Large Layout Shifts,PageSpeed:Avoid Serving Legacy JavaScript to Modern Browsers,PageSpeed:Request Errors,Pagination:All,Pagination:Contains Pagination,Pagination:First Page,Pagination:Paginated 2+ Pages,Pagination:Pagination URL Not in Anchor Tag,Pagination:Non-200 Pagination URLs,Pagination:Unlinked Pagination URLs,Pagination:Non-Indexable,Pagination:Multiple Pagination URLs,Pagination:Pagination Loop,Pagination:Sequence Error,Response Codes:All,Response Codes:Blocked by Robots.txt,Response Codes:Blocked Resource,Response Codes:No Response,Response Codes:Success (2xx),Response Codes:Redirection (3xx),Response Codes:Redirection (JavaScript),Response Codes:Redirection (Meta Refresh),Response Codes:Client Error (4xx),Response Codes:Server Error (5xx),Security:All,Security:HTTP URLs,Security:HTTPS URLs,Security:Mixed Content,Security:Form URL Insecure,Security:Form on HTTP URL,Security:Unsafe Cross-Origin Links,Security:Missing HSTS Header,Security:Bad Content Type,Security:Missing X-Content-Type-Options Header,Security:Missing X-Frame-Options Header,Security:Protocol-Relative Resource Links,Security:Missing Content-Security-Policy Header,Security:Missing Secure Referrer-Policy Header,Sitemaps:All,Sitemaps:URLs in Sitemap,Sitemaps:URLs not in Sitemap,Sitemaps:Orphan URLs,Sitemaps:Non-Indexable URLs in Sitemap,Sitemaps:URLs in Multiple Sitemaps,Sitemaps:XML Sitemap with over 50k URLs,Sitemaps:XML Sitemap over 50MB,Structured Data:All,Structured Data:Contains Structured Data,Structured Data:Missing,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured 
Data:Parse Errors,Structured Data:Microdata URLs,Structured Data:JSON-LD URLs,Structured Data:RDFa URLs,Page Titles:All,Page Titles:Missing,Page Titles:Duplicate,Page Titles:Over X Characters,Page Titles:Below X Characters,Page Titles:Over X Pixels,Page Titles:Below X Pixels,Page Titles:Same as H1,Page Titles:Multiple,URL:All,URL:Non ASCII Characters,URL:Underscores,URL:Uppercase,URL:Parameters,URL:Over X Characters,URL:Multiple Slashes,URL:Repetitive Path,URL:Contains Space,URL:Broken Bookmark,URL:Internal Search,Depth 1,Depth 2,Depth 3,Depth 4,Depth 5,Depth 6,Depth 7,Depth 8,Depth 9,Depth 10+,Top Inlinks 1 URL,Top Inlinks 1 Number of Inlinks,Top Inlinks 2 URL,Top Inlinks 2 Number of Inlinks,Top Inlinks 3 URL,Top Inlinks 3 Number of Inlinks,Top Inlinks 4 URL,Top Inlinks 4 Number of Inlinks,Top Inlinks 5 URL,Top Inlinks 5 Number of Inlinks,Top Inlinks 6 URL,Top Inlinks 6 Number of Inlinks,Top Inlinks 7 URL,Top Inlinks 7 Number of Inlinks,Top Inlinks 8 URL,Top Inlinks 8 Number of Inlinks,Top Inlinks 9 URL,Top Inlinks 9 Number of Inlinks,Top Inlinks 10 URL,Top Inlinks 10 Number of Inlinks,Top Inlinks 11 URL,Top Inlinks 11 Number of Inlinks,Top Inlinks 12 URL,Top Inlinks 12 Number of Inlinks,Top Inlinks 13 URL,Top Inlinks 13 Number of Inlinks,Top Inlinks 14 URL,Top Inlinks 14 Number of Inlinks,Top Inlinks 15 URL,Top Inlinks 15 Number of Inlinks,Top Inlinks 16 URL,Top Inlinks 16 Number of Inlinks,Top Inlinks 17 URL,Top Inlinks 17 Number of Inlinks,Top Inlinks 18 URL,Top Inlinks 18 Number of Inlinks,Top Inlinks 19 URL,Top Inlinks 19 Number of Inlinks,Top Inlinks 20 URL,Top Inlinks 20 Number of Inlinks,Response Times 0s to 1s,Response Times 1s to 2s,Response Times 2s to 3s,Response Times 3s to 4s,Response Times 4s to 5s,Response Times 5s to 6s,Response Times 6s to 7s,Response Times 7s to 8s,Response Times 8s to 9s,Response Times 10s or more\" ",
"_____no_output_____"
]
],
[
[
"# ✦ *Colab Still Alive Console Script:*\n<p><font size=2px ><font color=\"red\"> Tip - Set a javascript interval to click on the connect button every 60 seconds. Open developer-settings (in your web-browser) with Ctrl+Shift+I then click on console tab and type this on the console prompt. (for mac press Option+Command+I)</font></p><b>Copy script in hidden cell and paste at your browser console !!! DO NOT CLOSE YOUR BROWSER IN ORDER TO STILL RUNNING SCRIPT</b>",
"_____no_output_____"
],
[
"<code>function ClickConnect(){\nconsole.log(\"Working\"); \ndocument.querySelector(\"colab-connect-button\").click() \n}setInterval(ClickConnect,6000)</code>",
"_____no_output_____"
],
[
"# *Что в итоге*\nНа выходе в идеале получаем \nпапку с датой обхода и следующими выгрузками в формате Excel\n\n\n**Tabs**:\n\n```\nInternal:All\nResponse Codes:All\nResponse Codes:Blocked by Robots.txt\nResponse Codes:Blocked Resource\nResponse Codes:No Response\nResponse Codes:Redirection (3xx)\nResponse Codes:Redirection (JavaScript)\nResponse Codes:Redirection (Meta Refresh)\nResponse Codes:Client Error (4xx)\nResponse Codes:Server Error (5xx)\nPage Titles:All\nPage Titles:Missing\nPage Titles:Duplicate\nPage Titles:Over X Characters\nPage Titles:Below X Characters\nPage Titles:Over X Pixels\nPage Titles:Below X Pixels\nPage Titles:Same as H1\nPage Titles:Multiple\nMeta Description:All\nMeta Description:Missing\nMeta Description:Duplicate\nMeta Description:Over X Characters\nMeta Description:Below X Characters\nMeta Description:Over X Pixels\nMeta Description:Below X Pixels\nMeta Description:Multiple\nMeta Keywords:All\nMeta Keywords:Missing\nMeta Keywords:Duplicate\nMeta Keywords:Multiple\nCanonicals:All\nCanonicals:Contains Canonical\nCanonicals:Self Referencing\nCanonicals:Canonicalised\nCanonicals:Missing\nCanonicals:Multiple\nCanonicals:Non-Indexable Canonical\nDirectives:All\nDirectives:Index\nDirectives:Noindex\nDirectives:Follow\nDirectives:Nofollow\nDirectives:None\nDirectives:NoArchive\nDirectives:NoSnippet\nDirectives:Max-Snippet\nDirectives:Max-Image-Preview\nDirectives:Max-Video-Preview\nDirectives:NoODP\nDirectives:NoYDIR\nDirectives:NoImageIndex\nDirectives:NoTranslate\nDirectives:Unavailable_After\nDirectives:Refresh\nAMP:All\nAMP:Non-200 Response\nAMP:Missing Non-AMP Return Link\nAMP:Missing Canonical to Non-AMP\nAMP:Non-Indexable Canonical\nAMP:Indexable\nAMP:Non-Indexable\nAMP:Missing <html amp> Tag\nAMP:Missing/Invalid <!doctype html> Tag\nAMP:Missing <head> Tag\nAMP:Missing <body> Tag\nAMP:Missing Canonical\nAMP:Missing/Invalid <meta charset> Tag\nAMP:Missing/Invalid <meta viewport> Tag\nAMP:Missing/Invalid AMP Script\nAMP:Missing/Invalid AMP Boilerplate\nAMP:Contains Disallowed HTML\nAMP:Other Validation Errors\nStructured Data:All\nStructured Data:Contains Structured Data\nStructured Data:Missing\nStructured Data:Validation Errors\nStructured Data:Validation Warnings\nStructured Data:Parse Errors\nStructured Data:Microdata URLs\nStructured Data:JSON-LD URLs\nStructured Data:RDFa URLs\nSitemaps:All\nSitemaps:URLs in Sitemap\nSitemaps:URLs not in Sitemap\nSitemaps:Orphan URLs\nSitemaps:Non-Indexable URLs in Sitemap\nSitemaps:URLs in Multiple Sitemaps\nSitemaps:XML Sitemap with over 50k URLs\nSitemaps:XML Sitemap over 50MB\" --bulk-export \"Canonicals:Contains Canonical Inlinks\nCanonicals:Self Referencing Inlinks\nCanonicals:Canonicalised Inlinks\nCanonicals:Missing Inlinks\nCanonicals:Multiple Inlinks\nCanonicals:Non-Indexable Canonical Inlinks\nAMP:All Inlinks\nAMP:Non-200 Response Inlinks\nAMP:Missing Non-AMP Return Link Inlinks\nAMP:Missing Canonical to Non-AMP Inlinks\nAMP:Non-Indexable Canonical Inlinks\nAMP:Indexable Inlinks\nAMP:Non-Indexable Inlinks\nStructured Data:Contains Structured Data\nStructured Data:Validation Errors\nStructured Data:Validation Warnings\nStructured Data:JSON-LD URLs\nStructured Data:Microdata URLs\nStructured Data:RDFa URLs\nSitemaps:URLs in Sitemap Inlinks\nSitemaps:Orphan URLs Inlinks\nSitemaps:Non-Indexable URLs in Sitemap Inlinks\nSitemaps:URLs in Multiple Sitemaps Inlinks\" --save-report \"Crawl Overview\nRedirects:All Redirects\nRedirects:Redirect Chains\nRedirects:Redirect & Canonical Chains\nCanonicals:Canonical 
Chains\nCanonicals:Non-Indexable Canonicals\nPagination:Non-200 Pagination URLs\nPagination:Unlinked Pagination URLs\nHreflang:All hreflang URLs\nHreflang:Non-200 hreflang URLs\nHreflang:Unlinked hreflang URLs\nHreflang:Missing Return Links\nHreflang:Inconsistent Language & Region Return Links\nHreflang:Non Canonical Return Links\nHreflang:Noindex Return Links\nInsecure Content\nSERP Summary\nOrphan Pages\nStructured Data:Validation Errors & Warnings Summary\nStructured Data:Validation Errors & Warnings\nStructured Data:Google Rich Results Features Summary\nStructured Data:Google Rich Results Features\nHTTP Headers:HTTP Header Summary\nCookies:Cookie Summary\n```\n\n**Summary**:\n\n```\nSite Crawled\nDate\nTime\nTotal URLs Encountered\nTotal URLs Crawled\nTotal Internal blocked by robots.txt\nTotal External blocked by robots.txt\nURLs Displayed\nTotal Internal URLs\nTotal External URLs\nTotal Internal Indexable URLs\nTotal Internal Non-Indexable URLs\nJavaScript:All\nJavaScript:Uses Old AJAX Crawling Scheme URLs\nJavaScript:Uses Old AJAX Crawling Scheme Meta Fragment Tag\nJavaScript:Page Title Only in Rendered HTML\nJavaScript:Page Title Updated by JavaScript\nJavaScript:H1 Only in Rendered HTML\nJavaScript:H1 Updated by JavaScript\nJavaScript:Meta Description Only in Rendered HTML\nJavaScript:Meta Description Updated by JavaScript\nJavaScript:Canonical Only in Rendered HTML\nJavaScript:Canonical Mismatch\nJavaScript:Noindex Only in Original HTML\nJavaScript:Nofollow Only in Original HTML\nJavaScript:Contains JavaScript Links\nJavaScript:Contains JavaScript Content\nJavaScript:Pages with Blocked Resources\nH1:All\nH1:Missing\nH1:Duplicate\nH1:Over X Characters\nH1:Multiple\nH2:All\nH2:Missing\nH2:Duplicate\nH2:Over X Characters\nH2:Multiple\nInternal:All\nInternal:HTML\nInternal:JavaScript\nInternal:CSS\nInternal:Images\nInternal:PDF\nInternal:Flash\nInternal:Other\nInternal:Unknown\nExternal:All\nExternal:HTML\nExternal:JavaScript\nExternal:CSS\nExternal:Images\nExternal:PDF\nExternal:Flash\nExternal:Other\nExternal:Unknown\nAMP:All\nAMP:Non-200 Response\nAMP:Missing Non-AMP Return Link\nAMP:Missing Canonical to Non-AMP\nAMP:Non-Indexable Canonical\nAMP:Indexable\nAMP:Non-Indexable\nAMP:Missing <html amp> Tag\nAMP:Missing/Invalid <!doctype html> Tag\nAMP:Missing <head> Tag\nAMP:Missing <body> Tag\nAMP:Missing Canonical\nAMP:Missing/Invalid <meta charset> Tag\nAMP:Missing/Invalid <meta viewport> Tag\nAMP:Missing/Invalid AMP Script\nAMP:Missing/Invalid AMP Boilerplate\nAMP:Contains Disallowed HTML\nAMP:Other Validation Errors\nCanonicals:All\nCanonicals:Contains Canonical\nCanonicals:Self Referencing\nCanonicals:Canonicalised\nCanonicals:Missing\nCanonicals:Multiple\nCanonicals:Non-Indexable Canonical\nContent:All\nContent:Spelling Errors\nContent:Grammar Errors\nContent:Near Duplicates\nContent:Exact Duplicates\nContent:Low Content Pages\nCustom Extraction:All\nCustom Search:All\nDirectives:All\nDirectives:Index\nDirectives:Noindex\nDirectives:Follow\nDirectives:Nofollow\nDirectives:None\nDirectives:NoArchive\nDirectives:NoSnippet\nDirectives:Max-Snippet\nDirectives:Max-Image-Preview\nDirectives:Max-Video-Preview\nDirectives:NoODP\nDirectives:NoYDIR\nDirectives:NoImageIndex\nDirectives:NoTranslate\nDirectives:Unavailable_After\nDirectives:Refresh\nAnalytics:All\nAnalytics:Sessions Above 0\nAnalytics:Bounce Rate Above 70%\nAnalytics:No GA Data\nAnalytics:Non-Indexable with GA Data\nAnalytics:Orphan URLs\nSearch Console:All\nSearch Console:Clicks Above 0\nSearch Console:No GSC Data\nSearch 
Console:Non-Indexable with GSC Data\nSearch Console:Orphan URLs\nHreflang:All\nHreflang:Contains hreflang\nHreflang:Non-200 hreflang URLs\nHreflang:Unlinked hreflang URLs\nHreflang:Missing Return Links\nHreflang:Inconsistent Language & Region Return Links\nHreflang:Non-Canonical Return Links\nHreflang:Noindex Return Links\nHreflang:Incorrect Language & Region Codes\nHreflang:Multiple Entries\nHreflang:Missing Self Reference\nHreflang:Not Using Canonical\nHreflang:Missing X-Default\nHreflang:Missing\nImages:All\nImages:Over X KB\nImages:Missing Alt Text\nImages:Missing Alt Attribute\nImages:Alt Text Over X Characters\nLink Metrics:All\nMeta Description:All\nMeta Description:Missing\nMeta Description:Duplicate\nMeta Description:Over X Characters\nMeta Description:Below X Characters\nMeta Description:Over X Pixels\nMeta Description:Below X Pixels\nMeta Description:Multiple\nMeta Keywords:All\nMeta Keywords:Missing\nMeta Keywords:Duplicate\nMeta Keywords:Multiple\nPageSpeed:All\nPageSpeed:Eliminate Render-Blocking Resources\nPageSpeed:Defer Offscreen Images\nPageSpeed:Efficiently Encode Images\nPageSpeed:Properly Size Images\nPageSpeed:Minify CSS\nPageSpeed:Minify JavaScript\nPageSpeed:Reduce Unused CSS\nPageSpeed:Reduce Unused JavaScript\nPageSpeed:Serve Images in Next-Gen Formats\nPageSpeed:Enable Text Compression\nPageSpeed:Preconnect to Required Origins\nPageSpeed:Reduce Server Response Times (TTFB)\nPageSpeed:Avoid Multiple Page Redirects\nPageSpeed:Preload Key Requests\nPageSpeed:Use Video Formats for Animated Content\nPageSpeed:Avoid Excessive DOM Size\nPageSpeed:Reduce JavaScript Execution Time\nPageSpeed:Serve Static Assets with an Efficient Cache Policy\nPageSpeed:Minimize Main-Thread Work\nPageSpeed:Ensure Text Remains Visible During Webfont Load\nPageSpeed:Image Elements Do Not Have Explicit Width & Height\nPageSpeed:Avoid Large Layout Shifts\nPageSpeed:Avoid Serving Legacy JavaScript to Modern Browsers\nPageSpeed:Request Errors\nPagination:All\nPagination:Contains Pagination\nPagination:First Page\nPagination:Paginated 2+ Pages\nPagination:Pagination URL Not in Anchor Tag\nPagination:Non-200 Pagination URLs\nPagination:Unlinked Pagination URLs\nPagination:Non-Indexable\nPagination:Multiple Pagination URLs\nPagination:Pagination Loop\nPagination:Sequence Error\nResponse Codes:All\nResponse Codes:Blocked by Robots.txt\nResponse Codes:Blocked Resource\nResponse Codes:No Response\nResponse Codes:Success (2xx)\nResponse Codes:Redirection (3xx)\nResponse Codes:Redirection (JavaScript)\nResponse Codes:Redirection (Meta Refresh)\nResponse Codes:Client Error (4xx)\nResponse Codes:Server Error (5xx)\nSecurity:All\nSecurity:HTTP URLs\nSecurity:HTTPS URLs\nSecurity:Mixed Content\nSecurity:Form URL Insecure\nSecurity:Form on HTTP URL\nSecurity:Unsafe Cross-Origin Links\nSecurity:Missing HSTS Header\nSecurity:Bad Content Type\nSecurity:Missing X-Content-Type-Options Header\nSecurity:Missing X-Frame-Options Header\nSecurity:Protocol-Relative Resource Links\nSecurity:Missing Content-Security-Policy Header\nSecurity:Missing Secure Referrer-Policy Header\nSitemaps:All\nSitemaps:URLs in Sitemap\nSitemaps:URLs not in Sitemap\nSitemaps:Orphan URLs\nSitemaps:Non-Indexable URLs in Sitemap\nSitemaps:URLs in Multiple Sitemaps\nSitemaps:XML Sitemap with over 50k URLs\nSitemaps:XML Sitemap over 50MB\nStructured Data:All\nStructured Data:Contains Structured Data\nStructured Data:Missing\nStructured Data:Validation Errors\nStructured Data:Validation Warnings\nStructured Data:Parse Errors\nStructured 
Data:Microdata URLs\nStructured Data:JSON-LD URLs\nStructured Data:RDFa URLs\nPage Titles:All\nPage Titles:Missing\nPage Titles:Duplicate\nPage Titles:Over X Characters\nPage Titles:Below X Characters\nPage Titles:Over X Pixels\nPage Titles:Below X Pixels\nPage Titles:Same as H1\nPage Titles:Multiple\nURL:All\nURL:Non ASCII Characters\nURL:Underscores\nURL:Uppercase\nURL:Parameters\nURL:Over X Characters\nURL:Multiple Slashes\nURL:Repetitive Path\nURL:Contains Space\nURL:Broken Bookmark\nURL:Internal Search\nDepth 1\nDepth 2\nDepth 3\nDepth 4\nDepth 5\nDepth 6\nDepth 7\nDepth 8\nDepth 9\nDepth 10+\nTop Inlinks 1 URL\nTop Inlinks 1 Number of Inlinks\nTop Inlinks 2 URL\nTop Inlinks 2 Number of Inlinks\nTop Inlinks 3 URL\nTop Inlinks 3 Number of Inlinks\nTop Inlinks 4 URL\nTop Inlinks 4 Number of Inlinks\nTop Inlinks 5 URL\nTop Inlinks 5 Number of Inlinks\nTop Inlinks 6 URL\nTop Inlinks 6 Number of Inlinks\nTop Inlinks 7 URL\nTop Inlinks 7 Number of Inlinks\nTop Inlinks 8 URL\nTop Inlinks 8 Number of Inlinks\nTop Inlinks 9 URL\nTop Inlinks 9 Number of Inlinks\nTop Inlinks 10 URL\nTop Inlinks 10 Number of Inlinks\nTop Inlinks 11 URL\nTop Inlinks 11 Number of Inlinks\nTop Inlinks 12 URL\nTop Inlinks 12 Number of Inlinks\nTop Inlinks 13 URL\nTop Inlinks 13 Number of Inlinks\nTop Inlinks 14 URL\nTop Inlinks 14 Number of Inlinks\nTop Inlinks 15 URL\nTop Inlinks 15 Number of Inlinks\nTop Inlinks 16 URL\nTop Inlinks 16 Number of Inlinks\nTop Inlinks 17 URL\nTop Inlinks 17 Number of Inlinks\nTop Inlinks 18 URL\nTop Inlinks 18 Number of Inlinks\nTop Inlinks 19 URL\nTop Inlinks 19 Number of Inlinks\nTop Inlinks 20 URL\nTop Inlinks 20 Number of Inlinks\nResponse Times 0s to 1s\nResponse Times 1s to 2s\nResponse Times 2s to 3s\nResponse Times 3s to 4s\nResponse Times 4s to 5s\nResponse Times 5s to 6s\nResponse Times 6s to 7s\nResponse Times 7s to 8s\nResponse Times 8s to 9s\nResponse Times 10s or more\" ```\n\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d050bd9430dc247f6160a0bae8185b3e03cdd065 | 1,341 | ipynb | Jupyter Notebook | my_first_jupyter_notebook.ipynb | nncastil/astr-119-session5 | da75b345c7dbee5193861a1228dadabf230c65e6 | [
"MIT"
] | null | null | null | my_first_jupyter_notebook.ipynb | nncastil/astr-119-session5 | da75b345c7dbee5193861a1228dadabf230c65e6 | [
"MIT"
] | null | null | null | my_first_jupyter_notebook.ipynb | nncastil/astr-119-session5 | da75b345c7dbee5193861a1228dadabf230c65e6 | [
"MIT"
] | null | null | null | 16.974684 | 62 | 0.478747 | [
[
[
"import numpy as np",
"_____no_output_____"
],
[
"n = 10 #define integer 10\nx = np.arange(n,dtype=float) #define array x = [0,9]",
"_____no_output_____"
],
[
"print(x)",
"_____no_output_____"
]
],
[
[
"# markdown\n\nwow",
"_____no_output_____"
],
[
"## for documentation\n\nlike comments but not in line",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d050bf2abaedca68b2fd21ffdcd0494e0d229fe7 | 8,695 | ipynb | Jupyter Notebook | scooter direction model.ipynb | A-Jatin/Scooter-Direction-Test | ff4070903faa630f356509979234755fde338587 | [
"MIT"
] | null | null | null | scooter direction model.ipynb | A-Jatin/Scooter-Direction-Test | ff4070903faa630f356509979234755fde338587 | [
"MIT"
] | null | null | null | scooter direction model.ipynb | A-Jatin/Scooter-Direction-Test | ff4070903faa630f356509979234755fde338587 | [
"MIT"
] | null | null | null | 23.186667 | 357 | 0.514549 | [
[
[
"#importing required libraries\n\nimport numpy as np\nimport cv2\nimport os\nos.chdir('C:/Users/JATIN/Downloads')\n",
"_____no_output_____"
],
[
"#loading the video\n\ncap = cv2.VideoCapture('Scooter Ride through Pune City Roads.mp4')",
"_____no_output_____"
],
[
"#reading the labels in a list\n\ny=[]\nfile = open('Label.txt', 'r')\nwhile 1:\n char = file.read(1) # read by character\n if not char: break\n y.append(char)\n\nfile.close()",
"_____no_output_____"
],
[
"y=np.array(y) #list to array",
"_____no_output_____"
],
[
"y.shape",
"_____no_output_____"
],
[
"np.unique(y,return_counts=True) #unique elements and their count",
"_____no_output_____"
],
[
"from keras.applications.resnet50 import ResNet50",
"C:\\Users\\JATIN\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n"
],
[
"from keras.models import Sequential\nfrom keras.layers import Dense,Flatten\n\nnew_model = Sequential()\nnew_model.add(ResNet50(include_top=False,input_shape=(3,224,224),classes=3))\nnew_model.add(Flatten())\nnew_model.add(Dense(3,activation='softmax'))",
"C:\\Users\\JATIN\\Anaconda3\\lib\\site-packages\\keras\\applications\\resnet50.py:274: UserWarning: You are using the TensorFlow backend, yet you are using the Theano image data format convention (`image_data_format=\"channels_first\"`). For best performance, set `image_data_format=\"channels_last\"` in your Keras config at ~/.keras/keras.json.\n warnings.warn('You are using the TensorFlow backend, yet you '\n"
],
[
"new_model.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nresnet50 (Model) (None, 2048, 1, 1) 23587712 \n_________________________________________________________________\nflatten_6 (Flatten) (None, 2048) 0 \n_________________________________________________________________\ndense_5 (Dense) (None, 3) 6147 \n=================================================================\nTotal params: 23,593,859\nTrainable params: 23,540,739\nNon-trainable params: 53,120\n_________________________________________________________________\n"
],
[
"#reading the images as arrays \n\nimages = []\nfor filename in os.listdir('C:/Users/JATIN/Downloads/video'):\n img = cv2.imread(os.path.join('C:/Users/JATIN/Downloads/video',filename))\n if img is not None:\n images.append(img)",
"_____no_output_____"
],
[
"x=np.array(images) #images list to numpy array",
"_____no_output_____"
],
[
"del(images)",
"_____no_output_____"
],
[
"np.save(\"X.npy\",x) #saving array to load it faster",
"_____no_output_____"
],
[
"x.shape",
"_____no_output_____"
],
[
"x=x.transpose(0,3,1,2)",
"_____no_output_____"
],
[
"x.shape",
"_____no_output_____"
],
[
"y = keras.utils.to_categorical(y,num_classes=3)",
"_____no_output_____"
],
[
"#freezing the initial 7 layers\n\nfor layers in new_model.layers[:7]:\n layers.trainable = False",
"_____no_output_____"
],
[
"#compiling and training the model\n\nINIT_LR=0.03\nnew_model.compile(\n loss='categorical_crossentropy', #loss function\n optimizer=keras.optimizers.adamax(lr=INIT_LR), # for SGD\n metrics=['accuracy'] # report accuracy during training\n)\n\n# scheduler of learning rate (decay with epochs)\ndef lr_scheduler(epoch):\n return INIT_LR * 0.9 ** epoch\n\n# callback for printing of actual learning rate used by optimizer\nclass LrHistory(keras.callbacks.Callback):\n def on_epoch_begin(self, epoch, logs={}):\n print(\"Learning rate:\", K.get_value(model.optimizer.lr))\n \n\n\n\nnew_model.fit(\n x,y, # prepared data\n batch_size=64,\n epochs=10,\n callbacks=[keras.callbacks.LearningRateScheduler(lr_scheduler), \n LrHistory()],\n shuffle=True\n)\n\nnew_model.save('scooter_model.h5')",
"Learning rate: 0.03\nEpoch 1/10\n"
]
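,
[
"# A hedged sketch (not part of the original run): reload the saved model and predict directions for new frames.\n# 'frames' is a placeholder for any batch preprocessed exactly like x above, i.e. shape (N, 3, 224, 224).\nfrom keras.models import load_model\n\ntrained = load_model('scooter_model.h5')\nframes = x[:5] # placeholder batch; replace with newly preprocessed frames\npreds = trained.predict(frames)\nprint(preds.argmax(axis=1)) # most probable direction class per frame",
"_____no_output_____"
]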
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d050c5c2b4799d747bac764911c3aa2936b17317 | 10,098 | ipynb | Jupyter Notebook | MATH/18_Bayesian_rule.ipynb | CATERINA-SEUL/Data-Science-School | 6189b15a70274d77aa0430efab2d2db81cfe4408 | [
"MIT"
] | null | null | null | MATH/18_Bayesian_rule.ipynb | CATERINA-SEUL/Data-Science-School | 6189b15a70274d77aa0430efab2d2db81cfe4408 | [
"MIT"
] | null | null | null | MATH/18_Bayesian_rule.ipynb | CATERINA-SEUL/Data-Science-School | 6189b15a70274d77aa0430efab2d2db81cfe4408 | [
"MIT"
] | null | null | null | 23.538462 | 113 | 0.401961 | [
[
[
"----\n### 베이즈 정리\n\n - 데이터라는 조건이 주어졌을 때 조건부 확률을 구하는 공식\n \n - $P(A|B) = \\frac{P(B|A)P(A)}{P(B)}$\n \n \n ----\n - $P(A|B)$ : 사후확률(posterior). 사건 B가 발생한 후 갱신된 사건 A의 확률\n - $P(A)$ : 사전확률 (prior). 사건 B가 발생하기 전에 가지고 있던 사건 A의 확률\n - $P(B|A)$ : 가능도(likelihood). 사건 A가 발생한 경우 사건 B의 확률\n - $P(B)$ : 정규화상수(normalizing constant) 또는 증거(evidence). 확률의 크기 조정\n \n \n--- \n#### 베이즈 정리 확장1\n\n - $P(A_1|B)$ \n \n $= \\frac{P(B|A)P(A)}{P(B)}$\n \n $= \\frac{P(B|A_1)P(A_1)}{\\sum_iP(A_i,B)}$\n \n $= \\frac{P(B|A_1)P(A_1)}{\\sum_iP(B|A_I)P(A_i)}$\n \n \n \n - $P(A_i|B)$ 에서 $i$의 값이 바뀌어도 분자의 값만 비교하면 됨\n \n ---\n \n #### Classification 의 장점과 단점\n \n - 장점 : 첫번째 답이 아닐 때 2,3을 구할 수 있음. \n - 단점 : Class4개를 풀기 위해서 4개를 구해야함.... \n \n ---\n \n #### $A_1 = A , A_2 = A^\\complement$ 인 경우\n \n \n - $P(A|B)$\n \n $ = \\frac{P(B|A)P(A)}{P(B)}$\n \n $ = \\frac{P(B|A)P(A)}{P(B,A)+P(B,A^\\complement}$\n \n $ = \\frac{p(B|A)P(A)}{P(B|A)P(A) + P(B|A^\\complement)P(A^\\complement)}$\n \n $ = \\frac{P(B|A)P(A)}{P(B|A)P(A)+P(B|A^\\complement)(1-P(A)}$\n \n\n - 2진 분류 문제 \n---\n\n### 검사 시약 문제\n\n 1) 사건\n \n - 병에 걸리는 경우 : D\n - 양성반응을 보이는 경우 : S\n - 병에 걸린 사람이 양성 반응을 보이는 경우 : S|D\n - 양성 반응을 보이는 사람이 병에 걸려있을 경우 : D|S\n \n 2) 문제\n \n - $P(S|D) = 0.99$가 주어졌을 때, P(D|S)를 구하라.\n \n---- \n\n #### 베이즈 정리에 의해서 \n \n - $P(D|S) = \\frac{P(S|D)P(D)}{P(S)}$\n \n -- 현재 $P(S), P(D)$ 를 모르기 때문에 구할 수가 없다. \n\n---- \n\n 3) 추가 조사 정보\n \n - 이 병은 전체 인구 중에서 걸린 사람이 0.2%인 희귀병이다. \n \n : $P(D) = 0.002$\n \n \n - 이 병에 걸리지 않은 사람에게 시약검사를 했을 때, 양성반응이 나타날 확률은 5%이다. \n \n : $P(S|D^\\complement) = 0.05$\n \n \n---\n#### 베이즈 정리의 확장에 의해서 \n\n - $P(D|S)$\n \n $= \\frac{P(S|D)P(D)}{P(S)}$\n \n $ = \\frac{P(S|D)P(D)}{P(S,D)+P(S,D^\\complement)} $\n \n $ = \\frac{P(S|D)P(D)}{P(S|D)P(D)+P(S|D^\\complement)P(D^\\complement)}$\n \n $ = \\frac{P(S|D)P(D)}{P(S|D)P(D)+P(S|D^\\complement)(1-P(D))}$\n \n $ = \\frac{0.99\\cdot 0.002}{0.99\\cdot 0.002+0.05\\cdot (1-0.002)}$\n \n $ = 0.038$",
"_____no_output_____"
]
],
[
[
"round((0.99*0.002) / (0.99*0.002+0.05)*(1-0.002), 3)",
"_____no_output_____"
]
],
[
[
"----\n#### TabularCPD(variable, variable_card, value, evidence=None, evidence_card=None)\n\n - BayesianModel : 베이즈정리에 적용\n - TabularCPD : 조건부확률을 구현\n \n---- \n\n - variable : 확률 변수의 이름 문자열\n - variable_card : 확률변수가 가질 수 있는 경우의 수\n - value : 조건부확률 배열. 하나의 열(column)이 동일 조건을 뜻하므로, 하나의 열의 확률 합은 1이어야 한다.\n - evidence : 조건이 되는 확률변수의 이름 문자열 리스트\n - evidence_card : 조건이 되는 확률변수가 가질 수 있는 경우의 수 리스트\n \n 일반적인 확률을 구현할 때 : evidence = None , evidence_card = None",
"_____no_output_____"
],
[
"#### 병에 걸렸을 사전확률 $P(D) = P(X=1)$, 병에 걸리지 않았을 사전확률 $P(D^\\complement) = P(X = 0)$",
"_____no_output_____"
]
],
[
[
"from pgmpy.factors.discrete import TabularCPD",
"_____no_output_____"
],
[
"cpd_X = TabularCPD('X', 2, [[1-0.002, 0.002]])\nprint(cpd_X)",
"+------+-------+\n| X(0) | 0.998 |\n+------+-------+\n| X(1) | 0.002 |\n+------+-------+\n"
]
],
[
[
"#### 양성반응이 나올 확률 $P(S) = P(Y = 1)$, 음성 반응이 나올 확률 $P(S^\\complement) = P(Y=0)$\n\n - 확률 변수 $Y$ 에 확률을 베이즈 모형에 넣을 때는 $P(Y|X)$의 형태로 넣어야한다.\n \n - evidence : 조건이 되는 확률변수가 누구냐 ! \n - evidence_card : 몇가지 조건이 존재하는가 ! ",
"_____no_output_____"
]
],
[
[
"cpd_Y_on_X = TabularCPD('Y', 2, np.array(\n [[0.95, 0.01], [0.05, 0.99]]), evidence=['X'], evidence_card=[2])\n\nprint(cpd_Y_on_X)",
"+------+------+------+\n| X | X(0) | X(1) |\n+------+------+------+\n| Y(0) | 0.95 | 0.01 |\n+------+------+------+\n| Y(1) | 0.05 | 0.99 |\n+------+------+------+\n"
],
[
"from pgmpy.models import BayesianModel",
"_____no_output_____"
]
],
[
[
"#### BayesianModel(variables)\n\n - variables : 확률모형이 포함하는 확률변수 이름 문자열 리스트\n - add_cpds() : 조건부확률 추가\n - check_model() : 모형이 정상적인지 확인. True이면 정상모델",
"_____no_output_____"
]
],
[
[
"model = BayesianModel([('X','Y')])\nmodel.add_cpds(cpd_X,cpd_Y_on_X)\nmodel.check_model()",
"_____no_output_____"
],
[
"from pgmpy.inference import VariableElimination",
"_____no_output_____"
]
],
[
[
"#### VariableElimination (변수제거법) 을 사용한 추정을 제공\n \n#### query(variables, evidences)\n\n - query() 를 통해 사후확률 계산\n\n----\n\n - variables : 사후 확률을 계산할 확률변수의 이름 리스트\n - evidences : 조건이 되는 확률변수의 값을 나타내는 딕셔너리\n",
"_____no_output_____"
]
],
[
[
"inference = VariableElimination(model)\n\nposterior = inference.query(['X'], evidence={'Y':1})",
"Finding Elimination Order: : : 0it [00:00, ?it/s]\n0it [00:00, ?it/s]\n"
],
[
"print(posterior)",
"+------+----------+\n| X | phi(X) |\n+======+==========+\n| X(0) | 0.9618 |\n+------+----------+\n| X(1) | 0.0382 |\n+------+----------+\n"
]
],
[
[
"----\n#### 베이즈 정리 확장 2\n\n - 베이즈 정리는 사건 A의 확률이 사건 B에 의해 갱신된 확률을 계산하는 것. \n - 베이즈 정리 확장2에서는 이 상태에서 추가적으로 사건 C가 발생!\n \n - $P(A|B,C) = \\frac{P(C|A,B)P(A|B)}{P(C|B)}$\n \n----\n### 몬티 홀 문제\n\n - 확률변수 (random box) 정의\n \n 1) 자동차가 있는 문을 나타내는 확률변수 C : 0,1,2\n \n 2) 참가자가 선택한 문을 나타내는 확률변수 X : 0,1,2\n \n 3) 진행자가 열어준 문을 나타내는 확률변수 H : 0,1,2\n \n ---\n \n ##### 참가자와 진행자의 행위를 조건으로 자동차의 위치를 결과로 하는 조건부 확률을 푸는 문제\n \n FACT\n \n 1) 자동차를 놓는 진행자는 참가자의 선택을 예측할 수 없고, 참가자는 자동차를 볼 수 없으므로 자동차의 위치와 참가자의 선택은 서로 독립적\n \n - $P(C,X) = P(C)P(X)$\n \n 2) 진행자가 어떤 문을 여는가가 자동차의 위치 및 참가자의 선택에 좌우됨.\n \n - $P(H_0|C_0,X_1) = 0$\n - $P(H_1|C_0,X_1) = 0$\n - $P(H_2|C_0,X_1) = 1$ \n \n \n---- \n\n- 참가자가 1번 문을 선택하고, 진행자가 2번 문을 열어서 자동차가 없다는 것을 보인 경우, 0번 문 뒤에 차가 있을 확률\n\n $P(C_0|X_1,H_2) = \\frac{2}{3}$\n \n $ = \\frac{P(C_0,X_1,H_2)}{P(X_1,H_2)}$\n \n $ = \\frac{P(H_2|C_0,X_1)P(C_0,X_1)}{P(X_1,H_2)}$\n \n $ = \\frac{P(C_0)P(X_1)}{P(H_2|X_1)P(X)}$\n \n $ = \\frac{P(C_0)}{P(H_2|X_1)}$\n \n $ = \\frac{P(C_0)}{P(H_2,C_0|X_1)+P(H_2,C_1|X_1)+P(H_2,C_2|X_1)}$\n \n $ = \\frac{P(C_0)}{P(H_2|X_1,C_0)P(C_0)+P(H_2|X_1,C_1)P(C_1)+P(H_2|X_1,C_2)P(C_2)}$\n \n $ = \\frac{\\frac{1}{3}}{1\\cdot \\frac{1}{3} + \\frac{1}{2}\\cdot \\frac{1}{3}+0\\cdot \\frac{1}{3}}$\n \n $ = \\frac{2}{3}$",
"_____no_output_____"
]
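A quick Monte Carlo check of the 2/3 result (my own sketch; door numbering and the always-pick-door-1 convention follow the derivation above):

```python
import random

def monty_trial():
    car = random.randrange(3)                                # door hiding the car
    pick = 1                                                 # contestant always picks door 1
    options = [d for d in range(3) if d != pick and d != car]
    host = random.choice(options)                            # host opens a goat door
    switch_door = next(d for d in range(3) if d not in (pick, host))
    return switch_door == car                                # True if switching wins

wins = sum(monty_trial() for _ in range(100_000))
print(wins / 100_000)                                        # ~0.667: switching wins 2/3 of the time
```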
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d050ca16d8a39e587b46bba0882ae39dbcbad717 | 23,053 | ipynb | Jupyter Notebook | notebooks/ML.ipynb | samirelanduk/numberwang | aa4cca3c3ada4f759eb054b0f46494803a3b0dfd | [
"MIT"
] | null | null | null | notebooks/ML.ipynb | samirelanduk/numberwang | aa4cca3c3ada4f759eb054b0f46494803a3b0dfd | [
"MIT"
] | null | null | null | notebooks/ML.ipynb | samirelanduk/numberwang | aa4cca3c3ada4f759eb054b0f46494803a3b0dfd | [
"MIT"
] | null | null | null | 72.266458 | 12,808 | 0.747842 | [
[
[
"# Machine Learning\n\n## Overview\n\nMachine learning is the ability of computers to take a dataset of objects and learn patterns about them. This dataset is structured as a table, where each row is a vector representing some object by encoding their properties as the values of the vector. The columns represent **features** - properties that all the objects share.\n\nThere are, broadly speaking, two kinds of machine learning. **Supervised learning** has an extra column at the end of the dataset, and the program learns to predict the value of this based on the input features for some new object. If the output value is continuous, it is **regression**, otherwise it is **classification**. **Unsupervised learning** seeks to find patterns within the data by, for example, clustering.\n\n\n\n## Supervised Learning\n\nOne of the most critical concepts in supervised learning is the dataset. This represents the knowledge about the set of objects in question that you wish the machine to learn. It is essentially a table where the rows represent objects, and the columns represent the properties. 'Training' is essentially the creation of an object called a model, which can take a row missing the last column, and predict what its value will be by examining the data in the dataset. For example...",
"_____no_output_____"
]
],
[
[
"import pandas as pd\niris_dataset = pd.read_csv(\"../data/iris.csv\")\niris_dataset.head()",
"_____no_output_____"
]
],
[
[
"Here a dataset has been loaded from CSV into a pandas dataframe. Each row represents a flower, on which four measurements have been taken, and each flower belongs to one of three classes. A supervised learning model would take this dataset of 150 flowers and train such that any other flower for which the relevant measurements were known could have its class predicted. This would obviously be a classification problem, not regression.\n\nA very simple model would take just two features and map them to one of two classes. The dataset can be reduced to this form asd follows:",
"_____no_output_____"
]
],
[
[
"simple_iris = iris_dataset.iloc[0:100, [0, 2, 4]]\nsimple_iris.head()\nsimple_iris.tail()",
"_____no_output_____"
]
],
[
[
"Because this is just two dimensions, it can be easily visualised as a scatter plot.",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append(\"..\")\nimport numerus.learning as ml\nml.plot_dataset(simple_iris)",
"_____no_output_____"
]
],
[
[
"The data can be seen to be **linearly separable** - there is a line that can be drawn between them that would separate them perfectly.\n\nOne of the simplest classifiers for supervised learning is the perceptron. Perceptrons have a weights vector which they dot with an input vector to get some level of activation. If the activation is above some threshold, one class is predicted - otherwise the other is predicted. Training a perceptron means giving the model training inputs until it has values for the weights and threshold that effectively separate the classes.\n\nThe data must be split into training and test data, and then a perceptron created from the training data.",
"_____no_output_____"
]
],
[
[
"train_simple_iris, test_simple_iris = ml.split_data(simple_iris)\nml.plot_dataset(train_simple_iris, title=\"Training Data\")\nperceptron = ml.Perceptron(train_simple_iris)\nprint(perceptron)",
"_____no_output_____"
]
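Since `numerus.learning` is a local module rather than a public package, here is a minimal from-scratch sketch of the perceptron training rule the text describes (assumed inputs: a feature matrix `X` of shape (n, 2) and labels `y` in {0, 1}):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Minimal perceptron: weight vector w and bias b, threshold at 0."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)     # dot weights with input, threshold
            w += lr * (yi - pred) * xi     # nudge weights when prediction is wrong
            b += lr * (yi - pred)
    return w, b
```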
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d050cd8e88c3bcc780379081427f8a5a7ddbe1db | 7,346 | ipynb | Jupyter Notebook | community/aqua/optimization/clique.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 2 | 2017-11-09T16:33:14.000Z | 2018-02-26T00:42:17.000Z | community/aqua/optimization/clique.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 1 | 2019-04-12T07:43:25.000Z | 2020-02-07T13:32:18.000Z | community/aqua/optimization/clique.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 2 | 2019-03-24T21:00:25.000Z | 2019-03-24T21:57:10.000Z | 27.931559 | 373 | 0.540022 | [
[
[
"## _*Using Qiskit Aqua for clique problems*_\n\nThis Qiskit Aqua Optimization notebook demonstrates how to use the VQE quantum algorithm to compute the clique of a given graph. \n\nThe problem is defined as follows. A clique in a graph $G$ is a complete subgraph of $G$. That is, it is a subset $K$ of the vertices such that every two vertices in $K$ are the two endpoints of an edge in $G$. A maximal clique is a clique to which no more vertices can be added. A maximum clique is a clique that includes the largest possible number of vertices. \n\nWe will go through three examples to show (1) how to run the optimization in the non-programming way, (2) how to run the optimization in the programming way, (3) how to run the optimization with the VQE.\nWe will omit the details for the support of CPLEX, which are explained in other notebooks such as maxcut.\n\nNote that the solution may not be unique.",
"_____no_output_____"
],
[
"### The problem and a brute-force method.",
"_____no_output_____"
]
],
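Before turning to the quantum formulation, a classical reference point may help: NetworkX can enumerate maximal cliques directly. This sketch is my own addition (assuming a recent `networkx`; the package is not used in this notebook) and treats any nonzero weight as an edge, matching the adjacency matrix below.

```python
import networkx as nx
import numpy as np

w = np.array([[ 0., 4., 5., 3., -5.],
              [ 4., 0., 7., 0.,  6.],
              [ 5., 7., 0., -4., 0.],
              [ 3., 0., -4., 0., 8.],
              [-5., 6., 0., 8.,  0.]])

G = nx.from_numpy_array(w != 0)                     # edge wherever the weight is nonzero
print([c for c in nx.find_cliques(G) if len(c) >= 3])  # maximal cliques of size K=3 or more
```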
[
[
"import numpy as np\n\nfrom qiskit import Aer\n\nfrom qiskit_aqua import run_algorithm\nfrom qiskit_aqua.input import EnergyInput\nfrom qiskit_aqua.translators.ising import clique\nfrom qiskit_aqua.algorithms import ExactEigensolver",
"_____no_output_____"
]
],
[
[
"first, let us have a look at the graph, which is in the adjacent matrix form.",
"_____no_output_____"
]
],
[
[
"K = 3 # K means the size of the clique\nnp.random.seed(100)\nnum_nodes = 5\nw = clique.random_graph(num_nodes, edge_prob=0.8, weight_range=10)\nprint(w) ",
"[[ 0. 4. 5. 3. -5.]\n [ 4. 0. 7. 0. 6.]\n [ 5. 7. 0. -4. 0.]\n [ 3. 0. -4. 0. 8.]\n [-5. 6. 0. 8. 0.]]\n"
]
],
[
[
"Let us try a brute-force method. Basically, we exhaustively try all the binary assignments. In each binary assignment, the entry of a vertex is either 0 (meaning the vertex is not in the clique) or 1 (meaning the vertex is in the clique). We print the binary assignment that satisfies the definition of the clique (Note the size is specified as K).",
"_____no_output_____"
]
],
[
[
"def brute_force():\n # brute-force way: try every possible assignment!\n def bitfield(n, L):\n result = np.binary_repr(n, L)\n return [int(digit) for digit in result]\n\n L = num_nodes # length of the bitstring that represents the assignment\n max = 2**L\n has_sol = False\n for i in range(max):\n cur = bitfield(i, L)\n cur_v = clique.satisfy_or_not(np.array(cur), w, K)\n if cur_v:\n has_sol = True\n break\n return has_sol, cur\n\nhas_sol, sol = brute_force()\nif has_sol:\n print(\"solution is \", sol)\nelse:\n print(\"no solution found for K=\", K)",
"solution is [1, 0, 0, 1, 1]\n"
]
],
[
[
"### Part I: run the optimization in the non-programming way",
"_____no_output_____"
]
],
[
[
"qubit_op, offset = clique.get_clique_qubitops(w, K)\nalgo_input = EnergyInput(qubit_op)\nparams = {\n 'problem': {'name': 'ising'},\n 'algorithm': {'name': 'ExactEigensolver'}\n}\nresult = run_algorithm(params, algo_input)\nx = clique.sample_most_likely(len(w), result['eigvecs'][0])\nising_sol = clique.get_graph_solution(x)\nif clique.satisfy_or_not(ising_sol, w, K):\n print(\"solution is\", ising_sol)\nelse:\n print(\"no solution found for K=\", K)",
"solution is [1. 0. 1. 1. 0.]\n"
]
],
[
[
"### Part II: run the optimization in the programming way",
"_____no_output_____"
]
],
[
[
"\nalgo = ExactEigensolver(algo_input.qubit_op, k=1, aux_operators=[])\nresult = algo.run()\nx = clique.sample_most_likely(len(w), result['eigvecs'][0])\nising_sol = clique.get_graph_solution(x)\nif clique.satisfy_or_not(ising_sol, w, K):\n print(\"solution is\", ising_sol)\nelse:\n print(\"no solution found for K=\", K) ",
"solution is [1. 0. 1. 1. 0.]\n"
]
],
[
[
"### Part III: run the optimization with the VQE",
"_____no_output_____"
]
],
[
[
"algorithm_cfg = {\n 'name': 'VQE',\n 'operator_mode': 'matrix'\n}\n\noptimizer_cfg = {\n 'name': 'COBYLA'\n}\n\nvar_form_cfg = {\n 'name': 'RY',\n 'depth': 5,\n 'entanglement': 'linear'\n}\n\nparams = {\n 'problem': {'name': 'ising', 'random_seed': 10598},\n 'algorithm': algorithm_cfg,\n 'optimizer': optimizer_cfg,\n 'variational_form': var_form_cfg\n}\nbackend = Aer.get_backend('statevector_simulator')\nresult = run_algorithm(params, algo_input, backend=backend)\nx = clique.sample_most_likely(len(w), result['eigvecs'][0])\nising_sol = clique.get_graph_solution(x)\n\nif clique.satisfy_or_not(ising_sol, w, K):\n print(\"solution is\", ising_sol)\nelse:\n print(\"no solution found for K=\", K)",
"solution is [1. 0. 1. 1. 0.]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d050df1cf04272225fe5a724316fb628497dda32 | 92,019 | ipynb | Jupyter Notebook | human_tests/Human_template_simulation.ipynb | ben-heil/ponyo | 09096eada5f31ab5c28a2e743fd0539d432ca7a7 | [
"BSD-3-Clause"
] | null | null | null | human_tests/Human_template_simulation.ipynb | ben-heil/ponyo | 09096eada5f31ab5c28a2e743fd0539d432ca7a7 | [
"BSD-3-Clause"
] | null | null | null | human_tests/Human_template_simulation.ipynb | ben-heil/ponyo | 09096eada5f31ab5c28a2e743fd0539d432ca7a7 | [
"BSD-3-Clause"
] | null | null | null | 142.444272 | 48,812 | 0.868581 | [
[
[
"# Test shifting template experiments",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n\nimport os\nimport sys\nimport pandas as pd\nimport numpy as np\nimport random\nimport umap\nimport glob\nimport pickle\nimport tensorflow as tf\nfrom keras.models import load_model\nfrom sklearn.decomposition import PCA\nfrom plotnine import (ggplot,\n labs, \n geom_point,\n aes, \n ggsave, \n theme_bw,\n theme,\n facet_wrap,\n scale_color_manual,\n guides, \n guide_legend,\n element_blank,\n element_text,\n element_rect,\n element_line,\n coords)\n\n\nimport warnings\nwarnings.filterwarnings(action='ignore')\n\nfrom ponyo import utils, train_vae_modules, simulate_expression_data",
"Using TensorFlow backend.\n"
],
[
"# Set seeds to get reproducible VAE trained models\n\n# The below is necessary in Python 3.2.3 onwards to\n# have reproducible behavior for certain hash-based operations.\n# See these references for further details:\n# https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development\n# https://docs.python.org/3.4/using/cmdline.html#envvar-PYTHONHASHSEED\n# https://github.com/keras-team/keras/issues/2280#issuecomment-306959926\n\nos.environ[\"PYTHONHASHSEED\"] = \"0\"\n\n# The below is necessary for starting Numpy generated random numbers\n# in a well-defined initial state.\nnp.random.seed(42)\n\n# The below is necessary for starting core Python generated random numbers\n# in a well-defined state.\nrandom.seed(12345)\n\n# The below tf.set_random_seed() will make random number generation\n# in the TensorFlow backend have a well-defined initial state.\ntf.set_random_seed(1234)",
"_____no_output_____"
],
[
"# Read in config variables\nbase_dir = os.path.abspath(os.path.join(os.getcwd(),\"../\"))\nconfig_filename = os.path.abspath(os.path.join(base_dir,\n \"human_tests\", \n \"config_test_human.tsv\"))\nparams = utils.read_config(config_filename)",
"_____no_output_____"
],
[
"# Load parameters\nlocal_dir = params[\"local_dir\"]\ndataset_name = params['dataset_name']\nanalysis_name = params[\"simulation_type\"]\nrpkm_data_filename = params[\"raw_data_filename\"]\nnormalized_data_filename = params[\"normalized_data_filename\"]\nmetadata_filename = params[\"metadata_filename\"]\nNN_architecture = params['NN_architecture']\nscaler_filename = params['scaler_transform_filename']\nnum_runs = params['num_simulated']\nmetadata_delimiter = params[\"metadata_delimiter\"]\nexperiment_id_colname = params['metadata_experiment_colname']\nsample_id_colname = params['metadata_sample_colname']\nproject_id = params['project_id']\n\nNN_dir = os.path.join(\n base_dir, \n dataset_name, \n \"models\", \n NN_architecture)",
"_____no_output_____"
],
[
"assert os.path.exists(rpkm_data_filename)",
"_____no_output_____"
]
],
[
[
"## Setup directories",
"_____no_output_____"
]
],
[
[
"utils.setup_dir(config_filename)",
"_____no_output_____"
]
],
[
[
"## Pre-process data",
"_____no_output_____"
]
],
[
[
"train_vae_modules.normalize_expression_data(base_dir,\n config_filename,\n rpkm_data_filename,\n normalized_data_filename)",
"input: dataset contains 50 samples and 5000 genes\nOutput: normalized dataset contains 50 samples and 5000 genes\n"
]
],
[
[
"## Train VAE",
"_____no_output_____"
]
],
[
[
"# Directory containing log information from VAE training\nvae_log_dir = os.path.join(\n base_dir, \n dataset_name,\n \"logs\",\n NN_architecture)",
"_____no_output_____"
],
[
"# Train VAE\ntrain_vae_modules.train_vae(config_filename,\n normalized_data_filename)",
"input dataset contains 50 samples and 5000 genes\nWARNING:tensorflow:From /home/alexandra/anaconda3/envs/test_ponyo/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\ntracking <tf.Variable 'Variable:0' shape=() dtype=float32> beta\nWARNING:tensorflow:From /home/alexandra/anaconda3/envs/test_ponyo/lib/python3.7/site-packages/tensorflow_core/python/ops/nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nWARNING:tensorflow:From /home/alexandra/anaconda3/envs/test_ponyo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n\nTrain on 45 samples, validate on 5 samples\nEpoch 1/10\n45/45 [==============================] - 4s 88ms/step - loss: 2511.2365 - val_loss: 2078.2676\nEpoch 2/10\n45/45 [==============================] - 4s 79ms/step - loss: 1688.8236 - val_loss: 2374.3589\nEpoch 3/10\n45/45 [==============================] - 4s 79ms/step - loss: 1664.0755 - val_loss: 1454.6667\nEpoch 4/10\n45/45 [==============================] - 4s 79ms/step - loss: 1509.4538 - val_loss: 1387.5260\nEpoch 5/10\n45/45 [==============================] - 4s 79ms/step - loss: 1474.1985 - val_loss: 1371.2039\nEpoch 6/10\n45/45 [==============================] - 4s 79ms/step - loss: 1489.1452 - val_loss: 1350.6823\nEpoch 7/10\n45/45 [==============================] - 4s 79ms/step - loss: 1502.0319 - val_loss: 1949.6031\nEpoch 8/10\n45/45 [==============================] - 4s 79ms/step - loss: 1381.4732 - val_loss: 1232.3323\nEpoch 9/10\n45/45 [==============================] - 4s 79ms/step - loss: 1419.9623 - val_loss: 1151.1223\nEpoch 10/10\n45/45 [==============================] - 4s 79ms/step - loss: 1384.7468 - val_loss: 1161.4500\n"
]
],
[
[
"## Shift template experiment",
"_____no_output_____"
]
],
[
[
"#tmp result dir\ntmp = os.path.join(local_dir, \"pseudo_experiment\")\nos.makedirs(tmp, exist_ok=True)",
"_____no_output_____"
],
[
"# Load pickled file\nscaler = pickle.load(open(scaler_filename, \"rb\"))",
"_____no_output_____"
],
[
"# Run simulation\nnormalized_data = normalized_data = pd.read_csv(\n normalized_data_filename, header=0, sep=\"\\t\", index_col=0\n )\n\nfor run in range(num_runs):\n simulate_expression_data.shift_template_experiment(\n normalized_data,\n NN_architecture,\n dataset_name,\n scaler,\n metadata_filename,\n metadata_delimiter,\n experiment_id_colname,\n sample_id_colname,\n project_id,\n local_dir,\n base_dir,\n run)",
"_____no_output_____"
]
],
[
[
"## Visualize latent transform compendium",
"_____no_output_____"
]
],
[
[
"# Load VAE models\nmodel_encoder_filename = glob.glob(os.path.join(\n NN_dir,\n \"*_encoder_model.h5\"))[0]\n\nweights_encoder_filename = glob.glob(os.path.join(\n NN_dir,\n \"*_encoder_weights.h5\"))[0]\n\nmodel_decoder_filename = glob.glob(os.path.join(\n NN_dir,\n \"*_decoder_model.h5\"))[0]\n\nweights_decoder_filename = glob.glob(os.path.join(\n NN_dir,\n \"*_decoder_weights.h5\"))[0]\n\n# Load saved models\nloaded_model = load_model(model_encoder_filename)\nloaded_decode_model = load_model(model_decoder_filename)\n\nloaded_model.load_weights(weights_encoder_filename)\nloaded_decode_model.load_weights(weights_decoder_filename)",
"_____no_output_____"
],
[
"pca = PCA(n_components=2)",
"_____no_output_____"
],
[
"# Read data\nnormalized_compendium = pd.read_csv(normalized_data_filename, header=0, sep=\"\\t\", index_col=0)",
"_____no_output_____"
],
[
"# Encode normalized compendium into latent space\ncompendium_encoded = loaded_model.predict_on_batch(normalized_compendium)\n\ncompendium_encoded_df = pd.DataFrame(data=compendium_encoded, \n index=normalized_compendium.index)\n\n# Get and save PCA model\nmodel = pca.fit(compendium_encoded_df)\n\ncompendium_PCAencoded = model.transform(compendium_encoded_df)\n\ncompendium_PCAencoded_df = pd.DataFrame(data=compendium_PCAencoded,\n index=compendium_encoded_df.index,\n columns=['1','2'])\n\n# Add label\ncompendium_PCAencoded_df['experiment_id'] = 'background'",
"_____no_output_____"
],
[
"# Embedding of real template experiment (encoded)\ntemplate_filename = os.path.join(local_dir,\n \"pseudo_experiment\",\n \"template_normalized_data_\"+project_id+\"_test.txt\")\n\ntemplate_data = pd.read_csv(template_filename, header=0, sep='\\t', index_col=0)\n\n# Encode template experiment into latent space\ntemplate_encoded = loaded_model.predict_on_batch(template_data)\ntemplate_encoded_df = pd.DataFrame(data=template_encoded,\n index=template_data.index)\n\ntemplate_PCAencoded = model.transform(template_encoded_df)\n\ntemplate_PCAencoded_df = pd.DataFrame(data=template_PCAencoded,\n index=template_encoded_df.index,\n columns=['1','2'])\n\n# Add back label column\ntemplate_PCAencoded_df['experiment_id'] = 'template_experiment'",
"_____no_output_____"
],
[
"# Embedding of simulated experiment (encoded)\nencoded_simulated_filename = os.path.join(local_dir,\n \"pseudo_experiment\",\n \"selected_simulated_encoded_data_\"+project_id+\"_1.txt\")\n\nsimulated_encoded_df = pd.read_csv(encoded_simulated_filename,header=0, sep='\\t', index_col=0)\n\nsimulated_PCAencoded = model.transform(simulated_encoded_df)\n\nsimulated_PCAencoded_df = pd.DataFrame(data=simulated_PCAencoded,\n index=simulated_encoded_df.index,\n columns=['1','2'])\n\n# Add back label column\nsimulated_PCAencoded_df['experiment_id'] = 'simulated_experiment'",
"_____no_output_____"
],
[
"# Concatenate dataframes\ncombined_PCAencoded_df = pd.concat([compendium_PCAencoded_df, \n template_PCAencoded_df,\n simulated_PCAencoded_df])\n\nprint(combined_PCAencoded_df.shape)\ncombined_PCAencoded_df.head()",
"(60, 3)\n"
],
[
"# Plot\nfig = ggplot(combined_PCAencoded_df, aes(x='1', y='2'))\nfig += geom_point(aes(color='experiment_id'), alpha=0.2)\nfig += labs(x ='PCA 1',\n y = 'PCA 2',\n title = 'PCA original data with experiments (latent space)')\nfig += theme_bw()\nfig += theme(\n legend_title_align = \"center\",\n plot_background=element_rect(fill='white'),\n legend_key=element_rect(fill='white', colour='white'), \n legend_title=element_text(family='sans-serif', size=15),\n legend_text=element_text(family='sans-serif', size=12),\n plot_title=element_text(family='sans-serif', size=15),\n axis_text=element_text(family='sans-serif', size=12),\n axis_title=element_text(family='sans-serif', size=15)\n )\nfig += guides(colour=guide_legend(override_aes={'alpha': 1}))\nfig += scale_color_manual(['#bdbdbd', 'red', 'blue'])\nfig += geom_point(data=combined_PCAencoded_df[combined_PCAencoded_df['experiment_id'] == 'template_experiment'],\n alpha=0.2, \n color='blue')\nfig += geom_point(data=combined_PCAencoded_df[combined_PCAencoded_df['experiment_id'] == 'simulated_experiment'],\n alpha=0.1, \n color='red')\n\nprint(fig)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d050e0bfbac3c5287621d5bcff1a20d1a61154d8 | 17,789 | ipynb | Jupyter Notebook | 9.12.ipynb | ljmzyl/Work | 7b44bb8b05fb25ba648efb5801d46a6dfcd43167 | [
"Apache-2.0"
] | null | null | null | 9.12.ipynb | ljmzyl/Work | 7b44bb8b05fb25ba648efb5801d46a6dfcd43167 | [
"Apache-2.0"
] | null | null | null | 9.12.ipynb | ljmzyl/Work | 7b44bb8b05fb25ba648efb5801d46a6dfcd43167 | [
"Apache-2.0"
] | null | null | null | 19.634658 | 108 | 0.421384 | [
[
[
"# 选择\n## 布尔类型、数值和表达式\n\n- 注意:比较运算符的相等是两个等到,一个等到代表赋值\n- 在Python中可以用整型0来代表False,其他数字来代表True\n- 后面还会讲到 is 在判断语句中的用发",
"_____no_output_____"
]
],
[
[
"1== true",
"_____no_output_____"
],
[
"while 1:\n print('hahaha')",
"_____no_output_____"
]
],
[
[
"## 字符串的比较使用ASCII值",
"_____no_output_____"
]
],
[
[
"'a'>True",
"_____no_output_____"
],
[
"0<10>100",
"_____no_output_____"
],
[
"num=eval(input('>>'))\nif num>=90:\n print('A')\nelif 80<=num<90:\n print('B')\nelse :\n print('C')",
">>80\nB\n"
]
],
[
[
"## Markdown \n- https://github.com/younghz/Markdown",
"_____no_output_____"
],
[
"## EP:\n- <img src=\"../Photo/34.png\"></img>\n- 输入一个数字,判断其实奇数还是偶数",
"_____no_output_____"
],
[
"## 产生随机数字\n- 函数random.randint(a,b) 可以用来产生一个a和b之间且包括a和b的随机整数",
"_____no_output_____"
]
],
[
[
"import random\na=random.randint(1,5)\nprint(a)\nwhile True:\n num=eval(input('>>'))\n if num == a:\n print('Success')\n break\n elif num>a:\n print('太大了')\n elif num<a:\n print('太小了')",
"2\n>>5\n太大了\n>>2\nSuccess\n"
]
],
[
[
"## 其他random方法\n- random.random 返回0.0到1.0之间前闭后开区间的随机浮点\n- random.randrange(a,b) 前闭后开",
"_____no_output_____"
],
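A tiny demonstration of the calls just described (my own sketch, not part of the original lesson):

```python
import random

print(random.random())          # float in [0.0, 1.0)
print(random.randrange(1, 5))   # integer in [1, 5): one of 1, 2, 3, 4
print(random.randint(1, 5))     # integer in [1, 5]: both endpoints possible
```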
[
"## EP:\n- 产生两个随机整数number1和number2,然后显示给用户,使用户输入数字的和,并判定其是否正确\n- 进阶:写一个随机序号点名程序",
"_____no_output_____"
]
],
[
[
"import random\na=random.randint(1,5)\nb=random.randint(2,6)\nprint(a,b)\n# num=eval(input('>>'))\n# if num==a+b:\n# print('Success')\n# else :\n# print('失败')\nnum=a+b\nwhile 1:\n input('>>')\n if input == num:\n print('Success')\n break\n else :\n print('失败')",
"_____no_output_____"
]
],
[
[
"## if语句\n- 如果条件正确就执行一个单向if语句,亦即当条件为真的时候才执行if内部的语句\n- Python有很多选择语句:\n> - 单向if \n - 双向if-else\n - 嵌套if\n - 多向if-elif-else\n \n- 注意:当语句含有子语句的时候,那么一定至少要有一个缩进,也就是说如果有儿子存在,那么一定要缩进\n- 切记不可tab键和space混用,单用tab 或者 space\n- 当你输出的结果是无论if是否为真时都需要显示时,语句应该与if对齐",
"_____no_output_____"
]
],
[
[
"a=eval(input('>>'))\nif a<=30:\n b=input('>>')\n if b!='丑':\n c=input('>>')\n if c=='高':\n d=input('>>')\n if d=='是':\n print('见')\n else:\n print('不见')\n else :\n print('不见')\n else :\n print('不见')\nelse:\n print('too old')",
">>25\n>>帅\n>>高\n>>是\n见\n"
]
],
[
[
"## EP:\n- 用户输入一个数字,判断其实奇数还是偶数\n- 进阶:可以查看下4.5实例研究猜生日",
"_____no_output_____"
],
[
"## 双向if-else 语句\n- 如果条件为真,那么走if内部语句,否则走else内部语句",
"_____no_output_____"
],
[
"## EP:\n- 产生两个随机整数number1和number2,然后显示给用户,使用户输入数字,并判定其是否正确,如果正确打印“you‘re correct”,否则打印正确错误",
"_____no_output_____"
],
[
"## 嵌套if 和多向if-elif-else\n",
"_____no_output_____"
],
[
"## EP:\n- 提示用户输入一个年份,然后显示表示这一年的动物\n\n- 计算身体质量指数的程序\n- BMI = 以千克为单位的体重除以以米为单位的身高\n",
"_____no_output_____"
]
],
[
[
"a=eval(input('>>'))\nnum=a%12\nif num==0:\n print('猴')\nelif num == 1:\n print('鸡')\nelif num == 2:\n print('狗')\nelif num == 3:\n print('猪')\nelif num== 4:\n print('鼠')\nelif num== 5:\n print('牛')\nelif num== 6:\n print('虎')\nelif num== 7:\n print('兔')\nelif num== 8:\n print('龙')\nelif num== 9:\n print('蛇')\nelif num== 10:\n print('马')\nelse:\n print('羊')",
">>1991\n羊\n"
],
[
"w,h=eval(input('>>'))\nbmi=w/(h*h)\nprint(bmi)\nif bmi<18.5:\n print('超轻')\nelif 18.5<=bmi<25.0:\n print('标准')\nelif 25.0<=bmi<30.0:\n print('超重')\nelse :\n print('痴肥')",
">>60,1.84\n17.72211720226843\n超轻\n"
]
],
[
[
"## 逻辑运算符\n",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
],
[
[
"a=[1,2,3,4]\n1 not in a",
"_____no_output_____"
]
],
[
[
"## EP:\n- 判定闰年:一个年份如果能被4整除但不能被100整除,或者能被400整除,那么这个年份就是闰年\n- 提示用户输入一个年份,并返回是否是闰年\n- 提示用户输入一个数字,判断其是否为水仙花数",
"_____no_output_____"
]
],
[
[
"year=eval(input('>>'))\na=year%4==0\nb=year%100!=0\nc=year%400==0\nif (a and b) or c : # divisible by 4 but not 100, or divisible by 400\n print('闰年')\nelse :\n print('非闰年')",
">>400\n非闰年\n"
],
[
"n=eval(input('>>'))\na1=n//100\na2=n//10%10\na3=n%10\ns=a1**3+a2**3+a3**3\nif s == n:\n print('是水仙花数')\nelse :\n print('结束')",
">>154\n结束\n"
]
],
[
[
"## 实例研究:彩票\n",
"_____no_output_____"
]
],
[
[
"import random\na1=random.randint(0,9)\na2=random.randint(0,9)\nprint(a1,a2)\na=str(a1)+str(a2)\nnum=input('>>')\nif num==a:\n print('一等奖')\nelif (num[0]==a[1] and (num[1]== a[0])):\n print('二等奖')\nelif ((num[0]==a[0]) or (num[1]==a[0]) or (num[0]==a[1]) or (num[1]==a[1])):\n print('三等奖')\nelse :\n ('未中奖')",
"8 4\n>>48\n二等奖\n"
]
],
[
[
"# Homework\n- 1\n",
"_____no_output_____"
]
],
[
[
"import math\na,b,c=eval(input('>>'))\npan=b**2-4*a*c\nr1=((-b)+math.sqrt(pan))/(2*a)\nr2=((-b)-math.sqrt(pan))/(2*a)\nif pan>0:\n print(r1,r2)\nelif pan==0:\n print(r1)\nelse :\n print('The equation has no real roots')",
">>1,3,1\n-0.3819660112501051 -2.618033988749895\n"
]
],
[
[
"- 2\n",
"_____no_output_____"
]
],
[
[
"import random\na1=random.randint(0,99)\na2=random.randint(0,99)\nprint(a1,a2)\nnum=eval(input('>>'))\nnumber=a1+a2\nif num == number:\n print('True')\nelse :\n print('False')",
"93 42\n>>12\nFalse\n"
]
],
[
[
"- 3\n",
"_____no_output_____"
]
],
[
[
"day = eval(input('今天是哪一天(星期天是0,星期一是1,。。。,星期六是6):'))\ndays = eval(input('今天之后到未来某天的天数:'))\nn = day + days\nif day==0:\n a='星期日'\nelif day==1:\n a='星期一'\nelif day==2:\n a='星期二'\nelif day==3:\n a='星期三'\nelif day==4:\n a='星期四'\nelif day==5:\n a='星期五'\nelif day==6:\n a='星期六'\nif n%7 ==0:\n print('今天是'+str(a)+'并且'+str(days)+'天之后是星期天')\nelif n%7 ==1:\n print('今天是'+str(a)+'并且'+str(days)+'天之后是星期一')\nelif n%7 ==2:\n print('今天是'+str(a)+'并且'+str(days)+'天之后是星期二')\nelif n%7 ==3:\n print('今天是'+str(a)+'并且'+str(days)+'天之后是星期三')\nelif n%7 ==4:\n print('今天是'+str(a)+'并且'+str(days)+'天之后是星期四')\nelif n%7 ==5:\n print('今天是'+str(a)+'并且'+str(days)+'天之后是星期五')\nelif n%7 ==6:\n print('今天是'+str(a)+'并且'+str(days)+'天之后是星期六')",
"今天是哪一天(星期天是0,星期一是1,。。。,星期六是6):1\n今天之后到未来某天的天数:3\n今天是星期一并且3天之后是星期四\n"
]
],
[
[
"- 4\n",
"_____no_output_____"
]
],
[
[
"a,b,c = eval(input('输入三个整数:'))\nif a>=b and b>=c:\n print(c,b,a)\nelif a>=b and b<=c and a>=c:\n print(b,c,a)\nelif b>=a and a>=c :\n print(c,a,b)\nelif b>=a and a<=c and b>=c:\n print(a,c,b)\nelif c>=b and b>=a:\n print(a,b,c)\nelif c>=b and b<=a and c>=a:\n print(b,a,c)",
"输入三个整数:2,1,3\n1 2 3\n"
]
],
[
[
"- 5\n",
"_____no_output_____"
]
],
[
[
"a1,a2=eval(input('输入第一种重量和价钱:'))\nb1,b2=eval(input('输入第一种重量和价钱:'))\nnum1=a2/a1\nnum2=b2/b1\nif num1>num2:\n print('购买第二种更加合适')\nelse :\n print('购买第一种更合适')",
"_____no_output_____"
]
],
[
[
"- 6\n",
"_____no_output_____"
]
],
[
[
"m,year=eval(input('输入月份和年'))\na=year%4==0\nb=year%100!=0\nc=year%400==0\nr=[1,3,5,7,8,10,12]\nif (a or c) and b and m==2:\n print(str(year)+'年'+str(m)+'月有29天')\nelif ((m==1) or (m==3) or (m==5) or (m==7) or (m==8) or (m==10) or (m==12)):\n print(str(year)+'年'+str(m)+'月有31天')\nelif ((m==4) or (m==6) or (m==9) or (m==11)):\n print(str(year)+'年'+str(m)+'月有30天')\nelse :\n print(str(year)+'年'+str(m)+'月有28天')",
"_____no_output_____"
]
],
[
[
"- 7\n",
"_____no_output_____"
]
],
[
[
"import random\na=random.randint(0,1)\nprint(a)\nnum=eval(input('>>'))\nif a==num:\n print('正确')\nelse :\n print('错误')",
"0\n>>1\n错误\n"
]
],
[
[
"- 8\n",
"_____no_output_____"
]
],
[
[
"a=eval(input('输入1,2或0:'))\nimport random\nd=random.randint(0,3) \nif d==a:\n print('平局')\nelif a==0 and d==1:\n print('你输了')\nelif a==0 and d==2:\n print('你赢了')\nelif a==1 and d==0:\n print('你赢了')\nelif a==1 and d==2:\n print('你输了')\nelif a==2 and d==1:\n print('你赢了')\nelif a==2 and d==0:\n print('你输了')",
"_____no_output_____"
]
],
[
[
"- 9\n",
"_____no_output_____"
]
],
[
[
"y = eval(input('请输入年份:'))\nm = eval(input('请输入月份:'))\nq = eval(input('请输入天数:'))\nj = y//100//1\nk = y%100\nif m == 1:\n m = 13\nelif m == 2:\n m = 14\nh = (q + (26*(m+1))/10//1+k+k/4//1+j/4//1+5*j)%7\nprint(round(h))",
"_____no_output_____"
]
],
[
[
"- 10\n",
"_____no_output_____"
]
],
[
[
"import random\nsize=['Ace',2,3,4,5,6,7,8,9,10,'Jack','Queen','King']\nA=random.randint(0,len(size)-1)\ncolor=['Diamond','Heart','Spade','Club']\nB=random.randint(0,len(color)-1)\nprint('The card you picked is the ' + str(size[A]) + ' of ' + str(color[B]))",
"_____no_output_____"
]
],
[
[
"- 11\n",
"_____no_output_____"
]
],
[
[
"x = input('Enter a three-digit integer:')\nif x[0] == x[2] :\n print(str(x)+'is a palindrome')\nelse:\n print(str(x)+'is not a palindrome')",
"_____no_output_____"
]
],
[
[
"- 12\n",
"_____no_output_____"
]
],
[
[
"lenght1,lenght2,lenght3, =eval(input('Enter three adges:'))\nperimeter = lenght1 + lenght2 + lenght3\nif lenght1 + lenght2 > lenght3 and lenght1 + lenght3 > lenght2 and lenght2 + lenght3 > lenght1:\n print('The perimeter is',perimeter)\nelse:\n print('The perimeter invalid')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d050e315128caee87fa61692d5be6f9f41e69499 | 4,937 | ipynb | Jupyter Notebook | tensorflow/day1/exercise/01_05_tensor_multi_regression_keras.ipynb | daludaluking/LG_AI_all_in_one- | e0855af811deb1e5cf1695430bd52a8eb3d48827 | [
"Apache-2.0"
] | null | null | null | tensorflow/day1/exercise/01_05_tensor_multi_regression_keras.ipynb | daludaluking/LG_AI_all_in_one- | e0855af811deb1e5cf1695430bd52a8eb3d48827 | [
"Apache-2.0"
] | null | null | null | tensorflow/day1/exercise/01_05_tensor_multi_regression_keras.ipynb | daludaluking/LG_AI_all_in_one- | e0855af811deb1e5cf1695430bd52a8eb3d48827 | [
"Apache-2.0"
] | null | null | null | 4,937 | 4,937 | 0.642495 | [
[
[
"\nimport tensorflow as tf\n\n## data 선언\nx_data = [[2.,0.,7.], [6.,4.,2.], [5.,2.,4.],[8.,4.,1]]\ny_data = [[75], [95], [91], [97]]\ntest_data=[[5.,5.,5.]]\nprint(len(x_data),len(x_data[1])) # 행크기 , 열크기\n\n\n",
"4 3\n"
],
[
"## tf.keras를 활용한 perceptron 모델 구현.\nmodel = tf.keras.Sequential() ## 모델 만들기 위해 sequential 매서드를 선언. 이를 통해 모델을 만들 수 있다.\nmodel.add(tf.keras.layers.Dense(1, input_dim=3)) # 선언된 모델에 add를 통해 쌓아감. , 현재는 입력 변수 갯수 3, perceptron 1개.\nmodel.summary() ## 설계한 모델 프린트\n\n",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense (Dense) (None, 1) 4 \n=================================================================\nTotal params: 4\nTrainable params: 4\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"# 모델 loss, 학습 방법 결정하기\noptimizer=tf.keras.optimizers.SGD(learning_rate=0.001) ### 경사 하강법으로 global min 에 찾아가는 최적화 방법 선언.\nloss=tf.keras.losses.mse ## 예측값 과 정답의 오차값 정의. mse는 mean squre error로 (예측값 - 정답)^2 를 의미\nmetrics=tf.keras.metrics.mae ### 학습하면서 평가할 메트릭스 선언 mse는 mean_absolute_error |예측값 - 정답| 를 의미\n\n# 모델 컴파일하기\nmodel.compile(loss=loss, metrics=[metrics], optimizer=optimizer)\n\n# 모델 동작하기\nmodel.fit(x_data, y_data, epochs=1000, batch_size=4)\n\n",
"_____no_output_____"
],
[
"# 결과를 출력합니다.\nprint(model.weights)\nprint(\" test data [5.,5.,5.] 예측 값 : \", model.predict(test_data))",
"[<tf.Variable 'dense/kernel:0' shape=(3, 1) dtype=float32, numpy=\narray([[6.2578945],\n [9.3588295],\n [8.829167 ]], dtype=float32)>, <tf.Variable 'dense/bias:0' shape=(1,) dtype=float32, numpy=array([2.3467107], dtype=float32)>]\n test data [5.,5.,5.] 예측 값 : [[124.576164]]\n"
]
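With only four data points and four unknowns (three weights plus a bias), the exact least-squares fit can be computed in closed form. This is my own sketch for comparison with the SGD-trained weights above, not part of the original notebook:

```python
import numpy as np

x = np.array([[2., 0., 7.], [6., 4., 2.], [5., 2., 4.], [8., 4., 1.]])
y = np.array([75., 95., 91., 97.])

# Append a column of ones so the bias is fitted alongside the weights
A = np.hstack([x, np.ones((4, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)      # [w1, w2, w3, bias]
print(A @ coef)  # reproduces y exactly: 4 equations, 4 unknowns
```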
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d050e5bda0c18ac8e35480505e0162d4e1dfb817 | 5,003 | ipynb | Jupyter Notebook | qiskit/advanced/aqua/finance/simulation/option_pricing.ipynb | Sahar2/qiskit-tutorials | 7db6cf939aa5d0d56b67eac5877ddda243d12ec0 | [
"Apache-2.0"
] | null | null | null | qiskit/advanced/aqua/finance/simulation/option_pricing.ipynb | Sahar2/qiskit-tutorials | 7db6cf939aa5d0d56b67eac5877ddda243d12ec0 | [
"Apache-2.0"
] | null | null | null | qiskit/advanced/aqua/finance/simulation/option_pricing.ipynb | Sahar2/qiskit-tutorials | 7db6cf939aa5d0d56b67eac5877ddda243d12ec0 | [
"Apache-2.0"
] | 1 | 2019-06-27T06:55:00.000Z | 2019-06-27T06:55:00.000Z | 41.347107 | 400 | 0.65041 | [
[
[
"<img src=\"../../../../../images/qiskit_header.png\" alt=\"Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook\" align=\"middle\">",
"_____no_output_____"
],
[
"# _*Qiskit Finance: Option Pricing*_ \n\nThe latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorials.\n\n***\n### Contributors\nStefan Woerner<sup>[1]</sup>, Daniel Egger<sup>[1]</sup>, Christa Zoufal<sup>[1]</sup>, Shaohan Hu<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>\n\n### Affliation\n- <sup>[1]</sup>IBMQ",
"_____no_output_____"
],
[
"In this notebook we provide an overview of the available Qiskit Finance tutorials on how to use Quantum Amplitude Estimation (QAE) for option pricing. We analyze different types of options with increasing complexity, featuring:\n- single asset / multi asset (basket) options,\n- piecewise linear payoff functions (arbitrary number of break points, possibly non-continuous), and\n- path-dependency (sum/average, barrier, etc.).\n\nThe basic ideas on using QAE for option pricing and risk analysis are provided here:<br>\n<a href=\"https://www.nature.com/articles/s41534-019-0130-6\">Quantum Risk Analysis. Stefan Woerner, Daniel J. Egger (2019)</a>.\n\nA Qiskit Aqua tutorial on QAE can be found here:<br>\n<a href=\"../../aqua/general/amplitude_estimation.ipynb\">Qiskit Tutorial on QAE</a>\n\nWe provide tutorials for the following types simple options:\n\n- <a href=\"european_call_option_pricing.ipynb\">European Call Option</a> (univariate, payoff with 2 segments)\n- <a href=\"european_put_option_pricing.ipynb\">European Put Option</a> (univariate, payoff with 2 segments)\n- <a href=\"bull_spread_pricing.ipynb\">Bull Spread</a> (univariate, payoff with 3 segments)\n\nNote that the provided framework can cover all options of this type, i.e., options that are fully determined by a piecewise linear payoff with respect to the spot price at maturity of the underlying asset.\nHowever, the framework also allows to price more complex options, for instance, options that depend on multiple assets or are path-dependent:\n\n- <a href=\"basket_option_pricing.ipynb\">Basket Option</a> (multivariate, payoff with 2 segments)\n- <a href=\"asian_barrier_spread_pricing.ipynb\">Asian Barrier Spread</a> (multivariate, path-dependent, payoff with 3 segments)\n\nMore examples on option pricing with a quantum computer can be found in the [Qiskit Finance Community](https://github.com/Qiskit/qiskit-tutorials-community/tree/master/finance) section of the Qiskit Tutorials.\n\nAll examples illustrate how to use the genereric Qiskit Finance framework to construct QAE-operators (uncertainty problems). The same framework can be easily adjusted to estimate risk as well, for instance, the Value at Risk (VaR) or the Conditional Value at Risk (CVaR, also known as Expected Shortfall). How to use Qiskit Finance for risk analysis is illustrated in the following tutorial:\n<a href=\"credit_risk_analysis.ipynb\">Credit Risk Analysis</a>.\n\nAn example of how quantum Generative Adversarial Networks (qGANs) can be used to learn and efficiently load generic random distributions for option pricing can be found here:\n<a href=\"../machine_learning/qgan_option_pricing.ipynb\">QGANs to learn and load random distributions for option pricing</a>",
"_____no_output_____"
]
]
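The "discounted expected payoff" that QAE estimates has a simple classical analogue. Below is a hedged Monte Carlo sketch for a European call under geometric Brownian motion; all parameter values are illustrative assumptions of mine, not taken from the tutorials:

```python
import numpy as np

# Illustrative parameters (assumed): spot, strike, rate, volatility, maturity
s0, strike, r, sigma, t = 2.0, 1.9, 0.05, 0.4, 40 / 365

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)  # GBM terminal price
payoff = np.maximum(st - strike, 0.0)   # the piecewise-linear call payoff (2 segments)
price = np.exp(-r * t) * payoff.mean()  # discounted expected payoff
print(round(price, 4))
```

QAE targets exactly this expectation, but with a quadratic speedup in the number of samples over classical Monte Carlo.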
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
d050e66710780025802e78c03fb7b82f6ddd52c7 | 176,949 | ipynb | Jupyter Notebook | weeklysales_vs_week.ipynb | Mbathula007/wallmart_sales_prediction | 673bd01c1d940664951d0da8f4d6234f41deab2b | [
"MIT"
] | null | null | null | weeklysales_vs_week.ipynb | Mbathula007/wallmart_sales_prediction | 673bd01c1d940664951d0da8f4d6234f41deab2b | [
"MIT"
] | null | null | null | weeklysales_vs_week.ipynb | Mbathula007/wallmart_sales_prediction | 673bd01c1d940664951d0da8f4d6234f41deab2b | [
"MIT"
] | null | null | null | 197.267559 | 138,576 | 0.887346 | [
[
[
"import pandas as pd\ndf = pd.read_csv(\"train.csv\")",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df1 = pd.read_csv(\"stores.csv\")",
"_____no_output_____"
],
[
"df1.head()",
"_____no_output_____"
],
[
"temp_df = df[df[\"Store\"] == 1]",
"_____no_output_____"
],
[
"temp_df.head()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nK = []\nfor i in range(len(temp_df)):\n K.append(i)\nplt.figure(figsize = (40,20))\nplt.plot(K,temp_df[\"Weekly_Sales\"],color = 'b')",
"_____no_output_____"
],
[
"temp_df.head()",
"_____no_output_____"
],
[
"temp_df.dtypes",
"_____no_output_____"
],
[
"temp_df[\"Date\"] = pd.to_datetime(temp_df[\"Date\"])",
"<ipython-input-13-d1b1abd8c9d6>:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n temp_df[\"Date\"] = pd.to_datetime(temp_df[\"Date\"])\n"
],
[
"type(temp_df[\"Date\"][0].year)",
"_____no_output_____"
],
[
"K = []\nfor i in range(len(temp_df)):\n K.append(temp_df[\"Date\"][i].year)",
"_____no_output_____"
],
[
"len(K) == len(temp_df)",
"_____no_output_____"
],
[
"temp_df[\"year\"] = K",
"<ipython-input-25-00325a1d735d>:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n temp_df[\"year\"] = K\n"
],
[
"temp_df.columns",
"_____no_output_____"
],
[
"cycle1 = temp_df[temp_df[\"year\"] == 2010]",
"_____no_output_____"
],
[
"cycle1.sort_values(by = \"Date\")",
"_____no_output_____"
],
[
"cycle1.to_csv(\"cycle1/cycle1.csv\")",
"_____no_output_____"
],
[
"T = cycle1.groupby(\"Date\")[\"Weekly_Sales\"].sum()",
"_____no_output_____"
],
[
"type(T)",
"_____no_output_____"
],
[
"len(list(T))",
"_____no_output_____"
],
[
"y = []\nx = []\nfor i in range(len(T)) :\n y.append(T[i])\n x.append(i)",
"_____no_output_____"
],
[
"plt.plot(x,y,color = 'r')",
"_____no_output_____"
],
[
"import numpy as np\nnp.save(\"cycle1/y_cumilative\",np.array(y))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d050ed7e1349910361f0f1c11fe0061ec0818d56 | 267,128 | ipynb | Jupyter Notebook | projects/synchallenge/submissions/Darius Irani/syn_classification_notebook.ipynb | wrgr/intersession2018 | 096dd01fc49a9ce4f95e144c90ae677adfcbb00e | [
"Apache-2.0"
] | null | null | null | projects/synchallenge/submissions/Darius Irani/syn_classification_notebook.ipynb | wrgr/intersession2018 | 096dd01fc49a9ce4f95e144c90ae677adfcbb00e | [
"Apache-2.0"
] | null | null | null | projects/synchallenge/submissions/Darius Irani/syn_classification_notebook.ipynb | wrgr/intersession2018 | 096dd01fc49a9ce4f95e144c90ae677adfcbb00e | [
"Apache-2.0"
] | null | null | null | 795.02381 | 69,112 | 0.951222 | [
[
[
"# Synapse Classification Challenge\n# Introduction to Connectomics 2017\n# Darius Irani\n\nyour_name = 'irani_darius'\n\n!pip install mahotas\n!pip install ndparse\n%matplotlib inline ",
"Requirement already satisfied: mahotas in /opt/conda/lib/python3.6/site-packages\nRequirement already satisfied: numpy in /opt/conda/lib/python3.6/site-packages (from mahotas)\nRequirement already satisfied: ndparse in /opt/conda/lib/python3.6/site-packages\n"
],
[
"# Load data\n\nimport numpy as np\nimport tensorflow as tf\n\ndata = np.load('./synchallenge2017_training.npz')\n\nimtrain = data['imtrain']\nannotrain = data['annotrain']\nytrain = data['ytrain']\n\ndata = np.load('./synchallenge2017_validation.npz')\n\nimvalid = data['imvalid']\nannovalid = data['annovalid']\nyvalid = data['yvalid']",
"_____no_output_____"
],
[
"# Define feature extraction code\n\nimport skimage.feature as skif\n\ndef extract_features(imdata):\n xtrain = []\n for im in imdata:\n fvector = []\n # 50th percentile based on intensity\n fvector.append(np.percentile(im,50))\n\n # add a contrast feature\n g = skif.greycomatrix(im, [1, 2], [0, np.pi/2],normed=True, symmetric=True)\n homogeneity = skif.greycoprops(g, 'homogeneity')\n\n # explict way to add feature elements one at a time\n homogeneity = np.ravel(homogeneity)\n for i in homogeneity:\n fvector.append(i)\n \n # compute Harris corner measure response image\n cor = skif.corner_harris(im,method='k',k=0.1,eps=1.5e-06,sigma=2)\n cor = np.ravel(cor)\n for i in cor:\n fvector.append(i)\n \n # edge filter an image using the Canny algorithm\n can = skif.canny(im,sigma=1.5, low_threshold=None, high_threshold=None)\n can = np.ravel(can)\n for i in can:\n fvector.append(i)\n \n # extract FAST corners for a given image\n fast = skif.corner_shi_tomasi(im,sigma=2)\n fast = np.ravel(fast)\n for i in fast:\n fvector.append(i)\n \n fvector = np.asarray(fvector)\n xtrain.append(fvector)\n\n return np.asarray(xtrain)\n ",
"_____no_output_____"
],
[
"# Extract Features from training\n\nxtrain = extract_features(imtrain)\n# Train Classifier\n\nfrom sklearn.ensemble import RandomForestClassifier\nclf = RandomForestClassifier(n_estimators=200)\nclf = clf.fit(xtrain, ytrain)\n",
"_____no_output_____"
],
[
"# Extract features from validation set\nxvalid = extract_features(imvalid)\n",
"_____no_output_____"
],
[
"# Run Classifier on validation set\nscoresvalid = clf.predict_proba(xvalid)",
"_____no_output_____"
],
[
"# Best f1 score report on validation set\n\nfrom sklearn.metrics import f1_score\n\n# Can add post-processing here if desired\n\nprob_syn = scoresvalid[:,1]\n\n# default threshold\nprint('default f1 score: {}'.format(np.round(f1_score(yvalid, prob_syn >=0.5),2)))\n\nf1_out = 0\nthresh = 0\nfor i in np.arange(0.0, 1, 0.05):\n f1_test = f1_score(yvalid, prob_syn > i)\n if f1_test > f1_out:\n f1_out = f1_test\n thresh = i\n\nprint('My best validation f1-score is: {} at {} threshold.'.format(np.round(f1_out,2), thresh))",
"default f1 score: 0.85\nMy best validation f1-score is: 0.86 at 0.45 threshold.\n"
],
[
"# here we can inspect results\n\nvalid_labels = np.asarray(prob_syn > thresh,dtype='int')\n# find images we did well on\nidx_correct_syn = np.where((valid_labels == yvalid) & (yvalid == 1))[0]\nidx_correct_nosyn = np.where((valid_labels == yvalid) & (yvalid == 0))[0]\n# find images we did poorly on\n\nidx_wrong_syn = np.where((valid_labels != yvalid) & (yvalid == 1))[0]\nidx_wrong_nosyn = np.where((valid_labels != yvalid) & (yvalid == 0))[0]\nimport ndparse as ndp\n\nprint('synapse present - true positive')\nndp.plot(imvalid[idx_correct_syn[3]])\n\nprint('no synapse present - true negative')\nndp.plot(imvalid[idx_correct_nosyn[3]])\n\nprint('synapse present - false negative')\nndp.plot(imvalid[idx_wrong_syn[3]])\n\nprint('no synapse present - false positive')\nndp.plot(imvalid[idx_wrong_nosyn[3]])",
"synapse present - true positive\n"
],
[
"# Validate performance on test set (should only run/score once!)\n\ndata = np.load('./synchallenge2017_test_notruth.npz')\n\nimtest = data['imtest']\nannotest = data['annotest']\n\n# Extract features from test set\nxtest = extract_features(imtest)\n\n# Run classifier on test set\nscoretest = clf.predict_proba(xvalid)\n\n# Post-processing\nprob_syntest = scoretest[:,1]\nsyntest_predict = prob_syntest > thresh\nsyntest_predict = np.asarray(syntest_predict,dtype = 'uint8')\n\n# save file and upload to google docs with label vector\nnp.save(your_name+'_synchallenge_testdata.npy',syntest_predict)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d050fc9bf3550e0dfe381f1941decaf8e5a5caf9 | 250,929 | ipynb | Jupyter Notebook | dom/data_preparation.ipynb | Nissman/DataPreparation | 9b6a92a8d15a51c053b4cd14674189fbab07e398 | [
"MIT"
] | null | null | null | dom/data_preparation.ipynb | Nissman/DataPreparation | 9b6a92a8d15a51c053b4cd14674189fbab07e398 | [
"MIT"
] | null | null | null | dom/data_preparation.ipynb | Nissman/DataPreparation | 9b6a92a8d15a51c053b4cd14674189fbab07e398 | [
"MIT"
] | null | null | null | 125,464.5 | 250,928 | 0.889841 | [
[
[
"# Загрузка зависимостей\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.preprocessing import MinMaxScaler",
"_____no_output_____"
],
[
"#Часто используемые функции\ndef hist_show(a, b = 50):\n plt.hist(a, bins = b)\n plt.show()\n \n \ndef replace_zero_to_mean(a):\n mean_data = int(a.mean())\n return a.replace(0, mean_data)\n \n \ndef mm_scaler(a):\n a = np.array(a).reshape(-1, 1)\n a =MinMaxScaler().fit_transform(a).flatten()\n return a\n\n\ndef standard_scaler(a):\n a = np.array(a).reshape(-1, 1)\n a =StandardScaler().fit_transform(a).flatten()\n return a",
"_____no_output_____"
],
[
"# Загрузка и анализ набора данных\ncountry_dataset = pd.read_csv('Набор_3_страны_мира.csv', sep=';')\ncountry_dataset.head(10)",
"_____no_output_____"
],
[
"# Создаем набор данных, в котором будут храниться обработанные данные\ndataset = pd.DataFrame()",
"_____no_output_____"
],
[
"# столбец \"region\"\ndata = country_dataset['region']\ndata = pd.get_dummies(data)\ndata = np.array([data[i[1]] * (i[0]+1) for i in enumerate(data)]).flatten()\ndata = data[data != 0]\nhist_show(data)",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"dataset['region'] = mm_scaler(data**0.5) \ndataset.head(10)",
"_____no_output_____"
],
[
"# столбец \"population\"\ndata = country_dataset['population']\nhist_show(data)",
"_____no_output_____"
],
[
"data = np.clip(data, 0, 94000000)\nhist_show(data)",
"_____no_output_____"
],
[
"data = replace_zero_to_mean(data)\nhist_show(data)",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"dataset['population'] = mm_scaler(np.log(data)) \ndataset.head(10)",
"_____no_output_____"
],
[
"# столбец \"area\"\ndata = country_dataset['area']\nhist_show(data)",
"_____no_output_____"
],
[
"data = np.clip(data, 0, 1275200)\nhist_show(data)",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"dataset['area'] = mm_scaler(np.log(data)) \ndataset.head(10)",
"_____no_output_____"
],
[
"# столбец \"infant_mortality\"\ncountry_dataset['infant_mortality'] = country_dataset['infant_mortality'].astype(str)\ncountry_dataset['infant_mortality'] = [x.replace(',', '.') for x in country_dataset['infant_mortality']]\ncountry_dataset['infant_mortality'] = country_dataset['infant_mortality'].astype(float)\ndata = country_dataset['infant_mortality']\nplt.hist(data, bins = 50)\nplt.show()",
"/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:839: RuntimeWarning: invalid value encountered in greater_equal\n keep = (tmp_a >= first_edge)\n/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:840: RuntimeWarning: invalid value encountered in less_equal\n keep &= (tmp_a <= last_edge)\n"
],
[
"data = data.replace(0, data.mean())\nplt.hist(data, bins = 50)\nplt.show()",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"plt.hist(np.log(data), bins = 50)\nplt.show()",
"_____no_output_____"
],
[
"dataset['infant_mortality'] = mm_scaler(np.log(data)) \ndataset.head(10)",
"_____no_output_____"
],
[
"# столбец \"gdp\"\ndata = country_dataset['gdp']\nhist_show(data)",
"/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:839: RuntimeWarning: invalid value encountered in greater_equal\n keep = (tmp_a >= first_edge)\n/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:840: RuntimeWarning: invalid value encountered in less_equal\n keep &= (tmp_a <= last_edge)\n"
],
[
"data = np.clip(data, 0, 38000)\nhist_show(data)",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"dataset['gdp'] = mm_scaler(data**0.5) \ndataset.head(10)",
"_____no_output_____"
],
[
"# столбец \"literacy\"\ncountry_dataset['literacy'] = country_dataset['literacy'].astype(str)\ncountry_dataset['literacy'] = [x.replace(',', '.') for x in country_dataset['literacy']]\ncountry_dataset['literacy'] = country_dataset['literacy'].astype(float)\ndata = country_dataset['literacy']\nhist_show(data)",
"/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:839: RuntimeWarning: invalid value encountered in greater_equal\n keep = (tmp_a >= first_edge)\n/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:840: RuntimeWarning: invalid value encountered in less_equal\n keep &= (tmp_a <= last_edge)\n"
],
[
"data = replace_zero_to_mean(data)\nhist_show(data)",
"_____no_output_____"
],
[
"data = np.clip(data, 38, 100)\nhist_show(data)",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"dataset['literacy'] = mm_scaler(data**0.5) \ndataset.head(10)",
"_____no_output_____"
],
[
"# столбец \"arable\"\ncountry_dataset['arable'] = country_dataset['arable'].astype(str)\ncountry_dataset['arable'] = [x.replace(',', '.') for x in country_dataset['arable']]\ncountry_dataset['arable'] = country_dataset['arable'].astype(float)\ndata = country_dataset['arable']\nhist_show(data)",
"/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:839: RuntimeWarning: invalid value encountered in greater_equal\n keep = (tmp_a >= first_edge)\n/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:840: RuntimeWarning: invalid value encountered in less_equal\n keep &= (tmp_a <= last_edge)\n"
],
[
"data = replace_zero_to_mean(data)\nhist_show(data)",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"dataset['arable'] = mm_scaler(data**0.5) \ndataset.head(10)",
"_____no_output_____"
],
[
"# столбец \"birthrate\"\ncountry_dataset['birthrate'] = country_dataset['birthrate'].astype(str)\ncountry_dataset['birthrate'] = [x.replace(',', '.') for x in country_dataset['birthrate']]\ncountry_dataset['birthrate'] = country_dataset['birthrate'].astype(float)\ndata = country_dataset['birthrate']\nhist_show(data)",
"/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:839: RuntimeWarning: invalid value encountered in greater_equal\n keep = (tmp_a >= first_edge)\n/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:840: RuntimeWarning: invalid value encountered in less_equal\n keep &= (tmp_a <= last_edge)\n"
],
[
"data = replace_zero_to_mean(data)\nhist_show(data)",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"dataset['birthrate'] = mm_scaler(data**0.5) \ndataset.head(10)",
"_____no_output_____"
],
[
"# столбец \"deathrate\"\ncountry_dataset['deathrate'] = country_dataset['deathrate'].astype(str)\ncountry_dataset['deathrate'] = [x.replace(',', '.') for x in country_dataset['deathrate']]\ncountry_dataset['deathrate'] = country_dataset['deathrate'].astype(float)\ndata = country_dataset['deathrate']\nhist_show(data)",
"/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:839: RuntimeWarning: invalid value encountered in greater_equal\n keep = (tmp_a >= first_edge)\n/srv/conda/envs/notebook/lib/python3.7/site-packages/numpy/lib/histograms.py:840: RuntimeWarning: invalid value encountered in less_equal\n keep &= (tmp_a <= last_edge)\n"
],
[
"data = np.clip(data, 0, 23)\nhist_show(data)",
"_____no_output_____"
],
[
"data = replace_zero_to_mean(data)\nhist_show(data)",
"_____no_output_____"
],
[
"hist_show(data**0.5)\nhist_show(np.log(data))",
"_____no_output_____"
],
[
"dataset['deathrate'] = mm_scaler(data**0.5) \ndataset.head(10)",
"_____no_output_____"
],
[
"dataset.to_csv('prepared_data.csv')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d050feabdb8b636e093a0ad8d60eb8ebd91f3f98 | 2,386 | ipynb | Jupyter Notebook | notebooks/knn - scikitlearn.ipynb | lincolwn/minicurso-ufc | ac1cd97cf7a40b022531ed53ef2b4d8c089a0f0d | [
"MIT"
] | null | null | null | notebooks/knn - scikitlearn.ipynb | lincolwn/minicurso-ufc | ac1cd97cf7a40b022531ed53ef2b4d8c089a0f0d | [
"MIT"
] | null | null | null | notebooks/knn - scikitlearn.ipynb | lincolwn/minicurso-ufc | ac1cd97cf7a40b022531ed53ef2b4d8c089a0f0d | [
"MIT"
] | null | null | null | 22.509434 | 79 | 0.520117 | [
[
[
"from sklearn.neighbors import KNeighborsClassifier",
"_____no_output_____"
],
[
"entradas, saidas = [], []\nwith open('../haberman.data', 'r') as file:\n for linha in file.readlines():\n attr = linha.replace('\\n', '').split(',')\n entradas.append([int(attr[0]), int(attr[2])])\n saidas.append(int(attr[3]))",
"_____no_output_____"
],
[
"p = 0.6 # porcentagem dos dados para treinamento",
"_____no_output_____"
],
[
"limite = int(p * len(entradas))",
"_____no_output_____"
],
[
"knn = KNeighborsClassifier(n_neighbors=15)\nknn.fit(entradas[:limite], saidas[:limite])\nlabels = knn.predict(entradas[limite:])\n\nacertos = 0\nfor i, classe in enumerate(labels):\n if classe == saidas[i + limite]:\n acertos += 1\n\nprint(f'conjunto de treinamento: {limite}')\nprint(f'conjunto de teste: {len(entradas) - limite}')\nprint(f'total de acertos: {acertos}')\nprint(f'percentual de acertos: {100*(acertos/(len(entradas) - limite))}')",
"conjunto de treinamento: 183\nconjunto de teste: 123\ntotal de acertos: 92\npercentual de acertos: 74.79674796747967\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d051060f97fbbf6404674ce33024c773d4ffa76e | 1,563 | ipynb | Jupyter Notebook | SVD.ipynb | twy80/svd_image | 625ea344b79de5a6a6fed3f287c7d5491a70cec9 | [
"MIT"
] | null | null | null | SVD.ipynb | twy80/svd_image | 625ea344b79de5a6a6fed3f287c7d5491a70cec9 | [
"MIT"
] | null | null | null | SVD.ipynb | twy80/svd_image | 625ea344b79de5a6a6fed3f287c7d5491a70cec9 | [
"MIT"
] | null | null | null | 36.348837 | 342 | 0.605246 | [
[
[
"# A concise explanation of SVD\n\nThe SVD of a matrix $A$ is the factorization of $A$ into the product of three matrices $A = UDV^T$, where the columns of $U$ and $V$ are orthonormal and the matrix $D$ is diagonal with nonnegative real entries. This decomposition can also be written as\n\n$$ A = \\sum_{k=1}^{r} \\sigma_k u_k v_k^T$$\n\nwhere the singular value $\\sigma_k$ is the $k$-th element of $D$ arranged in decending order, and $u_k$ and $v_k$ are the $k$-th columns of $U$ and $V$. The approximation of $A$ can thus be obtained as follows:\n\n$$ \\tilde{A} = \\sum_{k=1}^{n} \\sigma_k u_k v_k^T$$\n\nwhere $n (\\le r)$ is the rank of the compressed matrix $\\tilde{A}$. If $A$ contains pixels of an image, $\\tilde{A}$ is a compressed image using a reduced amount of memory. This is what is meant by image compression via SVD. If a color image is given, the SVD can be performed to each of the three channels (e.g. red, green & blue).\n",
"_____no_output_____"
]
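,
[
"# A minimal sketch of the rank-n approximation described above (an added example,\n# not part of the original note): compress a matrix by keeping its n largest\n# singular values.\nimport numpy as np\n\nA = np.random.rand(64, 64)  # stand-in for one image channel\nU, s, Vt = np.linalg.svd(A, full_matrices=False)\nn = 10  # rank of the compressed matrix\nA_tilde = U[:, :n] @ np.diag(s[:n]) @ Vt[:n, :]\nprint('relative error:', np.linalg.norm(A - A_tilde) / np.linalg.norm(A))",
"_____no_output_____"
]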
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
d0510fbe666b7a9b0b6a3bf385a5d169eb831368 | 198,661 | ipynb | Jupyter Notebook | Training/baseline_cnn.ipynb | hammad-mohi/FacialAgeEstimator | 1372e4a72afb94b5ae32394109fae9e9982c8b0b | [
"MIT"
] | null | null | null | Training/baseline_cnn.ipynb | hammad-mohi/FacialAgeEstimator | 1372e4a72afb94b5ae32394109fae9e9982c8b0b | [
"MIT"
] | null | null | null | Training/baseline_cnn.ipynb | hammad-mohi/FacialAgeEstimator | 1372e4a72afb94b5ae32394109fae9e9982c8b0b | [
"MIT"
] | null | null | null | 394.952286 | 20,292 | 0.941886 | [
[
[
"import numpy as np\nimport time\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torchvision\nimport torchvision.models as models\nimport torchvision.transforms as transforms\nimport matplotlib.pyplot as plt\nfrom PIL import Image",
"_____no_output_____"
],
[
"%run Accuracy_Module.py\n%run DataLoading.py\n%run load_and_organize_dataset.py\n%run training_module.py",
"_____no_output_____"
],
[
"train_loader, val_loader, test_loader = load_dataset(32)",
"_____no_output_____"
],
[
"class conv_net(nn.Module):\n def __init__(self):\n super(conv_net, self).__init__()\n self.name = \"cnn\"\n self.conv1 = nn.Conv2d(3, 12, 5) \n self.pool1 = nn.MaxPool2d(5, 5)\n self.conv2 = nn.Conv2d(12, 48, 5)\n self.pool2 = nn.MaxPool2d(2, 2)\n self.conv3 = nn.Conv2d(48, 96, 5)\n self.fc1 = nn.Linear(3456, 3456)\n self.fc2 = nn.Linear(3456, 1024)\n self.fc3 = nn.Linear(1024, 256)\n self.fc4 = nn.Linear(256, 1)\n def forward(self, x):\n x = x.cuda()\n x = self.pool1(F.relu(self.conv1(x)))\n x = self.pool2(F.relu(self.conv2(x)))\n x = self.pool2(F.relu(self.conv3(x)))\n x = x.view(x.shape[0],-1)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = F.relu(self.fc3(x))\n x = self.fc4(x)\n\n x = x.squeeze(1)\n \n return x",
"_____no_output_____"
],
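[
"# Sanity check (an added sketch) of the flattened feature size, assuming 200x200 RGB\n# inputs, which matches fc1's 3456 = 96*6*6 input features. Needs a CUDA device, like\n# the rest of this notebook, because forward() calls x.cuda().\nprobe = conv_net().cuda()\nwith torch.no_grad():\n    out = probe(torch.randn(1, 3, 200, 200))\nprint(out.shape)  # expected: torch.Size([1])",
"_____no_output_____"
],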
[
"model = conv_net()\nmodel.cuda()\ntrain_net(model, train_loader, val_loader, batch_size=32, learning_rate=1e-5, num_epochs=5)",
"0.00% complete for this epoch\n18.87% complete for this epoch\n37.74% complete for this epoch\n56.60% complete for this epoch\n75.47% complete for this epoch\n94.34% complete for this epoch\nEpoch: 1, Training Loss: 0.981, Training R^2: 0.006, Validation R^2: 0.010\n113.21% complete for this epoch\n132.08% complete for this epoch\n150.94% complete for this epoch\n169.81% complete for this epoch\n188.68% complete for this epoch\nEpoch: 2, Training Loss: 34.208, Training R^2: 0.059, Validation R^2: 0.064\n207.55% complete for this epoch\n226.42% complete for this epoch\n245.28% complete for this epoch\n264.15% complete for this epoch\n283.02% complete for this epoch\nEpoch: 3, Training Loss: 1.822, Training R^2: 0.175, Validation R^2: 0.179\n301.89% complete for this epoch\n320.75% complete for this epoch\n339.62% complete for this epoch\n358.49% complete for this epoch\n377.36% complete for this epoch\n396.23% complete for this epoch\nEpoch: 4, Training Loss: 4.870, Training R^2: 0.335, Validation R^2: 0.336\n415.09% complete for this epoch\n433.96% complete for this epoch\n452.83% complete for this epoch\n471.70% complete for this epoch\n490.57% complete for this epoch\nEpoch: 5, Training Loss: 8.798, Training R^2: 0.408, Validation R^2: 0.409\n"
],
[
"get_off_accuracy(vgg_class, test_loader)",
"+/- 1 years accuracy: 19.58%\n+/- 5 years accuracy: 69.27%\n+/- 10 years accuracy: 88.58%\n"
],
[
"get_off_accuracy(vgg_class, val_loader)",
"+/- 1 years accuracy: 18.93%\n+/- 5 years accuracy: 67.63%\n+/- 10 years accuracy: 88.88%\n"
],
[
"get_off_accuracy(vgg_class, train_loader)",
"+/- 1 years accuracy: 29.74%\n+/- 5 years accuracy: 92.92%\n+/- 10 years accuracy: 99.33%\n"
],
[
"k = 0\nfor image, label in test_loader:\n img = image[0]\n img = np.transpose(img, [1,2,0])\n img = img / 2 + 0.5\n plt.subplot(3, 5, k+1)\n plt.axis('off')\n plt.imshow(img)\n plt.show()\n k += 1\n print(label[0], vgg_class(image)[0])\n if k > 10:\n break",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0511fb628dffb6c0581b084fb223660221d9587 | 262,323 | ipynb | Jupyter Notebook | seminar-5-dt-rf/5_1_dt_2_draft.ipynb | kurmukovai/iitp-ml-ds | 73ec8392fc3701b0e8e17ad12a9ad4f7889f47c1 | [
"MIT"
] | 1 | 2022-02-17T07:16:44.000Z | 2022-02-17T07:16:44.000Z | seminar-5-dt-rf/5_1_dt_2_draft.ipynb | kurmukovai/iitp-ml-ds | 73ec8392fc3701b0e8e17ad12a9ad4f7889f47c1 | [
"MIT"
] | null | null | null | seminar-5-dt-rf/5_1_dt_2_draft.ipynb | kurmukovai/iitp-ml-ds | 73ec8392fc3701b0e8e17ad12a9ad4f7889f47c1 | [
"MIT"
] | null | null | null | 243.116775 | 88,224 | 0.916751 | [
[
[
"import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# 1. Деревья решений для классификации (продолжение)\n\nНа прошлом занятии мы разобрали идею Деревьев решений:\n\n\n\n\nДавайте теперь разберемся **как происходит разделения в каждом узле** то есть как проходит этап **обучения модели**. Есть как минимум две причины в этом разобраться : во-первых это позволит нам решать задачи классификации на 3 и более классов, во-вторых это даст нам возможность считать *важность* признаков в обученной модели.\n\nДля начала посмотрим какие бывают деревья решений",
"_____no_output_____"
],
[
"\n----\nДерево решений вообще говоря **не обязано быть бинарным**, на практике однако используются именно бинарные деревья, поскольку для любоого не бинарного дерева решений **можно построить бинарное** (при этом увеличится глубина дерева).\n\n### 1. Деревья решений использую простой одномерный предикат для разделения объектов\n\nИмеется ввиду что в каждом узле разделение объектов (и создание двух новых узлов) происходит **по 1 (одному)** признаку: \n\n*Все объекты со значением некоторого признака меньше трешхолда отправляются в один узел, а больше - в другой:*\n\n$$\n[x_j < t]\n$$\n\nВообще говоря это совсем не обязательно, например в каждом отдельном узле можно строить любую модель (например логистическую регрессию или KNN), рассматривая сразу несколько признаков.\n\n### 2. Оценка качества \n\nМы говорили про простой функционал качества разбиения (**выбора трешхолда**): количество ошибок (1-accuracy). \nНа практике используются два критерия: Gini's impurity index и Information gain.\n\n**Индекс Джини**\n$$\nI_{Gini} = 1 - \\sum_i^K p_i^2 \n$$\n\nгде $K$ - количество классов, a $p_i = \\frac{|n_i|}{n}$ - доля представителей $i$ - ого класса в данном узле\n\n\n**Энтропия**\n\n$$\nH(p) = - \\sum_i^K p_i\\log(p_i)\n$$\n\n**Информационный критерий**\n$$\nIG(p) = H(\\text{parent}) - H(\\text{child})\n$$\n\n\n#### Разделение производится по тому трешхолду и тому признаку по которому взвешенное среднее функционала качества в узлах потомках наименьшее.\n\n\n### 3. Критерий остановки\n\nМы с вами говорили о таких параметрах Решающего дерева как минимальное число объектов в листе,\nи минимальное число объектов в узле, для того чтобы он был разделен на два. Еще один критерий - \nглубина дерева. Возможны и другие.\n\n* Ограничение числа объектов в листе\n* Ограничение числа объектов в узле, для того чтобы он был разделен\n* Ограничение глубины дерева\n* Ограничение минимального прироста Энтропии или Информационного критерия при разделении\n* Остановка в случае если все объекты в листе принадлежат к одному классу\n\nНа прошлой лекции мы обсуждали технику которая называется **Прунинг** (pruning) это альтернатива Критериям остановки, когда сначала строится переобученное дерево, а затем она каким то образом упрощается. На практике по ряду причин чаще используются критерии остановки, а не прунинг.\n\nПодробнее см. https://github.com/esokolov/ml-course-hse/blob/master/2018-fall/lecture-notes/lecture07-trees.pdf\n\nОссобенности разбиения непрерывных признаков\n* http://kevinmeurer.com/a-simple-guide-to-entropy-based-discretization/\n* http://clear-lines.com/blog/post/Discretizing-a-continuous-variable-using-Entropy.aspx\n---",
"_____no_output_____"
],
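[
"# A minimal sketch (an added example) of how sklearn exposes the stopping criteria\n# listed above as DecisionTreeClassifier parameters.\nfrom sklearn.tree import DecisionTreeClassifier\n\nDecisionTreeClassifier(\n    criterion='gini',              # or 'entropy'\n    max_depth=5,                   # limit on the tree depth\n    min_samples_split=10,          # min objects in a node for it to be split\n    min_samples_leaf=5,            # min objects in a leaf\n    min_impurity_decrease=0.01,    # min impurity decrease required for a split\n)",
"_____no_output_____"
],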
[
"## 1.1. Оценка качества разделения в узле",
"_____no_output_____"
]
],
[
[
"def gini_impurity(y_current):\n \n n = y_current.shape[0]\n val, count = np.unique(y_current, return_counts=True)\n gini = 1 - ((count/n)**2).sum()\n \n return gini\n\ndef entropy(y_current):\n \n gini = 1\n n = y_current.shape[0]\n val, count = np.unique(y_current, return_counts=True)\n p = count/n\n igain = p.dot(np.log(p))\n \n return igain",
"_____no_output_____"
],
[
"n = 100\nY_example = np.zeros((100,100))\n\nfor i in range(100):\n for j in range(i, 100):\n Y_example[i, j] = 1\n \ngini = [gini_impurity(y) for y in Y_example]\nig = [-entropy(y) for y in Y_example]",
"_____no_output_____"
],
[
"plt.figure(figsize=(7,7))\n\nplt.plot(np.linspace(0,1,100), gini, label='Index Gini');\nplt.plot(np.linspace(0,1,100), ig, label ='Entropy');\nplt.legend()\nplt.xlabel('Доля примеров\\n положительного класса')\nplt.ylabel('Значение оптимизируемого\\n функционала');",
"_____no_output_____"
]
],
[
[
"## 1.2. Пример работы Решающего дерева",
"_____no_output_____"
],
[
"**Индекс Джини** и **Информационный критерий** это меры сбалансированности вектора (насколько значения объектов в наборе однородны). Максимальная неоднородность когда объектов разных классов поровну. Максимальная однородность когда в наборе объекты одного класса. \n\nРазбивая множество объектов на два подмножества, мы стремимся уменьшить неоднородность в каждом подмножестве.\nПосмотрем на примере Ирисов Фишера",
"_____no_output_____"
],
[
"### Ирисы Фишера",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_iris\nfrom sklearn.tree import DecisionTreeClassifier\n\n\niris = load_iris()\nmodel = DecisionTreeClassifier()\nmodel = model.fit(iris.data, iris.target)",
"_____no_output_____"
],
[
"feature_names = ['sepal length', 'sepal width', 'petal length', 'petal width']\ntarget_names = ['setosa', 'versicolor', 'virginica']",
"_____no_output_____"
],
[
"model.feature_importances_",
"_____no_output_____"
],
[
"np.array(model.decision_path(iris.data).todense())[0]",
"_____no_output_____"
],
[
"np.array(model.decision_path(iris.data).todense())[90]",
"_____no_output_____"
],
[
"iris.data[0]",
"_____no_output_____"
],
[
"model.predict(iris.data)",
"_____no_output_____"
],
[
"model.tree_.node_count",
"_____no_output_____"
]
],
[
[
"### Цифры. Интерпретируемость",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_digits\n\nX, y = load_digits(n_class=2, return_X_y=True)",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,12))\nfor i in range(9):\n ax = plt.subplot(3,3,i+1)\n ax.imshow(X[i].reshape(8,8), cmap='gray')",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score",
"_____no_output_____"
],
[
"model = DecisionTreeClassifier()\nmodel.fit(X, y)\ny_pred = model.predict(X)\n\nprint(accuracy_score(y, y_pred))\nprint(X.shape)",
"1.0\n(360, 64)\n"
],
[
"np.array(model.decision_path(X).todense())[0]",
"_____no_output_____"
],
[
"model.feature_importances_",
"_____no_output_____"
],
[
"plt.imshow(model.feature_importances_.reshape(8,8));",
"_____no_output_____"
],
[
"from sklearn.tree import export_graphviz\n\nexport_graphviz(model, out_file='tree.dot', filled=True)",
"_____no_output_____"
],
[
"# #sudo apt-get install graphviz\n\n# !dot -Tpng 'tree.dot' -o 'tree.png'\n\n# ",
"_____no_output_____"
],
[
"np.array(model.decision_path(X).todense())[0]",
"_____no_output_____"
],
[
"plt.imshow(X[0].reshape(8,8))",
"_____no_output_____"
]
],
[
[
"## 2.3. Решающие деревья легко обобщаются на задачу многоклассовой классификации\n\n### Пример с рукописными цифрами",
"_____no_output_____"
]
],
[
[
"X, y = load_digits(n_class=10, return_X_y=True)",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,12))\nfor i in range(9):\n ax = plt.subplot(3,3,i+1)\n ax.imshow(X[i].reshape(8,8), cmap='gray')\n ax.set_title(y[i])\n ax.set_xticks([])\n ax.set_yticks([])",
"_____no_output_____"
],
[
"model = DecisionTreeClassifier()\nmodel.fit(X, y)\ny_pred = model.predict(X)\n\nprint(accuracy_score(y, y_pred))",
"1.0\n"
],
[
"plt.imshow(model.feature_importances_.reshape(8,8));",
"_____no_output_____"
],
[
"model.feature_importances_",
"_____no_output_____"
]
],
[
[
"### Вопрос: откуда мы получаем feature importance?",
"_____no_output_____"
],
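[
"# A hand-rolled sketch (an addition) answering the question above for the digits model:\n# impurity-based importance sums, over the nodes splitting on a feature, the weighted\n# impurity decrease n_node*imp - n_left*imp_left - n_right*imp_right, then normalizes.\nt = model.tree_\nimportances = np.zeros(X.shape[1])\nfor node in range(t.node_count):\n    left, right = t.children_left[node], t.children_right[node]\n    if left == -1:  # leaf node\n        continue\n    n = t.weighted_n_node_samples\n    decrease = n[node] * t.impurity[node] - n[left] * t.impurity[left] - n[right] * t.impurity[right]\n    importances[t.feature[node]] += decrease\nimportances /= importances.sum()\nnp.allclose(importances, model.feature_importances_)  # should be True",
"_____no_output_____"
],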
[
"## 2.4. Пример на котором дерево решений строит очень сложную разделяющую кривую\n\nПример взят отсюда https://habr.com/ru/company/ods/blog/322534/#slozhnyy-sluchay-dlya-derevev-resheniy .\n\nКак мы помним Деревья используют одномерный предикат для разделени множества объектов.\nЭто значит что если данные плохо разделимы по **каждому** (индивидуальному) признаку по отдельности, результирующее решающее правило может оказаться очень сложным.",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier",
"_____no_output_____"
],
[
"def form_linearly_separable_data(n=500, x1_min=0, x1_max=30, x2_min=0, x2_max=30):\n data, target = [], []\n for i in range(n):\n x1, x2 = np.random.randint(x1_min, x1_max), np.random.randint(x2_min, x2_max)\n\n if np.abs(x1 - x2) > 0.5:\n data.append([x1, x2])\n target.append(np.sign(x1 - x2))\n return np.array(data), np.array(target)\n\nX, y = form_linearly_separable_data()\nplt.figure(figsize=(10,10))\nplt.scatter(X[:, 0], X[:, 1], c=y, cmap='autumn');",
"_____no_output_____"
]
],
[
[
"Давайте посмотрим как данные выглядит в проекции на 1 ось",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(15,5))\nax1 = plt.subplot(1,2,1)\nax1.set_title('Проекция на ось $X_0$')\nax1.hist(X[y==1, 0], alpha=.3);\nax1.hist(X[y==-1, 0], alpha=.6);\n\nax2 = plt.subplot(1,2,2)\nax2.set_title('Проекция на ось $X_1$')\nax2.hist(X[y==1, 1], alpha=.3);\nax2.hist(X[y==-1, 1], alpha=.6);\n",
"_____no_output_____"
],
[
"def get_grid(data, eps=0.01):\n x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1\n y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1\n return np.meshgrid(np.arange(x_min, x_max, eps),\n np.arange(y_min, y_max, eps))",
"_____no_output_____"
],
[
"tree = DecisionTreeClassifier(random_state=17).fit(X, y)\n\n\nxx, yy = get_grid(X, eps=.05)\npredicted = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\nplt.figure(figsize=(10,10))\nplt.pcolormesh(xx, yy, predicted, cmap='autumn', alpha=0.3)\nplt.scatter(X[y==1, 0], X[y==1, 1], marker='x', s=100, cmap='autumn', linewidth=1.5)\nplt.scatter(X[y==-1, 0], X[y==-1, 1], marker='o', s=100, cmap='autumn', edgecolors='k',linewidth=1.5)\nplt.title('Easy task. Decision tree compexifies everything');",
"<ipython-input-33-7799b03e2b13>:7: MatplotlibDeprecationWarning: shading='flat' when X and Y have the same dimensions as C is deprecated since 3.3. Either specify the corners of the quadrilaterals with X and Y, or pass shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This will become an error two minor releases later.\n plt.pcolormesh(xx, yy, predicted, cmap='autumn', alpha=0.3)\n"
],
[
"# export_graphviz(tree, out_file='complex_tree.dot', filled=True)\n# !dot -Tpng 'complex_tree.dot' -o 'complex_tree.png'",
"_____no_output_____"
]
],
[
[
"## 2.5. Деревья решений для регрессии (кратко)\n\nсм. sklearn.DecisionTreeRegressor",
"_____no_output_____"
],
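[
"# A minimal sketch of a regression tree (an added example): fit a piecewise-constant\n# approximation to a sine wave.\nfrom sklearn.tree import DecisionTreeRegressor\n\nX_reg = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)\ny_reg = np.sin(X_reg).ravel()\nreg = DecisionTreeRegressor(max_depth=4).fit(X_reg, y_reg)\nplt.plot(X_reg, y_reg, label='sin(x)')\nplt.plot(X_reg, reg.predict(X_reg), label='tree, depth 4')\nplt.legend();",
"_____no_output_____"
],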
[
"# 3. Ансамблирование деревьев. Случайный лес.",
"_____no_output_____"
],
[
"Что если у нас несколько классификаторов (каждый может быть не очень *умным*) ошибающихся на разных объектах\nТогда если в качестве предсказания мы будем использовать *моду* мы можем расчитывать на лучшую предсказательную силу.\n\n\n### Идея 1\n\nКак получить модели которые ошибаются в разных местах?\nДавайте брать *тупые* деревья но учить их на **разных подвыборках признаков** !",
"_____no_output_____"
],
[
"### Идея 2\n\nКак получить модели которые ошибаются в разных местах?\n\nДавайте брать *тупые* деревья, но учить их на **разных подвыборках объектов** !",
"_____no_output_____"
],
[
"### Результат: Случайный лес.\n\nsklearn.ensemble RandomForrest",
"_____no_output_____"
]
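,
[
"# A minimal sketch (an addition): a random forest combines the two ideas above,\n# bootstrap subsamples of objects plus random feature subsets, and then votes.\nfrom sklearn.ensemble import RandomForestClassifier\n\nrf = RandomForestClassifier(n_estimators=100, max_features='sqrt', random_state=17)\nrf.fit(X, y)  # X, y from the linearly separable example above\nrf.score(X, y)",
"_____no_output_____"
]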
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d05121b11e5c910585c8ba9ca32e5e0fc91ac29e | 16,692 | ipynb | Jupyter Notebook | examples/v0.0.2/ex01.ipynb | RayanaPalharini/GOES | 9530fa780d793d05b15e9ca6d397d5ebe0546a92 | [
"BSD-3-Clause"
] | 1 | 2021-02-22T21:00:46.000Z | 2021-02-22T21:00:46.000Z | examples/v0.0.2/ex01.ipynb | RayanaPalharini/GOES | 9530fa780d793d05b15e9ca6d397d5ebe0546a92 | [
"BSD-3-Clause"
] | null | null | null | examples/v0.0.2/ex01.ipynb | RayanaPalharini/GOES | 9530fa780d793d05b15e9ca6d397d5ebe0546a92 | [
"BSD-3-Clause"
] | null | null | null | 35.21519 | 132 | 0.560328 | [
[
[
"import GOES",
"_____no_output_____"
],
[
"# gets help of GOES\nhelp(GOES.download)",
"Help on function download in module GOES.downloads.download_data:\n\ndownload(Satellite, Product, DateTimeIni=None, DateTimeFin=None, Domain=None, Channel=None, Rename_fmt=False, PathOut='')\n Download data of GOES-16 and GOES-17 from Amazon server.\n This function is based on the code of\n blaylockbk https://gist.github.com/blaylockbk/d60f4fce15a7f0475f975fc57da9104d\n \n \n Parameters\n ----------\n Satellite : string\n Indicates serie of GOES, the options are:\n goes16\n goes17\n \n \n Product : string\n Indicates the instrument and level of product. The products can be list using:\n GOES.show_products()\n \n \n DateTimeIni : string (None)\n String that indicates the initial datetime, their structure\n must be yyyymmdd-HHMMSS\n Example:\n DateTimeIni='20180520-183000'\n \n \n DateTimeFin : string (None)\n String that indicates the final datetime, their structure\n must be yyyymmdd-HHMMSS\n Example:\n DateTimeFin='20180520-183000'\n \n \n Domain : string (None)\n This parameter just is necessary with Mesoescale products.\n The options are:\n M1 : Mesoscale 1\n M2 : Mesoscale 2\n \n \n Channel : string list (None)\n This parameter just is necessary with ABI-L1b-Rad and ABI-L2-CMIP products.\n String list indicates the channel or channels that will be download.\n The channels can be mentioned individually as elements of the list\n or as a sequence of channels separated by a hyphen ('-').\n Example:\n Channel = ['02','08','09','10','11','13']\n Channel = ['02','08-11','13']\n \n \n Rename_fmt : bool (False) or string\n Is an optional parameter and its default value is Rename_fmt=False which\n indicates that the file name is kept. If would you like that the file name\n just keep the start time of scan you have to define the format of datetime.\n See the next link to know about datetime format:\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes).\n Example:\n Rename_fmt = '%Y%m%d%H%M%S'\n Rename_fmt = '%Y%m%d%H%M'\n Rename_fmt = '%Y%j%H%M'\n \n \n PathOut : string\n Optional string that indicates the folder where data will be download.\n The default value is folder where python was open.\n\n"
],
[
"# gets products name\nGOES.show_products()",
" \nProducts for goes16:\n\tABI-L1b-RadC\n\tABI-L1b-RadF\n\tABI-L1b-RadM\n\tABI-L2-ACHAC\n\tABI-L2-ACHAF\n\tABI-L2-ACHAM\n\tABI-L2-ACHTF\n\tABI-L2-ACHTM\n\tABI-L2-ACMC\n\tABI-L2-ACMF\n\tABI-L2-ACMM\n\tABI-L2-ACTPC\n\tABI-L2-ACTPF\n\tABI-L2-ACTPM\n\tABI-L2-ADPC\n\tABI-L2-ADPF\n\tABI-L2-ADPM\n\tABI-L2-AODC\n\tABI-L2-AODF\n\tABI-L2-CMIPC\n\tABI-L2-CMIPF\n\tABI-L2-CMIPM\n\tABI-L2-CODC\n\tABI-L2-CODF\n\tABI-L2-CPSC\n\tABI-L2-CPSF\n\tABI-L2-CPSM\n\tABI-L2-CTPC\n\tABI-L2-CTPF\n\tABI-L2-DMWC\n\tABI-L2-DMWF\n\tABI-L2-DMWM\n\tABI-L2-DSIC\n\tABI-L2-DSIF\n\tABI-L2-DSIM\n\tABI-L2-DSRC\n\tABI-L2-DSRF\n\tABI-L2-DSRM\n\tABI-L2-FDCC\n\tABI-L2-FDCF\n\tABI-L2-LSTC\n\tABI-L2-LSTF\n\tABI-L2-LSTM\n\tABI-L2-LVMPC\n\tABI-L2-LVMPF\n\tABI-L2-LVMPM\n\tABI-L2-LVTPC\n\tABI-L2-LVTPF\n\tABI-L2-LVTPM\n\tABI-L2-MCMIPC\n\tABI-L2-MCMIPF\n\tABI-L2-MCMIPM\n\tABI-L2-RRQPEF\n\tABI-L2-RSRC\n\tABI-L2-RSRF\n\tABI-L2-SSTF\n\tABI-L2-TPWC\n\tABI-L2-TPWF\n\tABI-L2-TPWM\n\tABI-L2-VAAF\n\tGLM-L2-LCFA\n\tSUVI-L1b-Fe093\n\tSUVI-L1b-Fe13\n\tSUVI-L1b-Fe131\n\tSUVI-L1b-Fe17\n\tSUVI-L1b-Fe171\n\tSUVI-L1b-Fe195\n\tSUVI-L1b-Fe284\n\tSUVI-L1b-He303\n \nProducts for goes17:\n\tABI-L1b-RadC\n\tABI-L1b-RadF\n\tABI-L1b-RadM\n\tABI-L2-ACHAC\n\tABI-L2-ACHAF\n\tABI-L2-ACHAM\n\tABI-L2-ACHTF\n\tABI-L2-ACHTM\n\tABI-L2-ACMC\n\tABI-L2-ACMF\n\tABI-L2-ACMM\n\tABI-L2-ACTPC\n\tABI-L2-ACTPF\n\tABI-L2-ACTPM\n\tABI-L2-ADPC\n\tABI-L2-ADPF\n\tABI-L2-ADPM\n\tABI-L2-AODC\n\tABI-L2-AODF\n\tABI-L2-CMIPC\n\tABI-L2-CMIPF\n\tABI-L2-CMIPM\n\tABI-L2-CODC\n\tABI-L2-CODF\n\tABI-L2-CPSC\n\tABI-L2-CPSF\n\tABI-L2-CPSM\n\tABI-L2-CTPC\n\tABI-L2-CTPF\n\tABI-L2-DMWC\n\tABI-L2-DMWF\n\tABI-L2-DMWM\n\tABI-L2-DSIC\n\tABI-L2-DSIF\n\tABI-L2-DSIM\n\tABI-L2-DSRC\n\tABI-L2-DSRF\n\tABI-L2-DSRM\n\tABI-L2-FDCC\n\tABI-L2-FDCF\n\tABI-L2-LSTC\n\tABI-L2-LSTF\n\tABI-L2-LSTM\n\tABI-L2-LVMPC\n\tABI-L2-LVMPF\n\tABI-L2-LVMPM\n\tABI-L2-LVTPC\n\tABI-L2-LVTPF\n\tABI-L2-LVTPM\n\tABI-L2-MCMIPC\n\tABI-L2-MCMIPF\n\tABI-L2-MCMIPM\n\tABI-L2-RRQPEF\n\tABI-L2-RSRC\n\tABI-L2-RSRF\n\tABI-L2-SSTF\n\tABI-L2-TPWC\n\tABI-L2-TPWF\n\tABI-L2-TPWM\n\tABI-L2-VAAF\n\tGLM-L2-LCFA\n\tSUVI-L1b-Fe093\n\tSUVI-L1b-Fe13\n\tSUVI-L1b-Fe131\n\tSUVI-L1b-Fe17\n\tSUVI-L1b-Fe171\n\tSUVI-L1b-Fe195\n\tSUVI-L1b-Fe284\n\tSUVI-L1b-He303\n \nDescriptions of products in the next link: \n\thttps://docs.opendata.aws/noaa-goes16/cics-readme.html#about-the-data \n\n"
],
[
"# download one ABI's channels of full disk\nGOES.download('goes16', 'ABI-L1b-RadF',\n DateTimeIni = '20200320-203000', DateTimeFin = '20200320-205000', \n Channel = ['13'], PathOut='/home/joao/Downloads/')",
"Channel list: ['13'] \n\n \nServer: s3://noaa-goes16/ABI-L1b-RadF/2020/080/20/\nPathOut: /home/joao/Downloads/\n\tOR_ABI-L1b-RadF-M6C13_G16_s20200802030177_e20200802039497_c20200802039578.nc\t100%\t2.47 min\n\tOR_ABI-L1b-RadF-M6C13_G16_s20200802040177_e20200802049497_c20200802049570.nc\t100%\t2.43 min\n"
],
[
"# download some ABI's channels of full disk\nGOES.download('goes16', 'ABI-L1b-RadF',\n DateTimeIni = '20200320-203000', DateTimeFin = '20200320-205000',\n Channel = ['08-10','13'], PathOut='/home/joao/Downloads/')",
"Channel list: ['08', '09', '10', '13'] \n\n \nServer: s3://noaa-goes16/ABI-L1b-RadF/2020/080/20/\nPathOut: /home/joao/Downloads/\n\tOR_ABI-L1b-RadF-M6C08_G16_s20200802030177_e20200802039485_c20200802039554.nc\t100%\t3.02 min\n\tOR_ABI-L1b-RadF-M6C08_G16_s20200802040177_e20200802049485_c20200802049553.nc\t100%\t2.43 min\n\tOR_ABI-L1b-RadF-M6C09_G16_s20200802030177_e20200802039491_c20200802039569.nc\t100%\t2.08 min\n\tOR_ABI-L1b-RadF-M6C09_G16_s20200802040177_e20200802049491_c20200802049560.nc\t100%\t1.58 min\n\tOR_ABI-L1b-RadF-M6C10_G16_s20200802030177_e20200802039497_c20200802039560.nc\t100%\t1.40 min\n\tOR_ABI-L1b-RadF-M6C10_G16_s20200802040177_e20200802049497_c20200802049556.nc\t100%\t1.72 min\n\tOR_ABI-L1b-RadF-M6C13_G16_s20200802030177_e20200802039497_c20200802039578.nc\t100%\t2.33 min\n\tOR_ABI-L1b-RadF-M6C13_G16_s20200802040177_e20200802049497_c20200802049570.nc\t100%\t1.20 min\n"
],
[
"# download ABI channels of mesoscale 1\nGOES.download('goes16', 'ABI-L1b-RadM',\n DateTimeIni = '20200320-203000', DateTimeFin = '20200320-204000',\n Domain='M1', Channel = ['08','13'], PathOut='/home/joao/Downloads/')",
"Channel list: ['08', '13'] \n\n \nServer: s3://noaa-goes16/ABI-L1b-RadM/2020/080/20/\nPathOut: /home/joao/Downloads/\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802030253_e20200802030310_c20200802030372.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802031224_e20200802031281_c20200802031335.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802032224_e20200802032281_c20200802032330.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802033224_e20200802033281_c20200802033335.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802034224_e20200802034281_c20200802034340.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802035224_e20200802035281_c20200802035338.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802036224_e20200802036281_c20200802036329.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802037224_e20200802037281_c20200802037322.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802038224_e20200802038281_c20200802038336.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C08_G16_s20200802039224_e20200802039281_c20200802039334.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802030253_e20200802030321_c20200802030389.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802031224_e20200802031293_c20200802031339.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802032224_e20200802032293_c20200802032359.nc\t100%\t0.03 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802033224_e20200802033293_c20200802033348.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802034224_e20200802034293_c20200802034342.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802035224_e20200802035293_c20200802035341.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802036224_e20200802036292_c20200802036354.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802037224_e20200802037293_c20200802037355.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802038224_e20200802038293_c20200802038358.nc\t100%\t0.02 min\n\tOR_ABI-L1b-RadM1-M6C13_G16_s20200802039224_e20200802039293_c20200802039342.nc\t100%\t0.02 min\n"
],
[
"# download some ABI's channels of full disk and cut file name,\n# keeping the start scan time with format that you want (Rename_fmt = 'YourFormatDateTime').\nGOES.download('goes16', 'ABI-L1b-RadF',\n DateTimeIni = '20200320-203000', DateTimeFin = '20200320-205000',\n Channel = ['08','13'], Rename_fmt = '%Y%m%d%H%M', PathOut='/home/joao/Downloads/')",
"Channel list: ['08', '13'] \n\n \nServer: s3://noaa-goes16/ABI-L1b-RadF/2020/080/20/\nPathOut: /home/joao/Downloads/\n\tOR_ABI-L1b-RadF-M6C08_G16_s202003202030.nc\t100%\t2.38 min\n\tOR_ABI-L1b-RadF-M6C08_G16_s202003202040.nc\t100%\t1.28 min\n\tOR_ABI-L1b-RadF-M6C13_G16_s202003202030.nc\t100%\t1.25 min\n\tOR_ABI-L1b-RadF-M6C13_G16_s202003202040.nc\t100%\t2.47 min\n"
],
[
"# download GLM data\nGOES.download('goes16', 'GLM-L2-LCFA',\n DateTimeIni = '20200320-203000', DateTimeFin = '20200320-203200',\n PathOut='/home/joao/Downloads/')",
" \nServer: s3://noaa-goes16/GLM-L2-LCFA/2020/080/20/\nPathOut: /home/joao/Downloads/\n\tOR_GLM-L2-LCFA_G16_s20200802030000_e20200802030200_c20200802030227.nc\t100%\t0.02 min\n\tOR_GLM-L2-LCFA_G16_s20200802030200_e20200802030400_c20200802030430.nc\t100%\t0.02 min\n\tOR_GLM-L2-LCFA_G16_s20200802030400_e20200802031000_c20200802031031.nc\t100%\t0.02 min\n\tOR_GLM-L2-LCFA_G16_s20200802031000_e20200802031200_c20200802031228.nc\t100%\t0.02 min\n\tOR_GLM-L2-LCFA_G16_s20200802031200_e20200802031400_c20200802031425.nc\t100%\t0.02 min\n\tOR_GLM-L2-LCFA_G16_s20200802031400_e20200802032000_c20200802032026.nc\t100%\t0.03 min\n\tOR_GLM-L2-LCFA_G16_s20200802032000_e20200802032200_c20200802032228.nc\t100%\t0.03 min\n"
],
[
"# download RRQPEF product\nGOES.download('goes16', 'ABI-L2-RRQPEF',\n DateTimeIni = '20200320-203000', DateTimeFin = '20200320-205000',\n PathOut='/home/joao/Downloads/')",
" \nServer: s3://noaa-goes16/ABI-L2-RRQPEF/2020/080/20/\nPathOut: /home/joao/Downloads/\n\tOR_ABI-L2-RRQPEF-M6_G16_s20200802030177_e20200802039485_c20200802040007.nc\t100%\t0.05 min\n\tOR_ABI-L2-RRQPEF-M6_G16_s20200802040177_e20200802049485_c20200802050036.nc\t100%\t0.05 min\n"
]
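,
[
"# A sketch (an addition, not in the original notebook) of opening one of the files\n# downloaded above; assumes the netCDF4 package is installed.\nfrom netCDF4 import Dataset\n\nds = Dataset('/home/joao/Downloads/OR_ABI-L1b-RadF-M6C13_G16_s202003202030.nc')\nprint(list(ds.variables.keys()))",
"_____no_output_____"
]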
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0512fad30615ef1ed7734e8559a6a2dcc2a4793 | 10,022 | ipynb | Jupyter Notebook | analysis/matmul-analysis.ipynb | bsc-dom/optanedc-miniapps | e44697375099429bab6904d9bf7d235e05930033 | [
"CC0-1.0"
] | null | null | null | analysis/matmul-analysis.ipynb | bsc-dom/optanedc-miniapps | e44697375099429bab6904d9bf7d235e05930033 | [
"CC0-1.0"
] | null | null | null | analysis/matmul-analysis.ipynb | bsc-dom/optanedc-miniapps | e44697375099429bab6904d9bf7d235e05930033 | [
"CC0-1.0"
] | null | null | null | 32.329032 | 112 | 0.460986 | [
[
[
"# Boilerplate that all notebooks reuse:\nfrom analysis_common import *\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Kernel analysis",
"_____no_output_____"
]
],
[
[
"df = read_ods(\"./results.ods\", \"matmul-kernel\")\n\nexpand_modes(df)\n\nprint(df[\"MODE\"].unique())\n#############################################\n# Disregard the store result for the kernel #\n#############################################\ndf.loc[df[\"MODE\"] == \"AD (volatile result)\", \"MODE\"] = \"AD\"\norder = ['DRAM', 'AD', 'AD (in-place FMA)', 'MM (hot)', 'MM (cold)']\nhue_order = [7000, 1000]\n# Split the two families of experiments\ndf_rowcol = df[df.MATRIX_SIDE != 0]\ndf = df[df.MATRIX_SIDE == 0]\n\nsns.barplot(x='MODE', y='TIMING',\n data=df[(df.BLOCKSIZE == 1000)],\n capsize=0.1,\n order=order,\n palette=custom_kernel_palette(6))\nplt.title(\"Submatrix size: 1000x1000 (small object)\")\nplt.xticks(rotation=25, horizontalalignment='right')\nplt.show()\n\nsns.barplot(x='MODE', y='TIMING',\n data=df[(df.BLOCKSIZE == 7000)],\n capsize=0.1,\n order=order,\n palette=custom_kernel_palette(6))\nplt.title(\"Submatrix size: 7000x7000 (big object)\")\nplt.xticks(rotation=25, horizontalalignment='right')\nplt.show()\n\n###################################\n\n# sns.barplot(x='MODE', y='TIMING',\n# data=df_rowcol[(df_rowcol.BLOCKSIZE == 1000)],\n# capsize=0.1,\n# order=order,\n# palette=palette)\n# plt.title(\"BLOCKSIZE: 1k || row x col\")\n# plt.xticks(rotation=25, horizontalalignment='right')\n# plt.show()\n\n# sns.barplot(x='MODE', y='TIMING',\n# data=df_rowcol[(df_rowcol.BLOCKSIZE == 7000)],\n# capsize=0.1,\n# order=order,\n# palette=palette)\n# plt.title(\"BLOCKSIZE: 7k || row x col\")\n# plt.xticks(rotation=25, horizontalalignment='right')\n# plt.show()\n\n# Remove MM-NVM as it is outlier-ish\n#df = df[df.MODE != 'MM-NVM']\n# ... or maybe not? trying set_ylim maybe:\n#axes = plt.gca()\n#axes.set_ylim([0,1.5])\n#plt.title(\"...\")\n#plt.show()",
"_____no_output_____"
],
[
"df.loc[(df.BLOCKSIZE == 1000), \"NORMALIZED\"] = df.TIMING \ndf.loc[(df.BLOCKSIZE == 7000), \"NORMALIZED\"] = df.TIMING / (7*7*7)\n\nax = sns.barplot(y='MODE', x='NORMALIZED',\n data=df,\n capsize=0.1,\n order=order,\n hue_order=hue_order,\n hue=\"BLOCKSIZE\",\n palette=\"muted\")\n\nkernel_plot_tweaks(ax, 7*7*7, legend_title=\"Submatrix blocksize\")\n\nplt.savefig(\"matmul-kernel.pdf\", bbox_inches='tight')\nplt.show()\n",
"_____no_output_____"
],
[
"kernel_times = df.groupby([\"BLOCKSIZE\", \"MODE\"]).min()\nkernel_times",
"_____no_output_____"
],
[
"#rowcol_times = df_rowcol.groupby([\"BLOCKSIZE\", \"MODE\"]).min()\n#rowcol_times",
"_____no_output_____"
]
],
[
[
"# Matmul results analysis",
"_____no_output_____"
]
],
[
[
"df = read_ods(\"./results.ods\", \"matmul-app\")\nexpand_modes(df)\ndf",
"_____no_output_____"
],
[
"for bs in [1000, 7000]:\n df.loc[(df.BLOCKSIZE == bs) & (df.MODE == \"DRAM\"), \"ATOM_KERNEL\"] = \\\n kernel_times.loc[(bs, \"DRAM\"), \"TIMING\"]\n df.loc[(df.BLOCKSIZE == bs) & (df.MODE == \"AD (volatile result)\"), \"ATOM_KERNEL\"] = \\\n kernel_times.loc[(bs, \"AD\"), \"TIMING\"]\n df.loc[(df.BLOCKSIZE == bs) & (df.MODE == \"AD (store result)\"), \"ATOM_KERNEL\"] = \\\n kernel_times.loc[(bs, \"AD\"), \"TIMING\"]\n df.loc[(df.BLOCKSIZE == bs) & (df.MODE == \"AD (in-place FMA)\"), \"ATOM_KERNEL\"] = \\\n kernel_times.loc[(bs, \"AD (in-place FMA)\"), \"TIMING\"]\n df.loc[(df.BLOCKSIZE == bs) & (df.MODE == \"DAOS (volatile result)\"), \"ATOM_KERNEL\"] = \\\n kernel_times.loc[(bs, \"DRAM\"), \"TIMING\"]\n df.loc[(df.BLOCKSIZE == bs) & (df.MODE == \"DAOS (store result)\"), \"ATOM_KERNEL\"] = \\\n kernel_times.loc[(bs, \"DRAM\"), \"TIMING\"]\n\ndf.loc[(df.BLOCKSIZE == 1000) \n & (df.MATRIX_SIDE == 42) \n & (df.MODE == \"MM\"),\n \"ATOM_KERNEL\"] = kernel_times.loc[(1000, \"MM (hot)\"), \"TIMING\"]\ndf.loc[(df.BLOCKSIZE == 7000) \n & (df.MATRIX_SIDE == 6)\n & (df.MODE == \"MM\"),\n \"ATOM_KERNEL\"] = kernel_times.loc[(7000, \"MM (hot)\"), \"TIMING\"]\ndf.loc[(df.BLOCKSIZE == 1000) \n & (df.MATRIX_SIDE == 84)\n & (df.MODE == \"MM\"),\n \"ATOM_KERNEL\"] = kernel_times.loc[(1000, \"MM (cold)\"), \"TIMING\"]\ndf.loc[(df.BLOCKSIZE == 7000) \n & (df.MATRIX_SIDE == 12)\n & (df.MODE == \"MM\"),\n \"ATOM_KERNEL\"] = kernel_times.loc[(7000, \"MM (cold)\"), \"TIMING\"]\n\ndf[\"KERNEL_TIME\"] = df[\"MATRIX_SIDE\"]**3 * df[\"ATOM_KERNEL\"]\n\n# Sanity check\nnull_values = df[df.isnull().values]\nif len(null_values) > 0:\n print('There are null values, check null_values variable')",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"# Article image generation",
"_____no_output_____"
]
],
[
[
"sns.set(style=\"whitegrid\")\n\norder = ['DRAM', 'AD (volatile result)', 'AD (store result)', 'AD (in-place FMA)', \n 'MM', 'DAOS (volatile result)', 'DAOS (store result)']\n\nsmall = (\n ((df.BLOCKSIZE == 1000) & (df.MATRIX_SIDE == 42)) |\n ((df.BLOCKSIZE == 7000) & (df.MATRIX_SIDE == 6))\n)\n\nbig = (\n ((df.BLOCKSIZE == 1000) & (df.MATRIX_SIDE == 84)) |\n ((df.BLOCKSIZE == 7000) & (df.MATRIX_SIDE == 12))\n)\n\nax = sns.barplot(y='MODE', x=\"TIMING\",\n data=df[small],\n capsize=0.1,\n order=order,\n hue_order=hue_order,\n palette=\"colorblind\",\n hue=df.BLOCKSIZE)\n\nbottom = sns.barplot(y='MODE', x=\"KERNEL_TIME\",\n data=df[small],\n capsize=0,\n order=order,\n hue_order=hue_order,\n palette=\"pastel\",\n hue=df.BLOCKSIZE)\n\ncrop_axis(ax, 800)\nylabel_tweaks(ax, [2, 5], ['non-active', 'active'], 0.40, 0.005)\nlegend_tweaks(bottom, [\"big objects\", \"small objects\", \"kernel comp.\"], placement='upper center')\nax.set_xlabel(\"execution time (s)\")\nplt.title(\"Small dataset\")\nsave_tweaks(\"matmul-small.pdf\", big=True)\nplt.show()\n\nax = sns.barplot(y='MODE', x=\"TIMING\",\n data=df[big],\n capsize=0.1,\n order=order,\n hue_order=hue_order,\n palette=\"colorblind\",\n hue=df.BLOCKSIZE)\n\nannotate_dram(ax)\n\nbottom = sns.barplot(y='MODE', x=\"KERNEL_TIME\",\n data=df[big],\n capsize=0,\n order=order,\n hue_order=hue_order,\n palette=\"pastel\",\n hue=df.BLOCKSIZE)\n\ncrop_axis(ax, 6000)\nylabel_tweaks(ax, [2, 5], ['non-active', 'active'], 0.40, 0.005)\nlegend_tweaks(bottom, [\"big objects\", \"small objects\", \"kernel comp.\"], placement='upper center')\nax.set_xlabel(\"execution time (s)\")\nplt.title(\"Big dataset\")\nsave_tweaks(\"matmul-big.pdf\", big=True)\nplt.show()",
"_____no_output_____"
],
[
"df.groupby([\"BLOCKSIZE\", \"MATRIX_SIDE\", \"MODE\"]).mean()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d051424204a00800979907ba47ac21caa012176b | 43,364 | ipynb | Jupyter Notebook | session/sentiment/fast-text.ipynb | Jeansding/Malaya | fdf1af178ecc5ec4575298612101362ccc4a94fb | [
"MIT"
] | 2 | 2019-06-23T20:19:22.000Z | 2020-04-16T13:02:32.000Z | session/sentiment/fast-text.ipynb | Jeansding/Malaya | fdf1af178ecc5ec4575298612101362ccc4a94fb | [
"MIT"
] | null | null | null | session/sentiment/fast-text.ipynb | Jeansding/Malaya | fdf1af178ecc5ec4575298612101362ccc4a94fb | [
"MIT"
] | null | null | null | 31.174694 | 302 | 0.448206 | [
[
[
"import re\nimport numpy as np\nimport pandas as pd\nimport collections\nfrom sklearn import metrics\nfrom sklearn.preprocessing import LabelEncoder\nimport tensorflow as tf\nfrom sklearn.model_selection import train_test_split\nfrom unidecode import unidecode\nfrom tqdm import tqdm\nimport time",
"_____no_output_____"
],
[
"rules_normalizer = {\n 'experience': 'pengalaman',\n 'bagasi': 'bagasi',\n 'kg': 'kampung',\n 'kilo': 'kilogram',\n 'g': 'gram',\n 'grm': 'gram',\n 'k': 'okay',\n 'abgkat': 'abang dekat',\n 'abis': 'habis',\n 'ade': 'ada',\n 'adoi': 'aduh',\n 'adoii': 'aduhh',\n 'aerodarat': 'kapal darat',\n 'agkt': 'angkat',\n 'ahh': 'ah',\n 'ailior': 'air liur',\n 'airasia': 'air asia x',\n 'airasiax': 'penerbangan',\n 'airline': 'penerbangan',\n 'airlines': 'penerbangan',\n 'airport': 'lapangan terbang',\n 'airpot': 'lapangan terbang',\n 'aje': 'sahaja',\n 'ajelah': 'sahajalah',\n 'ajer': 'sahaja',\n 'ak': 'aku',\n 'aq': 'aku',\n 'all': 'semua',\n 'ambik': 'ambil',\n 'amek': 'ambil',\n 'amer': 'amir',\n 'amik': 'ambil',\n 'ana': 'saya',\n 'angkt': 'angkat',\n 'anual': 'tahunan',\n 'apapun': 'apa pun',\n 'ape': 'apa',\n 'arab': 'arab',\n 'area': 'kawasan',\n 'aritu': 'hari itu',\n 'ask': 'tanya',\n 'astro': 'astro',\n 'at': 'pada',\n 'attitude': 'sikap',\n 'babi': 'khinzir',\n 'back': 'belakang',\n 'bag': 'beg',\n 'bang': 'abang',\n 'bangla': 'bangladesh',\n 'banyk': 'banyak',\n 'bard': 'pujangga',\n 'bargasi': 'bagasi',\n 'bawak': 'bawa',\n 'bawanges': 'bawang',\n 'be': 'jadi',\n 'behave': 'berkelakuan baik',\n 'belagak': 'berlagak',\n 'berdisiplin': 'berdisplin',\n 'berenti': 'berhenti',\n 'beskal': 'basikal',\n 'bff': 'rakan karib',\n 'bg': 'bagi',\n 'bgi': 'bagi',\n 'biase': 'biasa',\n 'big': 'besar',\n 'bike': 'basikal',\n 'bile': 'bila',\n 'binawe': 'binatang',\n 'bini': 'isteri',\n 'bkn': 'bukan',\n 'bla': 'bila',\n 'blom': 'belum',\n 'bnyak': 'banyak',\n 'body': 'tubuh',\n 'bole': 'boleh',\n 'boss': 'bos',\n 'bowling': 'boling',\n 'bpe': 'berapa',\n 'brand': 'jenama',\n 'brg': 'barang',\n 'briefing': 'taklimat',\n 'brng': 'barang',\n 'bro': 'abang',\n 'bru': 'baru',\n 'bruntung': 'beruntung',\n 'bsikal': 'basikal',\n 'btnggjwb': 'bertanggungjawab',\n 'btul': 'betul',\n 'buatlh': 'buatlah',\n 'buh': 'letak',\n 'buka': 'buka',\n 'but': 'tetapi',\n 'bwk': 'bawa',\n 'by': 'dengan',\n 'byr': 'bayar',\n 'bz': 'sibuk',\n 'camera': 'kamera',\n 'camni': 'macam ini',\n 'cane': 'macam mana',\n 'cant': 'tak boleh',\n 'carakerja': 'cara kerja',\n 'care': 'jaga',\n 'cargo': 'kargo',\n 'cctv': 'kamera litar tertutup',\n 'celako': 'celaka',\n 'cer': 'cerita',\n 'cheap': 'murah',\n 'check': 'semak',\n 'ciput': 'sedikit',\n 'cite': 'cerita',\n 'citer': 'cerita',\n 'ckit': 'sikit',\n 'ckp': 'cakap',\n 'class': 'kelas',\n 'cm': 'macam',\n 'cmni': 'macam ini',\n 'cmpak': 'campak',\n 'committed': 'komited',\n 'company': 'syarikat',\n 'complain': 'aduan',\n 'corn': 'jagung',\n 'couldnt': 'tak boleh',\n 'cr': 'cari',\n 'crew': 'krew',\n 'cube': 'cuba',\n 'cuma': 'cuma',\n 'curinyaa': 'curinya',\n 'cust': 'pelanggan',\n 'customer': 'pelanggan',\n 'd': 'di',\n 'da': 'dah',\n 'dn': 'dan',\n 'dahh': 'dah',\n 'damaged': 'rosak',\n 'dapek': 'dapat',\n 'day': 'hari',\n 'dazrin': 'dazrin',\n 'dbalingnya': 'dibalingnya',\n 'de': 'ada',\n 'deep': 'dalam',\n 'deliberately': 'sengaja',\n 'depa': 'mereka',\n 'dessa': 'desa',\n 'dgn': 'dengan',\n 'dh': 'dah',\n 'didunia': 'di dunia',\n 'diorang': 'mereka',\n 'diorng': 'mereka',\n 'direct': 'secara terus',\n 'diving': 'junam',\n 'dkt': 'dekat',\n 'dlempar': 'dilempar',\n 'dlm': 'dalam',\n 'dlt': 'padam',\n 'dlu': 'dulu',\n 'done': 'siap',\n 'dont': 'jangan',\n 'dorg': 'mereka',\n 'dpermudhkn': 'dipermudahkan',\n 'dpt': 'dapat',\n 'dr': 'dari',\n 'dri': 'dari',\n 'dsb': 'dan sebagainya',\n 'dy': 'dia',\n 'educate': 'mendidik',\n 'ensure': 'memastikan',\n 'everything': 'semua',\n 
'ewahh': 'wah',\n 'expect': 'sangka',\n 'fb': 'facebook',\n 'fired': 'pecat',\n 'first': 'pertama',\n 'fkr': 'fikir',\n 'flight': 'kapal terbang',\n 'for': 'untuk',\n 'free': 'percuma',\n 'friend': 'kawan',\n 'fyi': 'untuk pengetahuan anda',\n 'gantila': 'gantilah',\n 'gantirugi': 'ganti rugi',\n 'gentlemen': 'lelaki budiman',\n 'gerenti': 'jaminan',\n 'gile': 'gila',\n 'gk': 'juga',\n 'gnti': 'ganti',\n 'go': 'pergi',\n 'gomen': 'kerajaan',\n 'goment': 'kerajaan',\n 'good': 'baik',\n 'ground': 'tanah',\n 'guarno': 'macam mana',\n 'hampa': 'mereka',\n 'hampeh': 'teruk',\n 'hanat': 'jahanam',\n 'handle': 'kawal',\n 'handling': 'kawalan',\n 'hanta': 'hantar',\n 'haritu': 'hari itu',\n 'hate': 'benci',\n 'have': 'ada',\n 'hawau': 'celaka',\n 'henpon': 'telefon',\n 'heran': 'hairan',\n 'him': 'dia',\n 'his': 'dia',\n 'hmpa': 'mereka',\n 'hntr': 'hantar',\n 'hotak': 'otak',\n 'hr': 'hari',\n 'i': 'saya',\n 'hrga': 'harga',\n 'hrp': 'harap',\n 'hu': 'sedih',\n 'humble': 'merendah diri',\n 'ibon': 'ikon',\n 'ichi': 'inci',\n 'idung': 'hidung',\n 'if': 'jika',\n 'ig': 'instagram',\n 'iklas': 'ikhlas',\n 'improve': 'menambah baik',\n 'in': 'masuk',\n 'isn t': 'tidak',\n 'isyaallah': 'insyallah',\n 'ja': 'sahaja',\n 'japan': 'jepun',\n 'jd': 'jadi',\n 'je': 'saja',\n 'jee': 'saja',\n 'jek': 'saja',\n 'jepun': 'jepun',\n 'jer': 'saja',\n 'jerr': 'saja',\n 'jez': 'saja',\n 'jg': 'juga',\n 'jgk': 'juga',\n 'jgn': 'jangan',\n 'jgnla': 'janganlah',\n 'jibake': 'celaka',\n 'jjur': 'jujur',\n 'job': 'kerja',\n 'jobscope': 'skop kerja',\n 'jogja': 'jogjakarta',\n 'jpam': 'jpam',\n 'jth': 'jatuh',\n 'jugak': 'juga',\n 'ka': 'ke',\n 'kalo': 'kalau',\n 'kalu': 'kalau',\n 'kang': 'nanti',\n 'kantoi': 'temberang',\n 'kasi': 'beri',\n 'kat': 'dekat',\n 'kbye': 'ok bye',\n 'kearah': 'ke arah',\n 'kecik': 'kecil',\n 'keja': 'kerja',\n 'keje': 'kerja',\n 'kejo': 'kerja',\n 'keksongan': 'kekosongan',\n 'kemana': 'ke mana',\n 'kene': 'kena',\n 'kenekan': 'kenakan',\n 'kesah': 'kisah',\n 'ketempat': 'ke tempat',\n 'kije': 'kerja',\n 'kijo': 'kerja',\n 'kiss': 'cium',\n 'kite': 'kita',\n 'kito': 'kita',\n 'kje': 'kerja',\n 'kjr': 'kerja',\n 'kk': 'okay',\n 'kmi': 'kami',\n 'kt': 'kat',\n 'tlg': 'tolong',\n 'kl': 'kuala lumpur',\n 'klai': 'kalau',\n 'klau': 'kalau',\n 'klia': 'klia',\n 'klo': 'kalau',\n 'klu': 'kalau',\n 'kn': 'kan',\n 'knapa': 'kenapa',\n 'kne': 'kena',\n 'ko': 'kau',\n 'kompom': 'sah',\n 'korang': 'kamu semua',\n 'korea': 'korea',\n 'korg': 'kamu semua',\n 'kot': 'mungkin',\n 'krja': 'kerja',\n 'ksalahan': 'kesalahan',\n 'kta': 'kita',\n 'kuar': 'keluar',\n 'kut': 'mungkin',\n 'la': 'lah',\n 'laa': 'lah',\n 'lahabau': 'celaka',\n 'lahanat': 'celaka',\n 'lainda': 'lain dah',\n 'lak': 'pula',\n 'last': 'akhir',\n 'le': 'lah',\n 'leader': 'ketua',\n 'leave': 'pergi',\n 'ler': 'lah',\n 'less': 'kurang',\n 'letter': 'surat',\n 'lg': 'lagi',\n 'lgi': 'lagi',\n 'lngsong': 'langsung',\n 'lol': 'hehe',\n 'lorr': 'lah',\n 'low': 'rendah',\n 'lps': 'lepas',\n 'luggage': 'bagasi',\n 'lumbe': 'lumba',\n 'lyak': 'layak',\n 'maap': 'maaf',\n 'maapkan': 'maafkan',\n 'mahai': 'mahal',\n 'mampos': 'mampus',\n 'mart': 'kedai',\n 'mau': 'mahu',\n 'mcm': 'macam',\n 'mcmtu': 'macam itu',\n 'memerlukn': 'memerlukan',\n 'mengembirakan': 'menggembirakan',\n 'mengmbilnyer': 'mengambilnya',\n 'mengtasi': 'mengatasi',\n 'mg': 'memang',\n 'mihak': 'memihak',\n 'min': 'admin',\n 'mingu': 'minggu',\n 'mintak': 'minta',\n 'mjtuhkn': 'menjatuhkan',\n 'mkyong': 'mak yong',\n 'mlibatkn': 'melibatkan',\n 'mmg': 'memang',\n 'mmnjang': 
'memanjang',\n 'mmpos': 'mampus',\n 'mn': 'mana',\n 'mna': 'mana',\n 'mntak': 'minta',\n 'mntk': 'minta',\n 'mnyusun': 'menyusun',\n 'mood': 'suasana',\n 'most': 'paling',\n 'mr': 'tuan',\n 'msa': 'masa',\n 'msia': 'malaysia',\n 'mst': 'mesti',\n 'mu': 'awak',\n 'much': 'banyak',\n 'muko': 'muka',\n 'mum': 'emak',\n 'n': 'dan',\n 'nah': 'nah',\n 'nanny': 'nenek',\n 'napo': 'kenapa',\n 'nati': 'nanti',\n 'ngan': 'dengan',\n 'ngn': 'dengan',\n 'ni': 'ini',\n 'nie': 'ini',\n 'nii': 'ini',\n 'nk': 'nak',\n 'nmpk': 'nampak',\n 'nye': 'nya',\n 'ofis': 'pejabat',\n 'ohh': 'oh',\n 'oii': 'hoi',\n 'one': 'satu',\n 'online': 'dalam talian',\n 'or': 'atau',\n 'org': 'orang',\n 'orng': 'orang',\n 'otek': 'otak',\n 'p': 'pergi',\n 'paid': 'dah bayar',\n 'palabana': 'kepala otak',\n 'pasni': 'lepas ini',\n 'passengers': 'penumpang',\n 'passengger': 'penumpang',\n 'pastu': 'lepas itu',\n 'pd': 'pada',\n 'pegi': 'pergi',\n 'pekerje': 'pekerja',\n 'pekrja': 'pekerja',\n 'perabih': 'perabis',\n 'perkerja': 'pekerja',\n 'pg': 'pergi',\n 'phuii': 'puih',\n 'pikir': 'fikir',\n 'pilot': 'juruterbang',\n 'pk': 'fikir',\n 'pkerja': 'pekerja',\n 'pkerjaan': 'pekerjaan',\n 'pki': 'pakai',\n 'please': 'tolong',\n 'pls': 'tolong',\n 'pn': 'pun',\n 'pnh': 'pernah',\n 'pnt': 'penat',\n 'pnya': 'punya',\n 'pon': 'pun',\n 'priority': 'keutamaan',\n 'properties': 'harta benda',\n 'ptugas': 'petugas',\n 'pub': 'kelab malam',\n 'pulak': 'pula',\n 'puye': 'punya',\n 'pwrcuma': 'percuma',\n 'pyahnya': 'payahnya',\n 'quality': 'kualiti',\n 'quit': 'keluar',\n 'ramly': 'ramly',\n 'rege': 'harga',\n 'reger': 'harga',\n 'report': 'laporan',\n 'resigned': 'meletakkan jawatan',\n 'respect': 'hormat',\n 'rizal': 'rizal',\n 'rosak': 'rosak',\n 'rosok': 'rosak',\n 'rse': 'rasa',\n 'sacked': 'buang',\n 'sado': 'tegap',\n 'salute': 'sanjung',\n 'sam': 'sama',\n 'same': 'sama',\n 'samp': 'sampah',\n 'sbb': 'sebab',\n 'sbgai': 'sebagai',\n 'sblm': 'sebelum',\n 'sblum': 'sebelum',\n 'sbnarnya': 'sebenarnya',\n 'sbum': 'sebelum',\n 'sdg': 'sedang',\n 'sebb': 'sebab',\n 'sebijik': 'sebiji',\n 'see': 'lihat',\n 'seen': 'dilihat',\n 'selangor': 'selangor',\n 'selfie': 'swafoto',\n 'sempoi': 'cantik',\n 'senaraihitam': 'senarai hitam',\n 'seorg': 'seorang',\n 'service': 'perkhidmatan',\n 'sgt': 'sangat',\n 'shared': 'kongsi',\n 'shirt': 'kemeja',\n 'shut': 'tutup',\n 'sib': 'nasib',\n 'skali': 'sekali',\n 'sket': 'sikit',\n 'sma': 'sama',\n 'smoga': 'semoga',\n 'smpoi': 'cantik',\n 'sndiri': 'sendiri',\n 'sndr': 'sendiri',\n 'sndri': 'sendiri',\n 'sne': 'sana',\n 'so': 'jadi',\n 'sop': 'tatacara pengendalian piawai',\n 'sorang': 'seorang',\n 'spoting': 'pembintikan',\n 'sronok': 'seronok',\n 'ssh': 'susah',\n 'staff': 'staf',\n 'standing': 'berdiri',\n 'start': 'mula',\n 'steady': 'mantap',\n 'stiap': 'setiap',\n 'stress': 'stres',\n 'student': 'pelajar',\n 'study': 'belajar',\n 'studycase': 'kajian kes',\n 'sure': 'pasti',\n 'sykt': 'syarikat',\n 'tah': 'entah',\n 'taik': 'tahi',\n 'takan': 'tak akan',\n 'takat': 'setakat',\n 'takde': 'tak ada',\n 'takkan': 'tak akan',\n 'taknak': 'tak nak',\n 'tang': 'tentang',\n 'tanggungjawab': 'bertanggungjawab',\n 'taraa': 'sementara',\n 'tau': 'tahu',\n 'tbabit': 'terbabit',\n 'team': 'pasukan',\n 'terbaekk': 'terbaik',\n 'teruknye': 'teruknya',\n 'tgk': 'tengok',\n 'that': 'itu',\n 'thinking': 'fikir',\n 'those': 'itu',\n 'time': 'masa',\n 'tk': 'tak',\n 'tnggongjwb': 'tanggungjawab',\n 'tngok': 'tengok',\n 'tngu': 'tunggu',\n 'to': 'kepada',\n 'tosak': 'rosak',\n 'tp': 'tapi',\n 'tpi': 'tapi',\n 
'tpon': 'telefon',\n 'transfer': 'pindah',\n 'trgelak': 'tergelak',\n 'ts': 'tan sri',\n 'tstony': 'tan sri tony',\n 'tu': 'itu',\n 'tuh': 'itu',\n 'tula': 'itulah',\n 'umeno': 'umno',\n 'unfortunately': 'malangnya',\n 'unhappy': 'tidak gembira',\n 'up': 'naik',\n 'upkan': 'naikkan',\n 'ur': 'awak',\n 'utk': 'untuk',\n 'very': 'sangat',\n 'viral': 'tular',\n 'vote': 'undi',\n 'warning': 'amaran',\n 'warranty': 'waranti',\n 'wassap': 'whatsapp',\n 'wat': 'apa',\n 'weii': 'wei',\n 'well': 'maklumlah',\n 'win': 'menang',\n 'with': 'dengan',\n 'wt': 'buat',\n 'x': 'tak',\n 'tw': 'tahu',\n 'ye': 'ya',\n 'yee': 'ya',\n 'yg': 'yang',\n 'yng': 'yang',\n 'you': 'awak',\n 'your': 'awak',\n 'sakai': 'selekeh',\n 'rmb': 'billion ringgit',\n 'rmj': 'juta ringgit',\n 'rmk': 'ribu ringgit',\n 'rm': 'ringgit',\n}",
"_____no_output_____"
],
[
"permulaan = [\n 'bel',\n 'se',\n 'ter',\n 'men',\n 'meng',\n 'mem',\n 'memper',\n 'di',\n 'pe',\n 'me',\n 'ke',\n 'ber',\n 'pen',\n 'per',\n]\n\nhujung = ['kan', 'kah', 'lah', 'tah', 'nya', 'an', 'wan', 'wati', 'ita']\n\ndef naive_stemmer(word):\n assert isinstance(word, str), 'input must be a string'\n hujung_result = [e for e in hujung if word.endswith(e)]\n if len(hujung_result):\n hujung_result = max(hujung_result, key = len)\n if len(hujung_result):\n word = word[: -len(hujung_result)]\n permulaan_result = [e for e in permulaan if word.startswith(e)]\n if len(permulaan_result):\n permulaan_result = max(permulaan_result, key = len)\n if len(permulaan_result):\n word = word[len(permulaan_result) :]\n return word\n\ndef build_dataset(words, n_words):\n count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]\n counter = collections.Counter(words).most_common(n_words)\n count.extend(counter)\n dictionary = dict()\n for word, _ in count:\n dictionary[word] = len(dictionary)\n data = list()\n unk_count = 0\n for word in words:\n index = dictionary.get(word, 3)\n if index == 0:\n unk_count += 1\n data.append(index)\n count[0][1] = unk_count\n reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))\n return data, count, dictionary, reversed_dictionary\n\n\ndef classification_textcleaning(string):\n string = re.sub(\n 'http\\S+|www.\\S+',\n '',\n ' '.join(\n [i for i in string.split() if i.find('#') < 0 and i.find('@') < 0]\n ),\n )\n string = unidecode(string).replace('.', ' . ').replace(',', ' , ')\n string = re.sub('[^A-Za-z ]+', ' ', string)\n string = re.sub(r'[ ]+', ' ', string.lower()).strip()\n string = [rules_normalizer.get(w, w) for w in string.split()]\n string = [naive_stemmer(word) for word in string]\n return ' '.join([word for word in string if len(word) > 1])\n\n\ndef str_idx(corpus, dic, maxlen, UNK = 3):\n X = np.zeros((len(corpus), maxlen))\n for i in range(len(corpus)):\n for no, k in enumerate(corpus[i].split()[:maxlen][::-1]):\n X[i, -1 - no] = dic.get(k, UNK)\n return X",
"_____no_output_____"
],
[
"classification_textcleaning('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya')",
"_____no_output_____"
],
[
"df = pd.read_csv('sentiment-data-v2.csv')\nY = LabelEncoder().fit_transform(df.label)\nwith open('polarity-negative-translated.txt','r') as fopen:\n texts = fopen.read().split('\\n')\nlabels = [0] * len(texts)\n\nwith open('polarity-positive-translated.txt','r') as fopen:\n positive_texts = fopen.read().split('\\n')\nlabels += [1] * len(positive_texts)\ntexts += positive_texts\ntexts += df.iloc[:,1].tolist()\nlabels += Y.tolist()\n\nassert len(labels) == len(texts)",
"_____no_output_____"
],
[
"import json\nwith open('bm-amazon.json') as fopen:\n amazon = json.load(fopen)\n \nwith open('bm-imdb.json') as fopen:\n imdb = json.load(fopen)\n \nwith open('bm-yelp.json') as fopen:\n yelp = json.load(fopen)\n \ntexts += amazon['negative']\nlabels += [0] * len(amazon['negative'])\ntexts += amazon['positive']\nlabels += [1] * len(amazon['positive'])\n\ntexts += imdb['negative']\nlabels += [0] * len(imdb['negative'])\ntexts += imdb['positive']\nlabels += [1] * len(imdb['positive'])\n\ntexts += yelp['negative']\nlabels += [0] * len(yelp['negative'])\ntexts += yelp['positive']\nlabels += [1] * len(yelp['positive'])",
"_____no_output_____"
],
[
"import os\nfor i in [i for i in os.listdir('negative') if 'Store' not in i]:\n with open('negative/'+i) as fopen:\n a = json.load(fopen)\n texts += a\n labels += [0] * len(a)",
"_____no_output_____"
],
[
"import os\nfor i in [i for i in os.listdir('positive') if 'Store' not in i]:\n with open('positive/'+i) as fopen:\n a = json.load(fopen)\n texts += a\n labels += [1] * len(a)",
"_____no_output_____"
],
[
"for i in range(len(texts)):\n texts[i] = classification_textcleaning(texts[i])",
"_____no_output_____"
],
[
"concat = ' '.join(texts).split()\nvocabulary_size = len(list(set(concat)))\ndata, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)\nprint('vocab from size: %d'%(vocabulary_size))\nprint('Most common words', count[4:10])\nprint('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])",
"vocab from size: 120097\nMost common words [('saya', 533028), ('yang', 204446), ('tidak', 164296), ('untuk', 129707), ('anda', 126091), ('hari', 88975)]\nSample data [2670, 229, 363, 235, 235, 94, 1358, 5, 78, 678] ['ringkas', 'bodoh', 'bosan', 'kanak', 'kanak', 'lelaki', 'remaja', 'yang', 'begitu', 'muda']\n"
],
[
"max_features = len(dictionary)\nmaxlen = 100\nbatch_size = 32\nembedded_size = 256",
"_____no_output_____"
],
[
"train_X, test_X, train_Y, test_Y = train_test_split(texts, \n labels,\n test_size = 0.2)",
"_____no_output_____"
],
[
"class Model:\n def __init__(\n self, embedded_size, dict_size, dimension_output, learning_rate\n ):\n\n self.X = tf.placeholder(tf.int32, [None, None])\n self.Y = tf.placeholder(tf.int32, [None])\n encoder_embeddings = tf.Variable(\n tf.random_uniform([dict_size, embedded_size], -1, 1)\n )\n encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)\n self.logits = tf.identity(\n tf.layers.dense(\n tf.reduce_mean(encoder_embedded, 1), dimension_output\n ),\n name = 'logits',\n )\n self.cost = tf.reduce_mean(\n tf.nn.sparse_softmax_cross_entropy_with_logits(\n logits = self.logits, labels = self.Y\n )\n )\n self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(\n self.cost\n )\n correct_pred = tf.equal(\n tf.argmax(self.logits, 1, output_type = tf.int32), self.Y\n )\n self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\n",
"_____no_output_____"
],
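[
"# A quick note (an addition) on the architecture above: it is fastText-style, i.e.\n# look up word embeddings, average them over the sequence, and feed the mean vector\n# to a single dense softmax layer. Rough parameter count under the sizes set earlier:\nprint('embedding params:', max_features * embedded_size)\nprint('classifier params:', embedded_size * 2 + 2)",
"_____no_output_____"
],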
[
"tf.reset_default_graph()\nsess = tf.InteractiveSession()\nmodel = Model(embedded_size, max_features, 2, 5e-4)\nsess.run(tf.global_variables_initializer())\nsaver = tf.train.Saver(tf.trainable_variables())\nsaver.save(sess, 'fast-text/model.ckpt')",
"_____no_output_____"
],
[
"strings = ','.join(\n [\n n.name\n for n in tf.get_default_graph().as_graph_def().node\n if ('Variable' in n.op\n or 'Placeholder' in n.name\n or 'logits' in n.name)\n and 'Adam' not in n.name\n and 'beta' not in n.name\n ]\n)",
"_____no_output_____"
],
[
"strings.split(',')",
"_____no_output_____"
],
[
"tf.trainable_variables()",
"_____no_output_____"
],
[
"from tqdm import tqdm\nimport time\n\nEARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0\n\nwhile True:\n lasttime = time.time()\n if CURRENT_CHECKPOINT == EARLY_STOPPING:\n print('break epoch:%d\\n' % (EPOCH))\n break\n\n train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0\n pbar = tqdm(\n range(0, len(train_X), batch_size), desc = 'train minibatch loop'\n )\n for i in pbar:\n batch_x = str_idx(train_X[i : min(i + batch_size, len(train_X))], dictionary, maxlen)\n batch_y = train_Y[i : min(i + batch_size, len(train_X))]\n batch_x_expand = np.expand_dims(batch_x,axis = 1)\n acc, cost, _ = sess.run(\n [model.accuracy, model.cost, model.optimizer],\n feed_dict = {\n model.Y: batch_y,\n model.X: batch_x\n },\n )\n assert not np.isnan(cost)\n train_loss += cost\n train_acc += acc\n pbar.set_postfix(cost = cost, accuracy = acc)\n\n pbar = tqdm(range(0, len(test_X), batch_size), desc = 'test minibatch loop')\n for i in pbar:\n batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)\n batch_y = test_Y[i : min(i + batch_size, len(test_X))]\n batch_x_expand = np.expand_dims(batch_x,axis = 1)\n acc, cost = sess.run(\n [model.accuracy, model.cost],\n feed_dict = {\n model.Y: batch_y,\n model.X: batch_x\n },\n )\n test_loss += cost\n test_acc += acc\n pbar.set_postfix(cost = cost, accuracy = acc)\n\n train_loss /= len(train_X) / batch_size\n train_acc /= len(train_X) / batch_size\n test_loss /= len(test_X) / batch_size\n test_acc /= len(test_X) / batch_size\n\n if test_acc > CURRENT_ACC:\n print(\n 'epoch: %d, pass acc: %f, current acc: %f'\n % (EPOCH, CURRENT_ACC, test_acc)\n )\n CURRENT_ACC = test_acc\n CURRENT_CHECKPOINT = 0\n else:\n CURRENT_CHECKPOINT += 1\n \n print('time taken:', time.time() - lasttime)\n print(\n 'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\\n'\n % (EPOCH, train_loss, train_acc, test_loss, test_acc)\n )\n EPOCH += 1",
"train minibatch loop: 100%|██████████| 16876/16876 [08:00<00:00, 35.16it/s, accuracy=0.778, cost=0.409]\ntest minibatch loop: 100%|██████████| 4219/4219 [00:17<00:00, 243.38it/s, accuracy=0.897, cost=0.376]\ntrain minibatch loop: 0%| | 4/16876 [00:00<07:45, 36.22it/s, accuracy=0.75, cost=0.425] "
],
[
"real_Y, predict_Y = [], []\n\npbar = tqdm(\n range(0, len(test_X), batch_size), desc = 'validation minibatch loop'\n)\nfor i in pbar:\n batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)\n batch_y = test_Y[i : min(i + batch_size, len(test_X))]\n predict_Y += np.argmax(\n sess.run(\n model.logits, feed_dict = {model.X: batch_x, model.Y: batch_y}\n ),\n 1,\n ).tolist()\n real_Y += batch_y",
"validation minibatch loop: 100%|██████████| 4219/4219 [00:07<00:00, 539.93it/s]\n"
],
[
"saver.save(sess, 'fast-text/model.ckpt')",
"_____no_output_____"
],
[
"print(\n metrics.classification_report(\n real_Y, predict_Y, target_names = ['negative', 'positive']\n )\n)",
" precision recall f1-score support\n\n negative 0.79 0.78 0.78 70568\n positive 0.76 0.77 0.77 64437\n\n micro avg 0.77 0.77 0.77 135005\n macro avg 0.77 0.77 0.77 135005\nweighted avg 0.77 0.77 0.77 135005\n\n"
],
[
"text = 'kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya'\nnew_vector = str_idx([classification_textcleaning(text)], dictionary, len(text.split()))\nsess.run(tf.nn.softmax(model.logits), feed_dict={model.X:new_vector})",
"_____no_output_____"
],
[
"import json\nwith open('fast-text-sentiment.json','w') as fopen:\n fopen.write(json.dumps({'dictionary':dictionary,'reverse_dictionary':rev_dictionary}))",
"_____no_output_____"
],
[
"def freeze_graph(model_dir, output_node_names):\n\n if not tf.gfile.Exists(model_dir):\n raise AssertionError(\n \"Export directory doesn't exists. Please specify an export \"\n 'directory: %s' % model_dir\n )\n\n checkpoint = tf.train.get_checkpoint_state(model_dir)\n input_checkpoint = checkpoint.model_checkpoint_path\n\n absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])\n output_graph = absolute_model_dir + '/frozen_model.pb'\n clear_devices = True\n with tf.Session(graph = tf.Graph()) as sess:\n saver = tf.train.import_meta_graph(\n input_checkpoint + '.meta', clear_devices = clear_devices\n )\n saver.restore(sess, input_checkpoint)\n output_graph_def = tf.graph_util.convert_variables_to_constants(\n sess,\n tf.get_default_graph().as_graph_def(),\n output_node_names.split(','),\n )\n with tf.gfile.GFile(output_graph, 'wb') as f:\n f.write(output_graph_def.SerializeToString())\n print('%d ops in the final graph.' % len(output_graph_def.node))",
"_____no_output_____"
],
[
"freeze_graph('fast-text', strings)",
"INFO:tensorflow:Restoring parameters from fast-text/model.ckpt\nINFO:tensorflow:Froze 3 variables.\nINFO:tensorflow:Converted 3 variables to const ops.\n16 ops in the final graph.\n"
],
[
"def load_graph(frozen_graph_filename):\n with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:\n graph_def = tf.GraphDef()\n graph_def.ParseFromString(f.read())\n with tf.Graph().as_default() as graph:\n tf.import_graph_def(graph_def)\n return graph",
"_____no_output_____"
],
[
"g = load_graph('fast-text/frozen_model.pb')\nx = g.get_tensor_by_name('import/Placeholder:0')\nlogits = g.get_tensor_by_name('import/logits:0')\ntest_sess = tf.InteractiveSession(graph = g)\ntest_sess.run(tf.nn.softmax(logits), feed_dict = {x: new_vector})",
"/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py:1702: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).\n warnings.warn('An interactive session is already active. This can '\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0514d489ed095cb9e58a7ed286ae45ee2555a76 | 161,602 | ipynb | Jupyter Notebook | Python BESO/project_old.ipynb | archilless/Minin | 0ef7ca59ec204bbf0e1886ac74fbea09654648a4 | [
"MIT"
] | 1 | 2017-12-19T10:38:37.000Z | 2017-12-19T10:38:37.000Z | Python BESO/project_old.ipynb | archilless/Minin | 0ef7ca59ec204bbf0e1886ac74fbea09654648a4 | [
"MIT"
] | null | null | null | Python BESO/project_old.ipynb | archilless/Minin | 0ef7ca59ec204bbf0e1886ac74fbea09654648a4 | [
"MIT"
] | 1 | 2020-01-26T14:36:52.000Z | 2020-01-26T14:36:52.000Z | 261.491909 | 23,760 | 0.887353 | [
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.special\nimport copy\n\n\ndef empty_mask(size):\n return np.zeros((size,size))\n\ndef circular_mask(size):\n y,x = np.mgrid[:size, :size]\n M = np.zeros((size,size))\n x0 = y0 = (size-1)/2\n r = size/4\n M[(x-x0)**2+(y-y0)**2<=r**2]=1\n return M\n\ndef rectangle_mask(size):\n y,x = np.mgrid[:size, :size]\n M = np.zeros((size,size))\n x0 = y0 = (size-1)/2\n r = size/4\n M[((x-x0)**2<=r**2)*((y-y0)**2<=r**2)]=1\n return M\n \n\n \ndef get_plane_wave(E0,k,size):\n y,x = np.mgrid[:size, :size]\n a = np.pi*0/180\n E = E0*np.exp(-1j*k*(x*np.cos(a)+y*np.sin(a)))\n return(E) \n \ndef get_greenfun(r,k):\n return (1j/4)*scipy.special.hankel1(0,k*r)\n\ndef get_green_matrix(k,size):\n j,i = np.mgrid[:size, :size]\n ij_block = np.sqrt((i-1/2)**2+j**2)\n green_mat = get_greenfun(ij_block,k)\n return green_mat\n\n# def get_toeplitz_mat(ij_block):\n# ij_block = copy.deepcopy(ij_block)\n# T = np.block([[ij_block,ij_block[:,:0:-1]],\n# [ij_block[:0:-1,:],ij_block[:0:-1,:0:-1]]])\n# return T\n\ndef get_toeplitz_mat(ij_block):\n ij_block = copy.deepcopy(ij_block)\n T1 = np.hstack((ij_block,ij_block[:,:0:-1]))\n T2 = np.hstack((ij_block[:0:-1,:],ij_block[:0:-1,:0:-1]))\n T = np.vstack((T1,T2))\n return T\n \ndef G_matvec(vec,k):\n size = int(np.sqrt(vec.shape[0]))\n G_block = get_green_matrix(k,size)\n G = get_toeplitz_mat(G_block)\n mat = np.zeros((2*size-1,2*size-1),dtype = np.complex64)\n mat_block = vec.reshape((-1,size))\n mat[:size,:size] = mat_block\n out_mat = np.fft.ifft2(np.fft.fft2(G)*np.fft.fft2(mat))\n out = out_mat[:size,:size].reshape((-1,1))\n return out\n\ndef get_eps_from_mask(e,mask):\n return (e-1)*mask.reshape((-1,1))+1\n\ndef matvec(x,eps,k):\n x = x.reshape((-1,1))\n #print(x)\n size = x.shape[0]\n chi = k**2*(eps - 1)\n return x-G_matvec(x*chi,k)\n\ndef old_matvec(x,mask,k,e):\n eps = get_eps_from_mask(e,mask)\n return matvec(x,eps,k)\n\ndef visualize(data,title = \"\",cmap='jet',):\n plt.title(title)\n neg = plt.imshow(data, cmap=cmap, interpolation='none')\n plt.colorbar(neg)\n plt.show()\n\n\n \ndef solve(E,eps0,eps1):\n return E\n\n\nsize = 16\ne =1.5# 2.25\nk = 2*np.pi/(size/1)\nF = get_plane_wave(1,k,size)\n#mask = empty_mask(size)\n#mask = rectangle_mask(size)\nmask = circular_mask(size)\neps = get_eps_from_mask(e,mask)\nvisualize(F.real,\"Initial field (real part)\")\nvisualize(mask,\"Mask\",\"gray\")\n\n \n\n",
"_____no_output_____"
],
[
"import scipy.sparse.linalg as spla\nimport inspect\nimport time\n\nx_last = get_plane_wave(1,k,size).reshape(-1,1)\ndef plot__solution_re_im_abs_mask(solution, size):\n solution_re = solution.real.reshape(-1,size)\n solution_im = solution.imag.reshape(-1,size)\n solution_abs = np.abs(solution).reshape(-1,size)\n solution_abs_mask = np.abs(solution).reshape(-1,size)*(1-mask)\n visualize(solution_re,\"Real\")\n visualize(solution_im,\"Imag\")\n visualize(solution_abs,\"Abs\",\"gray\")\n visualize(solution_abs_mask,\"Abs with mask\")\n return solution_re, solution_im, solution_abs, solution_abs_mask\n\ndef plot_relative_residuals_norms(t, residuals, relative_vector):\n plt.semilogy(t, residuals/np.linalg.norm(relative_vector), 'x-', label=\"Generalized Minimal RESidual iterations\")\n plt.legend()\n plt.title('Relative residual (depends on time), number of iterations = %i' % len(residuals))\n plt.xlabel('Seconds')\n plt.ylabel('Relative residual norm')\n plt.show()\n plt.semilogy(np.arange(len(residuals), 0, -1), residuals/np.linalg.norm(relative_vector), label=\"Generalized Minimal RESidual iterations\")\n plt.legend()\n plt.title('Relative residual (depends on number of step), number of iterations = %i' % len(residuals))\n plt.xlabel('Number of step') \n plt.ylabel('Relative residual norm')\n plt.show()\n \ndef gmres_solver(A, b, x0, maxiter, tol, \n draw_graph_flag = False, \n convergence_info = False, \n display_convergence_info = False,\n display_achieved_tolerance = False):\n gmres_residuals_with_t = []\n t0 = time.time()\n solution, info = spla.gmres(A, b, x0=x0, maxiter = maxiter, tol = tol, restart = maxiter, callback = lambda x:\n gmres_residuals_with_t.append([(inspect.currentframe().f_back).f_locals['resid'], time.time()])\n )\n if len(gmres_residuals_with_t)>1:\n gmres_residuals_with_t = np.array(gmres_residuals_with_t).T\n gmres_residuals_with_t[1] = gmres_residuals_with_t[1]-t0\n gmres_t, gmres_residuals = gmres_residuals_with_t\n else:\n gmres_t, gmres_residuals = [],[]\n if (display_convergence_info == True):\n if (info == 0):\n print(\"Status: Converged, successful exit\")\n else:\n if (info > 0):\n print(\"Status: Convergence to tolerance not achieved, number of iterations\")\n else:\n print(\"Status: Illegal input or breakdown\")\n if ( draw_graph_flag == True ):\n plot_relative_residuals_norms(gmres_t, gmres_residuals, b) \n if ( display_achieved_tolerance == True):\n print('Achieved tolerance = ', np.linalg.norm(A.dot(solution.reshape(-1,1))-b)/np.linalg.norm(b))\n if (convergence_info == True):\n return solution, info\n return solution\n\ndef launch_solver(eps, k, x0 = None ,maxiter=300, tol = 1e-6):\n global x_last\n size = int(np.sqrt(eps.shape[0]))\n A = spla.LinearOperator(shape = (size**2, size**2), matvec = lambda x: matvec(x,eps,k))\n b = get_plane_wave(1,k,size).reshape(-1,1)\n if x0 is None:\n x0 = x_last\n solution, info = gmres_solver(A, b, x0, \n maxiter=maxiter, \n tol=tol,\n convergence_info = True)\n x_last = solution.reshape(-1,1)\n return solution, info\n\ndef show_residuals(eps, k, maxiter=300, tol = 1e-6):\n size = int(np.sqrt(eps.shape[0]))\n A = spla.LinearOperator(shape = (size**2, size**2), matvec = lambda x: matvec(x,eps,k))\n b = get_plane_wave(1,k,size).reshape(-1,1)\n x0 = np.ones(size**2).reshape(-1,1)\n gmres_solver(A, b, x0, \n maxiter=maxiter, \n tol=tol,\n draw_graph_flag = True)\n \nt = time.time()\nsolution, info = launch_solver(eps=eps, k=k)\nprint(t-time.time())\nshow_residuals(eps=eps, k=k)\nsolution_re, solution_im, solution_abs, 
solution_abs_mask = plot__solution_re_im_abs_mask(solution, size)",
"-0.010435104370117188\n"
],
[
"def choose_direction(eps, k, maxiter=300, tol=1e-6, x=None):\n if x is None:\n x, info = launch_solver(eps=eps, k=k, maxiter=maxiter, tol=tol)\n x_abs = np.abs(x)\n x_max = np.max(x_abs)\n indeces = np.argwhere( x_abs == x_max )\n choose_direction = np.zeros(x.shape[0], dtype = np.complex64)\n choose_direction[indeces] = (np.sign(x.real)/2+1j*np.sign(x.imag)/2)[indeces]/indeces.shape[0]\n return choose_direction\n\ndef get_Jacobi_diagonal(mask, e, k, eps = None, x0 = None , maxiter=300, tol = 1e-6):\n if eps is None:\n eps = get_eps_from_mask(e,mask)\n solution, info = launch_solver(eps=eps, x0=x0, k=k, maxiter=maxiter, tol = tol)\n solution_with_coeff = k**2*(e-1)*solution\n zero_vector = np.zeros(solution_with_coeff.shape[0], dtype = np.complex64)\n Jacobi_diagonal = np.zeros(solution.shape[0], dtype = np.complex64 )\n for i in range(solution.shape[0]):\n solution_sparse_column = zero_vector.copy()\n solution_sparse_column[i] = solution_with_coeff[i]\n A = spla.LinearOperator(shape = (size**2, size**2), matvec = lambda x: matvec(x,eps,k))\n b = G_matvec(solution_sparse_column, k) \n Jacobi_diagonal[i] = gmres_solver(A=A, b=b, x0=solution, maxiter=maxiter, tol=tol)[i]\n return Jacobi_diagonal\n\ndef get_grad(mask, e=e, k=k, x = None, eps = None, x0 = None , maxiter=300, tol = 1e-6):\n if eps is None:\n eps = get_eps_from_mask(e,mask)\n solution, info = launch_solver(eps=eps, x0=x0, k=k, maxiter=maxiter, tol = tol)\n direction = choose_direction(eps=eps, k=k, maxiter=maxiter, tol=tol, x=solution)\n solution_with_coeff = k**2*(e-1)*solution\n zero_vector = np.zeros(solution_with_coeff.shape[0], dtype = np.complex64)\n Jacobi_diagonal = np.zeros(solution.shape[0], dtype = np.complex64 )\n for i in np.argwhere(direction!=0):\n solution_sparse_column = zero_vector.copy()\n solution_sparse_column[i] = solution_with_coeff[i]\n A = spla.LinearOperator(shape = (size**2, size**2), matvec = lambda x: matvec(x,eps,k))\n b = G_matvec(solution_sparse_column, k) \n Jacobi_diagonal[i] = gmres_solver(A=A, b=b, x0=solution, maxiter=maxiter, tol=tol)[i]\n return np.abs(Jacobi_diagonal)\n\nprint(get_grad(mask, e, k, maxiter=300, tol = 1e-6))",
"[ 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 1.19321716 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. ]\n"
],
[
"from scipy.optimize import minimize\n\ndef plot_solution(y):\n mask = get_fild_value(y,20)\n \n print(np.min(mask))\n print(np.max(mask))\n eps = get_eps_from_mask(e,mask).reshape((-1,1))\n print(np.min(eps))\n print(np.max(eps))\n field, info = launch_solver(eps=eps, k=k)\n visualize(mask,\"Mask\",\"gray\")\n #visualize(field.real.reshape(-1,size),\"Field (Real part)\")\n visualize(np.abs(field).reshape(-1,size),\"Field (Abs)\")\n print(objective(y))\n print(np.max(np.abs(field)))\n\ni=0\ndef get_fild_value(y,p):\n x = (np.tanh(p*y)+1)/2\n return x\n\ndef callback(x):\n global i\n i+=1\n print(i)\n\ndef penalty(x,p):\n return np.sum(1-x**p-(1-x)**p)\n #return np.sum(x*(1-x))\n#obj = 0\ndef objective(y):\n\n mask = get_fild_value(y,4)\n eps = get_eps_from_mask(e,mask).reshape((-1,1))\n field, info = launch_solver(eps=eps, k=k)\n\n \n #global obj\n mask = get_fild_value(y,20)\n eps = get_eps_from_mask(e,mask).reshape((-1,1))\n field, info = launch_solver(eps=eps, k=k)\n if info !=0:\n raise RuntimeError()\n obj = -np.max(np.abs(field))#+penalty(mask,20)*1\n #print(obj)\n return obj\n\n# x_empty_ind = np.argwhere((-0.1<mask)*(mask<0.1))\n# x_empty = x[x_empty_ind]\n# x_empty = x\n# if info != 0:\n# raise RuntimeError()\n# if x_empty.shape[0]!=0:\n# #print(np.max(x_empty.imag))\n# obj = -np.max(np.abs(x_empty))+penalty(mask,20)*0.001\n# else:\n# obj = penalty(mask,20)*0.001\n# #print(obj)\n# return obj\n\ndef get_random_mask(size):\n mask = np.random.rand(size,size)\n return mask\n\n# def search_with_restarts(num):\n \n#y = np.random.random(size,size)\n# mask =circular_mask(size)\nnoize = (get_random_mask(size)-0.5)*10\n# mask = (mask + noize)/np.max(noize+0.001)\ny = circular_mask(size)-0.5+noize\nobj0 = objective(y)\nmask = get_fild_value(y,20)\nplot_solution(y)\n#bns = tuple((0,1) for _ in range(size**2))\nsol = minimize(objective,y,method = \"BFGS\",options={'maxiter': 10, 'gtol':1e-9}, callback = callback)\nbest_y = sol.x.reshape(-1,size)\nplot_solution(best_y)\nprint(obj0)\n\n\n ",
"0.0\n1.0\n1.0\n1.5\n"
],
[
"# import cvxpy as cvx\n\n# size = 2\n# k = 2*np.pi/(size/7)\n# F = get_plane_wave(1,k,size)\n\n# x = cvx.Variable(size**2)\n# eps = cvx.Variable(size**2)\n# y = cvx.Variable(1)\n\n\n\n# # lambda val: matvec2(val,eps,k,e\n# obj = cvx.Maximize(y)\n# #A = spla.LinearOperator(shape = (size**2, size**2), matvec = lambda val: val)\n# #print(A.dot([1,1,0,0]))\n# costrs = [x>F.reshape(-1,1),y>=x]\n# prob = cvx.Problem(obj,costrs)\n# prob.solve()\n# print(prob.value)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d051586f1ebb0e9fb7555a1b0eeae207e357df95 | 113,203 | ipynb | Jupyter Notebook | notebooks/Dashboard_Data.ipynb | Altaf410/An-Exploration-of-the-Unbanked-in-the-US | 260fb6b9a412584594fc1273952c4aa41e6d656e | [
"MIT"
] | null | null | null | notebooks/Dashboard_Data.ipynb | Altaf410/An-Exploration-of-the-Unbanked-in-the-US | 260fb6b9a412584594fc1273952c4aa41e6d656e | [
"MIT"
] | null | null | null | notebooks/Dashboard_Data.ipynb | Altaf410/An-Exploration-of-the-Unbanked-in-the-US | 260fb6b9a412584594fc1273952c4aa41e6d656e | [
"MIT"
] | 4 | 2020-12-19T22:23:21.000Z | 2020-12-28T23:05:59.000Z | 31.81647 | 1,517 | 0.321458 | [
[
[
"# DASHBOARD LINK\n\nhttps://public.tableau.com/profile/altaf.lakhi2442#!/vizhome/UnbankedExploration/Dashboard1",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport seaborn as sns",
"_____no_output_____"
],
[
"CPS_df = pd.read_csv(\"../data/processed/CPS_2009_2017_clean.csv\")\nACS_df = pd.read_csv(\"../data/processed/ACS_2011_2017_clean.csv\")\nNFCS_df = pd.read_csv(\"../data/processed/NFCS_2009_2018_clean.csv\")",
"C:\\Users\\Desmond\\anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3071: DtypeWarning: Columns (3,4,5,6) have mixed types.Specify dtype option on import or set low_memory=False.\n has_raised = await self.run_ast_nodes(code_ast.body, cell_name,\n"
],
[
"frames = [CPS_df, ACS_df, NFCS_df]",
"_____no_output_____"
],
[
"#declaring STATE list\nSTATES = [\"Alabama\",\"Alaska\",\"Arizona\",\"Arkansas\",\"California\",\"Colorado\",\n \"Connecticut\",\"Delaware\",\"District of Columbia\", \"Florida\",\"Georgia\",\"Hawaii\",\n \"Idaho\",\"Illinois\", \"Indiana\",\"Iowa\",\"Kansas\",\"Kentucky\",\"Louisiana\",\"Maine\",\n \"Maryland\",\"Massachusetts\",\"Michigan\",\"Minnesota\",\"Mississippi\",\"Missouri\",\"Montana\",\n \"Nebraska\",\"Nevada\",\"New Hampshire\",\"New Jersey\",\"New Mexico\",\"New York\",\n \"North Carolina\",\"North Dakota\",\"Ohio\",\"Oklahoma\",\"Oregon\",\"Pennsylvania\",\n \"Rhode Island\",\"South Carolina\",\"South Dakota\",\"Tennessee\",\"Texas\",\"Utah\",\n \"Vermont\",\"Virginia\",\"Washington\",\"West Virginia\",\"Wisconsin\",\"Wyoming\"]",
"_____no_output_____"
],
[
"#generating state:state_number dictionary\nSTATE_FIPS = list(frames[0].STATEFIP.unique())\n\nSTATE = {}\nfor state, name in zip(STATE_FIPS, STATES):\n STATE[state] = name",
"_____no_output_____"
],
[
"#generating STATE column for pertinent dfs\nCPS_df[\"STATE\"] = CPS_df.STATEFIP.map(STATE)\nACS_df[\"STATE\"] = ACS_df.STATEFIP.map(STATE)",
"_____no_output_____"
],
[
"counties = pd.read_csv(\"../data/external/county_fips_master.csv\", engine='python')",
"_____no_output_____"
]
],
[
[
"# Aggregatting CPS Data",
"_____no_output_____"
]
],
[
[
"pop_prop = pd.read_csv(\"../data/interim/population_proportions\")\npop_prop.head()",
"_____no_output_____"
],
[
"pop_prop = pop_prop[[\"YEAR\", \"BUNBANKED\", \"STATEFIP\"]]\n\npop_prop",
"_____no_output_____"
],
[
"state_year_agg = []\n\nfor year in pop_prop.YEAR.unique():\n holder = pop_prop[pop_prop.YEAR == year]\n state_year_agg.append(holder)\n #national_agg_sums = [pop_prop[pop_prop.STATEFIP == state].BUNBANKED.sum() for state in pop_prop.STATEFIP.unique()]\n #print(f\"{year}\")\n #display(holder)",
"_____no_output_____"
],
[
"state_survey_pop_agg = pd.concat(state_year_agg)",
"_____no_output_____"
],
[
"state_survey_pop_agg[\"STATE\"] = state_survey_pop_agg.STATEFIP.map(STATE)",
"_____no_output_____"
],
[
"state_survey_pop_agg",
"_____no_output_____"
],
[
"state_survey_pop_agg.rename(columns = {\"BUNBANKED\": \"SURVEY_POP\"}, inplace = True)",
"_____no_output_____"
],
[
"state_survey_pop_agg",
"_____no_output_____"
],
[
"CPS_agg = pd.DataFrame()\nCPS_agg[\"STATE\"] = CPS_df.STATE\nCPS_agg[\"UNDERBANKED\"] = CPS_df.BUNBANKED\nCPS_agg[\"YEAR\"] = CPS_df.YEAR\n\n#copying aggregation before grouping for additional breakdowns\nCPS_reason_agg = CPS_agg.copy(deep=True)\n\nCPS_agg = CPS_agg.groupby([\"YEAR\", \"STATE\"]).count()\n\nCPS_agg = CPS_agg.reset_index()\n\nCPS_agg",
"_____no_output_____"
],
[
"state_survey_pop_agg = state_survey_pop_agg[state_survey_pop_agg.YEAR.isin(CPS_agg.YEAR.unique())].reset_index()\n\nstate_survey_pop_agg",
"_____no_output_____"
],
[
"CPS_agg[\"SURVEY_POP\"] = state_survey_pop_agg.SURVEY_POP",
"_____no_output_____"
],
[
"CPS_agg",
"_____no_output_____"
],
[
"CPS_agg.to_csv(\"../data/processed/Dashboard_Data/CPS_STATE_Aggregate.csv\")",
"_____no_output_____"
],
[
"#Isolating the specific northwest while \nPNW = [\"Washington\", \"Oregon\", \"Wyoming\", \"Montana\", \"Idaho\"]",
"_____no_output_____"
],
[
"PNW_CPS_agg = CPS_agg[CPS_agg.STATE.isin(PNW)]\n\nPNW_CPS_agg",
"_____no_output_____"
],
[
"PNW_CPS_agg.to_csv(\"../data/processed/Dashboard_Data/CPS_PNW_STATE_Aggregate.csv\")",
"_____no_output_____"
]
],
[
[
"----------------------------------------------------------------------------------------------",
"_____no_output_____"
],
[
"# Aggregatting ACS Data",
"_____no_output_____"
]
],
[
[
"#ACS_df = pd.read_csv(\"../data/processed/ACS_2011_2017_clean\")\n#ACS_df[\"STATE\"] = ACS_df.STATEFIP.map(STATE)",
"_____no_output_____"
],
[
"ACS_df.head()",
"_____no_output_____"
],
[
"ACS_df.HHWT",
"_____no_output_____"
],
[
"ACS_df = ACS_df.drop(columns = ['Unnamed: 0'])",
"_____no_output_____"
],
[
"filtering_columns = ACS_df.columns",
"_____no_output_____"
],
[
"filtering_columns = filtering_columns.drop([\"STATE\", \"YEAR\", \"SAMPLE\", \"REGION\", 'STATEFIP'])",
"_____no_output_____"
],
[
"filtering_columns",
"_____no_output_____"
],
[
"pivot_df = ACS_df.copy(deep=True)\n#using filter to generate multiple pivot tables for data vizualization\nfor _filter in filtering_columns:\n pivot_df[f\"{_filter}_COUNTS\"] = pivot_df[_filter]\n pivot_df_final = pivot_df[[\"YEAR\", \"REGION\", \"STATE\", _filter, f\"{_filter}_COUNTS\"]].groupby([\"YEAR\", \"REGION\", \"STATE\", _filter]).count()\n #display(pivot_df[[\"YEAR\", \"REGION\", \"STATE\", _filter, f\"{_filter}_COUNTS\"]].groupby([\"YEAR\", \"REGION\", \"STATE\", _filter]).count())\n #display(pivot_df_final)\n pivot_df_final.to_csv(f\"../data/processed/Dashboard_Data/{_filter}_ACS_AGG.csv\")",
"_____no_output_____"
],
[
"ACS_df.groupby([\"YEAR\", \"REGION\", \"STATE\", \"CINETHH\"]).count()#.value_counts()",
"_____no_output_____"
],
[
"ACS_df.columns",
"_____no_output_____"
]
],
[
[
"* HHINCOME = House Hold Income\n* MARST = Marital Status\n* OCC2010 = Occupation\n* CINETHH = Access to Internet\n* CILAPTOP = Laptop, desktop, or notebook computer\n* CISMRTPHN = Smartphone\n* CITABLET = Tablet or other portable wireless computer\n* CIHAND = Handheld Computer\n* CIHISPEED = Broadband (high speed) Internet service such as cable, fiber optic, or DSL service\n* CISAT = Satellite internet service\n* CIDIAL = Dial-up Service\n* CIOTHSVC = Other Internet Service",
"_____no_output_____"
]
],
[
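[
"# Not part of the original pipeline: a small lookup that simply restates the\n# ACS variable descriptions listed above, e.g. for relabelling aggregate\n# columns in the dashboard. Keys are the survey codes from the markdown list.\nACS_VARIABLE_LABELS = {\n    'HHINCOME': 'House Hold Income',\n    'MARST': 'Marital Status',\n    'OCC2010': 'Occupation',\n    'CINETHH': 'Access to Internet',\n    'CILAPTOP': 'Laptop, desktop, or notebook computer',\n    'CISMRTPHN': 'Smartphone',\n    'CITABLET': 'Tablet or other portable wireless computer',\n    'CIHAND': 'Handheld Computer',\n    'CIHISPEED': 'Broadband (high speed) Internet service',\n    'CISAT': 'Satellite internet service',\n    'CIDIAL': 'Dial-up Service',\n    'CIOTHSVC': 'Other Internet Service',\n}\n\n# usage sketch: some_aggregate.rename(columns = ACS_VARIABLE_LABELS)",
"_____no_output_____"
],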
[
"ACS_agg = pd.DataFrame()\nACS_agg[\"STATE\"] = ACS_df.STATE\nACS_agg[\"OCC2010\"] = ACS_df.OCC2010\nACS_agg[\"CINETHH\"] = ACS_df.CINETHH\nACS_agg[\"CILAPTOP\"] = ACS_df.CILAPTOP\nACS_agg[\"CISMRTPHN\"] = ACS_df.CISMRTPHN\nACS_agg[\"CITABLET\"] = ACS_df.CITABLET\nACS_agg[\"CIHAND\"] = ACS_df.CIHAND\nACS_agg[\"CIHISPEED\"] = ACS_df.CIHISPEED\nACS_agg[\"CISAT\"] = ACS_df.CISAT\nACS_agg[\"CIDIAL\"] = ACS_df.CIDIAL\nACS_agg[\"CIOTHSVC\"] = ACS_df.CIOTHSVC\nACS_agg[\"YEAR\"] = ACS_df.YEAR",
"_____no_output_____"
],
[
"ACS_agg = ACS_agg.groupby([\"STATE\", \"YEAR\"]).count()",
"_____no_output_____"
],
[
"ACS_agg = ACS_agg.reset_index()\n\nACS_agg",
"_____no_output_____"
],
[
"ACS_agg.to_csv(\"../data/processed/Dashboard_Data/ACS_STATE_Aggregate.csv\")",
"_____no_output_____"
]
],
[
[
"----------------------------------------------------------------------------------------------",
"_____no_output_____"
],
[
"# Aggregating NFCS",
"_____no_output_____"
]
],
[
[
"NFCS_df.head()",
"_____no_output_____"
],
[
"NFCS_df.drop(\"Unnamed: 0\", axis=1,inplace=True)",
"_____no_output_____"
],
[
"#declaring STATE list\nSTATES = [\"Alabama\",\"Alaska\",\"Arizona\",\"Arkansas\",\"California\",\"Colorado\",\n \"Connecticut\",\"Delaware\",\"District of Columbia\", \"Florida\",\"Georgia\",\"Hawaii\",\n \"Idaho\",\"Illinois\", \"Indiana\",\"Iowa\",\"Kansas\",\"Kentucky\",\"Louisiana\",\"Maine\",\n \"Maryland\",\"Massachusetts\",\"Michigan\",\"Minnesota\",\"Mississippi\",\"Missouri\",\"Montana\",\n \"Nebraska\",\"Nevada\",\"New Hampshire\",\"New Jersey\",\"New Mexico\",\"New York\",\n \"North Carolina\",\"North Dakota\",\"Ohio\",\"Oklahoma\",\"Oregon\",\"Pennsylvania\",\n \"Rhode Island\",\"South Carolina\",\"South Dakota\",\"Tennessee\",\"Texas\",\"Utah\",\n \"Vermont\",\"Virginia\",\"Washington\",\"West Virginia\",\"Wisconsin\",\"Wyoming\"]",
"_____no_output_____"
],
[
"#generating state:state_number dictionary\nSTATE_NFCS = list(NFCS_df.STATE.unique())\nSTATE_NFCS.sort()\n\nSTATE = {}\nfor state, name in zip(STATE_NFCS, STATES):\n STATE[state] = name",
"_____no_output_____"
],
[
"NFCS_df.STATE = NFCS_df.STATE.map(STATE)",
"_____no_output_____"
],
[
"NFCS_df.STATE",
"_____no_output_____"
],
[
"NFCS_agg = NFCS_df.groupby([\"STATE\", \"YEAR\"]).count()\n\nNFCS_agg",
"_____no_output_____"
],
[
"factors = list(NFCS_df.columns)",
"_____no_output_____"
],
[
"factors.remove(\"STATE\")\nfactors.remove(\"YEAR\")",
"_____no_output_____"
],
[
"#using filter to generate multiple pivot tables for data vizualization\npivot_df = NFCS_df.copy(deep=True)\nfor factor in factors:\n pivot_df[f\"{factor}_COUNTS\"] = pivot_df[factor]\n pivot_df_final = pivot_df[[\"YEAR\", \"STATE\", factor, f\"{factor}_COUNTS\"]].groupby([\"YEAR\", \"STATE\", factor]).count()\n #display(pivot_df[[\"YEAR\", \"REGION\", \"STATE\", factor, f\"{factor}_COUNTS\"]].groupby([\"YEAR\", \"REGION\", \"STATE\", factor]).count())\n display(pivot_df_final)\n pivot_df_final.to_csv(f\"../data/processed/Dashboard_Data/{factor}_NFCS_AGG.csv\")",
"_____no_output_____"
],
[
"NFCS_agg.to_csv(\"../data/processed/Dashboard_Data/NFCS_STATE_Aggregate.csv\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0515bf6bbf8f6b213dd4b99961794061578ff5c | 9,997 | ipynb | Jupyter Notebook | aml/configuration.ipynb | kawo123/azure-e2e-ml | 349fde1e94447babc925e336fae714962d1122be | [
"MIT"
] | null | null | null | aml/configuration.ipynb | kawo123/azure-e2e-ml | 349fde1e94447babc925e336fae714962d1122be | [
"MIT"
] | null | null | null | aml/configuration.ipynb | kawo123/azure-e2e-ml | 349fde1e94447babc925e336fae714962d1122be | [
"MIT"
] | null | null | null | 48.529126 | 505 | 0.623987 | [
[
[
"Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"# Configuration\n\n_**Setting up your Azure Machine Learning services workspace and configuring your notebook library**_\n\n---\n---\n\n## Table of Contents\n\n1. [Introduction](#Introduction)\n 1. What is an Azure Machine Learning workspace\n1. [Setup](#Setup)\n 1. Azure subscription\n 1. Azure ML SDK and other library installation\n 1. Azure Container Instance registration\n1. [Configure your Azure ML Workspace](#Configure%20your%20Azure%20ML%20workspace)\n 1. Workspace parameters\n 1. Access your workspace\n 1. Create a new workspace\n 1. Create compute resources\n1. [Next steps](#Next%20steps)\n\n---\n\n## Introduction\n\nThis notebook configures your library of notebooks to connect to an Azure Machine Learning (ML) workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook library to use an existing workspace or create a new workspace.\n\nTypically you will need to run this notebook only once per notebook library as all other notebooks will use connection information that is written here. If you want to redirect your notebook library to work with a different workspace, then you should re-run this notebook.\n\nIn this notebook you will\n* Learn about getting an Azure subscription\n* Specify your workspace parameters\n* Access or create your workspace\n* Add a default compute cluster for your workspace\n\n### What is an Azure Machine Learning workspace\n\nAn Azure ML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.",
"_____no_output_____"
],
[
"## Setup\n\nThis section describes activities required before you can access any Azure ML services functionality.",
"_____no_output_____"
],
[
"### 1. Azure Subscription\n\nIn order to create an Azure ML Workspace, first you need access to an Azure subscription. An Azure subscription allows you to manage storage, compute, and other assets in the Azure cloud. You can [create a new subscription](https://azure.microsoft.com/en-us/free/) or access existing subscription information from the [Azure portal](https://portal.azure.com). Later in this notebook you will need information such as your subscription ID in order to create and access AML workspaces.\n\n### 2. Azure ML SDK and other library installation\n\nIf you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.\n\nAlso install following libraries to your environment. Many of the example notebooks depend on them\n\n```\n(myenv) $ conda install -y matplotlib tqdm scikit-learn\n```\n\nOnce installation is complete, the following cell checks the Azure ML SDK version:",
"_____no_output_____"
]
],
[
[
"import azureml.core\n\nprint(\"This notebook was created using version 1.0.74.1 of the Azure ML SDK\")\nprint(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")",
"_____no_output_____"
]
],
[
[
"## Configure your Azure ML workspace\n\n### Workspace parameters\n\nTo use an AML Workspace, you will need to import the Azure ML SDK and supply the following information:\n* Your subscription id\n* A resource group name\n* (optional) The region that will host your workspace\n* A name for your workspace\n\nYou can get your subscription ID from the [Azure portal](https://portal.azure.com).\n\nYou will also need access to a [_resource group_](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups), which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n\nThe region to host your workspace will be used if you are creating a new workspace. You do not need to specify this if you are using an existing workspace. You can find the list of supported regions [here](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=machine-learning-service). You should pick a region that is close to your location or that contains your data.\n\nThe name for your workspace is unique within the subscription and should be descriptive enough to discern among other AML Workspaces. The subscription may be used only by you, or it may be used by your department or your entire enterprise, so choose a name that makes sense for your situation.\n\nThe following cell allows you to specify your workspace parameters. This cell uses the python method `os.getenv` to read values from environment variables which is useful for automation. If no environment variable exists, the parameters will be set to the specified default values. \n\nIf you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.\n\nReplace the default values in the cell below with your workspace parameters",
"_____no_output_____"
]
],
[
[
"import os\n\nsubscription_id = os.getenv(\"SUBSCRIPTION_ID\", default=\"<my-subscription-id>\")\nresource_group = os.getenv(\"RESOURCE_GROUP\", default=\"<my-resource-group>\")\nworkspace_name = os.getenv(\"WORKSPACE_NAME\", default=\"<my-workspace-name>\")\nworkspace_region = os.getenv(\"WORKSPACE_REGION\", default=\"eastus2\")",
"_____no_output_____"
]
],
[
[
"### Access your workspace\n\nThe following cell uses the Azure ML SDK to attempt to load the workspace specified by your parameters. If this cell succeeds, your notebook library will be configured to access the workspace from all notebooks using the `Workspace.from_config()` method. The cell can fail if the specified workspace doesn't exist or you don't have permissions to access it. ",
"_____no_output_____"
]
],
[
[
"from azureml.core import Workspace\n\ntry:\n ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)\n # write the details of the workspace to a configuration file to the notebook library\n ws.write_config()\n print(\"Workspace configuration succeeded. Skip the workspace creation steps below\")\nexcept:\n print(\"Workspace not accessible. Change your parameters or create a new workspace below\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0515c6cd126ab1e80ff25ace13057c0d7fc9cfe | 6,046 | ipynb | Jupyter Notebook | online_judges/busiest_period/busiest_period_challenge.ipynb | benkeesey/interactive-coding-challenges | 4994452a729f4bcfab5c8a4225f2b5e004b79075 | [
"Apache-2.0"
] | 27,173 | 2015-07-06T12:36:05.000Z | 2022-03-31T23:56:41.000Z | online_judges/busiest_period/busiest_period_challenge.ipynb | benkeesey/interactive-coding-challenges | 4994452a729f4bcfab5c8a4225f2b5e004b79075 | [
"Apache-2.0"
] | 143 | 2015-07-07T05:13:11.000Z | 2021-12-07T17:05:54.000Z | online_judges/busiest_period/busiest_period_challenge.ipynb | benkeesey/interactive-coding-challenges | 4994452a729f4bcfab5c8a4225f2b5e004b79075 | [
"Apache-2.0"
] | 4,657 | 2015-07-06T13:28:02.000Z | 2022-03-31T10:11:28.000Z | 25.948498 | 185 | 0.507608 | [
[
[
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).",
"_____no_output_____"
],
[
"# Challenge Notebook",
"_____no_output_____"
],
[
"## Problem: Given an array of (unix_timestamp, num_people, EventType.ENTER or EventType.EXIT), find the busiest period.\n\n* [Constraints](#Constraints)\n* [Test Cases](#Test-Cases)\n* [Algorithm](#Algorithm)\n* [Code](#Code)\n* [Unit Test](#Unit-Test)\n* [Solution Notebook](#Solution-Notebook)",
"_____no_output_____"
],
[
"## Constraints\n\n* Can we assume the input array is valid?\n * Check for None\n* Can we assume the elements of the input array are valid?\n * Yes\n* Is the input sorted by time?\n * No\n* Can you have enter and exit elements for the same timestamp?\n * Yes you can, order of enter and exit is not guaranteed\n* Could we have multiple enter events (or multiple exit events) for the same timestamp?\n * No\n* What is the format of the output?\n * An array of timestamps [t1, t2]\n* Can we assume the starting number of people is zero?\n * Yes\n* Can we assume the inputs are valid?\n * No\n* Can we assume this fits memory?\n * Yes",
"_____no_output_____"
],
[
"## Test Cases\n\n* None -> TypeError\n* [] -> None\n* General case\n\n<pre>\ntimestamp num_people event_type\n1 2 EventType.ENTER\n3 1 EventType.ENTER\n3 2 EventType.EXIT\n7 3 EventType.ENTER\n8 2 EventType.EXIT\n9 2 EventType.EXIT\n\nresult = Period(7, 8)\n</pre>",
"_____no_output_____"
],
[
"## Algorithm\n\nRefer to the [Solution Notebook](). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.",
"_____no_output_____"
],
[
"## Code",
"_____no_output_____"
]
],
[
[
"from enum import Enum\n\n\nclass Data(object):\n\n def __init__(self, timestamp, num_people, event_type):\n self.timestamp = timestamp\n self.num_people = num_people\n self.event_type = event_type\n\n def __lt__(self, other):\n return self.timestamp < other.timestamp\n\n\nclass Period(object):\n\n def __init__(self, start, end):\n self.start = start\n self.end = end\n\n def __eq__(self, other):\n return self.start == other.start and self.end == other.end\n\n def __repr__(self):\n return str(self.start) + ', ' + str(self.end)\n\n\nclass EventType(Enum):\n\n ENTER = 0\n EXIT = 1",
"_____no_output_____"
],
[
"class Solution(object):\n\n def find_busiest_period(self, data):\n # TODO: Implement me\n pass",
"_____no_output_____"
]
],
[
[
"## Unit Test",
"_____no_output_____"
],
[
"**The following unit test is expected to fail until you solve the challenge.**",
"_____no_output_____"
]
],
[
[
"# %load test_find_busiest_period.py\nimport unittest\n\n\nclass TestSolution(unittest.TestCase):\n\n def test_find_busiest_period(self):\n solution = Solution()\n self.assertRaises(TypeError, solution.find_busiest_period, None)\n self.assertEqual(solution.find_busiest_period([]), None)\n data = [\n Data(3, 2, EventType.EXIT),\n Data(1, 2, EventType.ENTER),\n Data(3, 1, EventType.ENTER),\n Data(7, 3, EventType.ENTER),\n Data(9, 2, EventType.EXIT),\n Data(8, 2, EventType.EXIT),\n ]\n self.assertEqual(solution.find_busiest_period(data), Period(7, 8))\n print('Success: test_find_busiest_period')\n\n\ndef main():\n test = TestSolution()\n test.test_find_busiest_period()\n\n\nif __name__ == '__main__':\n main()",
"_____no_output_____"
]
],
[
[
"## Solution Notebook\n\nReview the [Solution Notebook]() for a discussion on algorithms and code solutions.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d051627aac19c924e408391ecfb882365dbf5994 | 10,975 | ipynb | Jupyter Notebook | classical-systems/CS16_Probabilistic_States.ipynb | dev-aditya/QWorld_Summer_School_2021 | 1b8711327845617ca8dc32ff2a20f461d0ee01c7 | [
"Apache-2.0",
"CC-BY-4.0"
] | 1 | 2021-08-15T10:57:16.000Z | 2021-08-15T10:57:16.000Z | classical-systems/CS16_Probabilistic_States.ipynb | dev-aditya/QWorld_Summer_School_2021 | 1b8711327845617ca8dc32ff2a20f461d0ee01c7 | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | classical-systems/CS16_Probabilistic_States.ipynb | dev-aditya/QWorld_Summer_School_2021 | 1b8711327845617ca8dc32ff2a20f461d0ee01c7 | [
"Apache-2.0",
"CC-BY-4.0"
] | 3 | 2021-08-11T11:12:38.000Z | 2021-09-14T09:15:08.000Z | 38.374126 | 309 | 0.532301 | [
[
[
"<a href=\"https://qworld.net\" target=\"_blank\" align=\"left\"><img src=\"../qworld/images/header.jpg\" align=\"left\"></a>\n$ \\newcommand{\\bra}[1]{\\langle #1|} $\n$ \\newcommand{\\ket}[1]{|#1\\rangle} $\n$ \\newcommand{\\braket}[2]{\\langle #1|#2\\rangle} $\n$ \\newcommand{\\dot}[2]{ #1 \\cdot #2} $\n$ \\newcommand{\\biginner}[2]{\\left\\langle #1,#2\\right\\rangle} $\n$ \\newcommand{\\mymatrix}[2]{\\left( \\begin{array}{#1} #2\\end{array} \\right)} $\n$ \\newcommand{\\myvector}[1]{\\mymatrix{c}{#1}} $\n$ \\newcommand{\\myrvector}[1]{\\mymatrix{r}{#1}} $\n$ \\newcommand{\\mypar}[1]{\\left( #1 \\right)} $\n$ \\newcommand{\\mybigpar}[1]{ \\Big( #1 \\Big)} $\n$ \\newcommand{\\sqrttwo}{\\frac{1}{\\sqrt{2}}} $\n$ \\newcommand{\\dsqrttwo}{\\dfrac{1}{\\sqrt{2}}} $\n$ \\newcommand{\\onehalf}{\\frac{1}{2}} $\n$ \\newcommand{\\donehalf}{\\dfrac{1}{2}} $\n$ \\newcommand{\\hadamard}{ \\mymatrix{rr}{ \\sqrttwo & \\sqrttwo \\\\ \\sqrttwo & -\\sqrttwo }} $\n$ \\newcommand{\\vzero}{\\myvector{1\\\\0}} $\n$ \\newcommand{\\vone}{\\myvector{0\\\\1}} $\n$ \\newcommand{\\stateplus}{\\myvector{ \\sqrttwo \\\\ \\sqrttwo } } $\n$ \\newcommand{\\stateminus}{ \\myrvector{ \\sqrttwo \\\\ -\\sqrttwo } } $\n$ \\newcommand{\\myarray}[2]{ \\begin{array}{#1}#2\\end{array}} $\n$ \\newcommand{\\X}{ \\mymatrix{cc}{0 & 1 \\\\ 1 & 0} } $\n$ \\newcommand{\\I}{ \\mymatrix{rr}{1 & 0 \\\\ 0 & 1} } $\n$ \\newcommand{\\Z}{ \\mymatrix{rr}{1 & 0 \\\\ 0 & -1} } $\n$ \\newcommand{\\Htwo}{ \\mymatrix{rrrr}{ \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2} \\\\ \\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{2} \\\\ \\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} \\\\ \\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2} } } $\n$ \\newcommand{\\CNOT}{ \\mymatrix{cccc}{1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 1 & 0} } $\n$ \\newcommand{\\norm}[1]{ \\left\\lVert #1 \\right\\rVert } $\n$ \\newcommand{\\pstate}[1]{ \\lceil \\mspace{-1mu} #1 \\mspace{-1.5mu} \\rfloor } $\n$ \\newcommand{\\greenbit}[1] {\\mathbf{{\\color{green}#1}}} $\n$ \\newcommand{\\bluebit}[1] {\\mathbf{{\\color{blue}#1}}} $\n$ \\newcommand{\\redbit}[1] {\\mathbf{{\\color{red}#1}}} $\n$ \\newcommand{\\brownbit}[1] {\\mathbf{{\\color{brown}#1}}} $\n$ \\newcommand{\\blackbit}[1] {\\mathbf{{\\color{black}#1}}} $",
"_____no_output_____"
],
[
"<font style=\"font-size:28px;\" align=\"left\"><b>Probabilistic States </b></font>\n<br>\n_prepared by Abuzer Yakaryilmaz_\n<br><br>\n[<img src=\"../qworld/images/watch_lecture.jpg\" align=\"left\">](https://youtu.be/tJjrF7WgT1g)\n<br><br><br>",
"_____no_output_____"
],
[
"Suppose that Asja tosses a fair coin secretly.\n\nAs we do not see the result, our information about the outcome will be probabilistic:\n\n$\\rightarrow$ The outcome is heads with probability $0.5$ and the outcome will be tails with probability $0.5$.\n\nIf the coin has a bias $ \\dfrac{Pr(Head)}{Pr(Tail)} = \\dfrac{3}{1}$, then our information about the outcome will be as follows:\n\n$\\rightarrow$ The outcome will be heads with probability $ 0.75 $ and the outcome will be tails with probability $ 0.25 $.",
"_____no_output_____"
],
[
"<i><u>Explanation</u>: The probability of getting heads is three times of the probability of getting tails.\n <ul>\n <li>The total probability is 1. </li>\n <li> We divide the whole probability 1 into four parts (three parts are for heads and one part is for tail),\n <li> one part is $ \\dfrac{1}{4} = 0.25$,</li>\n <li> and then give three parts for heads ($0.75$) and one part for tails ($0.25$).</li>\n </ul></i>",
"_____no_output_____"
],
[
"<h3> Listing probabilities as a column </h3>\n\nWe have two different outcomes: heads (0) and tails (1).\n\nWe use a column of size 2 to show the probabilities of getting heads and getting tails.\n\nFor the fair coin, our information after the coin-flip will be $ \\myvector{0.5 \\\\ 0.5} $. \n\nFor the biased coin, it will be $ \\myvector{0.75 \\\\ 0.25} $.\n\nThe first entry shows the probability of getting heads, and the second entry shows the probability of getting tails.\n\n $ \\myvector{0.5 \\\\ 0.5} $ and $ \\myvector{0.75 \\\\ 0.25} $ are two examples of 2-dimensional (column) vectors.",
"_____no_output_____"
],
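[
"<i>A quick illustration (a sketch, not one of the tasks): the same two columns written as <b>numpy</b> arrays — the entries are exactly the probabilities given above.</i>\n\n```python\nimport numpy as np\n\nfair_coin = np.array([[0.5], [0.5]])      # Pr(heads), Pr(tails)\nbiased_coin = np.array([[0.75], [0.25]])  # bias 3:1 in favor of heads\nprint(fair_coin.sum(), biased_coin.sum())  # each column sums to 1\n```",
"_____no_output_____"
],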
[
"<h3> Task 1 </h3>\n\nSuppose that Balvis secretly flips a coin having the bias $ \\dfrac{Pr(Heads)}{Pr(Tails)} = \\dfrac{1}{4}$.\n\nRepresent your information about the outcome as a column vector.",
"_____no_output_____"
],
[
"<h3> Task 2 </h3>\n\nSuppose that Fyodor secretly rolls a loaded (tricky) dice with the bias \n\n$$ Pr(1):Pr(2):Pr(3):Pr(4):Pr(5):Pr(6) = 7:5:4:2:6:1 . $$\n\nRepresent your information about the result as a column vector. Remark that the size of your column vector should be 6.\n\nYou may use python for your calculations.",
"_____no_output_____"
]
],
[
[
"#\n# your code is here\n#\n",
"_____no_output_____"
]
],
[
[
"<a href=\"CS16_Probabilistic_States_Solutions.ipynb#task2\">click for our solution</a>",
"_____no_output_____"
],
[
"<h3> Vector representation </h3>\n\nSuppose that we have a system with 4 distiguishable states: $ s_1 $, $s_2 $, $s_3$, and $s_4$. \n\nWe expect the system to be in one of them at any moment. \n\nBy speaking with probabilities, we say that the system is in one of the states with probability 1, and in any other state with probability 0. \n\nBy using our column representation, we can show each state as a column vector (by using the vectors in standard basis of $ \\mathbb{R}^4 $):\n\n$\n e_1 = \\myvector{1\\\\ 0 \\\\ 0 \\\\ 0}, e_2 = \\myvector{0 \\\\ 1 \\\\ 0 \\\\ 0}, e_3 = \\myvector{0 \\\\ 0 \\\\ 1 \\\\ 0}, \n \\mbox{ and } e_4 = \\myvector{0 \\\\ 0 \\\\ 0 \\\\ 1}.\n$",
"_____no_output_____"
],
[
"This representation helps us to represent our information on a system when it is in more than one state with certain probabilities. \n\nRemember the case in which the coins are tossed secretly. \n\nFor example, suppose that the system is in states $ s_1 $, $ s_2 $, $ s_3 $, and $ s_4 $ with probabilities $ 0.20 $, $ 0.25 $, $ 0.40 $, and $ 0.15 $, respectively. \n\n(<i>The total probability must be 1, i.e., $ 0.20+0.25+0.40+0.15 = 1.00 $</i>)\n\nThen, we can say that the system is in the following probabilistic state:\n\n$ 0.20 \\cdot e_1 + 0.25 \\cdot e2 + 0.40 \\cdot e_3 + 0.15 \\cdot e4 $\n\n$ = 0.20 \\cdot \\myvector{1\\\\ 0 \\\\ 0 \\\\ 0} + 0.25 \\cdot \\myvector{0\\\\ 1 \\\\ 0 \\\\ 0} + 0.40 \\cdot \\myvector{0\\\\ 0 \\\\ 1 \\\\ 0} + 0.15 \\cdot \\myvector{0\\\\ 0 \\\\ 0 \\\\ 1} $\n\n$ = \\myvector{0.20\\\\ 0 \\\\ 0 \\\\ 0} + \\myvector{0\\\\ 0.25 \\\\ 0 \\\\ 0} + \\myvector{0\\\\ 0 \\\\0.40 \\\\ 0} + \\myvector{0\\\\ 0 \\\\ 0 \\\\ 0.15 } = \\myvector{ 0.20 \\\\ 0.25 \\\\ 0.40 \\\\ 0.15 }, $\n\nwhere the summation of entries must be 1.",
"_____no_output_____"
],
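[
"<i>This linear combination can be checked with a few lines of python (a sketch; the rows of the identity matrix are the standard basis vectors):</i>\n\n```python\nimport numpy as np\n\nprobs = [0.20, 0.25, 0.40, 0.15]\nstate = sum(p * e for p, e in zip(probs, np.eye(4)))\nprint(state)        # [0.2  0.25 0.4  0.15]\nprint(state.sum())  # 1.0\n```",
"_____no_output_____"
],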
[
"<h3> Probabilistic state </h3>\n\nA probabilistic state is a linear combination of the vectors in the standard basis. \n \nHere coefficients (scalars) must satisfy certain properties:\n<ol>\n <li> Each coefficient is non-negative </li>\n <li> The summation of coefficients is 1 </li>\n</ol>\n\n\nAlternatively, we can say that a probabilistic state is a probability distribution over deterministic states.\n\nWe can show all information as a single mathematical object, which is called as a stochastic vector.\n\n<i> Remark that the state of any linear system is a linear combination of the vectors in the basis. </i> ",
"_____no_output_____"
],
[
"<h3> Task 3 </h3>\n\nFor a system with 4 states, randomly create a probabilistic state, and print its entries, e.g., $ 0.16~~0.17~~0.02~~0.65 $.\n\n<i>Hint: You may pick your random numbers between 0 and 100 (or 1000), and then normalize each value by dividing the summation of all numbers.</i>",
"_____no_output_____"
]
],
[
[
"#\n# your solution is here\n#\n",
"_____no_output_____"
]
],
[
[
"<a href=\"CS16_Probabilistic_States_Solutions.ipynb#task3\">click for our solution</a>",
"_____no_output_____"
],
[
"<h3> Task 4 [extra] </h3>\n\nAs given in the hint for Task 3, you may pick your random numbers between 0 and $ 10^k $. For better precision, you may take bigger values of $ k $.\n\nWrite a function that randomly creates a probabilisitic state of size $ n $ with a precision up to $ k $ digits. \n\nTest your function.",
"_____no_output_____"
]
],
[
[
"#\n# your solution is here\n#\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d05166f00a0f8158f35708ea05d1c7272b6f21ff | 151,223 | ipynb | Jupyter Notebook | notebooks/ma02_data_sa_vader.ipynb | CouchCat/ma-zdash-nlp | 3be2411a4b195e6401fd799f0b76b83e71daba8f | [
"MIT"
] | null | null | null | notebooks/ma02_data_sa_vader.ipynb | CouchCat/ma-zdash-nlp | 3be2411a4b195e6401fd799f0b76b83e71daba8f | [
"MIT"
] | 1 | 2021-03-19T13:49:33.000Z | 2021-03-19T13:49:41.000Z | notebooks/ma02_data_sa_vader.ipynb | CouchCat/ma-zdash-nlp | 3be2411a4b195e6401fd799f0b76b83e71daba8f | [
"MIT"
] | null | null | null | 151,223 | 151,223 | 0.584018 | [
[
[
"# Sentiment Analysis: Data Gathering 1 (Vader)\n\n\nThe original sentiments of domain dataset are unclean, especially for the neutral sentiment. Instead of manually going through and correcting sentiments by hand certain techniques are employed to assist this process. This notebook implements the first data annotation pipeline for the sentiment analysis task, which utilizes NLTK's VADER sentiment classifier in order to quickly get a different baseline sentiment to compare with the original. This process has been performed iteratively by manually inspecting the results and modiying VADER's internal library, which contains pre-defined weights towards certain sentiments. \n\nData used here are texts that have been cleaned from stopwords (see ma_eda_all.ipynb)\nsince certain words / phrases affect the results negatively, e.g. \"kind regards\", \"good day\", etc.\n\nData used is also the normalized version in order to better target certain words and update the weights within VADER's vocabulary since some words, e.g. \"worn\", \"hole\", etc., are considered more negative in this domain as opposed to what VADER would classify it normally.\n\n### Notes\n* Data: feedback_39k\n* Texts have been removed from certain stopwords that might skew the results of VADER\n* Using normalized words to better target words\n* Tuned by updating vocabulary of VADER\n\n### Goal\n* Add additional column for VADER sentiments pos/neu/neg\n\n### Results\n* Passable results to help with manual tasks\n* Very different sentiment distributions than original sentiments\n* Not good if too few words",
"_____no_output_____"
]
],
[
[
"import nltk\nnltk.download('vader_lexicon')\nnltk.download('punkt')",
"[nltk_data] Downloading package vader_lexicon to /root/nltk_data...\n[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Unzipping tokenizers/punkt.zip.\n"
],
[
"import re\nimport pandas as pd\nimport seaborn as sns; sns.set()\nfrom nltk.sentiment.vader import SentimentIntensityAnalyzer\n\nsns.set(style='white', context='notebook', palette='deep')",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"PROJECT_PATH = '/content/drive/MyDrive/Colab/data/ma_data/'\nDATA = PROJECT_PATH + 'feedback_all_normalized.csv'\nDATA_EXPORT = PROJECT_PATH + 'feedback_all_vader_1.csv'",
"_____no_output_____"
],
[
"sia = SentimentIntensityAnalyzer()",
"_____no_output_____"
],
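[
"# Sketch of the lexicon tuning described in the introduction. VADER keeps its\n# word weights in the plain dict sia.lexicon, so domain words can be\n# re-weighted before scoring. The exact values were chosen iteratively by\n# manual inspection; the numbers below are illustrative placeholders only.\ndomain_weights = {\n    'worn': -2.0,   # more negative in this domain than in stock VADER\n    'hole': -2.0,\n    'holes': -2.0,\n}\nsia.lexicon.update(domain_weights)\n\n# polarity_scores returns neg/neu/pos ratios plus a 'compound' score in [-1, 1];\n# the commonly used cut-offs (compound >= 0.05 -> pos, <= -0.05 -> neg, else\n# neu) yield the pos/neu/neg labels mentioned in the goal above.\nsia.polarity_scores('the shoes are worn and there is a hole')",
"_____no_output_____"
],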
[
"print(sia.lexicon)",
"{'$:': -1.5, '%)': -0.4, '%-)': -1.5, '&-:': -0.4, '&:': -0.7, \"( '}{' )\": 1.6, '(%': -0.9, \"('-:\": 2.2, \"(':\": 2.3, '((-:': 2.1, '(*': 1.1, '(-%': -0.7, '(-*': 1.3, '(-:': 1.6, '(-:0': 2.8, '(-:<': -0.4, '(-:o': 1.5, '(-:O': 1.5, '(-:{': -0.1, '(-:|>*': 1.9, '(-;': 1.3, '(-;|': 2.1, '(8': 2.6, '(:': 2.2, '(:0': 2.4, '(:<': -0.2, '(:o': 2.5, '(:O': 2.5, '(;': 1.1, '(;<': 0.3, '(=': 2.2, '(?:': 2.1, '(^:': 1.5, '(^;': 1.5, '(^;0': 2.0, '(^;o': 1.9, '(o:': 1.6, \")':\": -2.0, \")-':\": -2.1, ')-:': -2.1, ')-:<': -2.2, ')-:{': -2.1, '):': -1.8, '):<': -1.9, '):{': -2.3, ');<': -2.6, '*)': 0.6, '*-)': 0.3, '*-:': 2.1, '*-;': 2.4, '*:': 1.9, '*<|:-)': 1.6, '*\\\\0/*': 2.3, '*^:': 1.6, ',-:': 1.2, \"---'-;-{@\": 2.3, '--<--<@': 2.2, '.-:': -1.2, '..###-:': -1.7, '..###:': -1.9, '/-:': -1.3, '/:': -1.3, '/:<': -1.4, '/=': -0.9, '/^:': -1.0, '/o:': -1.4, '0-8': 0.1, '0-|': -1.2, '0:)': 1.9, '0:-)': 1.4, '0:-3': 1.5, '0:03': 1.9, '0;^)': 1.6, '0_o': -0.3, '10q': 2.1, '1337': 2.1, '143': 3.2, '1432': 2.6, '14aa41': 2.4, '182': -2.9, '187': -3.1, '2g2b4g': 2.8, '2g2bt': -0.1, '2qt': 2.1, '3:(': -2.2, '3:)': 0.5, '3:-(': -2.3, '3:-)': -1.4, '4col': -2.2, '4q': -3.1, '5fs': 1.5, '8)': 1.9, '8-d': 1.7, '8-o': -0.3, '86': -1.6, '8d': 2.9, ':###..': -2.4, ':$': -0.2, ':&': -0.6, \":'(\": -2.2, \":')\": 2.3, \":'-(\": -2.4, \":'-)\": 2.7, ':(': -1.9, ':)': 2.0, ':*': 2.5, ':-###..': -2.5, ':-&': -0.5, ':-(': -1.5, ':-)': 1.3, ':-))': 2.8, ':-*': 1.7, ':-,': 1.1, ':-.': -0.9, ':-/': -1.2, ':-<': -1.5, ':-d': 2.3, ':-D': 2.3, ':-o': 0.1, ':-p': 1.5, ':-[': -1.6, ':-\\\\': -0.9, ':-c': -1.3, ':-|': -0.7, ':-||': -2.5, ':-Þ': 0.9, ':/': -1.4, ':3': 2.3, ':<': -2.1, ':>': 2.1, ':?)': 1.3, ':?c': -1.6, ':@': -2.5, ':d': 2.3, ':D': 2.3, ':l': -1.7, ':o': -0.4, ':p': 1.4, ':s': -1.2, ':[': -2.0, ':\\\\': -1.3, ':]': 2.2, ':^)': 2.1, ':^*': 2.6, ':^/': -1.2, ':^\\\\': -1.0, ':^|': -1.0, ':c': -2.1, ':c)': 2.0, ':o)': 2.1, ':o/': -1.4, ':o\\\\': -1.1, ':o|': -0.6, ':{': -1.9, ':|': -0.4, ':}': 2.1, ':Þ': 1.1, ';)': 0.9, ';-)': 1.0, ';-*': 2.2, ';-]': 0.7, ';d': 0.8, ';D': 0.8, ';]': 0.6, ';^)': 1.4, '</3': -3.0, '<3': 1.9, '<:': 2.1, '<:-|': -1.4, '=)': 2.2, '=-3': 2.0, '=-d': 2.4, '=-D': 2.4, '=/': -1.4, '=3': 2.1, '=d': 2.3, '=D': 2.3, '=l': -1.2, '=\\\\': -1.2, '=]': 1.6, '=p': 1.3, '=|': -0.8, '>-:': -2.0, '>.<': -1.3, '>:': -2.1, '>:(': -2.7, '>:)': 0.4, '>:-(': -2.7, '>:-)': -0.4, '>:/': -1.6, '>:o': -1.2, '>:p': 1.0, '>:[': -2.1, '>:\\\\': -1.7, '>;(': -2.9, '>;)': 0.1, '>_>^': 2.1, '@:': -2.1, '@>-->--': 2.1, \"@}-;-'---\": 2.2, 'aas': 2.5, 'aayf': 2.7, 'afu': -2.9, 'alol': 2.8, 'ambw': 2.9, 'aml': 3.4, 'atab': -1.9, 'awol': -1.3, 'ayc': 0.2, 'ayor': -1.2, 'aug-00': 0.3, 'bfd': -2.7, 'bfe': -2.6, 'bff': 2.9, 'bffn': 1.0, 'bl': 2.3, 'bsod': -2.2, 'btd': -2.1, 'btdt': -0.1, 'bz': 0.4, 'b^d': 2.6, 'cwot': -2.3, \"d-':\": -2.5, 'd8': -3.2, 'd:': 1.2, 'd:<': -3.2, 'd;': -2.9, 'd=': 1.5, 'doa': -2.3, 'dx': -3.0, 'ez': 1.5, 'fav': 2.0, 'fcol': -1.8, 'ff': 1.8, 'ffs': -2.8, 'fkm': -2.4, 'foaf': 1.8, 'ftw': 2.0, 'fu': -3.7, 'fubar': -3.0, 'fwb': 2.5, 'fyi': 0.8, 'fysa': 0.4, 'g1': 1.4, 'gg': 1.2, 'gga': 1.7, 'gigo': -0.6, 'gj': 2.0, 'gl': 1.3, 'gla': 2.5, 'gn': 1.2, 'gr8': 2.7, 'grrr': -0.4, 'gt': 1.1, 'h&k': 2.3, 'hagd': 2.2, 'hagn': 2.2, 'hago': 1.2, 'hak': 1.9, 'hand': 2.2, 'hho1/2k': 1.4, 'hhoj': 2.0, 'hhok': 0.9, 'hugz': 2.0, 'hi5': 1.9, 'idk': -0.4, 'ijs': 0.7, 'ilu': 3.4, 'iluaaf': 2.7, 'ily': 3.4, 'ily2': 2.6, 'iou': 0.7, 'iyq': 2.3, 'j/j': 2.0, 'j/k': 1.6, 'j/p': 1.4, 'j/t': -0.2, 'j/w': 1.0, 'j4f': 
1.4, 'j4g': 1.7, 'jho': 0.8, 'jhomf': 1.0, 'jj': 1.0, 'jk': 0.9, 'jp': 0.8, 'jt': 0.9, 'jw': 1.6, 'jealz': -1.2, 'k4y': 2.3, 'kfy': 2.3, 'kia': -3.2, 'kk': 1.5, 'kmuf': 2.2, 'l': 2.0, 'l&r': 2.2, 'laoj': 1.3, 'lmao': 2.9, 'lmbao': 1.8, 'lmfao': 2.5, 'lmso': 2.7, 'lol': 1.8, 'lolz': 2.7, 'lts': 1.6, 'ly': 2.6, 'ly4e': 2.7, 'lya': 3.3, 'lyb': 3.0, 'lyl': 3.1, 'lylab': 2.7, 'lylas': 2.6, 'lylb': 1.6, 'm8': 1.4, 'mia': -1.2, 'mml': 2.0, 'mofo': -2.4, 'muah': 2.3, 'mubar': -1.0, 'musm': 0.9, 'mwah': 2.5, 'n1': 1.9, 'nbd': 1.3, 'nbif': -0.5, 'nfc': -2.7, 'nfw': -2.4, 'nh': 2.2, 'nimby': -0.8, 'nimjd': -0.7, 'nimq': -0.2, 'nimy': -1.4, 'nitl': -1.5, 'nme': -2.1, 'noyb': -0.7, 'np': 1.4, 'ntmu': 1.4, 'o-8': -0.5, 'o-:': -0.3, 'o-|': -1.1, 'o.o': -0.8, 'O.o': -0.6, 'o.O': -0.6, 'o:': -0.2, 'o:)': 1.5, 'o:-)': 2.0, 'o:-3': 2.2, 'o:3': 2.3, 'o:<': -0.3, 'o;^)': 1.6, 'ok': 1.2, 'o_o': -0.5, 'O_o': -0.5, 'o_O': -0.5, 'pita': -2.4, 'pls': 0.3, 'plz': 0.3, 'pmbi': 0.8, 'pmfji': 0.3, 'pmji': 0.7, 'po': -2.6, 'ptl': 2.6, 'pu': -1.1, 'qq': -2.2, 'qt': 1.8, 'r&r': 2.4, 'rofl': 2.7, 'roflmao': 2.5, 'rotfl': 2.6, 'rotflmao': 2.8, 'rotflmfao': 2.5, 'rotflol': 3.0, 'rotgl': 2.9, 'rotglmao': 1.8, 's:': -1.1, 'sapfu': -1.1, 'sete': 2.8, 'sfete': 2.7, 'sgtm': 2.4, 'slap': 0.6, 'slaw': 2.1, 'smh': -1.3, 'snafu': -2.5, 'sob': -1.0, 'swak': 2.3, 'tgif': 2.3, 'thks': 1.4, 'thx': 1.5, 'tia': 2.3, 'tmi': -0.3, 'tnx': 1.1, 'true': 1.8, 'tx': 1.5, 'txs': 1.1, 'ty': 1.6, 'tyvm': 2.5, 'urw': 1.9, 'vbg': 2.1, 'vbs': 3.1, 'vip': 2.3, 'vwd': 2.6, 'vwp': 2.1, 'wag': -0.2, 'wd': 2.7, 'wilco': 0.9, 'wp': 1.0, 'wtf': -2.8, 'wtg': 2.1, 'wth': -2.4, 'x-d': 2.6, 'x-p': 1.7, 'xd': 2.8, 'xlnt': 3.0, 'xoxo': 3.0, 'xoxozzz': 2.3, 'xp': 1.6, 'xqzt': 1.6, 'xtc': 0.8, 'yolo': 1.1, 'yoyo': 0.4, 'yvw': 1.6, 'yw': 1.8, 'ywia': 2.5, 'zzz': -1.2, '[-;': 0.5, '[:': 1.3, '[;': 1.0, '[=': 1.7, '\\\\-:': -1.0, '\\\\:': -1.0, '\\\\:<': -1.7, '\\\\=': -1.1, '\\\\^:': -1.3, '\\\\o/': 2.2, '\\\\o:': -1.2, ']-:': -2.1, ']:': -1.6, ']:<': -2.5, '^<_<': 1.4, '^urs': -2.8, 'abandon': -1.9, 'abandoned': -2.0, 'abandoner': -1.9, 'abandoners': -1.9, 'abandoning': -1.6, 'abandonment': -2.4, 'abandonments': -1.7, 'abandons': -1.3, 'abducted': -2.3, 'abduction': -2.8, 'abductions': -2.0, 'abhor': -2.0, 'abhorred': -2.4, 'abhorrent': -3.1, 'abhors': -2.9, 'abilities': 1.0, 'ability': 1.3, 'aboard': 0.1, 'absentee': -1.1, 'absentees': -0.8, 'absolve': 1.2, 'absolved': 1.5, 'absolves': 1.3, 'absolving': 1.6, 'abuse': -3.2, 'abused': -2.3, 'abuser': -2.6, 'abusers': -2.6, 'abuses': -2.6, 'abusing': -2.0, 'abusive': -3.2, 'abusively': -2.8, 'abusiveness': -2.5, 'abusivenesses': -3.0, 'accept': 1.6, 'acceptabilities': 1.6, 'acceptability': 1.1, 'acceptable': 1.3, 'acceptableness': 1.3, 'acceptably': 1.5, 'acceptance': 2.0, 'acceptances': 1.7, 'acceptant': 1.6, 'acceptation': 1.3, 'acceptations': 0.9, 'accepted': 1.1, 'accepting': 1.6, 'accepts': 1.3, 'accident': -2.1, 'accidental': -0.3, 'accidentally': -1.4, 'accidents': -1.3, 'accomplish': 1.8, 'accomplished': 1.9, 'accomplishes': 1.7, 'accusation': -1.0, 'accusations': -1.3, 'accuse': -0.8, 'accused': -1.2, 'accuses': -1.4, 'accusing': -0.7, 'ache': -1.6, 'ached': -1.6, 'aches': -1.0, 'achievable': 1.3, 'aching': -2.2, 'acquit': 0.8, 'acquits': 0.1, 'acquitted': 1.0, 'acquitting': 1.3, 'acrimonious': -1.7, 'active': 1.7, 'actively': 1.3, 'activeness': 0.6, 'activenesses': 0.8, 'actives': 1.1, 'adequate': 0.9, 'admirability': 2.4, 'admirable': 2.6, 'admirableness': 2.2, 'admirably': 2.5, 'admiral': 1.3, 'admirals': 
1.5, 'admiralties': 1.6, 'admiralty': 1.2, 'admiration': 2.5, 'admirations': 1.6, 'admire': 2.1, 'admired': 2.3, 'admirer': 1.8, 'admirers': 1.7, 'admires': 1.5, 'admiring': 1.6, 'admiringly': 2.3, 'admit': 0.8, 'admits': 1.2, 'admitted': 0.4, 'admonished': -1.9, 'adopt': 0.7, 'adopts': 0.7, 'adorability': 2.2, 'adorable': 2.2, 'adorableness': 2.5, 'adorably': 2.1, 'adoration': 2.9, 'adorations': 2.2, 'adore': 2.6, 'adored': 1.8, 'adorer': 1.7, 'adorers': 2.1, 'adores': 1.6, 'adoring': 2.6, 'adoringly': 2.4, 'adorn': 0.9, 'adorned': 0.8, 'adorner': 1.3, 'adorners': 0.9, 'adorning': 1.0, 'adornment': 1.3, 'adornments': 0.8, 'adorns': 0.5, 'advanced': 1.0, 'advantage': 1.0, 'advantaged': 1.4, 'advantageous': 1.5, 'advantageously': 1.9, 'advantageousness': 1.6, 'advantages': 1.5, 'advantaging': 1.6, 'adventure': 1.3, 'adventured': 1.3, 'adventurer': 1.2, 'adventurers': 0.9, 'adventures': 1.4, 'adventuresome': 1.7, 'adventuresomeness': 1.3, 'adventuress': 0.8, 'adventuresses': 1.4, 'adventuring': 2.3, 'adventurism': 1.5, 'adventurist': 1.4, 'adventuristic': 1.7, 'adventurists': 1.2, 'adventurous': 1.4, 'adventurously': 1.3, 'adventurousness': 1.8, 'adversarial': -1.5, 'adversaries': -1.0, 'adversary': -0.8, 'adversative': -1.2, 'adversatively': -0.1, 'adversatives': -1.0, 'adverse': -1.5, 'adversely': -0.8, 'adverseness': -0.6, 'adversities': -1.5, 'adversity': -1.8, 'affected': -0.6, 'affection': 2.4, 'affectional': 1.9, 'affectionally': 1.5, 'affectionate': 1.9, 'affectionately': 2.2, 'affectioned': 1.8, 'affectionless': -2.0, 'affections': 1.5, 'afflicted': -1.5, 'affronted': 0.2, 'aggravate': -2.5, 'aggravated': -1.9, 'aggravates': -1.9, 'aggravating': -1.2, 'aggress': -1.3, 'aggressed': -1.4, 'aggresses': -0.5, 'aggressing': -0.6, 'aggression': -1.2, 'aggressions': -1.3, 'aggressive': -0.6, 'aggressively': -1.3, 'aggressiveness': -1.8, 'aggressivities': -1.4, 'aggressivity': -0.6, 'aggressor': -0.8, 'aggressors': -0.9, 'aghast': -1.9, 'agitate': -1.7, 'agitated': -2.0, 'agitatedly': -1.6, 'agitates': -1.4, 'agitating': -1.8, 'agitation': -1.0, 'agitational': -1.2, 'agitations': -1.3, 'agitative': -1.3, 'agitato': -0.1, 'agitator': -1.4, 'agitators': -2.1, 'agog': 1.9, 'agonise': -2.1, 'agonised': -2.3, 'agonises': -2.4, 'agonising': -1.5, 'agonize': -2.3, 'agonized': -2.2, 'agonizes': -2.3, 'agonizing': -2.7, 'agonizingly': -2.3, 'agony': -1.8, 'agree': 1.5, 'agreeability': 1.9, 'agreeable': 1.8, 'agreeableness': 1.8, 'agreeablenesses': 1.3, 'agreeably': 1.6, 'agreed': 1.1, 'agreeing': 1.4, 'agreement': 2.2, 'agreements': 1.1, 'agrees': 0.8, 'alarm': -1.4, 'alarmed': -1.4, 'alarming': -0.5, 'alarmingly': -2.6, 'alarmism': -0.3, 'alarmists': -1.1, 'alarms': -1.1, 'alas': -1.1, 'alert': 1.2, 'alienation': -1.1, 'alive': 1.6, 'allergic': -1.2, 'allow': 0.9, 'alone': -1.0, 'alright': 1.0, 'amaze': 2.5, 'amazed': 2.2, 'amazedly': 2.1, 'amazement': 2.5, 'amazements': 2.2, 'amazes': 2.2, 'amazing': 2.8, 'amazon': 0.7, 'amazonite': 0.2, 'amazons': -0.1, 'amazonstone': 1.0, 'amazonstones': 0.2, 'ambitious': 2.1, 'ambivalent': 0.5, 'amor': 3.0, 'amoral': -1.6, 'amoralism': -0.7, 'amoralisms': -0.7, 'amoralities': -1.2, 'amorality': -1.5, 'amorally': -1.0, 'amoretti': 0.2, 'amoretto': 0.6, 'amorettos': 0.3, 'amorino': 1.2, 'amorist': 1.6, 'amoristic': 1.0, 'amorists': 0.1, 'amoroso': 2.3, 'amorous': 1.8, 'amorously': 2.3, 'amorousness': 2.0, 'amorphous': -0.2, 'amorphously': 0.1, 'amorphousness': 0.3, 'amort': -2.1, 'amortise': 0.5, 'amortised': -0.2, 'amortises': 0.1, 'amortizable': 0.5, 
'amortization': 0.6, 'amortizations': 0.2, 'amortize': -0.1, 'amortized': 0.8, 'amortizes': 0.6, 'amortizing': 0.8, 'amusable': 0.7, 'amuse': 1.7, 'amused': 1.8, 'amusedly': 2.2, 'amusement': 1.5, 'amusements': 1.5, 'amuser': 1.1, 'amusers': 1.3, 'amuses': 1.7, 'amusia': 0.3, 'amusias': -0.4, 'amusing': 1.6, 'amusingly': 0.8, 'amusingness': 1.8, 'amusive': 1.7, 'anger': -2.7, 'angered': -2.3, 'angering': -2.2, 'angerly': -1.9, 'angers': -2.3, 'angrier': -2.3, 'angriest': -3.1, 'angrily': -1.8, 'angriness': -1.7, 'angry': -2.3, 'anguish': -2.9, 'anguished': -1.8, 'anguishes': -2.1, 'anguishing': -2.7, 'animosity': -1.9, 'annoy': -1.9, 'annoyance': -1.3, 'annoyances': -1.8, 'annoyed': -1.6, 'annoyer': -2.2, 'annoyers': -1.5, 'annoying': -1.7, 'annoys': -1.8, 'antagonism': -1.9, 'antagonisms': -1.2, 'antagonist': -1.9, 'antagonistic': -1.7, 'antagonistically': -2.2, 'antagonists': -1.7, 'antagonize': -2.0, 'antagonized': -1.4, 'antagonizes': -0.5, 'antagonizing': -2.7, 'anti': -1.3, 'anticipation': 0.4, 'anxieties': -0.6, 'anxiety': -0.7, 'anxious': -1.0, 'anxiously': -0.9, 'anxiousness': -1.0, 'aok': 2.0, 'apathetic': -1.2, 'apathetically': -0.4, 'apathies': -0.6, 'apathy': -1.2, 'apeshit': -0.9, 'apocalyptic': -3.4, 'apologise': 1.6, 'apologised': 0.4, 'apologises': 0.8, 'apologising': 0.2, 'apologize': 0.4, 'apologized': 1.3, 'apologizes': 1.5, 'apologizing': -0.3, 'apology': 0.2, 'appall': -2.4, 'appalled': -2.0, 'appalling': -1.5, 'appallingly': -2.0, 'appalls': -1.9, 'appease': 1.1, 'appeased': 0.9, 'appeases': 0.9, 'appeasing': 1.0, 'applaud': 2.0, 'applauded': 1.5, 'applauding': 2.1, 'applauds': 1.4, 'applause': 1.8, 'appreciate': 1.7, 'appreciated': 2.3, 'appreciates': 2.3, 'appreciating': 1.9, 'appreciation': 2.3, 'appreciations': 1.7, 'appreciative': 2.6, 'appreciatively': 1.8, 'appreciativeness': 1.6, 'appreciator': 2.6, 'appreciators': 1.5, 'appreciatory': 1.7, 'apprehensible': 1.1, 'apprehensibly': -0.2, 'apprehension': -2.1, 'apprehensions': -0.9, 'apprehensively': -0.3, 'apprehensiveness': -0.7, 'approval': 2.1, 'approved': 1.8, 'approves': 1.7, 'ardent': 2.1, 'arguable': -1.0, 'arguably': -1.0, 'argue': -1.4, 'argued': -1.5, 'arguer': -1.6, 'arguers': -1.4, 'argues': -1.6, 'arguing': -2.0, 'argument': -1.5, 'argumentative': -1.5, 'argumentatively': -1.8, 'argumentive': -1.5, 'arguments': -1.7, 'arrest': -1.4, 'arrested': -2.1, 'arrests': -1.9, 'arrogance': -2.4, 'arrogances': -1.9, 'arrogant': -2.2, 'arrogantly': -1.8, 'ashamed': -2.1, 'ashamedly': -1.7, 'ass': -2.5, 'assassination': -2.9, 'assassinations': -2.7, 'assault': -2.8, 'assaulted': -2.4, 'assaulting': -2.3, 'assaultive': -2.8, 'assaults': -2.5, 'asset': 1.5, 'assets': 0.7, 'assfucking': -2.5, 'assholes': -2.8, 'assurance': 1.4, 'assurances': 1.4, 'assure': 1.4, 'assured': 1.5, 'assuredly': 1.6, 'assuredness': 1.4, 'assurer': 0.9, 'assurers': 1.1, 'assures': 1.3, 'assurgent': 1.3, 'assuring': 1.6, 'assuror': 0.5, 'assurors': 0.7, 'astonished': 1.6, 'astound': 1.7, 'astounded': 1.8, 'astounding': 1.8, 'astoundingly': 2.1, 'astounds': 2.1, 'attachment': 1.2, 'attachments': 1.1, 'attack': -2.1, 'attacked': -2.0, 'attacker': -2.7, 'attackers': -2.7, 'attacking': -2.0, 'attacks': -1.9, 'attract': 1.5, 'attractancy': 0.9, 'attractant': 1.3, 'attractants': 1.4, 'attracted': 1.8, 'attracting': 2.1, 'attraction': 2.0, 'attractions': 1.8, 'attractive': 1.9, 'attractively': 2.2, 'attractiveness': 1.8, 'attractivenesses': 2.1, 'attractor': 1.2, 'attractors': 1.2, 'attracts': 1.7, 'audacious': 0.9, 'authority': 0.3, 'aversion': 
-1.9, 'aversions': -1.1, 'aversive': -1.6, 'aversively': -0.8, 'avert': -0.7, 'averted': -0.3, 'averts': -0.4, 'avid': 1.2, 'avoid': -1.2, 'avoidance': -1.7, 'avoidances': -1.1, 'avoided': -1.4, 'avoider': -1.8, 'avoiders': -1.4, 'avoiding': -1.4, 'avoids': -0.7, 'await': 0.4, 'awaited': -0.1, 'awaits': 0.3, 'award': 2.5, 'awardable': 2.4, 'awarded': 1.7, 'awardee': 1.8, 'awardees': 1.2, 'awarder': 0.9, 'awarders': 1.3, 'awarding': 1.9, 'awards': 2.0, 'awesome': 3.1, 'awful': -2.0, 'awkward': -0.6, 'awkwardly': -1.3, 'awkwardness': -0.7, 'axe': -0.4, 'axed': -1.3, 'backed': 0.1, 'backing': 0.1, 'backs': -0.2, 'bad': -2.5, 'badass': -0.6, 'badly': -2.1, 'bailout': -0.4, 'bamboozle': -1.5, 'bamboozled': -1.5, 'bamboozles': -1.5, 'ban': -2.6, 'banish': -1.9, 'bankrupt': -2.6, 'bankster': -2.1, 'banned': -2.0, 'bargain': 0.8, 'barrier': -0.5, 'bashful': -0.1, 'bashfully': 0.2, 'bashfulness': -0.8, 'bastard': -2.5, 'bastardies': -1.8, 'bastardise': -2.1, 'bastardised': -2.3, 'bastardises': -2.3, 'bastardising': -2.6, 'bastardization': -2.4, 'bastardizations': -2.1, 'bastardize': -2.4, 'bastardized': -2.0, 'bastardizes': -1.8, 'bastardizing': -2.3, 'bastardly': -2.7, 'bastards': -3.0, 'bastardy': -2.7, 'battle': -1.6, 'battled': -1.2, 'battlefield': -1.6, 'battlefields': -0.9, 'battlefront': -1.2, 'battlefronts': -0.8, 'battleground': -1.7, 'battlegrounds': -0.6, 'battlement': -0.4, 'battlements': -0.4, 'battler': -0.8, 'battlers': -0.2, 'battles': -1.6, 'battleship': -0.1, 'battleships': -0.5, 'battlewagon': -0.3, 'battlewagons': -0.5, 'battling': -1.1, 'beaten': -1.8, 'beatific': 1.8, 'beating': -2.0, 'beaut': 1.6, 'beauteous': 2.5, 'beauteously': 2.6, 'beauteousness': 2.7, 'beautician': 1.2, 'beauticians': 0.4, 'beauties': 2.4, 'beautification': 1.9, 'beautifications': 2.4, 'beautified': 2.1, 'beautifier': 1.7, 'beautifiers': 1.7, 'beautifies': 1.8, 'beautiful': 2.9, 'beautifuler': 2.1, 'beautifulest': 2.6, 'beautifully': 2.7, 'beautifulness': 2.6, 'beautify': 2.3, 'beautifying': 2.3, 'beauts': 1.7, 'beauty': 2.8, 'belittle': -1.9, 'belittled': -2.0, 'beloved': 2.3, 'benefic': 1.4, 'benefice': 0.4, 'beneficed': 1.1, 'beneficence': 2.8, 'beneficences': 1.5, 'beneficent': 2.3, 'beneficently': 2.2, 'benefices': 1.1, 'beneficial': 1.9, 'beneficially': 2.4, 'beneficialness': 1.7, 'beneficiaries': 1.8, 'beneficiary': 2.1, 'beneficiate': 1.0, 'beneficiation': 0.4, 'benefit': 2.0, 'benefits': 1.6, 'benefitted': 1.7, 'benefitting': 1.9, 'benevolence': 1.7, 'benevolences': 1.9, 'benevolent': 2.7, 'benevolently': 1.4, 'benevolentness': 1.2, 'benign': 1.3, 'benignancy': 0.6, 'benignant': 2.2, 'benignantly': 1.1, 'benignities': 0.9, 'benignity': 1.3, 'benignly': 0.2, 'bereave': -2.1, 'bereaved': -2.1, 'bereaves': -1.9, 'bereaving': -1.3, 'best': 3.2, 'betray': -3.2, 'betrayal': -2.8, 'betrayed': -3.0, 'betraying': -2.5, 'betrays': -2.5, 'better': 1.9, 'bias': -0.4, 'biased': -1.1, 'bitch': -2.8, 'bitched': -2.6, 'bitcheries': -2.3, 'bitchery': -2.7, 'bitches': -2.9, 'bitchier': -2.0, 'bitchiest': -3.0, 'bitchily': -2.6, 'bitchiness': -2.6, 'bitching': -1.1, 'bitchy': -2.3, 'bitter': -1.8, 'bitterbrush': -0.2, 'bitterbrushes': -0.6, 'bittered': -1.8, 'bitterer': -1.9, 'bitterest': -2.3, 'bittering': -1.2, 'bitterish': -1.6, 'bitterly': -2.0, 'bittern': -0.2, 'bitterness': -1.7, 'bitterns': -0.4, 'bitterroots': -0.2, 'bitters': -0.4, 'bittersweet': -0.3, 'bittersweetness': -0.6, 'bittersweets': -0.2, 'bitterweeds': -0.5, 'bizarre': -1.3, 'blah': -0.4, 'blam': -0.2, 'blamable': -1.8, 'blamably': -1.8, 
'blame': -1.4, 'blamed': -2.1, 'blameful': -1.7, 'blamefully': -1.6, 'blameless': 0.7, 'blamelessly': 0.9, 'blamelessness': 0.6, 'blamer': -2.1, 'blamers': -2.0, 'blames': -1.7, 'blameworthiness': -1.6, 'blameworthy': -2.3, 'blaming': -2.2, 'bless': 1.8, 'blessed': 2.9, 'blesseder': 2.0, 'blessedest': 2.8, 'blessedly': 1.7, 'blessedness': 1.6, 'blesser': 2.6, 'blessers': 1.9, 'blesses': 2.6, 'blessing': 2.2, 'blessings': 2.5, 'blind': -1.7, 'bliss': 2.7, 'blissful': 2.9, 'blithe': 1.2, 'block': -1.9, 'blockbuster': 2.9, 'blocked': -1.1, 'blocking': -1.6, 'blocks': -0.9, 'bloody': -1.9, 'blurry': -0.4, 'bold': 1.6, 'bolder': 1.2, 'boldest': 1.6, 'boldface': 0.3, 'boldfaced': -0.1, 'boldfaces': 0.1, 'boldfacing': 0.1, 'boldly': 1.5, 'boldness': 1.5, 'boldnesses': 0.9, 'bolds': 1.3, 'bomb': -2.2, 'bonus': 2.5, 'bonuses': 2.6, 'boost': 1.7, 'boosted': 1.5, 'boosting': 1.4, 'boosts': 1.3, 'bore': -1.0, 'boreal': -0.3, 'borecole': -0.2, 'borecoles': -0.3, 'bored': -1.1, 'boredom': -1.3, 'boredoms': -1.1, 'boreen': 0.1, 'boreens': 0.2, 'boreholes': -0.2, 'borer': -0.4, 'borers': -1.2, 'bores': -1.3, 'borescopes': -0.1, 'boresome': -1.3, 'boring': -1.3, 'bother': -1.4, 'botheration': -1.7, 'botherations': -1.3, 'bothered': -1.3, 'bothering': -1.6, 'bothers': -0.8, 'bothersome': -1.3, 'boycott': -1.3, 'boycotted': -1.7, 'boycotting': -1.7, 'boycotts': -1.4, 'brainwashing': -1.5, 'brave': 2.4, 'braved': 1.9, 'bravely': 2.3, 'braver': 2.4, 'braveries': 2.0, 'bravery': 2.2, 'braves': 1.9, 'bravest': 2.3, 'breathtaking': 2.0, 'bribe': -0.8, 'bright': 1.9, 'brighten': 1.9, 'brightened': 2.1, 'brightener': 1.0, 'brighteners': 1.0, 'brightening': 2.5, 'brightens': 1.5, 'brighter': 1.6, 'brightest': 3.0, 'brightly': 1.5, 'brightness': 1.6, 'brightnesses': 1.4, 'brights': 0.4, 'brightwork': 1.1, 'brilliance': 2.9, 'brilliances': 2.9, 'brilliancies': 2.3, 'brilliancy': 2.6, 'brilliant': 2.8, 'brilliantine': 0.8, 'brilliantines': 2.0, 'brilliantly': 3.0, 'brilliants': 1.9, 'brisk': 0.6, 'broke': -1.8, 'broken': -2.1, 'brooding': 0.1, 'brutal': -3.1, 'brutalise': -2.7, 'brutalised': -2.9, 'brutalises': -3.2, 'brutalising': -2.8, 'brutalities': -2.6, 'brutality': -3.0, 'brutalization': -2.1, 'brutalizations': -2.3, 'brutalize': -2.9, 'brutalized': -2.4, 'brutalizes': -3.2, 'brutalizing': -3.4, 'brutally': -3.0, 'bullied': -3.1, 'bullshit': -2.8, 'bully': -2.2, 'bullying': -2.9, 'bummer': -1.6, 'buoyant': 0.9, 'burden': -1.9, 'burdened': -1.7, 'burdener': -1.3, 'burdeners': -1.7, 'burdening': -1.4, 'burdens': -1.5, 'burdensome': -1.8, 'bwahaha': 0.4, 'bwahahah': 2.5, 'calm': 1.3, 'calmative': 1.1, 'calmatives': 0.5, 'calmed': 1.6, 'calmer': 1.5, 'calmest': 1.6, 'calming': 1.7, 'calmly': 1.3, 'calmness': 1.7, 'calmnesses': 1.6, 'calmodulin': 0.2, 'calms': 1.3, \"can't stand\": -2.0, 'cancel': -1.0, 'cancelled': -1.0, 'cancelling': -0.8, 'cancels': -0.9, 'cancer': -3.4, 'capable': 1.6, 'captivated': 1.6, 'care': 2.2, 'cared': 1.8, 'carefree': 1.7, 'careful': 0.6, 'carefully': 0.5, 'carefulness': 2.0, 'careless': -1.5, 'carelessly': -1.0, 'carelessness': -1.4, 'carelessnesses': -1.6, 'cares': 2.0, 'caring': 2.2, 'casual': 0.8, 'casually': 0.7, 'casualty': -2.4, 'catastrophe': -3.4, 'catastrophic': -2.2, 'cautious': -0.4, 'celebrate': 2.7, 'celebrated': 2.7, 'celebrates': 2.7, 'celebrating': 2.7, 'censor': -2.0, 'censored': -0.6, 'censors': -1.2, 'certain': 1.1, 'certainly': 1.4, 'certainties': 0.9, 'certainty': 1.0, 'chagrin': -1.9, 'chagrined': -1.4, 'challenge': 0.3, 'challenged': -0.4, 'challenger': 0.5, 
'challengers': 0.4, 'challenges': 0.3, 'challenging': 0.6, 'challengingly': -0.6, 'champ': 2.1, 'champac': -0.2, 'champagne': 1.2, 'champagnes': 0.5, 'champaign': 0.2, 'champaigns': 0.5, 'champaks': -0.2, 'champed': 1.0, 'champer': -0.1, 'champers': 0.5, 'champerties': -0.1, 'champertous': 0.3, 'champerty': -0.2, 'champignon': 0.4, 'champignons': 0.2, 'champing': 0.7, 'champion': 2.9, 'championed': 1.2, 'championing': 1.8, 'champions': 2.4, 'championship': 1.9, 'championships': 2.2, 'champs': 1.8, 'champy': 1.0, 'chance': 1.0, 'chances': 0.8, 'chaos': -2.7, 'chaotic': -2.2, 'charged': -0.8, 'charges': -1.1, 'charitable': 1.7, 'charitableness': 1.9, 'charitablenesses': 1.6, 'charitably': 1.4, 'charities': 2.2, 'charity': 1.8, 'charm': 1.7, 'charmed': 2.0, 'charmer': 1.9, 'charmers': 2.1, 'charmeuse': 0.3, 'charmeuses': 0.4, 'charming': 2.8, 'charminger': 1.5, 'charmingest': 2.4, 'charmingly': 2.2, 'charmless': -1.8, 'charms': 1.9, 'chastise': -2.5, 'chastised': -2.2, 'chastises': -1.7, 'chastising': -1.7, 'cheat': -2.0, 'cheated': -2.3, 'cheater': -2.5, 'cheaters': -1.9, 'cheating': -2.6, 'cheats': -1.8, 'cheer': 2.3, 'cheered': 2.3, 'cheerer': 1.7, 'cheerers': 1.8, 'cheerful': 2.5, 'cheerfuller': 1.9, 'cheerfullest': 3.2, 'cheerfully': 2.1, 'cheerfulness': 2.1, 'cheerier': 2.6, 'cheeriest': 2.2, 'cheerily': 2.5, 'cheeriness': 2.5, 'cheering': 2.3, 'cheerio': 1.2, 'cheerlead': 1.7, 'cheerleader': 0.9, 'cheerleaders': 1.2, 'cheerleading': 1.2, 'cheerleads': 1.2, 'cheerled': 1.5, 'cheerless': -1.7, 'cheerlessly': -0.8, 'cheerlessness': -1.7, 'cheerly': 2.4, 'cheers': 2.1, 'cheery': 2.6, 'cherish': 1.6, 'cherishable': 2.0, 'cherished': 2.3, 'cherisher': 2.2, 'cherishers': 1.9, 'cherishes': 2.2, 'cherishing': 2.0, 'chic': 1.1, 'childish': -1.2, 'chilling': -0.1, 'choke': -2.5, 'choked': -2.1, 'chokes': -2.0, 'choking': -2.0, 'chuckle': 1.7, 'chuckled': 1.2, 'chucklehead': -1.9, 'chuckleheaded': -1.3, 'chuckleheads': -1.1, 'chuckler': 0.8, 'chucklers': 1.2, 'chuckles': 1.1, 'chucklesome': 1.1, 'chuckling': 1.4, 'chucklingly': 1.2, 'clarifies': 0.9, 'clarity': 1.7, 'classy': 1.9, 'clean': 1.7, 'cleaner': 0.7, 'clear': 1.6, 'cleared': 0.4, 'clearly': 1.7, 'clears': 0.3, 'clever': 2.0, 'cleverer': 2.0, 'cleverest': 2.6, 'cleverish': 1.0, 'cleverly': 2.3, 'cleverness': 2.3, 'clevernesses': 1.4, 'clouded': -0.2, 'clueless': -1.5, 'cock': -0.6, 'cocksucker': -3.1, 'cocksuckers': -2.6, 'cocky': -0.5, 'coerced': -1.5, 'collapse': -2.2, 'collapsed': -1.1, 'collapses': -1.2, 'collapsing': -1.2, 'collide': -0.3, 'collides': -1.1, 'colliding': -0.5, 'collision': -1.5, 'collisions': -1.1, 'colluding': -1.2, 'combat': -1.4, 'combats': -0.8, 'comedian': 1.6, 'comedians': 1.2, 'comedic': 1.7, 'comedically': 2.1, 'comedienne': 0.6, 'comediennes': 1.6, 'comedies': 1.7, 'comedo': 0.3, 'comedones': -0.8, 'comedown': -0.8, 'comedowns': -0.9, 'comedy': 1.5, 'comfort': 1.5, 'comfortable': 2.3, 'comfortableness': 1.3, 'comfortably': 1.8, 'comforted': 1.8, 'comforter': 1.9, 'comforters': 1.2, 'comforting': 1.7, 'comfortingly': 1.7, 'comfortless': -1.8, 'comforts': 2.1, 'commend': 1.9, 'commended': 1.9, 'commit': 1.2, 'commitment': 1.6, 'commitments': 0.5, 'commits': 0.1, 'committed': 1.1, 'committing': 0.3, 'compassion': 2.0, 'compassionate': 2.2, 'compassionated': 1.6, 'compassionately': 1.7, 'compassionateness': 0.9, 'compassionates': 1.6, 'compassionating': 1.6, 'compassionless': -2.6, 'compelled': 0.2, 'compelling': 0.9, 'competent': 1.3, 'competitive': 0.7, 'complacent': -0.3, 'complain': -1.5, 'complainant': -0.7, 
'complainants': -1.1, 'complained': -1.7, 'complainer': -1.8, 'complainers': -1.3, 'complaining': -0.8, 'complainingly': -1.7, 'complains': -1.6, 'complaint': -1.2, 'complaints': -1.7, 'compliment': 2.1, 'complimentarily': 1.7, 'complimentary': 1.9, 'complimented': 1.8, 'complimenting': 2.3, 'compliments': 1.7, 'comprehensive': 1.0, 'conciliate': 1.0, 'conciliated': 1.1, 'conciliates': 1.1, 'conciliating': 1.3, 'condemn': -1.6, 'condemnation': -2.8, 'condemned': -1.9, 'condemns': -2.3, 'confidence': 2.3, 'confident': 2.2, 'confidently': 2.1, 'conflict': -1.3, 'conflicting': -1.7, 'conflictive': -1.8, 'conflicts': -1.6, 'confront': -0.7, 'confrontation': -1.3, 'confrontational': -1.6, 'confrontationist': -1.0, 'confrontationists': -1.2, 'confrontations': -1.5, 'confronted': -0.8, 'confronter': -0.3, 'confronters': -1.3, 'confronting': -0.6, 'confronts': -0.9, 'confuse': -0.9, 'confused': -1.3, 'confusedly': -0.6, 'confusedness': -1.5, 'confuses': -1.3, 'confusing': -0.9, 'confusingly': -1.4, 'confusion': -1.2, 'confusional': -1.2, 'confusions': -0.9, 'congrats': 2.4, 'congratulate': 2.2, 'congratulation': 2.9, 'congratulations': 2.9, 'consent': 0.9, 'consents': 1.0, 'considerate': 1.9, 'consolable': 1.1, 'conspiracy': -2.4, 'constrained': -0.4, 'contagion': -2.0, 'contagions': -1.5, 'contagious': -1.4, 'contempt': -2.8, 'contemptibilities': -2.0, 'contemptibility': -0.9, 'contemptible': -1.6, 'contemptibleness': -1.9, 'contemptibly': -1.4, 'contempts': -1.0, 'contemptuous': -2.2, 'contemptuously': -2.4, 'contemptuousness': -1.1, 'contend': 0.2, 'contender': 0.5, 'contented': 1.4, 'contentedly': 1.9, 'contentedness': 1.4, 'contentious': -1.2, 'contentment': 1.5, 'contestable': 0.6, 'contradict': -1.3, 'contradictable': -1.0, 'contradicted': -1.3, 'contradicting': -1.3, 'contradiction': -1.0, 'contradictions': -1.3, 'contradictious': -1.9, 'contradictor': -1.0, 'contradictories': -0.5, 'contradictorily': -0.9, 'contradictoriness': -1.4, 'contradictors': -1.6, 'contradictory': -1.4, 'contradicts': -1.4, 'controversial': -0.8, 'controversially': -1.1, 'convince': 1.0, 'convinced': 1.7, 'convincer': 0.6, 'convincers': 0.3, 'convinces': 0.7, 'convincing': 1.7, 'convincingly': 1.6, 'convincingness': 0.7, 'convivial': 1.2, 'cool': 1.3, 'cornered': -1.1, 'corpse': -2.7, 'costly': -0.4, 'courage': 2.2, 'courageous': 2.4, 'courageously': 2.3, 'courageousness': 2.1, 'courteous': 2.3, 'courtesy': 1.5, 'cover-up': -1.2, 'coward': -2.0, 'cowardly': -1.6, 'coziness': 1.5, 'cramp': -0.8, 'crap': -1.6, 'crappy': -2.6, 'crash': -1.7, 'craze': -0.6, 'crazed': -0.5, 'crazes': 0.2, 'crazier': -0.1, 'craziest': -0.2, 'crazily': -1.5, 'craziness': -1.6, 'crazinesses': -1.0, 'crazing': -0.5, 'crazy': -1.4, 'crazyweed': 0.8, 'create': 1.1, 'created': 1.0, 'creates': 1.1, 'creatin': 0.1, 'creatine': 0.2, 'creating': 1.2, 'creatinine': 0.4, 'creation': 1.1, 'creationism': 0.7, 'creationisms': 1.1, 'creationist': 0.8, 'creationists': 0.5, 'creations': 1.6, 'creative': 1.9, 'creatively': 1.5, 'creativeness': 1.8, 'creativities': 1.7, 'creativity': 1.6, 'credit': 1.6, 'creditabilities': 1.4, 'creditability': 1.9, 'creditable': 1.8, 'creditableness': 1.2, 'creditably': 1.7, 'credited': 1.5, 'crediting': 0.6, 'creditor': -0.1, 'credits': 1.5, 'creditworthiness': 1.9, 'creditworthy': 2.4, 'crestfallen': -2.5, 'cried': -1.6, 'cries': -1.7, 'crime': -2.5, 'criminal': -2.4, 'criminals': -2.7, 'crisis': -3.1, 'critic': -1.1, 'critical': -1.3, 'criticise': -1.9, 'criticised': -1.8, 'criticises': -1.3, 'criticising': -1.7, 
'criticism': -1.9, 'criticisms': -0.9, 'criticizable': -1.0, 'criticize': -1.6, 'criticized': -1.5, 'criticizer': -1.5, 'criticizers': -1.6, 'criticizes': -1.4, 'criticizing': -1.5, 'critics': -1.2, 'crude': -2.7, 'crudely': -1.2, 'crudeness': -2.0, 'crudenesses': -2.0, 'cruder': -2.0, 'crudes': -1.1, 'crudest': -2.4, 'cruel': -2.8, 'crueler': -2.3, 'cruelest': -2.6, 'crueller': -2.4, 'cruellest': -2.9, 'cruelly': -2.8, 'cruelness': -2.9, 'cruelties': -2.3, 'cruelty': -2.9, 'crush': -0.6, 'crushed': -1.8, 'crushes': -1.9, 'crushing': -1.5, 'cry': -2.1, 'crying': -2.1, 'cunt': -2.2, 'cunts': -2.9, 'curious': 1.3, 'curse': -2.5, 'cut': -1.1, 'cute': 2.0, 'cutely': 1.3, 'cuteness': 2.3, 'cutenesses': 1.9, 'cuter': 2.3, 'cutes': 1.8, 'cutesie': 1.0, 'cutesier': 1.5, 'cutesiest': 2.2, 'cutest': 2.8, 'cutesy': 2.1, 'cutey': 2.1, 'cuteys': 1.5, 'cutie': 1.5, 'cutiepie': 2.0, 'cuties': 2.2, 'cuts': -1.2, 'cutting': -0.5, 'cynic': -1.4, 'cynical': -1.6, 'cynically': -1.3, 'cynicism': -1.7, 'cynicisms': -1.7, 'cynics': -0.3, 'd-:': 1.6, 'damage': -2.2, 'damaged': -1.9, 'damager': -1.9, 'damagers': -2.0, 'damages': -1.9, 'damaging': -2.3, 'damagingly': -2.0, 'damn': -1.7, 'damnable': -1.7, 'damnableness': -1.8, 'damnably': -1.7, 'damnation': -2.6, 'damnations': -1.4, 'damnatory': -2.6, 'damned': -1.6, 'damnedest': -0.5, 'damnified': -2.8, 'damnifies': -1.8, 'damnify': -2.2, 'damnifying': -2.4, 'damning': -1.4, 'damningly': -2.0, 'damnit': -2.4, 'damns': -2.2, 'danger': -2.4, 'dangered': -2.4, 'dangering': -2.5, 'dangerous': -2.1, 'dangerously': -2.0, 'dangerousness': -2.0, 'dangers': -2.2, 'daredevil': 0.5, 'daring': 1.5, 'daringly': 2.1, 'daringness': 1.4, 'darings': 0.4, 'darkest': -2.2, 'darkness': -1.0, 'darling': 2.8, 'darlingly': 1.6, 'darlingness': 2.3, 'darlings': 2.2, 'dauntless': 2.3, 'daze': -0.7, 'dazed': -0.7, 'dazedly': -0.4, 'dazedness': -0.5, 'dazes': -0.3, 'dead': -3.3, 'deadlock': -1.4, 'deafening': -1.2, 'dear': 1.6, 'dearer': 1.9, 'dearest': 2.6, 'dearie': 2.2, 'dearies': 1.0, 'dearly': 1.8, 'dearness': 2.0, 'dears': 1.9, 'dearth': -2.3, 'dearths': -0.9, 'deary': 1.9, 'death': -2.9, 'debonair': 0.8, 'debt': -1.5, 'decay': -1.7, 'decayed': -1.6, 'decayer': -1.6, 'decayers': -1.6, 'decaying': -1.7, 'decays': -1.7, 'deceit': -2.0, 'deceitful': -1.9, 'deceive': -1.7, 'deceived': -1.9, 'deceives': -1.6, 'deceiving': -1.4, 'deception': -1.9, 'decisive': 0.9, 'dedicated': 2.0, 'defeat': -2.0, 'defeated': -2.1, 'defeater': -1.4, 'defeaters': -0.9, 'defeating': -1.6, 'defeatism': -1.3, 'defeatist': -1.7, 'defeatists': -2.1, 'defeats': -1.3, 'defeature': -1.9, 'defeatures': -1.5, 'defect': -1.4, 'defected': -1.7, 'defecting': -1.8, 'defection': -1.4, 'defections': -1.5, 'defective': -1.9, 'defectively': -2.1, 'defectiveness': -1.8, 'defectives': -1.8, 'defector': -1.9, 'defectors': -1.3, 'defects': -1.7, 'defence': 0.4, 'defenceman': 0.4, 'defencemen': 0.6, 'defences': -0.2, 'defender': 0.4, 'defenders': 0.3, 'defense': 0.5, 'defenseless': -1.4, 'defenselessly': -1.1, 'defenselessness': -1.3, 'defenseman': 0.1, 'defensemen': -0.4, 'defenses': 0.7, 'defensibility': 0.4, 'defensible': 0.8, 'defensibly': 0.1, 'defensive': 0.1, 'defensively': -0.6, 'defensiveness': -0.4, 'defensives': -0.3, 'defer': -1.2, 'deferring': -0.7, 'defiant': -0.9, 'deficit': -1.7, 'definite': 1.1, 'definitely': 1.7, 'degradable': -1.0, 'degradation': -2.4, 'degradations': -1.5, 'degradative': -2.0, 'degrade': -1.9, 'degraded': -1.8, 'degrader': -2.0, 'degraders': -2.0, 'degrades': -2.1, 'degrading': -2.8, 
'degradingly': -2.7, 'dehumanize': -1.8, 'dehumanized': -1.9, 'dehumanizes': -1.5, 'dehumanizing': -2.4, 'deject': -2.2, 'dejected': -2.2, 'dejecting': -2.3, 'dejects': -2.0, 'delay': -1.3, 'delayed': -0.9, 'delectable': 2.9, 'delectables': 1.4, 'delectably': 2.8, 'delicate': 0.2, 'delicately': 1.0, 'delicates': 0.6, 'delicatessen': 0.4, 'delicatessens': 0.4, 'delicious': 2.7, 'deliciously': 1.9, 'deliciousness': 1.8, 'delight': 2.9, 'delighted': 2.3, 'delightedly': 2.4, 'delightedness': 2.1, 'delighter': 2.0, 'delighters': 2.6, 'delightful': 2.8, 'delightfully': 2.7, 'delightfulness': 2.1, 'delighting': 1.6, 'delights': 2.0, 'delightsome': 2.3, 'demand': -0.5, 'demanded': -0.9, 'demanding': -0.9, 'demonstration': 0.4, 'demoralized': -1.6, 'denied': -1.9, 'denier': -1.5, 'deniers': -1.1, 'denies': -1.8, 'denounce': -1.4, 'denounces': -1.9, 'deny': -1.4, 'denying': -1.4, 'depress': -2.2, 'depressant': -1.6, 'depressants': -1.6, 'depressed': -2.3, 'depresses': -2.2, 'depressible': -1.7, 'depressing': -1.6, 'depressingly': -2.3, 'depression': -2.7, 'depressions': -2.2, 'depressive': -1.6, 'depressively': -2.1, 'depressives': -1.5, 'depressor': -1.8, 'depressors': -1.7, 'depressurization': -0.3, 'depressurizations': -0.4, 'depressurize': -0.5, 'depressurized': -0.3, 'depressurizes': -0.3, 'depressurizing': -0.7, 'deprival': -2.1, 'deprivals': -1.2, 'deprivation': -1.8, 'deprivations': -1.8, 'deprive': -2.1, 'deprived': -2.1, 'depriver': -1.6, 'deprivers': -1.4, 'deprives': -1.7, 'depriving': -2.0, 'derail': -1.2, 'derailed': -1.4, 'derails': -1.3, 'deride': -1.1, 'derided': -0.8, 'derides': -1.0, 'deriding': -1.5, 'derision': -1.2, 'desirable': 1.3, 'desire': 1.7, 'desired': 1.1, 'desirous': 1.3, 'despair': -1.3, 'despaired': -2.7, 'despairer': -1.3, 'despairers': -1.3, 'despairing': -2.3, 'despairingly': -2.2, 'despairs': -2.7, 'desperate': -1.3, 'desperately': -1.6, 'desperateness': -1.5, 'desperation': -2.0, 'desperations': -2.2, 'despise': -1.4, 'despised': -1.7, 'despisement': -2.4, 'despisements': -2.5, 'despiser': -1.8, 'despisers': -1.6, 'despises': -2.0, 'despising': -2.7, 'despondent': -2.1, 'destroy': -2.5, 'destroyed': -2.2, 'destroyer': -2.0, 'destroyers': -2.3, 'destroying': -2.6, 'destroys': -2.6, 'destruct': -2.4, 'destructed': -1.9, 'destructibility': -1.8, 'destructible': -1.5, 'destructing': -2.5, 'destruction': -2.7, 'destructionist': -2.6, 'destructionists': -2.1, 'destructions': -2.3, 'destructive': -3.0, 'destructively': -2.4, 'destructiveness': -2.4, 'destructivity': -2.2, 'destructs': -2.4, 'detached': -0.5, 'detain': -1.8, 'detained': -1.7, 'detention': -1.5, 'determinable': 0.9, 'determinableness': 0.2, 'determinably': 0.9, 'determinacy': 1.0, 'determinant': 0.2, 'determinantal': -0.3, 'determinate': 0.8, 'determinately': 1.2, 'determinateness': 1.1, 'determination': 1.7, 'determinations': 0.8, 'determinative': 1.1, 'determinatives': 0.9, 'determinator': 1.1, 'determined': 1.4, 'devastate': -3.1, 'devastated': -3.0, 'devastates': -2.8, 'devastating': -3.3, 'devastatingly': -2.4, 'devastation': -1.8, 'devastations': -1.9, 'devastative': -3.2, 'devastator': -2.8, 'devastators': -2.9, 'devil': -3.4, 'deviled': -1.6, 'devilfish': -0.8, 'devilfishes': -0.6, 'deviling': -2.2, 'devilish': -2.1, 'devilishly': -1.6, 'devilishness': -2.3, 'devilkin': -2.4, 'devilled': -2.3, 'devilling': -1.8, 'devilment': -1.9, 'devilments': -1.1, 'devilries': -1.6, 'devilry': -2.8, 'devils': -2.7, 'deviltries': -1.5, 'deviltry': -2.8, 'devilwood': -0.8, 'devilwoods': -1.0, 'devote': 1.4, 
'devoted': 1.7, 'devotedly': 1.6, 'devotedness': 2.0, 'devotee': 1.6, 'devotees': 0.5, 'devotement': 1.5, 'devotements': 1.1, 'devotes': 1.6, 'devoting': 2.1, 'devotion': 2.0, 'devotional': 1.2, 'devotionally': 2.2, 'devotionals': 1.2, 'devotions': 1.8, 'diamond': 1.4, 'dick': -2.3, 'dickhead': -3.1, 'die': -2.9, 'died': -2.6, 'difficult': -1.5, 'difficulties': -1.2, 'difficultly': -1.7, 'difficulty': -1.4, 'diffident': -1.0, 'dignified': 2.2, 'dignifies': 2.0, 'dignify': 1.8, 'dignifying': 2.1, 'dignitaries': 0.6, 'dignitary': 1.9, 'dignities': 1.4, 'dignity': 1.7, 'dilemma': -0.7, 'dipshit': -2.1, 'dire': -2.0, 'direful': -3.1, 'dirt': -1.4, 'dirtier': -1.4, 'dirtiest': -2.4, 'dirty': -1.9, 'disabling': -2.1, 'disadvantage': -1.8, 'disadvantaged': -1.7, 'disadvantageous': -1.8, 'disadvantageously': -2.1, 'disadvantageousness': -1.6, 'disadvantages': -1.7, 'disagree': -1.6, 'disagreeable': -1.7, 'disagreeableness': -1.7, 'disagreeablenesses': -1.9, 'disagreeably': -1.5, 'disagreed': -1.3, 'disagreeing': -1.4, 'disagreement': -1.5, 'disagreements': -1.8, 'disagrees': -1.3, 'disappear': -0.9, 'disappeared': -0.9, 'disappears': -1.4, 'disappoint': -1.7, 'disappointed': -2.1, 'disappointedly': -1.7, 'disappointing': -2.2, 'disappointingly': -1.9, 'disappointment': -2.3, 'disappointments': -2.0, 'disappoints': -1.6, 'disaster': -3.1, 'disasters': -2.6, 'disastrous': -2.9, 'disbelieve': -1.2, 'discard': -1.0, 'discarded': -1.4, 'discarding': -0.7, 'discards': -1.0, 'discomfort': -1.8, 'discomfortable': -1.6, 'discomforted': -1.6, 'discomforting': -1.6, 'discomforts': -1.3, 'disconsolate': -2.3, 'disconsolation': -1.7, 'discontented': -1.8, 'discord': -1.7, 'discounted': 0.2, 'discourage': -1.8, 'discourageable': -1.2, 'discouraged': -1.7, 'discouragement': -2.0, 'discouragements': -1.8, 'discourager': -1.7, 'discouragers': -1.9, 'discourages': -1.9, 'discouraging': -1.9, 'discouragingly': -1.8, 'discredited': -1.9, 'disdain': -2.1, 'disgrace': -2.2, 'disgraced': -2.0, 'disguise': -1.0, 'disguised': -1.1, 'disguises': -1.0, 'disguising': -1.3, 'disgust': -2.9, 'disgusted': -2.4, 'disgustedly': -3.0, 'disgustful': -2.6, 'disgusting': -2.4, 'disgustingly': -2.9, 'disgusts': -2.1, 'dishearten': -2.0, 'disheartened': -2.2, 'disheartening': -1.8, 'dishearteningly': -2.0, 'disheartenment': -2.3, 'disheartenments': -2.2, 'disheartens': -2.2, 'dishonest': -2.7, 'disillusion': -1.0, 'disillusioned': -1.9, 'disillusioning': -1.3, 'disillusionment': -1.7, 'disillusionments': -1.5, 'disillusions': -1.6, 'disinclined': -1.1, 'disjointed': -1.3, 'dislike': -1.6, 'disliked': -1.7, 'dislikes': -1.7, 'disliking': -1.3, 'dismal': -3.0, 'dismay': -1.8, 'dismayed': -1.9, 'dismaying': -2.2, 'dismayingly': -1.9, 'dismays': -1.8, 'disorder': -1.7, 'disorganized': -1.2, 'disoriented': -1.5, 'disparage': -2.0, 'disparaged': -1.4, 'disparages': -1.6, 'disparaging': -2.2, 'displeased': -1.9, 'dispute': -1.7, 'disputed': -1.4, 'disputes': -1.1, 'disputing': -1.7, 'disqualified': -1.8, 'disquiet': -1.3, 'disregard': -1.1, 'disregarded': -1.6, 'disregarding': -0.9, 'disregards': -1.4, 'disrespect': -1.8, 'disrespected': -2.0, 'disruption': -1.5, 'disruptions': -1.4, 'disruptive': -1.3, 'dissatisfaction': -2.2, 'dissatisfactions': -1.9, 'dissatisfactory': -2.0, 'dissatisfied': -1.6, 'dissatisfies': -1.8, 'dissatisfy': -2.2, 'dissatisfying': -2.4, 'distort': -1.3, 'distorted': -1.7, 'distorting': -1.1, 'distorts': -1.4, 'distract': -1.2, 'distractable': -1.3, 'distracted': -1.4, 'distractedly': -0.9, 'distractibility': -1.3, 
'distractible': -1.5, 'distracting': -1.2, 'distractingly': -1.4, 'distraction': -1.6, 'distractions': -1.0, 'distractive': -1.6, 'distracts': -1.3, 'distraught': -2.6, 'distress': -2.4, 'distressed': -1.8, 'distresses': -1.6, 'distressful': -2.2, 'distressfully': -1.7, 'distressfulness': -2.4, 'distressing': -1.7, 'distressingly': -2.2, 'distrust': -1.8, 'distrusted': -2.4, 'distrustful': -2.1, 'distrustfully': -1.8, 'distrustfulness': -1.6, 'distrusting': -2.1, 'distrusts': -1.3, 'disturb': -1.7, 'disturbance': -1.6, 'disturbances': -1.4, 'disturbed': -1.6, 'disturber': -1.4, 'disturbers': -2.1, 'disturbing': -2.3, 'disturbingly': -2.3, 'disturbs': -1.9, 'dithering': -0.5, 'divination': 1.7, 'divinations': 1.1, 'divinatory': 1.6, 'divine': 2.6, 'divined': 0.8, 'divinely': 2.9, 'diviner': 0.3, 'diviners': 1.2, 'divines': 0.8, 'divinest': 2.7, 'diving': 0.3, 'divining': 0.9, 'divinise': 0.5, 'divinities': 1.8, 'divinity': 2.7, 'divinize': 2.3, 'dizzy': -0.9, 'dodging': -0.4, 'dodgy': -0.9, 'dolorous': -2.2, 'dominance': 0.8, 'dominances': -0.1, 'dominantly': 0.2, 'dominants': 0.2, 'dominate': -0.5, 'dominates': 0.2, 'dominating': -1.2, 'domination': -0.2, 'dominations': -0.3, 'dominative': -0.7, 'dominators': -0.4, 'dominatrices': -0.2, 'dominatrix': -0.5, 'dominatrixes': 0.6, 'doom': -1.7, 'doomed': -3.2, 'doomful': -2.1, 'dooming': -2.8, 'dooms': -1.1, 'doomsayer': -0.7, 'doomsayers': -1.7, 'doomsaying': -1.5, 'doomsayings': -1.5, 'doomsday': -2.8, 'doomsdayer': -2.2, 'doomsdays': -2.4, 'doomster': -2.2, 'doomsters': -1.6, 'doomy': -1.1, 'dork': -1.4, 'dorkier': -1.1, 'dorkiest': -1.2, 'dorks': -0.5, 'dorky': -1.1, 'doubt': -1.5, 'doubtable': -1.5, 'doubted': -1.1, 'doubter': -1.6, 'doubters': -1.3, 'doubtful': -1.4, 'doubtfully': -1.2, 'doubtfulness': -1.2, 'doubting': -1.4, 'doubtingly': -1.4, 'doubtless': 0.9, 'doubtlessly': 1.2, 'doubtlessness': 0.8, 'doubts': -1.2, 'douche': -1.5, 'douchebag': -3.0, 'downcast': -1.8, 'downhearted': -2.3, 'downside': -1.0, 'drag': -0.9, 'dragged': -0.2, 'drags': -0.7, 'drained': -1.5, 'dread': -2.0, 'dreaded': -2.7, 'dreadful': -1.9, 'dreadfully': -2.7, 'dreadfulness': -3.2, 'dreadfuls': -2.4, 'dreading': -2.4, 'dreadlock': -0.4, 'dreadlocks': -0.2, 'dreadnought': -0.6, 'dreadnoughts': -0.4, 'dreads': -1.4, 'dream': 1.0, 'dreams': 1.7, 'dreary': -1.4, 'droopy': -0.8, 'drop': -1.1, 'drown': -2.7, 'drowned': -2.9, 'drowns': -2.2, 'drunk': -1.4, 'dubious': -1.5, 'dud': -1.0, 'dull': -1.7, 'dullard': -1.6, 'dullards': -1.8, 'dulled': -1.5, 'duller': -1.7, 'dullest': -1.7, 'dulling': -1.1, 'dullish': -1.1, 'dullness': -1.4, 'dullnesses': -1.9, 'dulls': -1.0, 'dullsville': -2.4, 'dully': -1.1, 'dumb': -2.3, 'dumbass': -2.6, 'dumbbell': -0.8, 'dumbbells': -0.2, 'dumbcane': -0.3, 'dumbcanes': -0.6, 'dumbed': -1.4, 'dumber': -1.5, 'dumbest': -2.3, 'dumbfound': -0.1, 'dumbfounded': -1.6, 'dumbfounder': -1.0, 'dumbfounders': -1.0, 'dumbfounding': -0.8, 'dumbfounds': -0.3, 'dumbhead': -2.6, 'dumbheads': -1.9, 'dumbing': -0.5, 'dumbly': -1.3, 'dumbness': -1.9, 'dumbs': -1.5, 'dumbstruck': -1.0, 'dumbwaiter': 0.2, 'dumbwaiters': -0.1, 'dump': -1.6, 'dumpcart': -0.6, 'dumped': -1.7, 'dumper': -1.2, 'dumpers': -0.8, 'dumpier': -1.4, 'dumpiest': -1.6, 'dumpiness': -1.2, 'dumping': -1.3, 'dumpings': -1.1, 'dumpish': -1.8, 'dumpling': 0.4, 'dumplings': -0.3, 'dumps': -1.7, 'dumpster': -0.6, 'dumpsters': -1.0, 'dumpy': -1.7, 'dupe': -1.5, 'duped': -1.8, 'dwell': 0.5, 'dwelled': 0.4, 'dweller': 0.3, 'dwellers': -0.3, 'dwelling': 0.1, 'dwells': -0.1, 'dynamic': 1.6, 
'dynamical': 1.2, 'dynamically': 1.5, 'dynamics': 1.1, 'dynamism': 1.6, 'dynamisms': 1.2, 'dynamist': 1.4, 'dynamistic': 1.5, 'dynamists': 0.9, 'dynamite': 0.7, 'dynamited': -0.9, 'dynamiter': -1.2, 'dynamiters': 0.4, 'dynamites': -0.3, 'dynamitic': 0.9, 'dynamiting': 0.2, 'dynamometer': 0.3, 'dynamometers': 0.3, 'dynamometric': 0.3, 'dynamometry': 0.6, 'dynamos': 0.3, 'dynamotor': 0.6, 'dysfunction': -1.8, 'eager': 1.5, 'eagerly': 1.6, 'eagerness': 1.7, 'eagers': 1.6, 'earnest': 2.3, 'ease': 1.5, 'eased': 1.2, 'easeful': 1.5, 'easefully': 1.4, 'easel': 0.3, 'easement': 1.6, 'easements': 0.4, 'eases': 1.3, 'easier': 1.8, 'easiest': 1.8, 'easily': 1.4, 'easiness': 1.6, 'easing': 1.0, 'easy': 1.9, 'easygoing': 1.3, 'easygoingness': 1.5, 'ecstacy': 3.3, 'ecstasies': 2.3, 'ecstasy': 2.9, 'ecstatic': 2.3, 'ecstatically': 2.8, 'ecstatics': 2.9, 'eerie': -1.5, 'eery': -0.9, 'effective': 2.1, 'effectively': 1.9, 'efficiencies': 1.6, 'efficiency': 1.5, 'efficient': 1.8, 'efficiently': 1.7, 'effin': -2.3, 'egotism': -1.4, 'egotisms': -1.0, 'egotist': -2.3, 'egotistic': -1.4, 'egotistical': -0.9, 'egotistically': -1.8, 'egotists': -1.7, 'elated': 3.2, 'elation': 1.5, 'elegance': 2.1, 'elegances': 1.8, 'elegancies': 1.6, 'elegancy': 2.1, 'elegant': 2.1, 'elegantly': 1.9, 'embarrass': -1.2, 'embarrassable': -1.6, 'embarrassed': -1.5, 'embarrassedly': -1.1, 'embarrasses': -1.7, 'embarrassing': -1.6, 'embarrassingly': -1.7, 'embarrassment': -1.9, 'embarrassments': -1.7, 'embittered': -0.4, 'embrace': 1.3, 'emergency': -1.6, 'emotional': 0.6, 'empathetic': 1.7, 'emptied': -0.7, 'emptier': -0.7, 'emptiers': -0.7, 'empties': -0.7, 'emptiest': -1.8, 'emptily': -1.0, 'emptiness': -1.9, 'emptinesses': -1.5, 'emptins': -0.3, 'empty': -0.8, 'emptying': -0.6, 'enchanted': 1.6, 'encourage': 2.3, 'encouraged': 1.5, 'encouragement': 1.8, 'encouragements': 2.1, 'encourager': 1.5, 'encouragers': 1.5, 'encourages': 1.9, 'encouraging': 2.4, 'encouragingly': 2.0, 'endorse': 1.3, 'endorsed': 1.0, 'endorsement': 1.3, 'endorses': 1.4, 'enemies': -2.2, 'enemy': -2.5, 'energetic': 1.9, 'energetically': 1.8, 'energetics': 0.3, 'energies': 0.9, 'energise': 2.2, 'energised': 2.1, 'energises': 2.2, 'energising': 1.9, 'energization': 1.6, 'energizations': 1.5, 'energize': 2.1, 'energized': 2.3, 'energizer': 2.1, 'energizers': 1.7, 'energizes': 2.1, 'energizing': 2.0, 'energy': 1.1, 'engage': 1.4, 'engaged': 1.7, 'engagement': 2.0, 'engagements': 0.6, 'engager': 1.1, 'engagers': 1.0, 'engages': 1.0, 'engaging': 1.4, 'engagingly': 1.5, 'engrossed': 0.6, 'enjoy': 2.2, 'enjoyable': 1.9, 'enjoyableness': 1.9, 'enjoyably': 1.8, 'enjoyed': 2.3, 'enjoyer': 2.2, 'enjoyers': 2.2, 'enjoying': 2.4, 'enjoyment': 2.6, 'enjoyments': 2.0, 'enjoys': 2.3, 'enlighten': 2.3, 'enlightened': 2.2, 'enlightening': 2.3, 'enlightens': 1.7, 'ennui': -1.2, 'enrage': -2.6, 'enraged': -1.7, 'enrages': -1.8, 'enraging': -2.8, 'enrapture': 3.0, 'enslave': -3.1, 'enslaved': -1.7, 'enslaves': -1.6, 'ensure': 1.6, 'ensuring': 1.1, 'enterprising': 2.3, 'entertain': 1.3, 'entertained': 1.7, 'entertainer': 1.6, 'entertainers': 1.0, 'entertaining': 1.9, 'entertainingly': 1.9, 'entertainment': 1.8, 'entertainments': 2.3, 'entertains': 2.4, 'enthral': 0.4, 'enthuse': 1.6, 'enthused': 2.0, 'enthuses': 1.7, 'enthusiasm': 1.9, 'enthusiasms': 2.0, 'enthusiast': 1.5, 'enthusiastic': 2.2, 'enthusiastically': 2.6, 'enthusiasts': 1.4, 'enthusing': 1.9, 'entitled': 1.1, 'entrusted': 0.8, 'envied': -1.1, 'envier': -1.0, 'enviers': -1.1, 'envies': -0.8, 'envious': -1.1, 'envy': 
-1.1, 'envying': -0.8, 'envyingly': -1.3, 'erroneous': -1.8, 'error': -1.7, 'errors': -1.4, 'escape': 0.7, 'escapes': 0.5, 'escaping': 0.2, 'esteemed': 1.9, 'ethical': 2.3, 'euphoria': 3.3, 'euphoric': 3.2, 'eviction': -2.0, 'evil': -3.4, 'evildoer': -3.1, 'evildoers': -2.4, 'evildoing': -3.1, 'evildoings': -2.5, 'eviler': -2.1, 'evilest': -2.5, 'eviller': -2.9, 'evillest': -3.3, 'evilly': -3.4, 'evilness': -3.1, 'evils': -2.7, 'exaggerate': -0.6, 'exaggerated': -0.4, 'exaggerates': -0.6, 'exaggerating': -0.7, 'exasperated': -1.8, 'excel': 2.0, 'excelled': 2.2, 'excellence': 3.1, 'excellences': 2.5, 'excellencies': 2.4, 'excellency': 2.5, 'excellent': 2.7, 'excellently': 3.1, 'excelling': 2.5, 'excels': 2.5, 'excelsior': 0.7, 'excitabilities': 1.5, 'excitability': 1.2, 'excitable': 1.5, 'excitableness': 1.0, 'excitant': 1.8, 'excitants': 1.2, 'excitation': 1.8, 'excitations': 1.8, 'excitative': 0.3, 'excitatory': 1.1, 'excite': 2.1, 'excited': 1.4, 'excitedly': 2.3, 'excitement': 2.2, 'excitements': 1.9, 'exciter': 1.9, 'exciters': 1.4, 'excites': 2.1, 'exciting': 2.2, 'excitingly': 1.9, 'exciton': 0.3, 'excitonic': 0.2, 'excitons': 0.8, 'excitor': 0.5, 'exclude': -0.9, 'excluded': -1.4, 'exclusion': -1.2, 'exclusive': 0.5, 'excruciate': -2.7, 'excruciated': -1.3, 'excruciates': -1.0, 'excruciating': -3.3, 'excruciatingly': -2.9, 'excruciation': -3.4, 'excruciations': -1.9, 'excuse': 0.3, 'exempt': 0.4, 'exhaust': -1.2, 'exhausted': -1.5, 'exhauster': -1.3, 'exhausters': -1.3, 'exhaustibility': -0.8, 'exhaustible': -1.0, 'exhausting': -1.5, 'exhaustion': -1.5, 'exhaustions': -1.1, 'exhaustive': -0.5, 'exhaustively': -0.7, 'exhaustiveness': -1.1, 'exhaustless': 0.2, 'exhaustlessness': 0.9, 'exhausts': -1.1, 'exhilarated': 3.0, 'exhilarates': 2.8, 'exhilarating': 1.7, 'exonerate': 1.8, 'exonerated': 1.8, 'exonerates': 1.6, 'exonerating': 1.0, 'expand': 1.3, 'expands': 0.4, 'expel': -1.9, 'expelled': -1.0, 'expelling': -1.6, 'expels': -1.6, 'exploit': -0.4, 'exploited': -2.0, 'exploiting': -1.9, 'exploits': -1.4, 'exploration': 0.9, 'explorations': 0.3, 'expose': -0.6, 'exposed': -0.3, 'exposes': -0.5, 'exposing': -1.1, 'extend': 0.7, 'extends': 0.5, 'exuberant': 2.8, 'exultant': 3.0, 'exultantly': 1.4, 'fab': 2.0, 'fabulous': 2.4, 'fabulousness': 2.8, 'fad': 0.9, 'fag': -2.1, 'faggot': -3.4, 'faggots': -3.2, 'fail': -2.5, 'failed': -2.3, 'failing': -2.3, 'failingly': -1.4, 'failings': -2.2, 'faille': 0.1, 'fails': -1.8, 'failure': -2.3, 'failures': -2.0, 'fainthearted': -0.3, 'fair': 1.3, 'faith': 1.8, 'faithed': 1.3, 'faithful': 1.9, 'faithfully': 1.8, 'faithfulness': 1.9, 'faithless': -1.0, 'faithlessly': -0.9, 'faithlessness': -1.8, 'faiths': 1.8, 'fake': -2.1, 'fakes': -1.8, 'faking': -1.8, 'fallen': -1.5, 'falling': -0.6, 'falsified': -1.6, 'falsify': -2.0, 'fame': 1.9, 'fan': 1.3, 'fantastic': 2.6, 'fantastical': 2.0, 'fantasticalities': 2.1, 'fantasticality': 1.7, 'fantasticalness': 1.3, 'fantasticate': 1.5, 'fantastico': 0.4, 'farce': -1.7, 'fascinate': 2.4, 'fascinated': 2.1, 'fascinates': 2.0, 'fascination': 2.2, 'fascinating': 2.5, 'fascist': -2.6, 'fascists': -0.8, 'fatal': -2.5, 'fatalism': -0.6, 'fatalisms': -1.7, 'fatalist': -0.5, 'fatalistic': -1.0, 'fatalists': -1.2, 'fatalities': -2.9, 'fatality': -3.5, 'fatally': -3.2, 'fatigue': -1.0, 'fatigued': -1.4, 'fatigues': -1.3, 'fatiguing': -1.2, 'fatiguingly': -1.5, 'fault': -1.7, 'faulted': -1.4, 'faultfinder': -0.8, 'faultfinders': -1.5, 'faultfinding': -2.1, 'faultier': -2.1, 'faultiest': -2.1, 'faultily': -2.0, 'faultiness': 
-1.5, 'faulting': -1.4, 'faultless': 2.0, 'faultlessly': 2.0, 'faultlessness': 1.1, 'faults': -2.1, 'faulty': -1.3, 'fave': 1.9, 'favor': 1.7, 'favorable': 2.1, 'favorableness': 2.2, 'favorably': 1.6, 'favored': 1.8, 'favorer': 1.3, 'favorers': 1.4, 'favoring': 1.8, 'favorite': 2.0, 'favorited': 1.7, 'favorites': 1.8, 'favoritism': 0.7, 'favoritisms': 0.7, 'favors': 1.0, 'favour': 1.9, 'favoured': 1.8, 'favourer': 1.6, 'favourers': 1.6, 'favouring': 1.3, 'favours': 1.8, 'fear': -2.2, 'feared': -2.2, 'fearful': -2.2, 'fearfuller': -2.2, 'fearfullest': -2.5, 'fearfully': -2.2, 'fearfulness': -1.8, 'fearing': -2.7, 'fearless': 1.9, 'fearlessly': 1.1, 'fearlessness': 1.1, 'fears': -1.8, 'fearsome': -1.7, 'fed up': -1.8, 'feeble': -1.2, 'feeling': 0.5, 'felonies': -2.5, 'felony': -2.5, 'ferocious': -0.4, 'ferociously': -1.1, 'ferociousness': -1.0, 'ferocities': -1.0, 'ferocity': -0.7, 'fervent': 1.1, 'fervid': 0.5, 'festival': 2.2, 'festivalgoer': 1.3, 'festivalgoers': 1.2, 'festivals': 1.5, 'festive': 2.0, 'festively': 2.2, 'festiveness': 2.4, 'festivities': 2.1, 'festivity': 2.2, 'feud': -1.4, 'feudal': -0.8, 'feudalism': -0.9, 'feudalisms': -0.2, 'feudalist': -0.9, 'feudalistic': -1.1, 'feudalities': -0.4, 'feudality': -0.5, 'feudalization': -0.3, 'feudalize': -0.5, 'feudalized': -0.8, 'feudalizes': -0.1, 'feudalizing': -0.7, 'feudally': -0.6, 'feudaries': -0.3, 'feudary': -0.8, 'feudatories': -0.5, 'feudatory': -0.1, 'feuded': -2.2, 'feuding': -1.6, 'feudist': -1.1, 'feudists': -0.7, 'feuds': -1.4, 'fiasco': -2.3, 'fidgety': -1.4, 'fiery': -1.4, 'fiesta': 2.1, 'fiestas': 1.5, 'fight': -1.6, 'fighter': 0.6, 'fighters': -0.2, 'fighting': -1.5, 'fightings': -1.9, 'fights': -1.7, 'fine': 0.8, 'fire': -1.4, 'fired': -2.6, 'firing': -1.4, 'fit': 1.5, 'fitness': 1.1, 'flagship': 0.4, 'flatter': 0.4, 'flattered': 1.6, 'flatterer': -0.3, 'flatterers': 0.3, 'flatteries': 1.2, 'flattering': 1.3, 'flatteringly': 1.0, 'flatters': 0.6, 'flattery': 0.4, 'flawless': 2.3, 'flawlessly': 0.8, 'flees': -0.7, 'flexibilities': 1.0, 'flexibility': 1.4, 'flexible': 0.9, 'flexibly': 1.3, 'flirtation': 1.7, 'flirtations': -0.1, 'flirtatious': 0.5, 'flirtatiously': -0.1, 'flirtatiousness': 0.6, 'flirted': -0.2, 'flirter': -0.4, 'flirters': 0.6, 'flirtier': -0.1, 'flirtiest': 0.4, 'flirting': 0.8, 'flirts': 0.7, 'flirty': 0.6, 'flop': -1.4, 'flops': -1.4, 'flu': -1.6, 'flunk': -1.3, 'flunked': -2.1, 'flunker': -1.9, 'flunkers': -1.6, 'flunkey': -1.8, 'flunkeys': -0.6, 'flunkies': -1.4, 'flunking': -1.5, 'flunks': -1.8, 'flunky': -1.8, 'flustered': -1.0, 'focused': 1.6, 'foe': -1.9, 'foehns': 0.2, 'foeman': -1.8, 'foemen': -0.3, 'foes': -2.0, 'foetal': -0.1, 'foetid': -2.3, 'foetor': -3.0, 'foetors': -2.1, 'foetus': 0.2, 'foetuses': 0.2, 'fond': 1.9, 'fondly': 1.9, 'fondness': 2.5, 'fool': -1.9, 'fooled': -1.6, 'fooleries': -1.8, 'foolery': -1.8, 'foolfish': -0.8, 'foolfishes': -0.4, 'foolhardier': -1.5, 'foolhardiest': -1.3, 'foolhardily': -1.0, 'foolhardiness': -1.6, 'foolhardy': -1.4, 'fooling': -1.7, 'foolish': -1.1, 'foolisher': -1.7, 'foolishest': -1.4, 'foolishly': -1.8, 'foolishness': -1.8, 'foolishnesses': -2.0, 'foolproof': 1.6, 'fools': -2.2, 'foolscaps': -0.8, 'forbid': -1.3, 'forbiddance': -1.4, 'forbiddances': -1.0, 'forbidden': -1.8, 'forbidder': -1.6, 'forbidders': -1.5, 'forbidding': -1.9, 'forbiddingly': -1.9, 'forbids': -1.3, 'forced': -2.0, 'foreclosure': -0.5, 'foreclosures': -2.4, 'forgave': 1.4, 'forget': -0.9, 'forgetful': -1.1, 'forgivable': 1.7, 'forgivably': 1.6, 'forgive': 1.1, 'forgiven': 
1.6, 'forgiveness': 1.1, 'forgiver': 1.7, 'forgivers': 1.2, 'forgives': 1.7, 'forgiving': 1.9, 'forgivingly': 1.4, 'forgivingness': 1.8, 'forgotten': -0.9, 'fortunate': 1.9, 'fought': -1.3, 'foughten': -1.9, 'frantic': -1.9, 'frantically': -1.4, 'franticness': -0.7, 'fraud': -2.8, 'frauds': -2.3, 'fraudster': -2.5, 'fraudsters': -2.4, 'fraudulence': -2.3, 'fraudulent': -2.2, 'freak': -1.9, 'freaked': -1.2, 'freakier': -1.3, 'freakiest': -1.6, 'freakiness': -1.4, 'freaking': -1.8, 'freakish': -2.1, 'freakishly': -0.8, 'freakishness': -1.4, 'freakout': -1.8, 'freakouts': -1.5, 'freaks': -0.4, 'freaky': -1.5, 'free': 2.3, 'freebase': -0.1, 'freebased': 0.8, 'freebases': 0.8, 'freebasing': -0.4, 'freebee': 1.3, 'freebees': 1.3, 'freebie': 1.8, 'freebies': 1.8, 'freeboard': 0.3, 'freeboards': 0.7, 'freeboot': -0.7, 'freebooter': -1.7, 'freebooters': -0.2, 'freebooting': -0.8, 'freeborn': 1.2, 'freed': 1.7, 'freedman': 1.1, 'freedmen': 0.7, 'freedom': 3.2, 'freedoms': 1.2, 'freedwoman': 1.6, 'freedwomen': 1.3, 'freeform': 0.9, 'freehand': 0.5, 'freehanded': 1.4, 'freehearted': 1.5, 'freehold': 0.7, 'freeholder': 0.5, 'freeholders': 0.1, 'freeholds': 1.0, 'freeing': 2.1, 'freelance': 1.2, 'freelanced': 0.7, 'freelancer': 1.1, 'freelancers': 0.4, 'freelances': 0.7, 'freelancing': 0.4, 'freeload': -1.9, 'freeloaded': -1.6, 'freeloader': -0.7, 'freeloaders': -0.1, 'freeloading': -1.3, 'freeloads': -1.3, 'freely': 1.9, 'freeman': 1.7, 'freemartin': -0.5, 'freemasonries': 0.7, 'freemasonry': 0.3, 'freemen': 1.5, 'freeness': 1.6, 'freenesses': 1.7, 'freer': 1.1, 'freers': 1.0, 'frees': 1.2, 'freesia': 0.4, 'freesias': 0.4, 'freest': 1.6, 'freestanding': 1.1, 'freestyle': 0.7, 'freestyler': 0.4, 'freestylers': 0.8, 'freestyles': 0.3, 'freethinker': 1.0, 'freethinkers': 1.0, 'freethinking': 1.1, 'freeware': 0.7, 'freeway': 0.2, 'freewheel': 0.5, 'freewheeled': 0.3, 'freewheeler': 0.2, 'freewheelers': -0.3, 'freewheeling': 0.5, 'freewheelingly': 0.8, 'freewheels': 0.6, 'freewill': 1.0, 'freewriting': 0.8, 'freeze': 0.2, 'freezers': -0.1, 'freezes': -0.1, 'freezing': -0.4, 'freezingly': -1.6, 'frenzy': -1.3, 'fresh': 1.3, 'friend': 2.2, 'friended': 1.7, 'friending': 1.8, 'friendless': -1.5, 'friendlessness': -0.3, 'friendlier': 2.0, 'friendlies': 2.2, 'friendliest': 2.6, 'friendlily': 1.8, 'friendliness': 2.0, 'friendly': 2.2, 'friends': 2.1, 'friendship': 1.9, 'friendships': 1.6, 'fright': -1.6, 'frighted': -1.4, 'frighten': -1.4, 'frightened': -1.9, 'frightening': -2.2, 'frighteningly': -2.1, 'frightens': -1.7, 'frightful': -2.3, 'frightfully': -1.7, 'frightfulness': -1.9, 'frighting': -1.5, 'frights': -1.1, 'frisky': 1.0, 'frowning': -1.4, 'frustrate': -2.0, 'frustrated': -2.4, 'frustrates': -1.9, 'frustrating': -1.9, 'frustratingly': -2.0, 'frustration': -2.1, 'frustrations': -2.0, 'fuck': -2.5, 'fucked': -3.4, 'fucker': -3.3, 'fuckers': -2.9, 'fuckface': -3.2, 'fuckhead': -3.1, 'fucks': -2.1, 'fucktard': -3.1, 'fud': -1.1, 'fuked': -2.5, 'fuking': -3.2, 'fulfill': 1.9, 'fulfilled': 1.8, 'fulfills': 1.0, 'fume': -1.2, 'fumed': -1.8, 'fumeless': 0.3, 'fumelike': -0.7, 'fumer': 0.7, 'fumers': -0.8, 'fumes': -0.1, 'fumet': 0.4, 'fumets': -0.4, 'fumette': -0.6, 'fuming': -2.7, 'fun': 2.3, 'funeral': -1.5, 'funerals': -1.6, 'funky': -0.4, 'funned': 2.3, 'funnel': 0.1, 'funneled': 0.1, 'funnelform': 0.5, 'funneling': -0.1, 'funnelled': -0.1, 'funnelling': 0.1, 'funnels': 0.4, 'funner': 2.2, 'funnest': 2.9, 'funnier': 1.7, 'funnies': 1.3, 'funniest': 2.6, 'funnily': 1.9, 'funniness': 1.8, 'funninesses': 1.6, 
'funning': 1.8, 'funny': 1.9, 'funnyman': 1.4, 'funnymen': 1.3, 'furious': -2.7, 'furiously': -1.9, 'fury': -2.7, 'futile': -1.9, 'gag': -1.4, 'gagged': -1.3, 'gain': 2.4, 'gained': 1.6, 'gaining': 1.8, 'gains': 1.4, 'gallant': 1.7, 'gallantly': 1.9, 'gallantry': 2.6, 'geek': -0.8, 'geekier': 0.2, 'geekiest': -0.1, 'geeks': -0.4, 'geeky': -0.6, 'generosities': 2.6, 'generosity': 2.3, 'generous': 2.3, 'generously': 1.8, 'generousness': 2.4, 'genial': 1.8, 'gentle': 1.9, 'gentler': 1.4, 'gentlest': 1.8, 'gently': 2.0, 'ghost': -1.3, 'giddy': -0.6, 'gift': 1.9, 'giggle': 1.8, 'giggled': 1.5, 'giggler': 0.6, 'gigglers': 1.4, 'giggles': 0.8, 'gigglier': 1.0, 'giggliest': 1.7, 'giggling': 1.5, 'gigglingly': 1.1, 'giggly': 1.0, 'giver': 1.4, 'givers': 1.7, 'giving': 1.4, 'glad': 2.0, 'gladly': 1.4, 'glamor': 2.1, 'glamorise': 1.3, 'glamorised': 1.8, 'glamorises': 2.1, 'glamorising': 1.2, 'glamorization': 1.6, 'glamorize': 1.7, 'glamorized': 2.1, 'glamorizer': 2.4, 'glamorizers': 1.6, 'glamorizes': 2.4, 'glamorizing': 1.8, 'glamorous': 2.3, 'glamorously': 2.1, 'glamors': 1.4, 'glamour': 2.4, 'glamourize': 0.8, 'glamourless': -1.6, 'glamourous': 2.0, 'glamours': 1.9, 'glee': 3.2, 'gleeful': 2.9, 'gloom': -2.6, 'gloomed': -1.9, 'gloomful': -2.1, 'gloomier': -1.5, 'gloomiest': -1.8, 'gloominess': -1.8, 'gloominesses': -1.0, 'glooming': -1.8, 'glooms': -0.9, 'gloomy': -0.6, 'gloried': 2.4, 'glories': 2.1, 'glorification': 2.0, 'glorified': 2.3, 'glorifier': 2.3, 'glorifiers': 1.6, 'glorifies': 2.2, 'glorify': 2.7, 'glorifying': 2.4, 'gloriole': 1.5, 'glorioles': 1.2, 'glorious': 3.2, 'gloriously': 2.9, 'gloriousness': 2.6, 'glory': 2.5, 'glum': -2.1, 'gn8': 0.6, 'god': 1.1, 'goddam': -2.5, 'goddammed': -2.4, 'goddamn': -2.1, 'goddamned': -1.8, 'goddamns': -2.1, 'goddams': -1.9, 'godsend': 2.8, 'good': 1.9, 'goodness': 2.0, 'gorgeous': 3.0, 'gorgeously': 2.3, 'gorgeousness': 2.9, 'gorgeousnesses': 2.1, 'gossip': -0.7, 'gossiped': -1.1, 'gossiper': -1.1, 'gossipers': -1.1, 'gossiping': -1.6, 'gossipmonger': -1.0, 'gossipmongers': -1.4, 'gossipped': -1.3, 'gossipping': -1.8, 'gossipries': -0.8, 'gossipry': -1.2, 'gossips': -1.3, 'gossipy': -1.3, 'grace': 1.8, 'graced': 0.9, 'graceful': 2.0, 'gracefuller': 2.2, 'gracefullest': 2.8, 'gracefully': 2.4, 'gracefulness': 2.2, 'graces': 1.6, 'gracile': 1.7, 'graciles': 0.6, 'gracilis': 0.4, 'gracility': 1.2, 'gracing': 1.3, 'gracioso': 1.0, 'gracious': 2.6, 'graciously': 2.3, 'graciousness': 2.4, 'grand': 2.0, 'grandee': 1.1, 'grandees': 1.2, 'grander': 1.7, 'grandest': 2.4, 'grandeur': 2.4, 'grandeurs': 2.1, 'grant': 1.5, 'granted': 1.0, 'granting': 1.3, 'grants': 0.9, 'grateful': 2.0, 'gratefuller': 1.8, 'gratefully': 2.1, 'gratefulness': 2.2, 'graticule': 0.1, 'graticules': 0.2, 'gratification': 1.6, 'gratifications': 1.8, 'gratified': 1.6, 'gratifies': 1.5, 'gratify': 1.3, 'gratifying': 2.3, 'gratifyingly': 2.0, 'gratin': 0.4, 'grating': -0.4, 'gratingly': -0.2, 'gratings': -0.8, 'gratins': 0.2, 'gratis': 0.2, 'gratitude': 2.3, 'gratz': 2.0, 'grave': -1.6, 'graved': -0.9, 'gravel': -0.5, 'graveled': -0.5, 'graveless': -1.3, 'graveling': -0.4, 'gravelled': -0.9, 'gravelling': -0.4, 'gravelly': -0.9, 'gravels': -0.5, 'gravely': -1.5, 'graven': -0.9, 'graveness': -1.5, 'graver': -1.1, 'gravers': -1.2, 'graves': -1.2, 'graveside': -0.8, 'gravesides': -1.6, 'gravest': -1.3, 'gravestone': -0.7, 'gravestones': -0.5, 'graveyard': -1.2, 'graveyards': -1.2, 'great': 3.1, 'greater': 1.5, 'greatest': 3.2, 'greed': -1.7, 'greedier': -2.0, 'greediest': -2.8, 'greedily': 
-1.9, 'greediness': -1.7, 'greeds': -1.0, 'greedy': -1.3, 'greenwash': -1.8, 'greenwashing': -0.4, 'greet': 1.3, 'greeted': 1.1, 'greeting': 1.6, 'greetings': 1.8, 'greets': 0.6, 'grey': 0.2, 'grief': -2.2, 'grievance': -2.1, 'grievances': -1.5, 'grievant': -0.8, 'grievants': -1.1, 'grieve': -1.6, 'grieved': -2.0, 'griever': -1.9, 'grievers': -0.3, 'grieves': -2.1, 'grieving': -2.3, 'grievous': -2.0, 'grievously': -1.7, 'grievousness': -2.7, 'grim': -2.7, 'grimace': -1.0, 'grimaced': -2.0, 'grimaces': -1.8, 'grimacing': -1.4, 'grimalkin': -0.9, 'grimalkins': -0.9, 'grime': -1.5, 'grimed': -1.2, 'grimes': -1.0, 'grimier': -1.6, 'grimiest': -0.7, 'grimily': -0.7, 'griminess': -1.6, 'griming': -0.7, 'grimly': -1.3, 'grimmer': -1.5, 'grimmest': -0.8, 'grimness': -0.8, 'grimy': -1.8, 'grin': 2.1, 'grinned': 1.1, 'grinner': 1.1, 'grinners': 1.6, 'grinning': 1.5, 'grins': 0.9, 'gross': -2.1, 'grossed': -0.4, 'grosser': -0.3, 'grosses': -0.8, 'grossest': -2.1, 'grossing': -0.3, 'grossly': -0.9, 'grossness': -1.8, 'grossular': -0.3, 'grossularite': -0.1, 'grossularites': -0.7, 'grossulars': -0.3, 'grouch': -2.2, 'grouched': -0.8, 'grouches': -0.9, 'grouchier': -2.0, 'grouchiest': -2.3, 'grouchily': -1.4, 'grouchiness': -2.0, 'grouching': -1.7, 'grouchy': -1.9, 'growing': 0.7, 'growth': 1.6, 'guarantee': 1.0, 'guilt': -1.1, 'guiltier': -2.0, 'guiltiest': -1.7, 'guiltily': -1.1, 'guiltiness': -1.8, 'guiltless': 0.8, 'guiltlessly': 0.7, 'guiltlessness': 0.6, 'guilts': -1.4, 'guilty': -1.8, 'gullibility': -1.6, 'gullible': -1.5, 'gun': -1.4, 'h8': -2.7, 'ha': 1.4, 'hacked': -1.7, 'haha': 2.0, 'hahaha': 2.6, 'hahas': 1.8, 'hail': 0.3, 'hailed': 0.9, 'hallelujah': 3.0, 'handsome': 2.2, 'handsomely': 1.9, 'handsomeness': 2.4, 'handsomer': 2.0, 'handsomest': 2.6, 'hapless': -1.4, 'haplessness': -1.4, 'happier': 2.4, 'happiest': 3.2, 'happily': 2.6, 'happiness': 2.6, 'happing': 1.1, 'happy': 2.7, 'harass': -2.2, 'harassed': -2.5, 'harasser': -2.4, 'harassers': -2.8, 'harasses': -2.5, 'harassing': -2.5, 'harassment': -2.5, 'harassments': -2.6, 'hard': -0.4, 'hardier': -0.6, 'hardship': -1.3, 'hardy': 1.7, 'harm': -2.5, 'harmed': -2.1, 'harmfully': -2.6, 'harmfulness': -2.6, 'harming': -2.6, 'harmless': 1.0, 'harmlessly': 1.4, 'harmlessness': 0.8, 'harmonic': 1.8, 'harmonica': 0.6, 'harmonically': 2.1, 'harmonicas': 0.1, 'harmonicist': 0.5, 'harmonicists': 0.9, 'harmonics': 1.5, 'harmonies': 1.3, 'harmonious': 2.0, 'harmoniously': 1.9, 'harmoniousness': 1.8, 'harmonise': 1.8, 'harmonised': 1.3, 'harmonising': 1.4, 'harmonium': 0.9, 'harmoniums': 0.8, 'harmonization': 1.9, 'harmonizations': 0.9, 'harmonize': 1.7, 'harmonized': 1.6, 'harmonizer': 1.6, 'harmonizers': 1.6, 'harmonizes': 1.5, 'harmonizing': 1.4, 'harmony': 1.7, 'harms': -2.2, 'harried': -1.4, 'harsh': -1.9, 'harsher': -2.2, 'harshest': -2.9, 'hate': -2.7, 'hated': -3.2, 'hateful': -2.2, 'hatefully': -2.3, 'hatefulness': -3.6, 'hater': -1.8, 'haters': -2.2, 'hates': -1.9, 'hating': -2.3, 'hatred': -3.2, 'haunt': -1.7, 'haunted': -2.1, 'haunting': -1.1, 'haunts': -1.0, 'havoc': -2.9, 'healthy': 1.7, 'heartbreak': -2.7, 'heartbreaker': -2.2, 'heartbreakers': -2.1, 'heartbreaking': -2.0, 'heartbreakingly': -1.8, 'heartbreaks': -1.8, 'heartbroken': -3.3, 'heartfelt': 2.5, 'heartless': -2.2, 'heartlessly': -2.8, 'heartlessness': -2.8, 'heartwarming': 2.1, 'heaven': 2.3, 'heavenlier': 3.0, 'heavenliest': 2.7, 'heavenliness': 2.7, 'heavenlinesses': 2.3, 'heavenly': 3.0, 'heavens': 1.7, 'heavenward': 1.4, 'heavenwards': 1.2, 'heavyhearted': -2.1, 'heh': 
-0.6, 'hell': -3.6, 'hellish': -3.2, 'help': 1.7, 'helper': 1.4, 'helpers': 1.1, 'helpful': 1.8, 'helpfully': 2.3, 'helpfulness': 1.9, 'helping': 1.2, 'helpless': -2.0, 'helplessly': -1.4, 'helplessness': -2.1, 'helplessnesses': -1.7, 'helps': 1.6, 'hero': 2.6, 'heroes': 2.3, 'heroic': 2.6, 'heroical': 2.9, 'heroically': 2.4, 'heroicomic': 1.0, 'heroicomical': 1.1, 'heroics': 2.4, 'heroin': -2.2, 'heroine': 2.7, 'heroines': 1.8, 'heroinism': -2.0, 'heroism': 2.8, 'heroisms': 2.2, 'heroize': 2.1, 'heroized': 2.0, 'heroizes': 2.2, 'heroizing': 1.9, 'heron': 0.1, 'heronries': 0.7, 'heronry': 0.1, 'herons': 0.5, 'heros': 1.3, 'hesitance': -0.9, 'hesitancies': -1.0, 'hesitancy': -0.9, 'hesitant': -1.0, 'hesitantly': -1.2, 'hesitate': -1.1, 'hesitated': -1.3, 'hesitater': -1.4, 'hesitaters': -1.4, 'hesitates': -1.4, 'hesitating': -1.4, 'hesitatingly': -1.5, 'hesitation': -1.1, 'hesitations': -1.1, 'hid': -0.4, 'hide': -0.7, 'hides': -0.7, 'hiding': -1.2, 'highlight': 1.4, 'hilarious': 1.7, 'hindrance': -1.7, 'hoax': -1.1, 'holiday': 1.7, 'holidays': 1.6, 'homesick': -0.7, 'homesickness': -1.8, 'homesicknesses': -1.8, 'honest': 2.3, 'honester': 1.9, 'honestest': 3.0, 'honesties': 1.8, 'honestly': 2.0, 'honesty': 2.2, 'honor': 2.2, 'honorability': 2.2, 'honorable': 2.5, 'honorableness': 2.2, 'honorably': 2.4, 'honoraria': 0.6, 'honoraries': 1.5, 'honorarily': 1.9, 'honorarium': 0.7, 'honorariums': 1.0, 'honorary': 1.4, 'honored': 2.8, 'honoree': 2.1, 'honorees': 2.3, 'honorer': 1.7, 'honorers': 1.3, 'honorific': 1.4, 'honorifically': 2.2, 'honorifics': 1.7, 'honoring': 2.3, 'honors': 2.3, 'honour': 2.7, 'honourable': 2.1, 'honoured': 2.2, 'honourer': 1.8, 'honourers': 1.6, 'honouring': 2.1, 'honours': 2.2, 'hooligan': -1.5, 'hooliganism': -2.1, 'hooligans': -1.1, 'hooray': 2.3, 'hope': 1.9, 'hoped': 1.6, 'hopeful': 2.3, 'hopefully': 1.7, 'hopefulness': 1.6, 'hopeless': -2.0, 'hopelessly': -2.2, 'hopelessness': -3.1, 'hopes': 1.8, 'hoping': 1.8, 'horrendous': -2.8, 'horrendously': -1.9, 'horrent': -0.9, 'horrible': -2.5, 'horribleness': -2.4, 'horribles': -2.1, 'horribly': -2.4, 'horrid': -2.5, 'horridly': -1.4, 'horridness': -2.3, 'horridnesses': -3.0, 'horrific': -3.4, 'horrifically': -2.9, 'horrified': -2.5, 'horrifies': -2.9, 'horrify': -2.5, 'horrifying': -2.7, 'horrifyingly': -3.3, 'horror': -2.7, 'horrors': -2.7, 'hostile': -1.6, 'hostilely': -2.2, 'hostiles': -1.3, 'hostilities': -2.1, 'hostility': -2.5, 'huckster': -0.9, 'hug': 2.1, 'huge': 1.3, 'huggable': 1.6, 'hugged': 1.7, 'hugger': 1.6, 'huggers': 1.8, 'hugging': 1.8, 'hugs': 2.2, 'humerous': 1.4, 'humiliate': -2.5, 'humiliated': -1.4, 'humiliates': -1.0, 'humiliating': -1.2, 'humiliatingly': -2.6, 'humiliation': -2.7, 'humiliations': -2.4, 'humor': 1.1, 'humoral': 0.6, 'humored': 1.2, 'humoresque': 1.2, 'humoresques': 0.9, 'humoring': 2.1, 'humorist': 1.2, 'humoristic': 1.5, 'humorists': 1.3, 'humorless': -1.3, 'humorlessness': -1.4, 'humorous': 1.6, 'humorously': 2.3, 'humorousness': 2.4, 'humors': 1.6, 'humour': 2.1, 'humoured': 1.1, 'humouring': 1.7, 'humourous': 2.0, 'hunger': -1.0, 'hurrah': 2.6, 'hurrahed': 1.9, 'hurrahing': 2.4, 'hurrahs': 2.1, 'hurray': 2.7, 'hurrayed': 1.8, 'hurraying': 1.2, 'hurrays': 2.4, 'hurt': -2.4, 'hurter': -2.3, 'hurters': -1.9, 'hurtful': -2.4, 'hurtfully': -2.6, 'hurtfulness': -1.9, 'hurting': -1.7, 'hurtle': -0.3, 'hurtled': -0.6, 'hurtles': -1.0, 'hurtless': 0.3, 'hurtling': -1.4, 'hurts': -2.1, 'hypocritical': -2.0, 'hysteria': -1.9, 'hysterical': -0.1, 'hysterics': -1.8, 'ideal': 2.4, 
'idealess': -1.9, 'idealise': 1.4, 'idealised': 2.1, 'idealises': 2.0, 'idealising': 0.6, 'idealism': 1.7, 'idealisms': 0.8, 'idealist': 1.6, 'idealistic': 1.8, 'idealistically': 1.7, 'idealists': 0.7, 'idealities': 1.5, 'ideality': 1.9, 'idealization': 1.8, 'idealizations': 1.4, 'idealize': 1.2, 'idealized': 1.8, 'idealizer': 1.3, 'idealizers': 1.9, 'idealizes': 2.0, 'idealizing': 1.4, 'idealless': -1.7, 'ideally': 1.8, 'idealogues': 0.5, 'idealogy': 0.8, 'ideals': 0.8, 'idiot': -2.3, 'idiotic': -2.6, 'ignorable': -1.0, 'ignorami': -1.9, 'ignoramus': -1.9, 'ignoramuses': -2.3, 'ignorance': -1.5, 'ignorances': -1.2, 'ignorant': -1.1, 'ignorantly': -1.6, 'ignorantness': -1.1, 'ignore': -1.5, 'ignored': -1.3, 'ignorer': -1.3, 'ignorers': -0.7, 'ignores': -1.1, 'ignoring': -1.7, 'ill': -1.8, 'illegal': -2.6, 'illiteracy': -1.9, 'illness': -1.7, 'illnesses': -2.2, 'imbecile': -2.2, 'immobilized': -1.2, 'immoral': -2.0, 'immoralism': -1.6, 'immoralist': -2.1, 'immoralists': -1.7, 'immoralities': -1.1, 'immorality': -0.6, 'immorally': -2.1, 'immortal': 1.0, 'immune': 1.2, 'impatience': -1.8, 'impatiens': -0.2, 'impatient': -1.2, 'impatiently': -1.7, 'imperfect': -1.3, 'impersonal': -1.3, 'impolite': -1.6, 'impolitely': -1.8, 'impoliteness': -1.8, 'impolitenesses': -2.3, 'importance': 1.5, 'importancies': 0.4, 'importancy': 1.4, 'important': 0.8, 'importantly': 1.3, 'impose': -1.2, 'imposed': -0.3, 'imposes': -0.4, 'imposing': -0.4, 'impotent': -1.1, 'impress': 1.9, 'impressed': 2.1, 'impresses': 2.1, 'impressibility': 1.2, 'impressible': 0.8, 'impressing': 2.5, 'impression': 0.9, 'impressionable': 0.2, 'impressionism': 0.8, 'impressionisms': 0.5, 'impressionist': 1.0, 'impressionistic': 1.5, 'impressionistically': 1.6, 'impressionists': 0.5, 'impressions': 0.9, 'impressive': 2.3, 'impressively': 2.0, 'impressiveness': 1.7, 'impressment': -0.4, 'impressments': 0.5, 'impressure': 0.6, 'imprisoned': -2.0, 'improve': 1.9, 'improved': 2.1, 'improvement': 2.0, 'improvements': 1.3, 'improver': 1.8, 'improvers': 1.3, 'improves': 1.8, 'improving': 1.8, 'inability': -1.7, 'inaction': -1.0, 'inadequacies': -1.7, 'inadequacy': -1.7, 'inadequate': -1.7, 'inadequately': -1.0, 'inadequateness': -1.7, 'inadequatenesses': -1.6, 'incapable': -1.6, 'incapacitated': -1.9, 'incensed': -2.0, 'incentive': 1.5, 'incentives': 1.3, 'incompetence': -2.3, 'incompetent': -2.1, 'inconsiderate': -1.9, 'inconvenience': -1.5, 'inconvenient': -1.4, 'increase': 1.3, 'increased': 1.1, 'indecision': -0.8, 'indecisions': -1.1, 'indecisive': -1.0, 'indecisively': -0.7, 'indecisiveness': -1.3, 'indecisivenesses': -0.9, 'indestructible': 0.6, 'indifference': -0.2, 'indifferent': -0.8, 'indignant': -1.8, 'indignation': -2.4, 'indoctrinate': -1.4, 'indoctrinated': -0.4, 'indoctrinates': -0.6, 'indoctrinating': -0.7, 'ineffective': -0.5, 'ineffectively': -1.3, 'ineffectiveness': -1.3, 'ineffectual': -1.2, 'ineffectuality': -1.6, 'ineffectually': -1.1, 'ineffectualness': -1.3, 'infatuated': 0.2, 'infatuation': 0.6, 'infected': -2.2, 'inferior': -1.7, 'inferiorities': -1.9, 'inferiority': -1.1, 'inferiorly': -2.0, 'inferiors': -0.5, 'inflamed': -1.4, 'influential': 1.9, 'infringement': -2.1, 'infuriate': -2.2, 'infuriated': -3.0, 'infuriates': -2.6, 'infuriating': -2.4, 'inhibin': -0.2, 'inhibit': -1.6, 'inhibited': -0.4, 'inhibiting': -0.4, 'inhibition': -1.5, 'inhibitions': -0.8, 'inhibitive': -1.4, 'inhibitor': -0.3, 'inhibitors': -1.0, 'inhibitory': -1.0, 'inhibits': -0.9, 'injured': -1.7, 'injury': -1.8, 'injustice': -2.7, 'innocence': 
1.6, 'innocency': 1.9, 'innocent': 1.4, 'innocenter': 0.9, 'innocently': 1.4, 'innocents': 1.1, 'innovate': 2.2, 'innovates': 2.0, 'innovation': 1.6, 'innovative': 1.9, 'inquisition': -1.2, 'inquisitive': 0.7, 'insane': -1.7, 'insanity': -2.7, 'insecure': -1.8, 'insecurely': -1.4, 'insecureness': -1.8, 'insecurities': -1.8, 'insecurity': -1.8, 'insensitive': -0.9, 'insensitivity': -1.8, 'insignificant': -1.4, 'insincere': -1.8, 'insincerely': -1.9, 'insincerity': -1.4, 'insipid': -2.0, 'inspiration': 2.4, 'inspirational': 2.3, 'inspirationally': 2.3, 'inspirations': 2.1, 'inspirator': 1.9, 'inspirators': 1.2, 'inspiratory': 1.5, 'inspire': 2.7, 'inspired': 2.2, 'inspirer': 2.2, 'inspirers': 2.0, 'inspires': 1.9, 'inspiring': 1.8, 'inspiringly': 2.6, 'inspirit': 1.9, 'inspirited': 1.3, 'inspiriting': 1.8, 'inspiritingly': 2.1, 'inspirits': 0.8, 'insult': -2.3, 'insulted': -2.3, 'insulter': -2.0, 'insulters': -2.0, 'insulting': -2.2, 'insultingly': -2.3, 'insults': -1.8, 'intact': 0.8, 'integrity': 1.6, 'intellect': 2.0, 'intellection': 0.6, 'intellections': 0.8, 'intellective': 1.7, 'intellectively': 0.8, 'intellects': 1.8, 'intellectual': 2.3, 'intellectualism': 2.2, 'intellectualist': 2.0, 'intellectualistic': 1.3, 'intellectualists': 0.8, 'intellectualities': 1.7, 'intellectuality': 1.7, 'intellectualization': 1.5, 'intellectualize': 1.5, 'intellectualized': 1.2, 'intellectualizes': 1.8, 'intellectualizing': 0.8, 'intellectually': 1.4, 'intellectualness': 1.5, 'intellectuals': 1.6, 'intelligence': 2.1, 'intelligencer': 1.5, 'intelligencers': 1.6, 'intelligences': 1.6, 'intelligent': 2.0, 'intelligential': 1.9, 'intelligently': 2.0, 'intelligentsia': 1.5, 'intelligibility': 1.5, 'intelligible': 1.4, 'intelligibleness': 1.5, 'intelligibly': 1.2, 'intense': 0.3, 'interest': 2.0, 'interested': 1.7, 'interestedly': 1.5, 'interesting': 1.7, 'interestingly': 1.7, 'interestingness': 1.8, 'interests': 1.0, 'interrogated': -1.6, 'interrupt': -1.4, 'interrupted': -1.2, 'interrupter': -1.1, 'interrupters': -1.3, 'interruptible': -1.3, 'interrupting': -1.2, 'interruption': -1.5, 'interruptions': -1.7, 'interruptive': -1.4, 'interruptor': -1.3, 'interrupts': -1.3, 'intimidate': -0.8, 'intimidated': -1.9, 'intimidates': -1.3, 'intimidating': -1.9, 'intimidatingly': -1.1, 'intimidation': -1.8, 'intimidations': -1.4, 'intimidator': -1.6, 'intimidators': -1.6, 'intimidatory': -1.1, 'intricate': 0.6, 'intrigues': 0.9, 'invigorate': 1.9, 'invigorated': 0.8, 'invigorates': 2.1, 'invigorating': 2.1, 'invigoratingly': 2.0, 'invigoration': 1.5, 'invigorations': 1.2, 'invigorator': 1.1, 'invigorators': 1.2, 'invincible': 2.2, 'invite': 0.6, 'inviting': 1.3, 'invulnerable': 1.3, 'irate': -2.9, 'ironic': -0.5, 'irony': -0.2, 'irrational': -1.4, 'irrationalism': -1.5, 'irrationalist': -2.1, 'irrationalists': -1.5, 'irrationalities': -1.5, 'irrationality': -1.7, 'irrationally': -1.6, 'irrationals': -1.1, 'irresistible': 1.4, 'irresolute': -1.4, 'irresponsible': -1.9, 'irreversible': -0.8, 'irritabilities': -1.7, 'irritability': -1.4, 'irritable': -2.1, 'irritableness': -1.7, 'irritably': -1.8, 'irritant': -2.3, 'irritants': -2.1, 'irritate': -1.8, 'irritated': -2.0, 'irritates': -1.7, 'irritating': -2.0, 'irritatingly': -2.0, 'irritation': -2.3, 'irritations': -1.5, 'irritative': -2.0, 'isolatable': 0.2, 'isolate': -0.8, 'isolated': -1.3, 'isolates': -1.3, 'isolation': -1.7, 'isolationism': 0.4, 'isolationist': 0.7, 'isolations': -0.5, 'isolator': -0.4, 'isolators': -0.4, 'itchy': -1.1, 'jackass': -1.8, 'jackasses': 
-2.8, 'jaded': -1.6, 'jailed': -2.2, 'jaunty': 1.2, 'jealous': -2.0, 'jealousies': -2.0, 'jealously': -2.0, 'jealousness': -1.7, 'jealousy': -1.3, 'jeopardy': -2.1, 'jerk': -1.4, 'jerked': -0.8, 'jerks': -1.1, 'jewel': 1.5, 'jewels': 2.0, 'jocular': 1.2, 'join': 1.2, 'joke': 1.2, 'joked': 1.3, 'joker': 0.5, 'jokes': 1.0, 'jokester': 1.5, 'jokesters': 0.9, 'jokey': 1.1, 'joking': 0.9, 'jollied': 2.4, 'jollier': 2.4, 'jollies': 2.0, 'jolliest': 2.9, 'jollification': 2.2, 'jollifications': 2.0, 'jollify': 2.1, 'jollily': 2.7, 'jolliness': 2.5, 'jollities': 1.7, 'jollity': 1.8, 'jolly': 2.3, 'jollying': 2.3, 'jovial': 1.9, 'joy': 2.8, 'joyance': 2.3, 'joyed': 2.9, 'joyful': 2.9, 'joyfuller': 2.4, 'joyfully': 2.5, 'joyfulness': 2.7, 'joying': 2.5, 'joyless': -2.5, 'joylessly': -1.7, 'joylessness': -2.7, 'joyous': 3.1, 'joyously': 2.9, 'joyousness': 2.8, 'joypop': -0.2, 'joypoppers': -0.1, 'joyridden': 0.6, 'joyride': 1.1, 'joyrider': 0.7, 'joyriders': 1.3, 'joyrides': 0.8, 'joyriding': 0.9, 'joyrode': 1.0, 'joys': 2.2, 'joystick': 0.7, 'joysticks': 0.2, 'jubilant': 3.0, 'jumpy': -1.0, 'justice': 2.4, 'justifiably': 1.0, 'justified': 1.7, 'keen': 1.5, 'keened': 0.3, 'keener': 0.5, 'keeners': 0.6, 'keenest': 1.9, 'keening': -0.7, 'keenly': 1.0, 'keenness': 1.4, 'keens': 0.1, 'kewl': 1.3, 'kidding': 0.4, 'kill': -3.7, 'killdeer': -1.1, 'killdeers': -0.1, 'killdees': -0.6, 'killed': -3.5, 'killer': -3.3, 'killers': -3.3, 'killick': 0.1, 'killie': -0.1, 'killifish': -0.1, 'killifishes': -0.1, 'killing': -3.4, 'killingly': -2.6, 'killings': -3.5, 'killjoy': -2.1, 'killjoys': -1.7, 'killock': -0.3, 'killocks': -0.4, 'kills': -2.5, 'kind': 2.4, 'kinder': 2.2, 'kindly': 2.2, 'kindness': 2.0, 'kindnesses': 2.3, 'kiss': 1.8, 'kissable': 2.0, 'kissably': 1.9, 'kissed': 1.6, 'kisser': 1.7, 'kissers': 1.5, 'kisses': 2.3, 'kissing': 2.7, 'kissy': 1.8, 'kudos': 2.3, 'lack': -1.3, 'lackadaisical': -1.6, 'lag': -1.4, 'lagged': -1.2, 'lagging': -1.1, 'lags': -1.5, 'laidback': 0.5, 'lame': -1.8, 'lamebrain': -1.6, 'lamebrained': -2.5, 'lamebrains': -1.2, 'lamedh': 0.1, 'lamella': -0.1, 'lamellae': -0.1, 'lamellas': 0.1, 'lamellibranch': 0.2, 'lamellibranchs': -0.1, 'lamely': -2.0, 'lameness': -0.8, 'lament': -2.0, 'lamentable': -1.5, 'lamentableness': -1.3, 'lamentably': -1.5, 'lamentation': -1.4, 'lamentations': -1.9, 'lamented': -1.4, 'lamenter': -1.2, 'lamenters': -0.5, 'lamenting': -2.0, 'laments': -1.5, 'lamer': -1.4, 'lames': -1.2, 'lamest': -1.5, 'landmark': 0.3, 'laugh': 2.6, 'laughable': 0.2, 'laughableness': 1.2, 'laughably': 1.2, 'laughed': 2.0, 'laugher': 1.7, 'laughers': 1.7, 'laughing': 2.2, 'laughingly': 2.3, 'laughings': 1.9, 'laughingstocks': -1.3, 'laughs': 2.2, 'laughter': 2.2, 'laughters': 2.2, 'launched': 0.5, 'lawl': 1.4, 'lawsuit': -0.9, 'lawsuits': -0.6, 'lazier': -2.3, 'laziest': -2.7, 'lazy': -1.5, 'leak': -1.4, 'leaked': -1.3, 'leave': -0.2, 'leet': 1.3, 'legal': 0.5, 'legally': 0.4, 'lenient': 1.1, 'lethargic': -1.2, 'lethargy': -1.4, 'liabilities': -0.8, 'liability': -0.8, 'liar': -2.3, 'liards': -0.4, 'liars': -2.4, 'libelous': -2.1, 'libertarian': 0.9, 'libertarianism': 0.4, 'libertarianisms': 0.1, 'libertarians': 0.1, 'liberties': 2.3, 'libertinage': 0.2, 'libertine': -0.9, 'libertines': 0.4, 'libertinisms': 1.2, 'liberty': 2.4, 'lied': -1.6, 'lies': -1.8, 'lifesaver': 2.8, 'lighthearted': 1.8, 'like': 1.5, 'likeable': 2.0, 'liked': 1.8, 'likes': 1.8, 'liking': 1.7, 'limitation': -1.2, 'limited': -0.9, 'litigation': -0.8, 'litigious': -0.8, 'livelier': 1.7, 'liveliest': 2.1, 
'livelihood': 0.8, 'livelihoods': 0.9, 'livelily': 1.8, 'liveliness': 1.6, 'livelong': 1.7, 'lively': 1.9, 'livid': -2.5, 'loathe': -2.2, 'loathed': -2.1, 'loathes': -1.9, 'loathing': -2.7, 'lobby': 0.1, 'lobbying': -0.3, 'lone': -1.1, 'lonelier': -1.4, 'loneliest': -2.4, 'loneliness': -1.8, 'lonelinesses': -1.5, 'lonely': -1.5, 'loneness': -1.1, 'loner': -1.3, 'loners': -0.9, 'lonesome': -1.5, 'lonesomely': -1.3, 'lonesomeness': -1.8, 'lonesomes': -1.4, 'longing': -0.1, 'longingly': 0.7, 'longings': 0.4, 'loom': -0.9, 'loomed': -1.1, 'looming': -0.5, 'looms': -0.6, 'loose': -1.3, 'looses': -0.6, 'lose': -1.7, 'loser': -2.4, 'losers': -2.4, 'loses': -1.3, 'losing': -1.6, 'loss': -1.3, 'losses': -1.7, 'lossy': -1.2, 'lost': -1.3, 'louse': -1.6, 'loused': -1.0, 'louses': -1.3, 'lousewort': 0.1, 'louseworts': -0.6, 'lousier': -2.2, 'lousiest': -2.6, 'lousily': -1.2, 'lousiness': -1.7, 'lousing': -1.1, 'lousy': -2.5, 'lovable': 3.0, 'love': 3.2, 'loved': 2.9, 'lovelies': 2.2, 'lovely': 2.8, 'lover': 2.8, 'loverly': 2.8, 'lovers': 2.4, 'loves': 2.7, 'loving': 2.9, 'lovingly': 3.2, 'lovingness': 2.7, 'low': -1.1, 'lowball': -0.8, 'lowballed': -1.5, 'lowballing': -0.7, 'lowballs': -1.2, 'lowborn': -0.7, 'lowboys': -0.6, 'lowbred': -2.6, 'lowbrow': -1.9, 'lowbrows': -0.6, 'lowdown': -0.8, 'lowdowns': -0.2, 'lowe': 0.5, 'lowed': -0.8, 'lower': -1.2, 'lowercase': 0.3, 'lowercased': -0.2, 'lowerclassman': -0.4, 'lowered': -0.5, 'lowering': -1.0, 'lowermost': -1.4, 'lowers': -0.5, 'lowery': -1.8, 'lowest': -1.6, 'lowing': -0.5, 'lowish': -0.9, 'lowland': -0.1, 'lowlander': -0.4, 'lowlanders': -0.3, 'lowlands': -0.1, 'lowlier': -1.7, 'lowliest': -1.8, 'lowlife': -1.5, 'lowlifes': -2.2, 'lowlight': -2.0, 'lowlights': -0.3, 'lowlihead': -0.3, 'lowliness': -1.1, 'lowlinesses': -1.2, 'lowlives': -2.1, 'lowly': -1.0, 'lown': 0.9, 'lowness': -1.3, 'lowrider': -0.2, 'lowriders': 0.1, 'lows': -0.8, 'lowse': -0.7, 'loyal': 2.1, 'loyalism': 1.0, 'loyalisms': 0.9, 'loyalist': 1.5, 'loyalists': 1.1, 'loyally': 2.1, 'loyalties': 1.9, 'loyalty': 2.5, 'luck': 2.0, 'lucked': 1.9, 'luckie': 1.6, 'luckier': 1.9, 'luckiest': 2.9, 'luckily': 2.3, 'luckiness': 1.0, 'lucking': 1.2, 'luckless': -1.3, 'lucks': 1.6, 'lucky': 1.8, 'ludicrous': -1.5, 'ludicrously': -0.2, 'ludicrousness': -1.9, 'lugubrious': -2.1, 'lulz': 2.0, 'lunatic': -2.2, 'lunatics': -1.6, 'lurk': -0.8, 'lurking': -0.5, 'lurks': -0.9, 'lying': -2.4, 'mad': -2.2, 'maddening': -2.2, 'madder': -1.2, 'maddest': -2.8, 'madly': -1.7, 'madness': -1.9, 'magnific': 2.3, 'magnifical': 2.4, 'magnifically': 2.4, 'magnification': 1.0, 'magnifications': 1.2, 'magnificence': 2.4, 'magnificences': 2.3, 'magnificent': 2.9, 'magnificently': 3.4, 'magnifico': 1.8, 'magnificoes': 1.4, 'mandatory': 0.3, 'maniac': -2.1, 'maniacal': -0.3, 'maniacally': -1.7, 'maniacs': -1.2, 'manipulated': -1.6, 'manipulating': -1.5, 'manipulation': -1.2, 'marvel': 1.8, 'marvelous': 2.9, 'marvels': 2.0, 'masochism': -1.6, 'masochisms': -1.1, 'masochist': -1.7, 'masochistic': -2.2, 'masochistically': -1.6, 'masochists': -1.2, 'masterpiece': 3.1, 'masterpieces': 2.5, 'matter': 0.1, 'matters': 0.1, 'mature': 1.8, 'meaningful': 1.3, 'meaningless': -1.9, 'medal': 2.1, 'mediocrity': -0.3, 'meditative': 1.4, 'meh': -0.3, 'melancholia': -0.5, 'melancholiac': -2.0, 'melancholias': -1.6, 'melancholic': -0.3, 'melancholics': -1.0, 'melancholies': -1.1, 'melancholy': -1.9, 'menace': -2.2, 'menaced': -1.7, 'mercy': 1.5, 'merit': 1.8, 'merited': 1.4, 'meriting': 1.1, 'meritocracy': 0.6, 'meritocrat': 0.4, 
'meritocrats': 1.1, 'meritorious': 2.1, 'meritoriously': 1.3, 'meritoriousness': 1.7, 'merits': 1.7, 'merrier': 1.7, 'merriest': 2.7, 'merrily': 2.4, 'merriment': 2.4, 'merriments': 2.0, 'merriness': 2.2, 'merry': 2.5, 'merrymaker': 2.2, 'merrymakers': 1.7, 'merrymaking': 2.2, 'merrymakings': 2.4, 'merrythought': 1.1, 'merrythoughts': 1.6, 'mess': -1.5, 'messed': -1.4, 'messy': -1.5, 'methodical': 0.6, 'mindless': -1.9, 'miracle': 2.8, 'mirth': 2.6, 'mirthful': 2.7, 'mirthfully': 2.0, 'misbehave': -1.9, 'misbehaved': -1.6, 'misbehaves': -1.6, 'misbehaving': -1.7, 'mischief': -1.5, 'mischiefs': -0.8, 'miser': -1.8, 'miserable': -2.2, 'miserableness': -2.8, 'miserably': -2.1, 'miserere': -0.8, 'misericorde': 0.1, 'misericordes': -0.5, 'miseries': -2.7, 'miserliness': -2.6, 'miserly': -1.4, 'misers': -1.5, 'misery': -2.7, 'misgiving': -1.4, 'misinformation': -1.3, 'misinformed': -1.6, 'misinterpreted': -1.3, 'misleading': -1.7, 'misread': -1.1, 'misreporting': -1.5, 'misrepresentation': -2.0, 'miss': -0.6, 'missed': -1.2, 'misses': -0.9, 'missing': -1.2, 'mistakable': -0.8, 'mistake': -1.4, 'mistaken': -1.5, 'mistakenly': -1.2, 'mistaker': -1.6, 'mistakers': -1.6, 'mistakes': -1.5, 'mistaking': -1.1, 'misunderstand': -1.5, 'misunderstanding': -1.8, 'misunderstands': -1.3, 'misunderstood': -1.4, 'mlm': -1.4, 'mmk': 0.6, 'moan': -0.6, 'moaned': -0.4, 'moaning': -0.4, 'moans': -0.6, 'mock': -1.8, 'mocked': -1.3, 'mocker': -0.8, 'mockeries': -1.6, 'mockers': -1.3, 'mockery': -1.3, 'mocking': -1.7, 'mocks': -2.0, 'molest': -2.1, 'molestation': -1.9, 'molestations': -2.9, 'molested': -1.9, 'molester': -2.3, 'molesters': -2.2, 'molesting': -2.8, 'molests': -3.1, 'mongering': -0.8, 'monopolize': -0.8, 'monopolized': -0.9, 'monopolizes': -1.1, 'monopolizing': -0.5, 'mooch': -1.7, 'mooched': -1.4, 'moocher': -1.5, 'moochers': -1.9, 'mooches': -1.4, 'mooching': -1.7, 'moodier': -1.1, 'moodiest': -2.1, 'moodily': -1.3, 'moodiness': -1.4, 'moodinesses': -1.4, 'moody': -1.5, 'mope': -1.9, 'moping': -1.0, 'moron': -2.2, 'moronic': -2.7, 'moronically': -1.4, 'moronity': -1.1, 'morons': -1.3, 'motherfucker': -3.6, 'motherfucking': -2.8, 'motivate': 1.6, 'motivated': 2.0, 'motivating': 2.2, 'motivation': 1.4, 'mourn': -1.8, 'mourned': -1.3, 'mourner': -1.6, 'mourners': -1.8, 'mournful': -1.6, 'mournfuller': -1.9, 'mournfully': -1.7, 'mournfulness': -1.8, 'mourning': -1.9, 'mourningly': -2.3, 'mourns': -2.4, 'mumpish': -1.4, 'murder': -3.7, 'murdered': -3.4, 'murderee': -3.2, 'murderees': -3.1, 'murderer': -3.6, 'murderers': -3.3, 'murderess': -2.2, 'murderesses': -2.6, 'murdering': -3.3, 'murderous': -3.2, 'murderously': -3.1, 'murderousness': -2.9, 'murders': -3.0, 'n00b': -1.6, 'nag': -1.5, 'nagana': -1.7, 'nagged': -1.7, 'nagger': -1.8, 'naggers': -1.5, 'naggier': -1.4, 'naggiest': -2.4, 'nagging': -1.7, 'naggingly': -0.9, 'naggy': -1.7, 'nags': -1.1, 'nah': -0.4, 'naive': -1.1, 'nastic': 0.2, 'nastier': -2.3, 'nasties': -2.1, 'nastiest': -2.4, 'nastily': -1.9, 'nastiness': -1.1, 'nastinesses': -2.6, 'nasturtium': 0.4, 'nasturtiums': 0.1, 'nasty': -2.6, 'natural': 1.5, 'neat': 2.0, 'neaten': 1.2, 'neatened': 2.0, 'neatening': 1.3, 'neatens': 1.1, 'neater': 1.0, 'neatest': 1.7, 'neath': 0.2, 'neatherd': -0.4, 'neatly': 1.4, 'neatness': 1.3, 'neats': 1.1, 'needy': -1.4, 'negative': -2.7, 'negativity': -2.3, 'neglect': -2.0, 'neglected': -2.4, 'neglecter': -1.7, 'neglecters': -1.5, 'neglectful': -2.0, 'neglectfully': -2.1, 'neglectfulness': -2.0, 'neglecting': -1.7, 'neglects': -2.2, 'nerd': -1.2, 'nerdier': 
-0.2, 'nerdiest': 0.6, 'nerdish': -0.1, 'nerdy': -0.2, 'nerves': -0.4, 'nervous': -1.1, 'nervously': -0.6, 'nervousness': -1.2, 'neurotic': -1.4, 'neurotically': -1.8, 'neuroticism': -0.9, 'neurotics': -0.7, 'nice': 1.8, 'nicely': 1.9, 'niceness': 1.6, 'nicenesses': 2.1, 'nicer': 1.9, 'nicest': 2.2, 'niceties': 1.5, 'nicety': 1.2, 'nifty': 1.7, 'niggas': -1.4, 'nigger': -3.3, 'no': -1.2, 'noble': 2.0, 'noisy': -0.7, 'nonsense': -1.7, 'noob': -0.2, 'nosey': -0.8, 'notorious': -1.9, 'novel': 1.3, 'numb': -1.4, 'numbat': 0.2, 'numbed': -0.9, 'number': 0.3, 'numberable': 0.6, 'numbest': -1.0, 'numbfish': -0.4, 'numbfishes': -0.7, 'numbing': -1.1, 'numbingly': -1.3, 'numbles': 0.4, 'numbly': -1.4, 'numbness': -1.1, 'numbs': -0.7, 'numbskull': -2.3, 'numbskulls': -2.2, 'nurtural': 1.5, 'nurturance': 1.6, 'nurturances': 1.3, 'nurturant': 1.7, 'nurture': 1.4, 'nurtured': 1.9, 'nurturer': 1.9, 'nurturers': 0.8, 'nurtures': 1.9, 'nurturing': 2.0, 'nuts': -1.3, 'o/\\\\o': 2.1, 'o_0': -0.1, 'obliterate': -2.9, 'obliterated': -2.1, 'obnoxious': -2.0, 'obnoxiously': -2.3, 'obnoxiousness': -2.1, 'obscene': -2.8, 'obsess': -1.0, 'obsessed': -0.7, 'obsesses': -1.0, 'obsessing': -1.4, 'obsession': -1.4, 'obsessional': -1.5, 'obsessionally': -1.3, 'obsessions': -0.9, 'obsessive': -0.9, 'obsessively': -0.4, 'obsessiveness': -1.2, 'obsessives': -0.7, 'obsolete': -1.2, 'obstacle': -1.5, 'obstacles': -1.6, 'obstinate': -1.2, 'odd': -1.3, 'offence': -1.2, 'offences': -1.4, 'offend': -1.2, 'offended': -1.0, 'offender': -1.5, 'offenders': -1.5, 'offending': -2.3, 'offends': -2.0, 'offense': -1.0, 'offenseless': 0.7, 'offenses': -1.5, 'offensive': -2.0, 'offensively': -2.8, 'offensiveness': -2.3, 'offensives': -0.8, 'offline': -0.5, 'okay': 0.9, 'okays': 2.1, 'ominous': -1.4, 'once-in-a-lifetime': 1.8, 'openness': 1.4, 'opportune': 1.7, 'opportunely': 1.5, 'opportuneness': 1.2, 'opportunism': 0.4, 'opportunisms': 0.2, 'opportunist': 0.2, 'opportunistic': -0.1, 'opportunistically': 0.9, 'opportunists': 0.3, 'opportunities': 1.6, 'opportunity': 1.8, 'oppressed': -2.1, 'oppressive': -1.7, 'optimal': 1.5, 'optimality': 1.9, 'optimally': 1.3, 'optimisation': 1.6, 'optimisations': 1.8, 'optimise': 1.9, 'optimised': 1.7, 'optimises': 1.6, 'optimising': 1.7, 'optimism': 2.5, 'optimisms': 2.0, 'optimist': 2.4, 'optimistic': 1.3, 'optimistically': 2.1, 'optimists': 1.6, 'optimization': 1.6, 'optimizations': 0.9, 'optimize': 2.2, 'optimized': 2.0, 'optimizer': 1.5, 'optimizers': 2.1, 'optimizes': 1.8, 'optimizing': 2.0, 'optionless': -1.7, 'original': 1.3, 'outcry': -2.3, 'outgoing': 1.2, 'outmaneuvered': 0.5, 'outrage': -2.3, 'outraged': -2.5, 'outrageous': -2.0, 'outrageously': -1.2, 'outrageousness': -1.2, 'outrageousnesses': -1.3, 'outrages': -2.3, 'outraging': -2.0, 'outreach': 1.1, 'outstanding': 3.0, 'overjoyed': 2.7, 'overload': -1.5, 'overlooked': -0.1, 'overreact': -1.0, 'overreacted': -1.7, 'overreaction': -0.7, 'overreacts': -2.2, 'oversell': -0.9, 'overselling': -0.8, 'oversells': 0.3, 'oversimplification': 0.2, 'oversimplifies': 0.1, 'oversimplify': -0.6, 'overstatement': -1.1, 'overstatements': -0.7, 'overweight': -1.5, 'overwhelm': -0.7, 'overwhelmed': 0.2, 'overwhelmingly': -0.5, 'overwhelms': -0.8, 'oxymoron': -0.5, 'pain': -2.3, 'pained': -1.8, 'painful': -1.9, 'painfuller': -1.7, 'painfully': -2.4, 'painfulness': -2.7, 'paining': -1.7, 'painless': 1.2, 'painlessly': 1.1, 'painlessness': 0.4, 'pains': -1.8, 'palatable': 1.6, 'palatableness': 0.8, 'palatably': 1.1, 'panic': -2.3, 'panicked': -2.0, 
'panicking': -1.9, 'panicky': -1.5, 'panicle': 0.5, 'panicled': 0.1, 'panicles': -0.2, 'panics': -1.9, 'paniculate': 0.1, 'panicums': -0.1, 'paradise': 3.2, 'paradox': -0.4, 'paranoia': -1.0, 'paranoiac': -1.3, 'paranoiacs': -0.7, 'paranoias': -1.5, 'paranoid': -1.0, 'paranoids': -1.6, 'pardon': 1.3, 'pardoned': 0.9, 'pardoning': 1.7, 'pardons': 1.2, 'parley': -0.4, 'partied': 1.4, 'partier': 1.4, 'partiers': 0.7, 'parties': 1.7, 'party': 1.7, 'partyer': 1.2, 'partyers': 1.1, 'partying': 1.6, 'passion': 2.0, 'passional': 1.6, 'passionate': 2.4, 'passionately': 2.4, 'passionateness': 2.3, 'passionflower': 0.3, 'passionflowers': 0.4, 'passionless': -1.9, 'passions': 2.2, 'passive': 0.8, 'passively': -0.7, 'pathetic': -2.7, 'pathetical': -1.2, 'pathetically': -1.8, 'pay': -0.4, 'peace': 2.5, 'peaceable': 1.7, 'peaceableness': 1.8, 'peaceably': 2.0, 'peaceful': 2.2, 'peacefuller': 1.9, 'peacefullest': 3.1, 'peacefully': 2.4, 'peacefulness': 2.1, 'peacekeeper': 1.6, 'peacekeepers': 1.6, 'peacekeeping': 2.0, 'peacekeepings': 1.6, 'peacemaker': 2.0, 'peacemakers': 2.4, 'peacemaking': 1.7, 'peacenik': 0.8, 'peaceniks': 0.7, 'peaces': 2.1, 'peacetime': 2.2, 'peacetimes': 2.1, 'peculiar': 0.6, 'peculiarities': 0.1, 'peculiarity': 0.6, 'peculiarly': -0.4, 'penalty': -2.0, 'pensive': 0.3, 'perfect': 2.7, 'perfecta': 1.4, 'perfectas': 0.6, 'perfected': 2.7, 'perfecter': 1.8, 'perfecters': 1.4, 'perfectest': 3.1, 'perfectibilities': 2.1, 'perfectibility': 1.8, 'perfectible': 1.5, 'perfecting': 2.3, 'perfection': 2.7, 'perfectionism': 1.3, 'perfectionist': 1.5, 'perfectionistic': 0.7, 'perfectionists': 0.1, 'perfections': 2.5, 'perfective': 1.2, 'perfectively': 2.1, 'perfectiveness': 0.9, 'perfectives': 0.9, 'perfectivity': 2.2, 'perfectly': 3.2, 'perfectness': 3.0, 'perfecto': 1.3, 'perfects': 1.6, 'peril': -1.7, 'perjury': -1.9, 'perpetrator': -2.2, 'perpetrators': -1.0, 'perplexed': -1.3, 'persecute': -2.1, 'persecuted': -1.3, 'persecutes': -1.2, 'persecuting': -1.5, 'perturbed': -1.4, 'perverse': -1.8, 'perversely': -2.2, 'perverseness': -2.1, 'perversenesses': -0.5, 'perversion': -1.3, 'perversions': -1.2, 'perversities': -1.1, 'perversity': -2.6, 'perversive': -2.1, 'pervert': -2.3, 'perverted': -2.5, 'pervertedly': -1.2, 'pervertedness': -1.2, 'perverter': -1.7, 'perverters': -0.6, 'perverting': -1.0, 'perverts': -2.8, 'pesky': -1.2, 'pessimism': -1.5, 'pessimisms': -2.0, 'pessimist': -1.5, 'pessimistic': -1.5, 'pessimistically': -2.0, 'pessimists': -1.0, 'petrifaction': -1.9, 'petrifactions': -0.3, 'petrification': -0.1, 'petrifications': -0.4, 'petrified': -2.5, 'petrifies': -2.3, 'petrify': -1.7, 'petrifying': -2.6, 'pettier': -0.3, 'pettiest': -1.3, 'petty': -0.8, 'phobia': -1.6, 'phobias': -2.0, 'phobic': -1.2, 'phobics': -1.3, 'picturesque': 1.6, 'pileup': -1.1, 'pique': -1.1, 'piqued': 0.1, 'piss': -1.7, 'pissant': -1.5, 'pissants': -2.5, 'pissed': -3.2, 'pisser': -2.0, 'pissers': -1.4, 'pisses': -1.4, 'pissing': -1.7, 'pissoir': -0.8, 'piteous': -1.2, 'pitiable': -1.1, 'pitiableness': -1.1, 'pitiably': -1.1, 'pitied': -1.3, 'pitier': -1.2, 'pitiers': -1.3, 'pities': -1.2, 'pitiful': -2.2, 'pitifuller': -1.8, 'pitifullest': -1.1, 'pitifully': -1.2, 'pitifulness': -1.2, 'pitiless': -1.8, 'pitilessly': -2.1, 'pitilessness': -0.5, 'pity': -1.2, 'pitying': -1.4, 'pityingly': -1.0, 'pityriasis': -0.8, 'play': 1.4, 'played': 1.4, 'playful': 1.9, 'playfully': 1.6, 'playfulness': 1.2, 'playing': 0.8, 'plays': 1.0, 'pleasant': 2.3, 'pleasanter': 1.5, 'pleasantest': 2.6, 'pleasantly': 2.1, 
'pleasantness': 2.3, 'pleasantnesses': 2.3, 'pleasantries': 1.3, 'pleasantry': 2.0, 'please': 1.3, 'pleased': 1.9, 'pleaser': 1.7, 'pleasers': 1.0, 'pleases': 1.7, 'pleasing': 2.4, 'pleasurability': 1.9, 'pleasurable': 2.4, 'pleasurableness': 2.4, 'pleasurably': 2.6, 'pleasure': 2.7, 'pleasured': 2.3, 'pleasureless': -1.6, 'pleasures': 1.9, 'pleasuring': 2.8, 'poised': 1.0, 'poison': -2.5, 'poisoned': -2.2, 'poisoner': -2.7, 'poisoners': -3.1, 'poisoning': -2.8, 'poisonings': -2.4, 'poisonous': -2.7, 'poisonously': -2.9, 'poisons': -2.7, 'poisonwood': -1.0, 'pollute': -2.3, 'polluted': -2.0, 'polluter': -1.8, 'polluters': -2.0, 'pollutes': -2.2, 'poor': -2.1, 'poorer': -1.5, 'poorest': -2.5, 'popular': 1.8, 'popularise': 1.6, 'popularised': 1.1, 'popularises': 0.5, 'popularising': 1.2, 'popularities': 1.6, 'popularity': 2.1, 'popularization': 1.3, 'popularizations': 0.9, 'popularize': 1.3, 'popularized': 1.9, 'popularizer': 1.8, 'popularizers': 1.0, 'popularizes': 1.4, 'popularizing': 1.5, 'popularly': 1.8, 'positive': 2.6, 'positively': 2.4, 'positiveness': 2.3, 'positivenesses': 2.2, 'positiver': 2.3, 'positives': 2.4, 'positivest': 2.9, 'positivism': 1.6, 'positivisms': 1.8, 'positivist': 2.0, 'positivistic': 1.9, 'positivists': 1.7, 'positivities': 2.6, 'positivity': 2.3, 'possessive': -0.9, 'postpone': -0.9, 'postponed': -0.8, 'postpones': -1.1, 'postponing': -0.5, 'poverty': -2.3, 'powerful': 1.8, 'powerless': -2.2, 'praise': 2.6, 'praised': 2.2, 'praiser': 2.0, 'praisers': 2.0, 'praises': 2.4, 'praiseworthily': 1.9, 'praiseworthiness': 2.4, 'praiseworthy': 2.6, 'praising': 2.5, 'pray': 1.3, 'praying': 1.5, 'prays': 1.4, 'prblm': -1.6, 'prblms': -2.3, 'precious': 2.7, 'preciously': 2.2, 'preciousness': 1.9, 'prejudice': -2.3, 'prejudiced': -1.9, 'prejudices': -1.8, 'prejudicial': -2.6, 'prejudicially': -1.5, 'prejudicialness': -2.4, 'prejudicing': -1.8, 'prepared': 0.9, 'pressure': -1.2, 'pressured': -0.9, 'pressureless': 1.0, 'pressures': -1.3, 'pressuring': -1.4, 'pressurise': -0.6, 'pressurised': -0.4, 'pressurises': -0.8, 'pressurising': -0.6, 'pressurizations': -0.3, 'pressurize': -0.7, 'pressurized': 0.1, 'pressurizer': 0.1, 'pressurizers': -0.7, 'pressurizes': -0.2, 'pressurizing': -0.2, 'pretend': -0.4, 'pretending': 0.4, 'pretends': -0.4, 'prettied': 1.6, 'prettier': 2.1, 'pretties': 1.7, 'prettiest': 2.7, 'pretty': 2.2, 'prevent': 0.1, 'prevented': 0.1, 'preventing': -0.1, 'prevents': 0.3, 'prick': -1.4, 'pricked': -0.6, 'pricker': -0.3, 'prickers': -0.2, 'pricket': -0.5, 'prickets': 0.3, 'pricking': -0.9, 'prickle': -1.0, 'prickled': -0.2, 'prickles': -0.8, 'pricklier': -1.6, 'prickliest': -1.4, 'prickliness': -0.6, 'prickling': -0.8, 'prickly': -0.9, 'pricks': -0.9, 'pricky': -0.6, 'pride': 1.4, 'prison': -2.3, 'prisoner': -2.5, 'prisoners': -2.3, 'privilege': 1.5, 'privileged': 1.9, 'privileges': 1.6, 'privileging': 0.7, 'prize': 2.3, 'prized': 2.4, 'prizefight': -0.1, 'prizefighter': 1.0, 'prizefighters': -0.1, 'prizefighting': 0.4, 'prizefights': 0.3, 'prizer': 1.0, 'prizers': 0.8, 'prizes': 2.0, 'prizewinner': 2.3, 'prizewinners': 2.4, 'prizewinning': 3.0, 'proactive': 1.8, 'problem': -1.7, 'problematic': -1.9, 'problematical': -1.8, 'problematically': -2.0, 'problematics': -1.3, 'problems': -1.7, 'profit': 1.9, 'profitabilities': 1.1, 'profitability': 1.1, 'profitable': 1.9, 'profitableness': 2.4, 'profitably': 1.6, 'profited': 1.3, 'profiteer': 0.8, 'profiteered': -0.5, 'profiteering': -0.6, 'profiteers': 0.5, 'profiter': 0.7, 'profiterole': 0.4, 'profiteroles': 
0.5, 'profiting': 1.6, 'profitless': -1.5, 'profits': 1.9, 'profitwise': 0.9, 'progress': 1.8, 'prominent': 1.3, 'promiscuities': -0.8, 'promiscuity': -1.8, 'promiscuous': -0.3, 'promiscuously': -1.5, 'promiscuousness': -0.9, 'promise': 1.3, 'promised': 1.5, 'promisee': 0.8, 'promisees': 1.1, 'promiser': 1.3, 'promisers': 1.6, 'promises': 1.6, 'promising': 1.7, 'promisingly': 1.2, 'promisor': 1.0, 'promisors': 0.4, 'promissory': 0.9, 'promote': 1.6, 'promoted': 1.8, 'promotes': 1.4, 'promoting': 1.5, 'propaganda': -1.0, 'prosecute': -1.7, 'prosecuted': -1.6, 'prosecutes': -1.8, 'prosecution': -2.2, 'prospect': 1.2, 'prospects': 1.2, 'prosperous': 2.1, 'protect': 1.6, 'protected': 1.9, 'protects': 1.3, 'protest': -1.0, 'protested': -0.5, 'protesters': -0.9, 'protesting': -1.8, 'protests': -0.9, 'proud': 2.1, 'prouder': 2.2, 'proudest': 2.6, 'proudful': 1.9, 'proudhearted': 1.4, 'proudly': 2.6, 'provoke': -1.7, 'provoked': -1.1, 'provokes': -1.3, 'provoking': -0.8, 'pseudoscience': -1.2, 'puke': -2.4, 'puked': -1.8, 'pukes': -1.9, 'puking': -1.8, 'pukka': 2.8, 'punish': -2.4, 'punishabilities': -1.7, 'punishability': -1.6, 'punishable': -1.9, 'punished': -2.0, 'punisher': -1.9, 'punishers': -2.6, 'punishes': -2.1, 'punishing': -2.6, 'punishment': -2.2, 'punishments': -1.8, 'punitive': -2.3, 'pushy': -1.1, 'puzzled': -0.7, 'quaking': -1.5, 'questionable': -1.2, 'questioned': -0.4, 'questioning': -0.4, 'racism': -3.1, 'racist': -3.0, 'racists': -2.5, 'radian': 0.4, 'radiance': 1.4, 'radiances': 1.1, 'radiancies': 0.8, 'radiancy': 1.4, 'radians': 0.2, 'radiant': 2.1, 'radiantly': 1.3, 'radiants': 1.2, 'rage': -2.6, 'raged': -2.0, 'ragee': -0.4, 'rageful': -2.8, 'rages': -2.1, 'raging': -2.4, 'rainy': -0.3, 'rancid': -2.5, 'rancidity': -2.6, 'rancidly': -2.5, 'rancidness': -2.6, 'rancidnesses': -1.6, 'rant': -1.4, 'ranter': -1.2, 'ranters': -1.2, 'rants': -1.3, 'rape': -3.7, 'raped': -3.6, 'raper': -3.4, 'rapers': -3.6, 'rapes': -3.5, 'rapeseeds': -0.5, 'raping': -3.8, 'rapist': -3.9, 'rapists': -3.3, 'rapture': 0.6, 'raptured': 0.9, 'raptures': 0.7, 'rapturous': 1.7, 'rash': -1.7, 'ratified': 0.6, 'reach': 0.1, 'reached': 0.4, 'reaches': 0.2, 'reaching': 0.8, 'readiness': 1.0, 'ready': 1.5, 'reassurance': 1.5, 'reassurances': 1.4, 'reassure': 1.4, 'reassured': 1.7, 'reassures': 1.5, 'reassuring': 1.7, 'reassuringly': 1.8, 'rebel': -0.6, 'rebeldom': -1.5, 'rebelled': -1.0, 'rebelling': -1.1, 'rebellion': -0.5, 'rebellions': -1.1, 'rebellious': -1.2, 'rebelliously': -1.8, 'rebelliousness': -1.2, 'rebels': -0.8, 'recession': -1.8, 'reckless': -1.7, 'recommend': 1.5, 'recommended': 0.8, 'recommends': 0.9, 'redeemed': 1.3, 'reek': -2.4, 'reeked': -2.0, 'reeker': -1.7, 'reekers': -1.5, 'reeking': -2.0, 'refuse': -1.2, 'refused': -1.2, 'refusing': -1.7, 'regret': -1.8, 'regretful': -1.9, 'regretfully': -1.9, 'regretfulness': -1.6, 'regrets': -1.5, 'regrettable': -2.3, 'regrettably': -2.0, 'regretted': -1.6, 'regretter': -1.6, 'regretters': -2.0, 'regretting': -1.7, 'reinvigorate': 2.3, 'reinvigorated': 1.9, 'reinvigorates': 1.8, 'reinvigorating': 1.7, 'reinvigoration': 2.2, 'reject': -1.7, 'rejected': -2.3, 'rejectee': -2.3, 'rejectees': -1.8, 'rejecter': -1.6, 'rejecters': -1.8, 'rejecting': -2.0, 'rejectingly': -1.7, 'rejection': -2.5, 'rejections': -2.1, 'rejective': -1.8, 'rejector': -1.8, 'rejects': -2.2, 'rejoice': 1.9, 'rejoiced': 2.0, 'rejoices': 2.1, 'rejoicing': 2.8, 'relax': 1.9, 'relaxant': 1.0, 'relaxants': 0.7, 'relaxation': 2.4, 'relaxations': 1.0, 'relaxed': 2.2, 'relaxedly': 1.5, 
'relaxedness': 2.0, 'relaxer': 1.6, 'relaxers': 1.4, 'relaxes': 1.5, 'relaxin': 1.7, 'relaxing': 2.2, 'relaxins': 1.2, 'relentless': 0.2, 'reliant': 0.5, 'relief': 2.1, 'reliefs': 1.3, 'relievable': 1.1, 'relieve': 1.5, 'relieved': 1.6, 'relievedly': 1.4, 'reliever': 1.5, 'relievers': 1.0, 'relieves': 1.5, 'relieving': 1.5, 'relievo': 1.3, 'relishing': 1.6, 'reluctance': -1.4, 'reluctancy': -1.6, 'reluctant': -1.0, 'reluctantly': -0.4, 'remarkable': 2.6, 'remorse': -1.1, 'remorseful': -0.9, 'remorsefully': -0.7, 'remorsefulness': -0.7, 'remorseless': -2.3, 'remorselessly': -2.0, 'remorselessness': -2.8, 'repetitive': -1.0, 'repress': -1.4, 'repressed': -1.3, 'represses': -1.3, 'repressible': -1.5, 'repressing': -1.8, 'repression': -1.6, 'repressions': -1.7, 'repressive': -1.4, 'repressively': -1.7, 'repressiveness': -1.0, 'repressor': -1.4, 'repressors': -2.2, 'repressurize': -0.3, 'repressurized': 0.1, 'repressurizes': 0.1, 'repressurizing': -0.1, 'repulse': -2.8, 'repulsed': -2.2, 'rescue': 2.3, 'rescued': 1.8, 'rescues': 1.3, 'resent': -0.7, 'resented': -1.6, 'resentence': -1.0, 'resentenced': -0.8, 'resentences': -0.6, 'resentencing': 0.2, 'resentful': -2.1, 'resentfully': -1.4, 'resentfulness': -2.0, 'resenting': -1.2, 'resentment': -1.9, 'resentments': -1.9, 'resents': -1.2, 'resign': -1.4, 'resignation': -1.2, 'resignations': -1.2, 'resigned': -1.0, 'resignedly': -0.7, 'resignedness': -0.8, 'resigner': -1.2, 'resigners': -1.0, 'resigning': -0.9, 'resigns': -1.3, 'resolute': 1.1, 'resolvable': 1.0, 'resolve': 1.6, 'resolved': 0.7, 'resolvent': 0.7, 'resolvents': 0.4, 'resolver': 0.7, 'resolvers': 1.4, 'resolves': 0.7, 'resolving': 1.6, 'respect': 2.1, 'respectabilities': 1.8, 'respectability': 2.4, 'respectable': 1.9, 'respectableness': 1.2, 'respectably': 1.7, 'respected': 2.1, 'respecter': 2.1, 'respecters': 1.6, 'respectful': 2.0, 'respectfully': 1.7, 'respectfulness': 1.9, 'respectfulnesses': 1.3, 'respecting': 2.2, 'respective': 1.8, 'respectively': 1.4, 'respectiveness': 1.1, 'respects': 1.3, 'responsible': 1.3, 'responsive': 1.5, 'restful': 1.5, 'restless': -1.1, 'restlessly': -1.4, 'restlessness': -1.2, 'restore': 1.2, 'restored': 1.4, 'restores': 1.2, 'restoring': 1.2, 'restrict': -1.6, 'restricted': -1.6, 'restricting': -1.6, 'restriction': -1.1, 'restricts': -1.3, 'retained': 0.1, 'retard': -2.4, 'retarded': -2.7, 'retreat': 0.8, 'revenge': -2.4, 'revenged': -0.9, 'revengeful': -2.4, 'revengefully': -1.4, 'revengefulness': -2.2, 'revenger': -2.1, 'revengers': -2.0, 'revenges': -1.9, 'revered': 2.3, 'revive': 1.4, 'revives': 1.6, 'reward': 2.7, 'rewardable': 2.0, 'rewarded': 2.2, 'rewarder': 1.6, 'rewarders': 1.9, 'rewarding': 2.4, 'rewardingly': 2.4, 'rewards': 2.1, 'rich': 2.6, 'richened': 1.9, 'richening': 1.0, 'richens': 0.8, 'richer': 2.4, 'riches': 2.4, 'richest': 2.4, 'richly': 1.9, 'richness': 2.2, 'richnesses': 2.1, 'richweed': 0.1, 'richweeds': -0.1, 'ridicule': -2.0, 'ridiculed': -1.5, 'ridiculer': -1.6, 'ridiculers': -1.6, 'ridicules': -1.8, 'ridiculing': -1.8, 'ridiculous': -1.5, 'ridiculously': -1.4, 'ridiculousness': -1.1, 'ridiculousnesses': -1.6, 'rig': -0.5, 'rigged': -1.5, 'rigid': -0.5, 'rigidification': -1.1, 'rigidifications': -0.8, 'rigidified': -0.7, 'rigidifies': -0.6, 'rigidify': -0.3, 'rigidities': -0.7, 'rigidity': -0.7, 'rigidly': -0.7, 'rigidness': -0.3, 'rigorous': -1.1, 'rigorously': -0.4, 'riot': -2.6, 'riots': -2.3, 'risk': -1.1, 'risked': -0.9, 'risker': -0.8, 'riskier': -1.4, 'riskiest': -1.5, 'riskily': -0.7, 'riskiness': -1.3, 
'riskinesses': -1.6, 'risking': -1.3, 'riskless': 1.3, 'risks': -1.1, 'risky': -0.8, 'rob': -2.6, 'robber': -2.6, 'robed': -0.7, 'robing': -1.5, 'robs': -2.0, 'robust': 1.4, 'roflcopter': 2.1, 'romance': 2.6, 'romanced': 2.2, 'romancer': 1.3, 'romancers': 1.7, 'romances': 1.3, 'romancing': 2.0, 'romantic': 1.7, 'romantically': 1.8, 'romanticise': 1.7, 'romanticised': 1.7, 'romanticises': 1.3, 'romanticising': 2.7, 'romanticism': 2.2, 'romanticisms': 2.1, 'romanticist': 1.9, 'romanticists': 1.3, 'romanticization': 1.5, 'romanticizations': 2.0, 'romanticize': 1.8, 'romanticized': 0.9, 'romanticizes': 1.8, 'romanticizing': 1.2, 'romantics': 1.9, 'rotten': -2.3, 'rude': -2.0, 'rudely': -2.2, 'rudeness': -1.5, 'ruder': -2.1, 'ruderal': -0.8, 'ruderals': -0.4, 'rudesby': -2.0, 'rudest': -2.5, 'ruin': -2.8, 'ruinable': -1.6, 'ruinate': -2.8, 'ruinated': -1.5, 'ruinates': -1.5, 'ruinating': -1.5, 'ruination': -2.7, 'ruinations': -1.6, 'ruined': -2.1, 'ruiner': -2.0, 'ruing': -1.6, 'ruining': -1.0, 'ruinous': -2.7, 'ruinously': -2.6, 'ruinousness': -1.0, 'ruins': -1.9, 'sabotage': -2.4, 'sad': -2.1, 'sadden': -2.6, 'saddened': -2.4, 'saddening': -2.2, 'saddens': -1.9, 'sadder': -2.4, 'saddest': -3.0, 'sadly': -1.8, 'sadness': -1.9, 'safe': 1.9, 'safecracker': -0.7, 'safecrackers': -0.9, 'safecracking': -0.9, 'safecrackings': -0.7, 'safeguard': 1.6, 'safeguarded': 1.5, 'safeguarding': 1.1, 'safeguards': 1.4, 'safekeeping': 1.4, 'safelight': 1.1, 'safelights': 0.8, 'safely': 2.2, 'safeness': 1.5, 'safer': 1.8, 'safes': 0.4, 'safest': 1.7, 'safeties': 1.5, 'safety': 1.8, 'safetyman': 0.3, 'salient': 1.1, 'sappy': -1.0, 'sarcasm': -0.9, 'sarcasms': -0.9, 'sarcastic': -1.0, 'sarcastically': -1.1, 'satisfaction': 1.9, 'satisfactions': 2.1, 'satisfactorily': 1.6, 'satisfactoriness': 1.5, 'satisfactory': 1.5, 'satisfiable': 1.9, 'satisfied': 1.8, 'satisfies': 1.8, 'satisfy': 2.0, 'satisfying': 2.0, 'satisfyingly': 1.9, 'savage': -2.0, 'savaged': -2.0, 'savagely': -2.2, 'savageness': -2.6, 'savagenesses': -0.9, 'savageries': -1.9, 'savagery': -2.5, 'savages': -2.4, 'save': 2.2, 'saved': 1.8, 'scam': -2.7, 'scams': -2.8, 'scandal': -1.9, 'scandalous': -2.4, 'scandals': -2.2, 'scapegoat': -1.7, 'scapegoats': -1.4, 'scare': -2.2, 'scarecrow': -0.8, 'scarecrows': -0.7, 'scared': -1.9, 'scaremonger': -2.1, 'scaremongers': -2.0, 'scarer': -1.7, 'scarers': -1.3, 'scares': -1.4, 'scarey': -1.7, 'scaring': -1.9, 'scary': -2.2, 'sceptic': -1.0, 'sceptical': -1.2, 'scepticism': -0.8, 'sceptics': -0.7, 'scold': -1.7, 'scoop': 0.6, 'scorn': -1.7, 'scornful': -1.8, 'scream': -1.7, 'screamed': -1.3, 'screamers': -1.5, 'screaming': -1.6, 'screams': -1.2, 'screw': -0.4, 'screwball': -0.2, 'screwballs': -0.3, 'screwbean': 0.3, 'screwdriver': 0.3, 'screwdrivers': 0.1, 'screwed': -2.2, 'screwed up': -1.5, 'screwer': -1.2, 'screwers': -0.5, 'screwier': -0.6, 'screwiest': -2.0, 'screwiness': -0.5, 'screwing': -0.9, 'screwlike': 0.1, 'screws': -1.0, 'screwup': -1.7, 'screwups': -1.0, 'screwworm': -0.4, 'screwworms': -0.1, 'screwy': -1.4, 'scrumptious': 2.1, 'scrumptiously': 1.5, 'scumbag': -3.2, 'secure': 1.4, 'secured': 1.7, 'securely': 1.4, 'securement': 1.1, 'secureness': 1.4, 'securer': 1.5, 'securers': 0.6, 'secures': 1.3, 'securest': 2.6, 'securing': 1.3, 'securities': 1.2, 'securitization': 0.2, 'securitizations': 0.1, 'securitize': 0.3, 'securitized': 1.4, 'securitizes': 1.6, 'securitizing': 0.7, 'security': 1.4, 'sedition': -1.8, 'seditious': -1.7, 'seduced': -1.5, 'self-confident': 2.5, 'selfish': -2.1, 'selfishly': 
-1.4, 'selfishness': -1.7, 'selfishnesses': -2.0, 'sentence': 0.3, 'sentenced': -0.1, 'sentences': 0.2, 'sentencing': -0.6, 'sentimental': 1.3, 'sentimentalise': 1.2, 'sentimentalised': 0.8, 'sentimentalising': 0.4, 'sentimentalism': 1.0, 'sentimentalisms': 0.4, 'sentimentalist': 0.8, 'sentimentalists': 0.7, 'sentimentalities': 0.9, 'sentimentality': 1.2, 'sentimentalization': 1.2, 'sentimentalizations': 0.4, 'sentimentalize': 0.8, 'sentimentalized': 1.1, 'sentimentalizes': 1.1, 'sentimentalizing': 0.8, 'sentimentally': 1.9, 'serene': 2.0, 'serious': -0.3, 'seriously': -0.7, 'seriousness': -0.2, 'severe': -1.6, 'severed': -1.5, 'severely': -2.0, 'severeness': -1.0, 'severer': -1.6, 'severest': -1.5, 'sexy': 2.4, 'shake': -0.7, 'shakeable': -0.3, 'shakedown': -1.2, 'shakedowns': -1.4, 'shaken': -0.3, 'shakeout': -1.3, 'shakeouts': -0.8, 'shakers': 0.3, 'shakeup': -0.6, 'shakeups': -0.5, 'shakier': -0.9, 'shakiest': -1.2, 'shakily': -0.7, 'shakiness': -0.7, 'shaking': -0.7, 'shaky': -0.9, 'shame': -2.1, 'shamed': -2.6, 'shamefaced': -2.3, 'shamefacedly': -1.9, 'shamefacedness': -2.0, 'shamefast': -1.0, 'shameful': -2.2, 'shamefully': -1.9, 'shamefulness': -2.4, 'shamefulnesses': -2.3, 'shameless': -1.4, 'shamelessly': -1.4, 'shamelessness': -1.4, 'shamelessnesses': -2.0, 'shames': -1.7, 'share': 1.2, 'shared': 1.4, 'shares': 1.2, 'sharing': 1.8, 'shattered': -2.1, 'shit': -2.6, 'shitake': -0.3, 'shitakes': -1.1, 'shithead': -3.1, 'shitheads': -2.6, 'shits': -2.1, 'shittah': 0.1, 'shitted': -1.7, 'shittier': -2.1, 'shittiest': -3.4, 'shittim': -0.6, 'shittimwood': -0.3, 'shitting': -1.8, 'shitty': -2.6, 'shock': -1.6, 'shockable': -1.0, 'shocked': -1.3, 'shocker': -0.6, 'shockers': -1.1, 'shocking': -1.7, 'shockingly': -0.7, 'shockproof': 1.3, 'shocks': -1.6, 'shook': -0.4, 'shoot': -1.4, 'short-sighted': -1.2, 'short-sightedness': -1.1, 'shortage': -1.0, 'shortages': -0.6, 'shrew': -0.9, 'shy': -1.0, 'shyer': -0.8, 'shying': -0.9, 'shylock': -2.1, 'shylocked': -0.7, 'shylocking': -1.5, 'shylocks': -1.4, 'shyly': -0.7, 'shyness': -1.3, 'shynesses': -1.2, 'shyster': -1.6, 'shysters': -0.9, 'sick': -2.3, 'sicken': -1.9, 'sickened': -2.5, 'sickener': -2.2, 'sickeners': -2.2, 'sickening': -2.4, 'sickeningly': -2.1, 'sickens': -2.0, 'sigh': 0.1, 'significance': 1.1, 'significant': 0.8, 'silencing': -0.5, 'sillibub': -0.1, 'sillier': 1.0, 'sillies': 0.8, 'silliest': 0.8, 'sillily': -0.1, 'sillimanite': 0.1, 'sillimanites': 0.2, 'silliness': -0.9, 'sillinesses': -1.2, 'silly': 0.1, 'sin': -2.6, 'sincere': 1.7, 'sincerely': 2.1, 'sincereness': 1.8, 'sincerer': 2.0, 'sincerest': 2.0, 'sincerities': 1.5, 'sinful': -2.6, 'singleminded': 1.2, 'sinister': -2.9, 'sins': -2.0, 'skeptic': -0.9, 'skeptical': -1.3, 'skeptically': -1.2, 'skepticism': -1.0, 'skepticisms': -1.2, 'skeptics': -0.4, 'slam': -1.6, 'slash': -1.1, 'slashed': -0.9, 'slashes': -0.8, 'slashing': -1.1, 'slavery': -3.8, 'sleeplessness': -1.6, 'slicker': 0.4, 'slickest': 0.3, 'sluggish': -1.7, 'slut': -2.8, 'sluts': -2.7, 'sluttier': -2.7, 'sluttiest': -3.1, 'sluttish': -2.2, 'sluttishly': -2.1, 'sluttishness': -2.5, 'sluttishnesses': -2.0, 'slutty': -2.3, 'smart': 1.7, 'smartass': -2.1, 'smartasses': -1.7, 'smarted': 0.7, 'smarten': 1.9, 'smartened': 1.5, 'smartening': 1.7, 'smartens': 1.5, 'smarter': 2.0, 'smartest': 3.0, 'smartie': 1.3, 'smarties': 1.7, 'smarting': -0.7, 'smartly': 1.5, 'smartness': 2.0, 'smartnesses': 1.5, 'smarts': 1.6, 'smartweed': 0.2, 'smartweeds': 0.1, 'smarty': 1.1, 'smear': -1.5, 'smilax': 0.6, 'smilaxes': 0.3, 
'smile': 1.5, 'smiled': 2.5, 'smileless': -1.4, 'smiler': 1.7, 'smiles': 2.1, 'smiley': 1.7, 'smileys': 1.5, 'smiling': 2.0, 'smilingly': 2.3, 'smog': -1.2, 'smother': -1.8, 'smothered': -0.9, 'smothering': -1.4, 'smothers': -1.9, 'smothery': -1.1, 'smug': 0.8, 'smugger': -1.0, 'smuggest': -1.5, 'smuggle': -1.6, 'smuggled': -1.5, 'smuggler': -2.1, 'smugglers': -1.4, 'smuggles': -1.7, 'smuggling': -2.1, 'smugly': 0.2, 'smugness': -1.4, 'smugnesses': -1.7, 'sneaky': -0.9, 'snob': -2.0, 'snobbery': -2.0, 'snobbier': -0.7, 'snobbiest': -0.5, 'snobbily': -1.6, 'snobbish': -0.9, 'snobbishly': -1.2, 'snobbishness': -1.1, 'snobbishnesses': -1.7, 'snobbism': -1.0, 'snobbisms': -0.3, 'snobby': -1.7, 'snobs': -1.4, 'snub': -1.8, 'snubbed': -2.0, 'snubbing': -0.9, 'snubs': -2.1, 'sobbed': -1.9, 'sobbing': -1.6, 'sobering': -0.8, 'sobs': -2.5, 'sociabilities': 1.2, 'sociability': 1.1, 'sociable': 1.9, 'sociableness': 1.5, 'sociably': 1.6, 'sok': 1.3, 'solemn': -0.3, 'solemnified': -0.5, 'solemnifies': -0.5, 'solemnify': 0.3, 'solemnifying': 0.1, 'solemnities': 0.3, 'solemnity': -1.1, 'solemnization': 0.7, 'solemnize': 0.3, 'solemnized': -0.7, 'solemnizes': 0.6, 'solemnizing': -0.6, 'solemnly': 0.8, 'solid': 0.6, 'solidarity': 1.2, 'solution': 1.3, 'solutions': 0.7, 'solve': 0.8, 'solved': 1.1, 'solves': 1.1, 'solving': 1.4, 'somber': -1.8, 'son-of-a-bitch': -2.7, 'soothe': 1.5, 'soothed': 0.5, 'soothing': 1.3, 'sophisticated': 2.6, 'sore': -1.5, 'sorrow': -2.4, 'sorrowed': -2.4, 'sorrower': -2.3, 'sorrowful': -2.2, 'sorrowfully': -2.3, 'sorrowfulness': -2.5, 'sorrowing': -1.7, 'sorrows': -1.6, 'sorry': -0.3, 'soulmate': 2.9, 'spam': -1.5, 'spammer': -2.2, 'spammers': -1.6, 'spamming': -2.1, 'spark': 0.9, 'sparkle': 1.8, 'sparkles': 1.3, 'sparkling': 1.2, 'special': 1.7, 'speculative': 0.4, 'spirit': 0.7, 'spirited': 1.3, 'spiritless': -1.3, 'spite': -2.4, 'spited': -2.4, 'spiteful': -1.9, 'spitefully': -2.3, 'spitefulness': -1.5, 'spitefulnesses': -2.3, 'spites': -1.4, 'splendent': 2.7, 'splendid': 2.8, 'splendidly': 2.1, 'splendidness': 2.3, 'splendiferous': 2.6, 'splendiferously': 1.9, 'splendiferousness': 1.7, 'splendor': 3.0, 'splendorous': 2.2, 'splendors': 2.0, 'splendour': 2.2, 'splendours': 2.2, 'splendrous': 2.2, 'sprightly': 2.0, 'squelched': -1.0, 'stab': -2.8, 'stabbed': -1.9, 'stable': 1.2, 'stabs': -1.9, 'stall': -0.8, 'stalled': -0.8, 'stalling': -0.8, 'stamina': 1.2, 'stammer': -0.9, 'stammered': -0.9, 'stammerer': -1.1, 'stammerers': -0.8, 'stammering': -1.0, 'stammers': -0.8, 'stampede': -1.8, 'stank': -1.9, 'startle': -1.3, 'startled': -0.7, 'startlement': -0.5, 'startlements': 0.2, 'startler': -0.8, 'startlers': -0.5, 'startles': -0.5, 'startling': 0.3, 'startlingly': -0.3, 'starve': -1.9, 'starved': -2.6, 'starves': -2.3, 'starving': -1.8, 'steadfast': 1.0, 'steal': -2.2, 'stealable': -1.7, 'stealer': -1.7, 'stealers': -2.2, 'stealing': -2.7, 'stealings': -1.9, 'steals': -2.3, 'stealth': -0.3, 'stealthier': -0.3, 'stealthiest': 0.4, 'stealthily': 0.1, 'stealthiness': 0.2, 'stealths': -0.3, 'stealthy': -0.1, 'stench': -2.3, 'stenches': -1.5, 'stenchful': -2.4, 'stenchy': -2.3, 'stereotype': -1.3, 'stereotyped': -1.2, 'stifled': -1.4, 'stimulate': 0.9, 'stimulated': 0.9, 'stimulates': 1.0, 'stimulating': 1.9, 'stingy': -1.6, 'stink': -1.7, 'stinkard': -2.3, 'stinkards': -1.0, 'stinkbug': -0.2, 'stinkbugs': -1.0, 'stinker': -1.5, 'stinkers': -1.2, 'stinkhorn': -0.2, 'stinkhorns': -0.8, 'stinkier': -1.5, 'stinkiest': -2.1, 'stinking': -2.4, 'stinkingly': -1.3, 'stinko': -1.5, 
'stinkpot': -2.5, 'stinkpots': -0.7, 'stinks': -1.0, 'stinkweed': -0.4, 'stinkwood': -0.1, 'stinky': -1.5, 'stolen': -2.2, 'stop': -1.2, 'stopped': -0.9, 'stopping': -0.6, 'stops': -0.6, 'stout': 0.7, 'straight': 0.9, 'strain': -0.2, 'strained': -1.7, 'strainer': -0.8, 'strainers': -0.3, 'straining': -1.3, 'strains': -1.2, 'strange': -0.8, 'strangely': -1.2, 'strangled': -2.5, 'strength': 2.2, 'strengthen': 1.3, 'strengthened': 1.8, 'strengthener': 1.8, 'strengtheners': 1.4, 'strengthening': 2.2, 'strengthens': 2.0, 'strengths': 1.7, 'stress': -1.8, 'stressed': -1.4, 'stresses': -2.0, 'stressful': -2.3, 'stressfully': -2.6, 'stressing': -1.5, 'stressless': 1.6, 'stresslessness': 1.6, 'stressor': -1.8, 'stressors': -2.1, 'stricken': -2.3, 'strike': -0.5, 'strikers': -0.6, 'strikes': -1.5, 'strong': 2.3, 'strongbox': 0.7, 'strongboxes': 0.3, 'stronger': 1.6, 'strongest': 1.9, 'stronghold': 0.5, 'strongholds': 1.0, 'strongish': 1.7, 'strongly': 1.1, 'strongman': 0.7, 'strongmen': 0.5, 'strongyl': 0.6, 'strongyles': 0.2, 'strongyloidosis': -0.8, 'strongyls': 0.1, 'struck': -1.0, 'struggle': -1.3, 'struggled': -1.4, 'struggler': -1.1, 'strugglers': -1.4, 'struggles': -1.5, 'struggling': -1.8, 'stubborn': -1.7, 'stubborner': -1.5, 'stubbornest': -0.6, 'stubbornly': -1.4, 'stubbornness': -1.1, 'stubbornnesses': -1.5, 'stuck': -1.0, 'stunk': -1.6, 'stunned': -0.4, 'stunning': 1.6, 'stuns': 0.1, 'stupid': -2.4, 'stupider': -2.5, 'stupidest': -2.4, 'stupidities': -2.0, 'stupidity': -1.9, 'stupidly': -2.0, 'stupidness': -1.7, 'stupidnesses': -2.6, 'stupids': -2.3, 'stutter': -1.0, 'stuttered': -0.9, 'stutterer': -1.0, 'stutterers': -1.1, 'stuttering': -1.3, 'stutters': -1.0, 'suave': 2.0, 'submissive': -1.3, 'submissively': -1.0, 'submissiveness': -0.7, 'substantial': 0.8, 'subversive': -0.9, 'succeed': 2.2, 'succeeded': 1.8, 'succeeder': 1.2, 'succeeders': 1.3, 'succeeding': 2.2, 'succeeds': 2.2, 'success': 2.7, 'successes': 2.6, 'successful': 2.8, 'successfully': 2.2, 'successfulness': 2.7, 'succession': 0.8, 'successional': 0.9, 'successionally': 1.1, 'successions': 0.1, 'successive': 1.1, 'successively': 0.9, 'successiveness': 1.0, 'successor': 0.9, 'successors': 1.1, 'suck': -1.9, 'sucked': -2.0, 'sucker': -2.4, 'suckered': -2.0, 'suckering': -2.1, 'suckers': -2.3, 'sucks': -1.5, 'sucky': -1.9, 'suffer': -2.5, 'suffered': -2.2, 'sufferer': -2.0, 'sufferers': -2.4, 'suffering': -2.1, 'suffers': -2.1, 'suicidal': -3.5, 'suicide': -3.5, 'suing': -1.1, 'sulking': -1.5, 'sulky': -0.8, 'sullen': -1.7, 'sunnier': 2.3, 'sunniest': 2.4, 'sunny': 1.8, 'sunshine': 2.2, 'sunshiny': 1.9, 'super': 2.9, 'superb': 3.1, 'superior': 2.5, 'superiorities': 0.8, 'superiority': 1.4, 'superiorly': 2.2, 'superiors': 1.0, 'support': 1.7, 'supported': 1.3, 'supporter': 1.1, 'supporters': 1.9, 'supporting': 1.9, 'supportive': 1.2, 'supportiveness': 1.5, 'supports': 1.5, 'supremacies': 0.8, 'supremacist': 0.5, 'supremacists': -1.0, 'supremacy': 0.2, 'suprematists': 0.4, 'supreme': 2.6, 'supremely': 2.7, 'supremeness': 2.3, 'supremer': 2.3, 'supremest': 2.2, 'supremo': 1.9, 'supremos': 1.3, 'sure': 1.3, 'surefire': 1.0, 'surefooted': 1.9, 'surefootedly': 1.6, 'surefootedness': 1.5, 'surely': 1.9, 'sureness': 2.0, 'surer': 1.2, 'surest': 1.3, 'sureties': 1.3, 'surety': 1.0, 'suretyship': -0.1, 'suretyships': 0.4, 'surprisal': 1.5, 'surprisals': 0.7, 'surprise': 1.1, 'surprised': 0.9, 'surpriser': 0.6, 'surprisers': 0.3, 'surprises': 0.9, 'surprising': 1.1, 'surprisingly': 1.2, 'survived': 2.3, 'surviving': 1.2, 'survivor': 
1.5, 'suspect': -1.2, 'suspected': -0.9, 'suspecting': -0.7, 'suspects': -1.4, 'suspend': -1.3, 'suspended': -2.1, 'suspicion': -1.6, 'suspicions': -1.5, 'suspicious': -1.5, 'suspiciously': -1.7, 'suspiciousness': -1.2, 'sux': -1.5, 'swear': -0.2, 'swearing': -1.0, 'swears': 0.2, 'sweet': 2.0, 'sweet<3': 3.0, 'sweetheart': 3.3, 'sweethearts': 2.8, 'sweetie': 2.2, 'sweeties': 2.1, 'sweetly': 2.1, 'sweetness': 2.2, 'sweets': 2.2, 'swift': 0.8, 'swiftly': 1.2, 'swindle': -2.4, 'swindles': -1.5, 'swindling': -2.0, 'sympathetic': 2.3, 'sympathy': 1.5, 'talent': 1.8, 'talented': 2.3, 'talentless': -1.6, 'talents': 2.0, 'tantrum': -1.8, 'tantrums': -1.5, 'tard': -2.5, 'tears': -0.9, 'teas': 0.3, 'tease': -1.3, 'teased': -1.2, 'teasel': -0.1, 'teaseled': -0.8, 'teaseler': -0.8, 'teaselers': -1.2, 'teaseling': -0.4, 'teaselled': -0.4, 'teaselling': -0.2, 'teasels': -0.1, 'teaser': -1.0, 'teasers': -0.7, 'teases': -1.2, 'teashops': 0.2, 'teasing': -0.3, 'teasingly': -0.4, 'teaspoon': 0.2, 'teaspoonful': 0.2, 'teaspoonfuls': 0.4, 'teaspoons': 0.5, 'teaspoonsful': 0.3, 'temper': -1.8, 'tempers': -1.3, 'tendered': 0.5, 'tenderer': 0.6, 'tenderers': 1.2, 'tenderest': 1.4, 'tenderfeet': -0.4, 'tenderfoot': -0.1, 'tenderfoots': -0.5, 'tenderhearted': 1.5, 'tenderheartedly': 2.7, 'tenderheartedness': 0.7, 'tenderheartednesses': 2.8, 'tendering': 0.6, 'tenderization': 0.2, 'tenderize': 0.1, 'tenderized': 0.1, 'tenderizer': 0.4, 'tenderizes': 0.3, 'tenderizing': 0.3, 'tenderloin': -0.2, 'tenderloins': 0.4, 'tenderly': 1.8, 'tenderness': 1.8, 'tendernesses': 0.9, 'tenderometer': 0.2, 'tenderometers': 0.2, 'tenders': 0.6, 'tense': -1.4, 'tensed': -1.0, 'tensely': -1.2, 'tenseness': -1.5, 'tenser': -1.5, 'tenses': -0.9, 'tensest': -1.2, 'tensing': -1.0, 'tension': -1.3, 'tensional': -0.8, 'tensioned': -0.4, 'tensioner': -1.6, 'tensioners': -0.9, 'tensioning': -1.4, 'tensionless': 0.6, 'tensions': -1.7, 'terrible': -2.1, 'terribleness': -1.9, 'terriblenesses': -2.6, 'terribly': -2.6, 'terrific': 2.1, 'terrifically': 1.7, 'terrified': -3.0, 'terrifies': -2.6, 'terrify': -2.3, 'terrifying': -2.7, 'terror': -2.4, 'terrorise': -3.1, 'terrorised': -3.3, 'terrorises': -3.3, 'terrorising': -3.0, 'terrorism': -3.6, 'terrorisms': -3.2, 'terrorist': -3.7, 'terroristic': -3.3, 'terrorists': -3.1, 'terrorization': -2.7, 'terrorize': -3.3, 'terrorized': -3.1, 'terrorizes': -3.1, 'terrorizing': -3.0, 'terrorless': 0.9, 'terrors': -2.6, 'thank': 1.5, 'thanked': 1.9, 'thankful': 2.7, 'thankfuller': 1.9, 'thankfullest': 2.0, 'thankfully': 1.8, 'thankfulness': 2.1, 'thanks': 1.9, 'thief': -2.4, 'thieve': -2.2, 'thieved': -1.4, 'thieveries': -2.1, 'thievery': -2.0, 'thieves': -2.3, 'thorny': -1.1, 'thoughtful': 1.6, 'thoughtfully': 1.7, 'thoughtfulness': 1.9, 'thoughtless': -2.0, 'threat': -2.4, 'threaten': -1.6, 'threatened': -2.0, 'threatener': -1.4, 'threateners': -1.8, 'threatening': -2.4, 'threateningly': -2.2, 'threatens': -1.6, 'threating': -2.0, 'threats': -1.8, 'thrill': 1.5, 'thrilled': 1.9, 'thriller': 0.4, 'thrillers': 0.1, 'thrilling': 2.1, 'thrillingly': 2.0, 'thrills': 1.5, 'thwarted': -0.1, 'thwarting': -0.7, 'thwarts': -0.4, 'ticked': -1.8, 'timid': -1.0, 'timider': -1.0, 'timidest': -0.9, 'timidities': -0.7, 'timidity': -1.3, 'timidly': -0.7, 'timidness': -1.0, 'timorous': -0.8, 'tired': -1.9, 'tits': -0.9, 'tolerance': 1.2, 'tolerances': 0.3, 'tolerant': 1.1, 'tolerantly': 0.4, 'toothless': -1.4, 'top': 0.8, 'tops': 2.3, 'torn': -1.0, 'torture': -2.9, 'tortured': -2.6, 'torturer': -2.3, 'torturers': -3.5, 
'tortures': -2.5, 'torturing': -3.0, 'torturous': -2.7, 'torturously': -2.2, 'totalitarian': -2.1, 'totalitarianism': -2.7, 'tough': -0.5, 'toughed': 0.7, 'toughen': 0.1, 'toughened': 0.1, 'toughening': 0.9, 'toughens': -0.2, 'tougher': 0.7, 'toughest': -0.3, 'toughie': -0.7, 'toughies': -0.6, 'toughing': -0.5, 'toughish': -1.0, 'toughly': -1.1, 'toughness': -0.2, 'toughnesses': 0.3, 'toughs': -0.8, 'toughy': -0.5, 'tout': -0.5, 'touted': -0.2, 'touting': -0.7, 'touts': -0.1, 'tragedian': -0.5, 'tragedians': -1.0, 'tragedienne': -0.4, 'tragediennes': -1.4, 'tragedies': -1.9, 'tragedy': -3.4, 'tragic': -2.0, 'tragical': -2.4, 'tragically': -2.7, 'tragicomedy': 0.2, 'tragicomic': -0.2, 'tragics': -2.2, 'tranquil': 0.2, 'tranquiler': 1.9, 'tranquilest': 1.6, 'tranquilities': 1.5, 'tranquility': 1.8, 'tranquilize': 0.3, 'tranquilized': -0.2, 'tranquilizer': -0.1, 'tranquilizers': -0.4, 'tranquilizes': -0.1, 'tranquilizing': -0.5, 'tranquillest': 0.8, 'tranquillities': 0.5, 'tranquillity': 1.8, 'tranquillized': -0.2, 'tranquillizer': -0.1, 'tranquillizers': -0.2, 'tranquillizes': 0.1, 'tranquillizing': 0.8, 'tranquilly': 1.2, 'tranquilness': 1.5, 'trap': -1.3, 'trapped': -2.4, 'trauma': -1.8, 'traumas': -2.2, 'traumata': -1.7, 'traumatic': -2.7, 'traumatically': -2.8, 'traumatise': -2.8, 'traumatised': -2.4, 'traumatises': -2.2, 'traumatising': -1.9, 'traumatism': -2.4, 'traumatization': -3.0, 'traumatizations': -2.2, 'traumatize': -2.4, 'traumatized': -1.7, 'traumatizes': -1.4, 'traumatizing': -2.3, 'travesty': -2.7, 'treason': -1.9, 'treasonous': -2.7, 'treasurable': 2.5, 'treasure': 1.2, 'treasured': 2.6, 'treasurer': 0.5, 'treasurers': 0.4, 'treasurership': 0.4, 'treasurerships': 1.2, 'treasures': 1.8, 'treasuries': 0.9, 'treasuring': 2.1, 'treasury': 0.8, 'treat': 1.7, 'tremble': -1.1, 'trembled': -1.1, 'trembler': -0.6, 'tremblers': -1.0, 'trembles': -0.1, 'trembling': -1.5, 'trembly': -1.2, 'tremulous': -1.0, 'trick': -0.2, 'tricked': -0.6, 'tricker': -0.9, 'trickeries': -1.2, 'trickers': -1.4, 'trickery': -1.1, 'trickie': -0.4, 'trickier': -0.7, 'trickiest': -1.2, 'trickily': -0.8, 'trickiness': -1.2, 'trickinesses': -0.4, 'tricking': 0.1, 'trickish': -1.0, 'trickishly': -0.7, 'trickishness': -0.4, 'trickled': 0.1, 'trickledown': -0.7, 'trickles': 0.2, 'trickling': -0.2, 'trickly': -0.3, 'tricks': -0.5, 'tricksier': -0.5, 'tricksiness': -1.0, 'trickster': -0.9, 'tricksters': -1.3, 'tricksy': -0.8, 'tricky': -0.6, 'trite': -0.8, 'triumph': 2.1, 'triumphal': 2.0, 'triumphalisms': 1.9, 'triumphalist': 0.5, 'triumphalists': 0.9, 'triumphant': 2.4, 'triumphantly': 2.3, 'triumphed': 2.2, 'triumphing': 2.3, 'triumphs': 2.0, 'trivial': -0.1, 'trivialise': -0.8, 'trivialised': -0.8, 'trivialises': -1.1, 'trivialising': -1.4, 'trivialities': -1.0, 'triviality': -0.5, 'trivialization': -0.9, 'trivializations': -0.7, 'trivialize': -1.1, 'trivialized': -0.6, 'trivializes': -1.0, 'trivializing': -0.6, 'trivially': 0.4, 'trivium': -0.3, 'trouble': -1.7, 'troubled': -2.0, 'troublemaker': -2.0, 'troublemakers': -2.2, 'troublemaking': -1.8, 'troubler': -1.4, 'troublers': -1.9, 'troubles': -2.0, 'troubleshoot': 0.8, 'troubleshooter': 1.0, 'troubleshooters': 0.8, 'troubleshooting': 0.7, 'troubleshoots': 0.5, 'troublesome': -2.3, 'troublesomely': -1.8, 'troublesomeness': -1.9, 'troubling': -2.5, 'troublous': -2.1, 'troublously': -2.1, 'trueness': 2.1, 'truer': 1.5, 'truest': 1.9, 'truly': 1.9, 'trust': 2.3, 'trustability': 2.1, 'trustable': 2.3, 'trustbuster': -0.5, 'trusted': 2.1, 'trustee': 1.0, 
'trustees': 0.3, 'trusteeship': 0.5, 'trusteeships': 0.6, 'truster': 1.9, 'trustful': 2.1, 'trustfully': 1.5, 'trustfulness': 2.1, 'trustier': 1.3, 'trusties': 1.0, 'trustiest': 2.2, 'trustily': 1.6, 'trustiness': 1.6, 'trusting': 1.7, 'trustingly': 1.6, 'trustingness': 1.6, 'trustless': -2.3, 'trustor': 0.4, 'trustors': 1.2, 'trusts': 2.1, 'trustworthily': 2.3, 'trustworthiness': 1.8, 'trustworthy': 2.6, 'trusty': 2.2, 'truth': 1.3, 'truthful': 2.0, 'truthfully': 1.9, 'truthfulness': 1.7, 'truths': 1.8, 'tumor': -1.6, 'turmoil': -1.5, 'twat': -3.4, 'ugh': -1.8, 'uglier': -2.2, 'uglies': -2.0, 'ugliest': -2.8, 'uglification': -2.2, 'uglified': -1.5, 'uglifies': -1.8, 'uglify': -2.1, 'uglifying': -2.2, 'uglily': -2.1, 'ugliness': -2.7, 'uglinesses': -2.5, 'ugly': -2.3, 'unacceptable': -2.0, 'unappreciated': -1.7, 'unapproved': -1.4, 'unattractive': -1.9, 'unaware': -0.8, 'unbelievable': 0.8, 'unbelieving': -0.8, 'unbiased': -0.1, 'uncertain': -1.2, 'uncertainly': -1.4, 'uncertainness': -1.3, 'uncertainties': -1.4, 'uncertainty': -1.4, 'unclear': -1.0, 'uncomfortable': -1.6, 'uncomfortably': -1.7, 'uncompelling': -0.9, 'unconcerned': -0.9, 'unconfirmed': -0.5, 'uncontrollability': -1.7, 'uncontrollable': -1.5, 'uncontrollably': -1.5, 'uncontrolled': -1.0, 'unconvinced': -1.6, 'uncredited': -1.0, 'undecided': -0.9, 'underestimate': -1.2, 'underestimated': -1.1, 'underestimates': -1.1, 'undermine': -1.2, 'undermined': -1.5, 'undermines': -1.4, 'undermining': -1.5, 'undeserving': -1.9, 'undesirable': -1.9, 'unease': -1.7, 'uneasier': -1.4, 'uneasiest': -2.1, 'uneasily': -1.4, 'uneasiness': -1.6, 'uneasinesses': -1.8, 'uneasy': -1.6, 'unemployment': -1.9, 'unequal': -1.4, 'unequaled': 0.5, 'unethical': -2.3, 'unfair': -2.1, 'unfocused': -1.7, 'unfortunate': -2.0, 'unfortunately': -1.4, 'unfortunates': -1.9, 'unfriendly': -1.5, 'unfulfilled': -1.8, 'ungrateful': -2.0, 'ungratefully': -1.8, 'ungratefulness': -1.6, 'unhappier': -2.4, 'unhappiest': -2.5, 'unhappily': -1.9, 'unhappiness': -2.4, 'unhappinesses': -2.2, 'unhappy': -1.8, 'unhealthy': -2.4, 'unified': 1.6, 'unimportant': -1.3, 'unimpressed': -1.4, 'unimpressive': -1.4, 'unintelligent': -2.0, 'uninvolved': -2.2, 'uninvolving': -2.0, 'united': 1.8, 'unjust': -2.3, 'unkind': -1.6, 'unlovable': -2.7, 'unloved': -1.9, 'unlovelier': -1.9, 'unloveliest': -1.9, 'unloveliness': -2.0, 'unlovely': -2.1, 'unloving': -2.3, 'unmatched': -0.3, 'unmotivated': -1.4, 'unpleasant': -2.1, 'unprofessional': -2.3, 'unprotected': -1.5, 'unresearched': -1.1, 'unsatisfied': -1.7, 'unsavory': -1.9, 'unsecured': -1.6, 'unsettled': -1.3, 'unsophisticated': -1.2, 'unstable': -1.5, 'unstoppable': -0.8, 'unsuccessful': -1.5, 'unsuccessfully': -1.7, 'unsupported': -1.7, 'unsure': -1.0, 'unsurely': -1.3, 'untarnished': 1.6, 'unwanted': -0.9, 'unwelcome': -1.7, 'unworthy': -2.0, 'upset': -1.6, 'upsets': -1.5, 'upsetter': -1.9, 'upsetters': -2.0, 'upsetting': -2.1, 'uptight': -1.6, 'uptightness': -1.2, 'urgent': 0.8, 'useful': 1.9, 'usefully': 1.8, 'usefulness': 1.2, 'useless': -1.8, 'uselessly': -1.5, 'uselessness': -1.6, 'v.v': -2.9, 'vague': -0.4, 'vain': -1.8, 'validate': 1.5, 'validated': 0.9, 'validates': 1.4, 'validating': 1.4, 'valuable': 2.1, 'valuableness': 1.7, 'valuables': 2.1, 'valuably': 2.3, 'value': 1.4, 'valued': 1.9, 'values': 1.7, 'valuing': 1.4, 'vanity': -0.9, 'verdict': 0.6, 'verdicts': 0.3, 'vested': 0.6, 'vexation': -1.9, 'vexing': -2.0, 'vibrant': 2.4, 'vicious': -1.5, 'viciously': -1.3, 'viciousness': -2.4, 'viciousnesses': -0.6, 'victim': -1.1, 
'victimhood': -2.0, 'victimhoods': -0.9, 'victimise': -1.1, 'victimised': -1.5, 'victimises': -1.2, 'victimising': -2.5, 'victimization': -2.3, 'victimizations': -1.5, 'victimize': -2.5, 'victimized': -1.8, 'victimizer': -1.8, 'victimizers': -1.6, 'victimizes': -1.5, 'victimizing': -2.6, 'victimless': 0.6, 'victimologies': -0.6, 'victimologist': -0.5, 'victimologists': -0.4, 'victimology': 0.3, 'victims': -1.3, 'vigilant': 0.7, 'vigor': 1.1, 'vigorish': -0.4, 'vigorishes': 0.4, 'vigoroso': 1.5, 'vigorously': 0.5, 'vigorousness': 0.4, 'vigors': 1.0, 'vigour': 0.9, 'vigours': 0.4, 'vile': -3.1, 'villain': -2.6, 'villainess': -2.9, 'villainesses': -2.0, 'villainies': -2.3, 'villainous': -2.0, 'villainously': -2.9, 'villainousness': -2.7, 'villains': -3.4, 'villainy': -2.6, 'vindicate': 0.3, 'vindicated': 1.8, 'vindicates': 1.6, 'vindicating': -1.1, 'violate': -2.2, 'violated': -2.4, 'violater': -2.6, 'violaters': -2.4, 'violates': -2.3, 'violating': -2.5, 'violation': -2.2, 'violations': -2.4, 'violative': -2.4, 'violator': -2.4, 'violators': -1.9, 'violence': -3.1, 'violent': -2.9, 'violently': -2.8, 'virtue': 1.8, 'virtueless': -1.4, 'virtues': 1.5, 'virtuosa': 1.7, 'virtuosas': 1.8, 'virtuose': 1.0, 'virtuosi': 0.9, 'virtuosic': 2.2, 'virtuosity': 2.1, 'virtuoso': 2.0, 'virtuosos': 1.8, 'virtuous': 2.4, 'virtuously': 1.8, 'virtuousness': 2.0, 'virulent': -2.7, 'vision': 1.0, 'visionary': 2.4, 'visioning': 1.1, 'visions': 0.9, 'vital': 1.2, 'vitalise': 1.1, 'vitalised': 0.6, 'vitalises': 1.1, 'vitalising': 2.1, 'vitalism': 0.2, 'vitalist': 0.3, 'vitalists': 0.3, 'vitalities': 1.2, 'vitality': 1.3, 'vitalization': 1.6, 'vitalizations': 0.8, 'vitalize': 1.6, 'vitalized': 1.5, 'vitalizes': 1.4, 'vitalizing': 1.3, 'vitally': 1.1, 'vitals': 1.1, 'vitamin': 1.2, 'vitriolic': -2.1, 'vivacious': 1.8, 'vociferous': -0.8, 'vulnerabilities': -0.6, 'vulnerability': -0.9, 'vulnerable': -0.9, 'vulnerableness': -1.1, 'vulnerably': -1.2, 'vulture': -2.0, 'vultures': -1.3, 'w00t': 2.2, 'walkout': -1.3, 'walkouts': -0.7, 'wanker': -2.5, 'want': 0.3, 'war': -2.9, 'warfare': -1.2, 'warfares': -1.8, 'warm': 0.9, 'warmblooded': 0.2, 'warmed': 1.1, 'warmer': 1.2, 'warmers': 1.0, 'warmest': 1.7, 'warmhearted': 1.8, 'warmheartedness': 2.7, 'warming': 0.6, 'warmish': 1.4, 'warmly': 1.7, 'warmness': 1.5, 'warmonger': -2.9, 'warmongering': -2.5, 'warmongers': -2.8, 'warmouth': 0.4, 'warmouths': -0.8, 'warms': 1.1, 'warmth': 2.0, 'warmup': 0.4, 'warmups': 0.8, 'warn': -0.4, 'warned': -1.1, 'warning': -1.4, 'warnings': -1.2, 'warns': -0.4, 'warred': -2.4, 'warring': -1.9, 'wars': -2.6, 'warsaw': -0.1, 'warsaws': -0.2, 'warship': -0.7, 'warships': -0.5, 'warstle': 0.1, 'waste': -1.8, 'wasted': -2.2, 'wasting': -1.7, 'wavering': -0.6, 'weak': -1.9, 'weaken': -1.8, 'weakened': -1.3, 'weakener': -1.6, 'weakeners': -1.3, 'weakening': -1.3, 'weakens': -1.3, 'weaker': -1.9, 'weakest': -2.3, 'weakfish': -0.2, 'weakfishes': -0.6, 'weakhearted': -1.6, 'weakish': -1.2, 'weaklier': -1.5, 'weakliest': -2.1, 'weakling': -1.3, 'weaklings': -1.4, 'weakly': -1.8, 'weakness': -1.8, 'weaknesses': -1.5, 'weakside': -1.1, 'wealth': 2.2, 'wealthier': 2.2, 'wealthiest': 2.2, 'wealthily': 2.0, 'wealthiness': 2.4, 'wealthy': 1.5, 'weapon': -1.2, 'weaponed': -1.4, 'weaponless': 0.1, 'weaponry': -0.9, 'weapons': -1.9, 'weary': -1.1, 'weep': -2.7, 'weeper': -1.9, 'weepers': -1.1, 'weepie': -0.4, 'weepier': -1.8, 'weepies': -1.6, 'weepiest': -2.4, 'weeping': -1.9, 'weepings': -1.9, 'weeps': -1.4, 'weepy': -1.3, 'weird': -0.7, 'weirder': -0.5, 
'weirdest': -0.9, 'weirdie': -1.3, 'weirdies': -1.0, 'weirdly': -1.2, 'weirdness': -0.9, 'weirdnesses': -0.7, 'weirdo': -1.8, 'weirdoes': -1.3, 'weirdos': -1.1, 'weirds': -0.6, 'weirdy': -0.9, 'welcome': 2.0, 'welcomed': 1.4, 'welcomely': 1.9, 'welcomeness': 2.0, 'welcomer': 1.4, 'welcomers': 1.9, 'welcomes': 1.7, 'welcoming': 1.9, 'well': 1.1, 'welladay': 0.3, 'wellaway': -0.8, 'wellborn': 1.8, 'welldoer': 2.5, 'welldoers': 1.6, 'welled': 0.4, 'wellhead': 0.1, 'wellheads': 0.5, 'wellhole': -0.1, 'wellies': 0.4, 'welling': 1.6, 'wellness': 1.9, 'wells': 1.0, 'wellsite': 0.5, 'wellspring': 1.5, 'wellsprings': 1.4, 'welly': 0.2, 'wept': -2.0, 'whimsical': 0.3, 'whine': -1.5, 'whined': -0.9, 'whiner': -1.2, 'whiners': -0.6, 'whines': -1.8, 'whiney': -1.3, 'whining': -0.9, 'whitewash': 0.1, 'whore': -3.3, 'whored': -2.8, 'whoredom': -2.1, 'whoredoms': -2.4, 'whorehouse': -1.1, 'whorehouses': -1.9, 'whoremaster': -1.9, 'whoremasters': -1.5, 'whoremonger': -2.6, 'whoremongers': -2.0, 'whores': -3.0, 'whoreson': -2.2, 'whoresons': -2.5, 'wicked': -2.4, 'wickeder': -2.2, 'wickedest': -2.9, 'wickedly': -2.1, 'wickedness': -2.1, 'wickednesses': -2.2, 'widowed': -2.1, 'willingness': 1.1, 'wimp': -1.4, 'wimpier': -1.0, 'wimpiest': -0.9, 'wimpiness': -1.2, 'wimpish': -1.6, 'wimpishness': -0.2, 'wimple': -0.2, 'wimples': -0.3, 'wimps': -1.0, 'wimpy': -0.9, 'win': 2.8, 'winnable': 1.8, 'winned': 1.8, 'winner': 2.8, 'winners': 2.1, 'winning': 2.4, 'winningly': 2.3, 'winnings': 2.5, 'winnow': -0.3, 'winnower': -0.1, 'winnowers': -0.2, 'winnowing': -0.1, 'winnows': -0.2, 'wins': 2.7, 'wisdom': 2.4, 'wise': 2.1, 'wiseacre': -1.2, 'wiseacres': -0.1, 'wiseass': -1.8, 'wiseasses': -1.5, 'wisecrack': -0.1, 'wisecracked': -0.5, 'wisecracker': -0.1, 'wisecrackers': 0.1, 'wisecracking': -0.6, 'wisecracks': -0.3, 'wised': 1.5, 'wiseguys': 0.3, 'wiselier': 0.9, 'wiseliest': 1.6, 'wisely': 1.8, 'wiseness': 1.9, 'wisenheimer': -1.0, 'wisenheimers': -1.4, 'wisents': 0.4, 'wiser': 1.2, 'wises': 1.3, 'wisest': 2.1, 'wisewomen': 1.3, 'wish': 1.7, 'wishes': 0.6, 'wishing': 0.9, 'witch': -1.5, 'withdrawal': 0.1, 'woe': -1.8, 'woebegone': -2.6, 'woebegoneness': -1.1, 'woeful': -1.9, 'woefully': -1.7, 'woefulness': -2.1, 'woes': -1.9, 'woesome': -1.2, 'won': 2.7, 'wonderful': 2.7, 'wonderfully': 2.9, 'wonderfulness': 2.9, 'woo': 2.1, 'woohoo': 2.3, 'woot': 1.8, 'worn': -1.2, 'worried': -1.2, 'worriedly': -2.0, 'worrier': -1.8, 'worriers': -1.7, 'worries': -1.8, 'worriment': -1.5, 'worriments': -1.9, 'worrisome': -1.7, 'worrisomely': -2.0, 'worrisomeness': -1.9, 'worrit': -2.1, 'worrits': -1.2, 'worry': -1.9, 'worrying': -1.4, 'worrywart': -1.8, 'worrywarts': -1.5, 'worse': -2.1, 'worsen': -2.3, 'worsened': -1.9, 'worsening': -2.0, 'worsens': -2.1, 'worser': -2.0, 'worship': 1.2, 'worshiped': 2.4, 'worshiper': 1.0, 'worshipers': 0.9, 'worshipful': 0.7, 'worshipfully': 1.1, 'worshipfulness': 1.6, 'worshiping': 1.0, 'worshipless': -0.6, 'worshipped': 2.7, 'worshipper': 0.6, 'worshippers': 0.8, 'worshipping': 1.6, 'worships': 1.4, 'worst': -3.1, 'worth': 0.9, 'worthless': -1.9, 'worthwhile': 1.4, 'worthy': 1.9, 'wow': 2.8, 'wowed': 2.6, 'wowing': 2.5, 'wows': 2.0, 'wowser': -1.1, 'wowsers': 1.0, 'wrathful': -2.7, 'wreck': -1.9, 'wrong': -2.1, 'wronged': -1.9, 'yay': 2.4, 'yeah': 1.2, 'yearning': 0.5, 'yeees': 1.7, 'yep': 1.2, 'yes': 1.7, 'youthful': 1.3, 'yucky': -1.8, 'yummy': 2.4, 'zealot': -1.9, 'zealots': -0.8, 'zealous': 0.5, '{:': 1.8, '|-0': -1.2, '|-:': -0.8, '|-:>': -1.6, '|-o': -1.2, '|:': -0.5, '|;-)': 2.2, '|=': -0.4, 
'|^:': -1.1, '|o:': -0.9, '||-:': -2.3, '}:': -2.1, '}:(': -2.0, '}:)': 0.4, '}:-(': -2.1, '}:-)': 0.3}\n"
],
[
"domain_words = {\"bruise\": -3.0, \"pity\": -3.0, \"thanks\": 0.0, \"glue\": -2.0, \"shortcoming\": -3.0, \"break\": -3.0, \"inflamed\": -2.0, \"reminder\": -1.0, \"reliable\": 3.0, \"uncomplicated\": 2.0, \"fast\": 2.0, \"kindly\": 0.0, \"confuse\": -2.0, \"blister\": -3.0, \"flaw\": -3.0, \"stain\": -3.0, \"complain\": -2.0, \"dissolve\": -3.0, \"apalled\": -4.0, \"discolor\": -3.0, \"spot\": -2.0, \"big\": -1.5, \"small\": -1.5, \"broken\": -3.0, \"worn\": -3.0, \"torn\": -3.0, \"hole\": -3.0, \"dirt\": -3.0}\nsia.lexicon.update(domain_words)",
"_____no_output_____"
],
[
"df_raw = pd.read_csv(DATA)\ndf_raw[6:11]",
"_____no_output_____"
],
[
"df = df_raw.copy()",
"_____no_output_____"
],
[
"%%time\n\npos_treshold = 0.8\nneg_treshold = -0.25\ndf['vader'] = df['normalized_with_stopwords'].apply(lambda x: 'POSITIVE' if sia.polarity_scores(str(x))['compound'] >= pos_treshold \n else ('NEGATIVE' if sia.polarity_scores(str(x))['compound'] <= neg_treshold \n else 'NONE'))\n\ndf['vader score'] = df['normalized_with_stopwords'].apply(lambda x: sia.polarity_scores(str(x))['compound'])",
"CPU times: user 44.3 s, sys: 238 ms, total: 44.5 s\nWall time: 44.7 s\n"
],
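[
"# Added sketch (not from the original notebook): make the three-way threshold\n# mapping used above explicit on a few example compound scores.\ndef label_from_compound(score, pos_threshold=0.8, neg_threshold=-0.25):\n    if score >= pos_threshold:\n        return 'POSITIVE'\n    if score <= neg_threshold:\n        return 'NEGATIVE'\n    return 'NONE'\n\n[label_from_compound(s) for s in (0.9, 0.0, -0.5)]",
"_____no_output_____"
],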
[
"df.iloc[idx, 8]",
"_____no_output_____"
],
[
"# Original sentiment distribution\ndf[\"sentiment\"].value_counts(normalize=True)",
"_____no_output_____"
],
[
"# Vader initial predictions\ndf[\"vader\"].value_counts(normalize=True)",
"_____no_output_____"
],
[
"# No including stopwords\ndf[\"vader\"].value_counts(normalize=True)",
"_____no_output_____"
],
[
"# With more stopwords v2\ndf[\"vader\"].value_counts(normalize=True)",
"_____no_output_____"
],
[
"# With more stopwords v3\ndf[\"vader\"].value_counts(normalize=True)",
"_____no_output_____"
],
[
"test_sia = \"material error on the belt loop leather color flake off\"",
"_____no_output_____"
],
[
"sia.polarity_scores(test_sia)",
"_____no_output_____"
],
[
"df_export = df[[\"feedback_text_en\", \"sentiment\", \"vader\", \"vader score\", \"delivery\", \"feedback_return\", \"product\", \"monetary\", \"one_hot_labels\", \"feedback_normalized\", \"normalized_with_stopwords\"]]",
"_____no_output_____"
],
[
"df_export.to_csv(DATA_EXPORT)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0516b337d6037d2fdcf41d288b3b9e5810dcd47 | 3,937 | ipynb | Jupyter Notebook | juypter/notebooks/linear-algebra/3_linear_independence.ipynb | JamesMcGuigan/ecosystem-research | bfd98bd5b0a2165f449eb36b368b54fe972374fe | [
"MIT"
] | 1 | 2019-01-01T02:04:27.000Z | 2019-01-01T02:04:27.000Z | juypter/notebooks/linear-algebra/3_linear_independence.ipynb | JamesMcGuigan/ecosystem-research | bfd98bd5b0a2165f449eb36b368b54fe972374fe | [
"MIT"
] | 1 | 2020-03-09T17:51:00.000Z | 2020-03-09T17:51:00.000Z | juypter/notebooks/linear-algebra/3_linear_independence.ipynb | JamesMcGuigan/ecosystem-research | bfd98bd5b0a2165f449eb36b368b54fe972374fe | [
"MIT"
] | null | null | null | 17.420354 | 174 | 0.441961 | [
[
[
"## Linear independence",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom sympy.solvers import solve\nfrom sympy import Symbol\n\nx = Symbol('x')\ny = Symbol('y')\nz = Symbol('z')",
"_____no_output_____"
]
],
[
[
"The set of vectors are called linearly independent because each of the vectors in the set {V0, V1, …, Vn−1} cannot be written as a combination of the others in the set.",
"_____no_output_____"
],
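[
"Formally (an added note): the set $\{v_0, v_1, \dots, v_{n-1}\}$ is linearly independent iff the only solution of $x_0 v_0 + x_1 v_1 + \dots + x_{n-1} v_{n-1} = 0$ is $x_0 = x_1 = \dots = x_{n-1} = 0$; this trivial-solution condition is exactly what `solve(x*A + y*B + z*C)` checks below.",
"_____no_output_____"
],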
[
"### Linear Independent Arrays",
"_____no_output_____"
]
],
[
[
"A = np.array([1,1,1])\nB = np.array([0,1,1])\nC = np.array([0,0,1])\nZ = np.array([0,0,0])",
"_____no_output_____"
],
[
"np.array_equal(\n Z, \n 0*A + 0*B + 0*C\n)",
"_____no_output_____"
],
[
"solve(x*A + y*B + z*C)",
"_____no_output_____"
]
],
[
[
"### Linear Dependent Arrays",
"_____no_output_____"
]
],
[
[
"A = np.array([1,1,1])\nB = np.array([0,0,1])\nC = np.array([1,1,0])",
"_____no_output_____"
],
[
"1*A + -1*B + -1*C",
"_____no_output_____"
],
[
"solve(x*A + y*B + z*C)",
"_____no_output_____"
],
[
"A = np.array([1,2,3])\nB = np.array([1,-4,-4])\nC = np.array([3,0,2])",
"_____no_output_____"
],
[
"2*A + 1*B + -C",
"_____no_output_____"
],
[
"solve(x*A + y*B + z*C)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d051758479987a67cc7419312c1fdc7134fa0225 | 9,731 | ipynb | Jupyter Notebook | models/dataset_nn/dataset_neural_nets.ipynb | TreeHacks/hackpack-ml | c5fed6a047d6082c2f10592ef35ac999a4e4e393 | [
"MIT"
] | 33 | 2019-01-03T19:57:55.000Z | 2022-02-27T03:33:37.000Z | models/dataset_nn/dataset_neural_nets.ipynb | TreeHacks/hackpack-ml | c5fed6a047d6082c2f10592ef35ac999a4e4e393 | [
"MIT"
] | null | null | null | models/dataset_nn/dataset_neural_nets.ipynb | TreeHacks/hackpack-ml | c5fed6a047d6082c2f10592ef35ac999a4e4e393 | [
"MIT"
] | 9 | 2019-01-02T23:15:19.000Z | 2021-11-28T11:44:13.000Z | 34.024476 | 203 | 0.55205 | [
[
[
"# Datasets and Neural Networks\nThis notebook will step through the process of loading an arbitrary dataset in PyTorch, and creating a simple neural network for regression.",
"_____no_output_____"
],
[
"# Datasets\nWe will first work through loading an arbitrary dataset in PyTorch. For this project, we chose the <a href=\"http://www.cs.toronto.edu/~delve/data/abalone/desc.html\">delve abalone dataset</a>. \n\nFirst, download and unzip the dataset from the link above, then unzip `Dataset.data.gz` and move `Dataset.data` into `hackpack-ml/models/data`.\nWe are given the following attribute information in the spec:\n```\nAttributes:\n 1 sex u M F I\t# Gender or Infant (I)\n 2 length u (0,Inf]\t# Longest shell measurement (mm)\n 3 diameter u (0,Inf]\t# perpendicular to length (mm)\n 4 height u (0,Inf]\t# with meat in shell (mm)\n 5 whole_weight u (0,Inf]\t# whole abalone (gr)\n 6 shucked_weight u (0,Inf]\t# weight of meat (gr) \n 7 viscera_weight u (0,Inf]\t# gut weight (after bleeding) (gr)\n 8 shell_weight u (0,Inf]\t# after being dried (gr)\n 9 rings u 0..29\t# +1.5 gives the age in years\n```",
"_____no_output_____"
]
],
[
[
"import math\nfrom tqdm import tqdm\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.utils.data as data\nimport torch.nn.functional as F\nimport pandas as pd\n\nfrom torch.utils.data import Dataset, DataLoader",
"_____no_output_____"
]
],
[
[
"Pandas is a data manipulation library that works really well with structured data. We can use Pandas DataFrames to load the dataset.",
"_____no_output_____"
]
],
[
[
"col_names = ['sex', 'length', 'diameter', 'height', 'whole_weight', \n 'shucked_weight', 'viscera_weight', 'shell_weight', 'rings']\nabalone_df = pd.read_csv('../data/Dataset.data', sep=' ', names=col_names)\nabalone_df.head(n=3)",
"_____no_output_____"
]
],
[
[
"We define a subclass of PyTorch Dataset for our Abalone dataset.",
"_____no_output_____"
]
],
[
[
"class AbaloneDataset(data.Dataset):\n \"\"\"Abalone dataset. Provides quick iteration over rows of data.\"\"\"\n\n def __init__(self, csv):\n \"\"\"\n Args: csv (string): Path to the Abalone dataset.\n \"\"\"\n self.features = ['sex', 'length', 'diameter', 'height', 'whole_weight', \n 'shucked_weight', 'viscera_weight', 'shell_weight']\n self.y = ['rings']\n self.abalone_df = pd.read_csv(csv, sep=' ', names=(self.features + self.y))\n \n # Turn categorical data into machine interpretable format (one hot)\n self.abalone_df['sex'] = pd.get_dummies(self.abalone_df['sex'])\n\n def __len__(self):\n return len(self.abalone_df)\n\n def __getitem__(self, idx):\n \"\"\"Return (x,y) pair where x are abalone features and y is age.\"\"\"\n features = self.abalone_df.iloc[idx][self.features].values\n y = self.abalone_df.iloc[idx][self.y]\n return torch.Tensor(features).float(), torch.Tensor(y).float()",
"_____no_output_____"
]
],
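[
[
"# Added sketch (assumes '../data/Dataset.data' exists as described above):\n# instantiate the dataset and inspect one (features, target) pair.\nds = AbaloneDataset('../data/Dataset.data')\nx, y = ds[0]\nprint(len(ds), x.shape, y.shape)",
"_____no_output_____"
]
],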
[
[
"# Neural Networks\n\nThe task is to predict the age (number of rings) of abalone from physical measurements. We build a simple neural network with one hidden layer to model the regression.",
"_____no_output_____"
]
],
[
[
"class Net(nn.Module):\n\n def __init__(self, feature_size):\n super(Net, self).__init__()\n # feature_size input channels (8), 1 output channels\n self.fc1 = nn.Linear(feature_size, 4)\n self.fc2 = nn.Linear(4, 1)\n\n def forward(self, x):\n x = F.relu(self.fc1(x))\n x = self.fc2(x)\n return x",
"_____no_output_____"
]
],
[
[
"We instantiate an Abalone dataset instance and create DataLoaders for train and test sets.",
"_____no_output_____"
]
],
[
[
"dataset = AbaloneDataset('../data/Dataset.data')\ntrain_split, test_split = math.floor(len(dataset) * 0.8), math.ceil(len(dataset) * 0.2)\n\ntrainset = [dataset[i] for i in range(train_split)]\ntestset = [dataset[train_split + j] for j in range(test_split)]\nbatch_sz = len(trainset) # Compact data allows for big batch size\ntrainloader = data.DataLoader(trainset, batch_size=batch_sz, shuffle=True, num_workers=4)\ntestloader = data.DataLoader(testset, batch_size=batch_sz, shuffle=False, num_workers=4)",
"_____no_output_____"
]
],
[
[
"Now, we can initialize our network and define train and test functions",
"_____no_output_____"
]
],
[
[
"net = Net(len(dataset.features))\nloss_fn = nn.MSELoss()\noptimizer = optim.Adam(net.parameters(), lr=0.1)\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\ngpu_ids = [0] # On Colab, we have access to one GPU. Change this value as you see fit\n\ndef train(epoch):\n \"\"\"\n Trains our net on data from the trainloader for a single epoch\n \"\"\"\n net.train()\n with tqdm(total=len(trainloader.dataset)) as progress_bar:\n for batch_idx, (inputs, targets) in enumerate(trainloader):\n inputs, targets = inputs.to(device), targets.to(device)\n optimizer.zero_grad() # Clear any stored gradients for new step\n outputs = net(inputs.float())\n loss = loss_fn(outputs, targets) # Calculate loss between prediction and label \n loss.backward() # Backpropagate gradient updates through net based on loss\n optimizer.step() # Update net weights based on gradients\n progress_bar.set_postfix(loss=loss.item())\n progress_bar.update(inputs.size(0))\n \n \ndef test(epoch):\n \"\"\"\n Run net in inference mode on test data. \n \"\"\" \n net.eval()\n # Ensures the net will not update weights\n with torch.no_grad():\n with tqdm(total=len(testloader.dataset)) as progress_bar:\n for batch_idx, (inputs, targets) in enumerate(testloader):\n inputs, targets = inputs.to(device).float(), targets.to(device).float()\n outputs = net(inputs)\n loss = loss_fn(outputs, targets)\n progress_bar.set_postfix(testloss=loss.item())\n progress_bar.update(inputs.size(0))\n",
"_____no_output_____"
]
],
[
[
"Now that everything is prepared, it's time to train!",
"_____no_output_____"
]
],
[
[
"test_freq = 5 # Frequency to run model on validation data\n\nfor epoch in range(0, 200):\n train(epoch)\n if epoch % test_freq == 0:\n test(epoch)",
"_____no_output_____"
]
],
[
[
"We use the network's eval mode to do a sample prediction to see how well it does.",
"_____no_output_____"
]
],
[
[
"net.eval()\nsample = testset[0]\npredicted_age = net(sample[0])\ntrue_age = sample[1]\n\nprint(f'Input features: {sample[0]}')\nprint(f'Predicted age: {predicted_age.item()}, True age: {true_age[0]}')",
"_____no_output_____"
]
],
[
[
"Congratulations! You now know how to load your own datasets into PyTorch and run models on it. For an example of Computer Vision, check out the DenseNet notebook. Happy hacking!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d051859101f6a2b2d46e7705c1aa00d49f6e1041 | 39,501 | ipynb | Jupyter Notebook | 00-pre-requisitos/2-math/otimização-II.ipynb | sn3fru/datascience_course | ee0a505134383034e09020d9b1de18904d9b2665 | [
"MIT"
] | 331 | 2019-01-26T21:11:45.000Z | 2022-03-02T11:35:16.000Z | 00-pre-requisitos/2-math/otimização-II.ipynb | sn3fru/datascience_course | ee0a505134383034e09020d9b1de18904d9b2665 | [
"MIT"
] | 2 | 2019-11-02T22:32:13.000Z | 2020-04-13T10:31:11.000Z | 00-pre-requisitos/2-math/otimização-II.ipynb | sn3fru/datascience_course | ee0a505134383034e09020d9b1de18904d9b2665 | [
"MIT"
] | 88 | 2019-01-25T16:53:47.000Z | 2022-03-03T00:05:08.000Z | 73.285714 | 12,210 | 0.764107 | [
[
[
"# Optimization with equality constraints",
"_____no_output_____"
]
],
[
[
"import math\nimport numpy as np\nfrom scipy import optimize as opt",
"_____no_output_____"
]
],
[
[
"maximize $.4\\,\\log(x_1)+.6\\,\\log(x_2)$ s.t. $x_1+3\\,x_2=50$.",
"_____no_output_____"
]
],
[
[
"I = 50\np = np.array([1, 3])",
"_____no_output_____"
],
[
"U = lambda x: (.4*math.log(x[0])+.6*math.log(x[1]))",
"_____no_output_____"
],
[
"x0 = (I/len(p))/np.array(p)",
"_____no_output_____"
],
[
"budget = ({'type': 'eq', 'fun': lambda x: I-np.sum(np.multiply(x, p))})",
"_____no_output_____"
],
[
"opt.minimize(lambda x: -U(x), x0, method='SLSQP', constraints=budget, tol=1e-08, \n options={'disp': True, 'ftol': 1e-08})",
"Optimization terminated successfully. (Exit mode 0)\n Current function value: -2.5798439652115133\n Iterations: 8\n Function evaluations: 32\n Gradient evaluations: 8\n"
],
[
"def consumer(U, p, I):\n budget = ({'type': 'eq', 'fun': lambda x: I-np.sum(np.multiply(x, p))})\n x0 = (I/len(p))/np.array(p)\n sol = opt.minimize(lambda x: -U(x), x0, method='SLSQP', constraints=budget, tol=1e-08, \n options={'disp': False, 'ftol': 1e-08})\n if sol.status == 0:\n return {'x': sol.x, 'V': -sol.fun, 'MgU': -sol.jac, 'mult': -sol.jac[0]/p[0]}\n else:\n return 0",
"_____no_output_____"
],
[
"consumer(U, p, I)",
"_____no_output_____"
],
[
"delta=.01",
"_____no_output_____"
],
[
"(consumer(U, p, I+delta)['V']-consumer(U, p, I-delta)['V'])/(2*delta)",
"_____no_output_____"
],
[
"delta=.001",
"_____no_output_____"
],
[
"numerador = (consumer(U,p+np.array([delta, 0]), I)['V']-consumer(U,p+np.array([-delta, 0]), I)['V'])/(2*delta)",
"_____no_output_____"
],
[
"denominador = (consumer(U, p, I+delta)['V']-consumer(U, p, I-delta)['V'])/(2*delta)",
"_____no_output_____"
],
[
"-numerador/denominador",
"_____no_output_____"
]
],
[
[
"## Cost function",
"_____no_output_____"
]
],
[
[
"# Production function\nF = lambda x: (x[0]**.8)*(x[1]**.2)",
"_____no_output_____"
],
[
"w = np.array([5, 4])",
"_____no_output_____"
],
[
"y = 1",
"_____no_output_____"
],
[
"constraint = ({'type': 'eq', 'fun': lambda x: y-F(x)})",
"_____no_output_____"
],
[
"x0 = np.array([.5, .5])",
"_____no_output_____"
],
[
"cost = opt.minimize(lambda x: w@x, x0, method='SLSQP', constraints=constraint, tol=1e-08, \n options={'disp': True, 'ftol': 1e-08})",
"Optimization terminated successfully. (Exit mode 0)\n Current function value: 7.886966805999761\n Iterations: 8\n Function evaluations: 33\n Gradient evaluations: 8\n"
],
[
"F(cost.x)",
"_____no_output_____"
],
[
"cost",
"_____no_output_____"
]
],
[
[
"## Exercise",
"_____no_output_____"
]
],
[
[
"a = 2\nu = lambda c: -np.exp(-a*c)",
"_____no_output_____"
],
[
"R = 2\nZ2 = np.array([.72, .92, 1.12, 1.32])\nZ3 = np.array([.86, .96, 1.06, 1.16])",
"_____no_output_____"
],
[
"def U(x):\n states = len(Z2)*len(Z3)\n U = u(x[0])\n \n for z2 in Z2:\n for z3 in Z3:\n U += (1/states)*u(x[1]*R+x[2]*z2+x[3]*z3)\n \n return U",
"_____no_output_____"
],
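[
"# Added note (sketch of one plausible reading): x[0] is consumption today,\n# x[1] goes into the safe asset with gross return R, and x[2], x[3] into the\n# risky assets with payoffs drawn uniformly from Z2 and Z3, so U is u(today)\n# plus the expected utility of next-period wealth over the 16 equally likely\n# (z2, z3) states.\nU(np.array([1., 1., 1., 1.]))",
"_____no_output_____"
],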
[
"p = np.array([1, 1, .5, .5])\nI = 4",
"_____no_output_____"
],
[
"# a=1\nconsumer(U, p, I)",
"_____no_output_____"
],
[
"# a=5\nconsumer(U, p, I)",
"_____no_output_____"
],
[
"# a=2\nconsumer(U, p, I)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"x = np.arange(0.0, 2.0, 0.01)",
"_____no_output_____"
],
[
"a = 2\nu = lambda c: -np.exp(-a*c)\nplt.plot(x, u(x))",
"_____no_output_____"
],
[
"a = -2\nplt.plot(x, u(x))",
"_____no_output_____"
]
],
[
[
"# Optimization with inequality constraints",
"_____no_output_____"
]
],
[
[
"f = lambda x: -x[0]**3+x[1]**2-2*x[0]*(x[2]**2)",
"_____no_output_____"
],
[
"constraints =({'type': 'eq', 'fun': lambda x: 2*x[0]+x[1]**2+x[2]-5}, \n {'type': 'ineq', 'fun': lambda x: 5*x[0]**2-x[1]**2-x[2]-2})",
"_____no_output_____"
],
[
"constraints =({'type': 'eq', 'fun': lambda x: x[0]**3-x[1]})",
"_____no_output_____"
],
[
"x0 = np.array([.5, .5, 2])\nopt.minimize(f, x0, method='SLSQP', constraints=constraints, tol=1e-08, \n options={'disp': True, 'ftol': 1e-08})",
"Optimization terminated successfully. (Exit mode 0)\n Current function value: -19.000000000000256\n Iterations: 11\n Function evaluations: 56\n Gradient evaluations: 11\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d05195dce5d6e094adccb1d4dbc23a2884536473 | 240,822 | ipynb | Jupyter Notebook | modeling/basic_model_framework.ipynb | rahul263-stack/covid19-severity-prediction | f581adb2fccb12d5ab3f3c59ee120f484703edf5 | [
"MIT"
] | 2 | 2020-05-15T14:42:02.000Z | 2020-05-22T08:51:47.000Z | modeling/basic_model_framework.ipynb | rahul263-stack/covid19-severity-prediction | f581adb2fccb12d5ab3f3c59ee120f484703edf5 | [
"MIT"
] | null | null | null | modeling/basic_model_framework.ipynb | rahul263-stack/covid19-severity-prediction | f581adb2fccb12d5ab3f3c59ee120f484703edf5 | [
"MIT"
] | null | null | null | 205.304348 | 35,120 | 0.881734 | [
[
[
"import sys\nsys.path.append('../') ",
"_____no_output_____"
],
[
"\n%load_ext autoreload\n%autoreload 2\nimport sklearn\nimport copy\nimport numpy as np\n\nimport seaborn as sns\nsns.set()\n\nimport scipy as sp\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport seaborn as sns\n# from viz import viz\nfrom bokeh.plotting import figure, show, output_notebook, output_file, save\nfrom functions import merge_data\nfrom sklearn.model_selection import RandomizedSearchCV\nimport load_data\n\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.ensemble import RandomForestRegressor\n\nfrom fit_and_predict import fit_and_predict\n",
"_____no_output_____"
]
],
[
[
"\n## Params:",
"_____no_output_____"
]
],
[
[
"aggregate_by_state = False\noutcome_type = 'cases'",
"_____no_output_____"
]
],
[
[
"## Basic Data Visualization",
"_____no_output_____"
]
],
[
[
"# Just something to quickly summarize the number of cases and distributions each day",
"_____no_output_____"
],
[
"# 'deaths' and 'cases' contain the time-series of the outbreak\ndf = load_data.load_county_level(data_dir = '../data/')\ndf = df.sort_values('#Deaths_3/30/2020', ascending=False)\n# outcome_cases = load_data.outcome_cases # most recent day\n# outcome_deaths = load_data.outcome_deaths\nimportant_vars = load_data.important_keys(df)\nvery_important_vars = ['PopulationDensityperSqMile2010',\n# 'MedicareEnrollment,AgedTot2017',\n 'PopulationEstimate2018',\n '#ICU_beds',\n 'MedianAge2010',\n 'Smokers_Percentage',\n 'DiabetesPercentage',\n 'HeartDiseaseMortality',\n '#Hospitals'\n# 'PopMale60-642010',\n# 'PopFmle60-642010',\n# 'PopMale65-742010',\n# 'PopFmle65-742010',\n# 'PopMale75-842010',\n# 'PopFmle75-842010',\n# 'PopMale>842010',\n# 'PopFmle>842010'\n ]",
"loading county level data...\n"
],
[
"def sum_lists(list_of_lists):\n arr = np.array(list(list_of_lists))\n sum_arr = np.sum(arr,0)\n return list(sum_arr)\nif aggregate_by_state:\n # Aggregate by State\n state_deaths_df = df.groupby('StateNameAbbreviation').deaths.agg(sum_lists).to_frame()\n state_cases_df = df.groupby('StateNameAbbreviation').cases.agg(sum_lists).to_frame()\n df = pd.concat([state_cases_df,state_deaths_df],axis =1 )",
"_____no_output_____"
],
[
"# Distribution of the maximum number of cases\n_cases = list(df['cases'])\n\nmax_cases = []\nfor i in range(len(df)):\n max_cases.append(max(_cases[i]))\n\nprint('Number of counties with non-zero cases')\nprint(sum([v >0 for v in max_cases]))\n\n\n# cases truncated below 20 and above 1000 for plot readability\nplt.hist([v for v in max_cases if v > 20 and v < 1000],bins = 100)\n",
"Number of counties with non-zero cases\n2049\n"
],
[
"sum(max_cases)",
"_____no_output_____"
],
[
"print(sum([v > 50 for v in max_cases]))\n",
"272\n"
],
[
"np.quantile(max_cases,.5)",
"_____no_output_____"
],
[
"# Distribution of the maximum number of cases\n_deaths = list(df['deaths'])\n\nmax_deaths = []\nfor i in range(len(df)):\n max_deaths.append(max(_deaths[i]))\n\n \nprint('Number of counties with non-zero deaths')\nprint(sum([v > 0 for v in max_deaths]))\n# plt.hist(max_cases)\n\n# print(sum([v >0 for v in max_cases]))\nplt.hist([v for v in max_deaths if v > 5],bins=30)",
"Number of counties with non-zero deaths\n446\n"
],
[
"sum(max_deaths)",
"_____no_output_____"
],
[
"max(max_deaths)",
"_____no_output_____"
],
[
"np.quantile(max_deaths,.7)",
"_____no_output_____"
]
],
[
[
"### Clean data",
"_____no_output_____"
]
],
[
[
"# Remove counties with zero cases\nmax_cases = [max(v) for v in df['cases']]\ndf['max_cases'] = max_cases\nmax_deaths = [max(v) for v in df['deaths']]\ndf['max_deaths'] = max_deaths\ndf = df[df['max_cases'] > 0]\n",
"_____no_output_____"
]
],
[
[
"\n## Predict data from model:",
"_____no_output_____"
]
],
[
[
"method_keys = []",
"_____no_output_____"
],
[
"# clear predictions\nfor m in method_keys:\n del df[m]\n ",
"_____no_output_____"
],
[
"# target_day = np.array([1])\n# # Trains model on train_df and produces predictions for the final day for test_df and writes prediction\n# # to a new column for test_df \n# # fit_and_predict(df, method='exponential', outcome=outcome_type, mode='eval_mode',target_day=target_day)\n# # fit_and_predict(df,method='shared_exponential', outcome=outcome_type, mode='eval_mode',target_day=target_day)\n# # fit_and_predict(train_df, test_df,'shared_exponential', mode='eval_mode',demographic_vars=important_vars)\n# # fit_and_predict(df,method='shared_exponential', outcome=outcome_type, mode='eval_mode',demographic_vars=very_important_vars,target_day=target_day)\n# fit_and_predict(df, outcome=outcome_type, mode='eval_mode',demographic_vars=[],\n# method='ensemble',target_day=target_day)\n# fit_and_predict(df, outcome=outcome_type, mode='eval_mode',demographic_vars=[],\n# method='ensemble',target_day=np.array([1,2,3]))\n# # fit_and_predict(train_df, test_d f,method='exponential',mode='eval_mode',target_day = np.array([1,2]))\n\n# # Finds the names of all the methods\n# method_keys = [c for c in df if 'predicted' in c]\n# method_keys",
"_____no_output_____"
],
[
"# for days_ahead in [1, 2, 3]:\n# for method in ['exponential', 'shared_exponential', 'ensemble']: \n# fit_and_predict(df, method=method, outcome=outcome_type, mode='eval_mode',target_day=np.array([days_ahead]))\n \n# if method == 'shared_exponential':\n# fit_and_predict(df,method='shared_exponential', \n# outcome=outcome_type, \n# mode='eval_mode',\n# demographic_vars=very_important_vars,\n# target_day=np.array([days_ahead]))\n# method_keys = [c for c in df if 'predicted' in c]\n# geo = ['countyFIPS', 'CountyNamew/StateAbbrev']",
"_____no_output_____"
],
[
"# method_keys = [c for c in df if 'predicted' in c]\n# df_preds = df[method_keys + geo + ['deaths']]\n# df_preds.to_pickle(\"multi_day_6.pkl\")",
"_____no_output_____"
]
],
[
[
"## Ensemble predictions",
"_____no_output_____"
]
],
[
[
"exponential = {'model_type':'exponential'}\nshared_exponential = {'model_type':'shared_exponential'}\ndemographics = {'model_type':'shared_exponential', 'demographic_vars':very_important_vars}\nlinear = {'model_type':'linear'}",
"_____no_output_____"
],
[
"# import fit_and_predict\n# for d in [1, 2, 3]:\n# df = fit_and_predict.fit_and_predict_ensemble(df, \n# target_day=np.array([d]),\n# mode='eval_mode',\n# outcome=outcome_type,\n# output_key=f'predicted_{outcome_type}_ensemble_{d}'\n# )",
"_____no_output_____"
],
[
"import fit_and_predict\nfor d in [1, 3, 5, 7]:\n df = fit_and_predict.fit_and_predict_ensemble(df, \n target_day=np.array(range(1, d+1)),\n mode='eval_mode',\n outcome=outcome_type,\n methods=[exponential, \n shared_exponential,\n demographics,\n linear\n ],\n output_key=f'predicted_{outcome_type}_ensemble_{d}_with_exponential'\n )",
"Warning: PerfectSeparationError detected, adding one death to last day\nWarning: PerfectSeparationError detected, adding one death to last day\n"
],
[
"method_keys = [c for c in df if 'predicted' in c]",
"_____no_output_____"
],
[
"# df = fit_and_predict.fit_and_predict_ensemble(df)",
"_____no_output_____"
],
[
"method_keys",
"_____no_output_____"
]
],
[
[
"## Evaluate and visualize models",
"_____no_output_____"
],
[
"### Compute MSE and log MSE on relevant cases",
"_____no_output_____"
]
],
[
[
"# TODO: add average rank as metric",
"_____no_output_____"
],
[
"# Computes the mse in log space and non-log space for all columns",
"_____no_output_____"
],
[
"def l1(arr1,arr2,norm=True):\n \"\"\"\n arr2 ground truth\n arr1 predictions\n \"\"\"\n if norm:\n sum_percent_dif = 0\n for i in range(len(arr1)):\n sum_percent_dif += np.abs(arr2[i]-arr1[i])/arr1[i]\n return sum_percent_dif/len(arr1)\n \n return sum([np.abs(a1-a2) for (a1,a2) in zip(arr1,arr2)])/len(arr1)\nmse = sklearn.metrics.mean_squared_error\n# Only evaluate points that exceed this number of deaths \n# lower_threshold, upper_threshold = 10, 100000\nlower_threshold, upper_threshold = 10, np.inf",
"_____no_output_____"
],
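[
"# Added sketch: sanity-check the metrics on toy arrays. With arr1 as ground\n# truth, norm=True gives a mean relative error and norm=False a plain MAE:\n# here (|11-10|/10 + |18-20|/20)/2 = 0.1 and (1 + 2)/2 = 1.5.\nprint(l1([10, 20], [11, 18]), l1([10, 20], [11, 18], norm=False))",
"_____no_output_____"
],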
[
"\n# Log scaled\noutcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])\nfor key in method_keys:\n preds = [np.log(p[-1] + 1) for p in df[key][(outcome > lower_threshold)]] # * (outcome < upper_threshold)]]\n print('Log scale MSE for '+key)\n print(mse(np.log(outcome[(outcome > lower_threshold) * (outcome < upper_threshold)] + 1),preds))",
"Log scale MSE for predicted_cases_ensemble_1\n0.03342253336708497\nLog scale MSE for predicted_cases_ensemble_3\n0.18469279606483388\n"
],
[
"# Log scaled\noutcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])\nfor key in method_keys:\n preds = [np.log(p[-1] + 1) for p in df[key][outcome > lower_threshold]]\n print('Log scale l1 for '+key)\n print(l1(np.log(outcome[outcome > lower_threshold] + 1),preds))",
"Log scale l1 for predicted_cases_ensemble_1\n0.03955004997773341\nLog scale l1 for predicted_cases_ensemble_3\n0.10221902257044516\n"
],
[
"# No log scale\noutcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])\nfor key in method_keys:\n preds = [p[-1] for p in df[key][outcome > lower_threshold]]\n print('Raw MSE for '+key)\n print(mse(outcome[outcome > lower_threshold],preds))",
"Raw MSE for predicted_cases_ensemble_1\n2503.339646561852\nRaw MSE for predicted_cases_ensemble_3\n56967.847470386354\n"
],
[
"# No log scale\noutcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])\nfor key in method_keys:\n preds = [p[-1] for p in df[key][outcome > lower_threshold]]\n print('Raw l1 for '+key)\n print(l1(outcome[outcome > lower_threshold],preds))",
"Raw l1 for predicted_cases_ensemble_1\n0.14138076623225437\nRaw l1 for predicted_cases_ensemble_3\n0.46245918957314697\n"
],
[
"# No log scale\noutcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])\nfor key in method_keys:\n preds = [p[-1] for p in df[key][outcome > lower_threshold]]\n print('Raw l1 for '+key)\n print(l1(outcome[outcome > lower_threshold],preds,norm=False))",
"Raw l1 for predicted_cases_ensemble_1\n15.702192279696032\nRaw l1 for predicted_cases_ensemble_3\n56.27341453693248\n"
]
],
[
[
"### Plot residuals",
"_____no_output_____"
]
],
[
[
"# TODO: Create bounds automatically, create a plot function and call it instead of copying code, figure out way\n# to plot more than two things at once cleanly\n\n# Creates residual plots log scaled and raw\n# We only look at cases with number of deaths greater than 5",
"_____no_output_____"
],
[
"def method_name_to_pretty_name(key):\n # TODO: hacky, fix\n words = key.split('_')\n words2 = []\n for w in words:\n if not w.isnumeric():\n words2.append(w)\n else:\n num = w\n \n model_name = ' '.join(words2[2:])\n# model_name = 'model'\n if num == '1':\n model_name += ' predicting 1 day ahead'\n else:\n model_name += ' predicting ' +w+' days ahead'\n \n return model_name",
"_____no_output_____"
],
[
"# Make log plots:\nbounds = [1.5, 7]\noutcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])\nfor key in method_keys:\n preds = [np.log(p[-1]) for p in df[key][outcome > 5]]\n plt.scatter(np.log(outcome[outcome > 5]),preds,label=method_name_to_pretty_name(key))\n plt.xlabel('actual '+outcome_type)\n plt.ylabel('predicted '+outcome_type)\n plt.xlim(bounds)\n plt.ylim(bounds)\n plt.legend()\n\n plt.plot(bounds, bounds, ls=\"--\", c=\".3\")\n plt.show()",
"_____no_output_____"
],
[
"# Make log plots zoomed in for the counties that have a fewer number of deaths\nbounds = [1.5, 4]\noutcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])\nfor key in method_keys:\n preds = [np.log(p[-1]) for p in df[key][outcome > 5]]\n plt.scatter(np.log(outcome[outcome > 5]),preds,label=method_name_to_pretty_name(key))\n\n plt.xlabel('actual '+outcome_type)\n plt.ylabel('predicted '+outcome_type)\n plt.xlim(bounds)\n plt.ylim(bounds)\n plt.legend()\n\n plt.plot(bounds, bounds, ls=\"--\", c=\".3\")\n plt.show()",
"_____no_output_____"
],
[
"# Make non-log plots zoomed in for the counties that have a fewer number of deaths# We set bounds \nbounds = [10,400]\noutcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])\nfor key in method_keys:\n preds = [p[-1] for p in df[key][outcome > 5]]\n plt.scatter(outcome[outcome > 5],preds,label=method_name_to_pretty_name(key))\n\n plt.xlabel('actual '+outcome_type)\n plt.ylabel('predicted '+outcome_type)\n plt.xlim(bounds)\n plt.ylim(bounds)\n plt.legend()\n\n plt.plot(bounds, bounds, ls=\"--\", c=\".3\")\n plt.show()",
"_____no_output_____"
]
],
[
[
"### Graph Visualizations",
"_____no_output_____"
]
],
[
[
"# Here we visualize predictions on a per county level.\n# The blue lines are the true number of deaths, and the dots are our predictions for each model for those days.",
"_____no_output_____"
],
[
"def plot_prediction(row):\n \"\"\"\n Plots model predictions vs actual\n row: dataframe row\n window: autoregressive window size\n \"\"\"\n gold_key = outcome_type\n for i,val in enumerate(row[gold_key]):\n if val > 0:\n start_point = i\n break\n# plt.plot(row[gold_key][start_point:], label=gold_key) \n if len(row[gold_key][start_point:]) < 3:\n return\n sns.lineplot(list(range(len(row[gold_key][start_point:]))),row[gold_key][start_point:], label=gold_key)\n \n \n\n for key in method_keys:\n preds = row[key]\n\n sns.scatterplot(list(range(len(row[gold_key][start_point:])))[-len(preds):],preds,label=method_name_to_pretty_name(key))\n \n# plt.scatter(list(range(len(row[gold_key][start_point:])))[-len(preds):],preds,label=key)\n \n# plt.legend()\n# plt.show()\n# sns.legend()\n plt.title(row['CountyName']+' in '+row['StateNameAbbreviation'])\n plt.ylabel(outcome_type)\n plt.xlabel('Days since first death')\n plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n plt.figure(dpi=500) \n plt.show()\n\n ",
"_____no_output_____"
],
[
"# feature_vals = {\n# 'PopulationDensityperSqMile2010' : 1.1525491065255939e-05,\n# \"MedicareEnrollment,AgedTot2017\" : -2.119520577282583e-06,\n# 'PopulationEstimate2018' : 2.8898343032154275e-07,\n# '#ICU_beds' : -0.000647030727828718,\n# 'MedianAge2010' : 0.05032666600339253,\n# 'Smokers_Percentage' : -0.013410742818946319,\n# 'DiabetesPercentage' : 0.04395318355581005,\n# 'HeartDiseaseMortality' : 0.0015473771787186525,\n# '#Hospitals': 0.019248102357644396,\n# 'log(deaths)' : 0.8805209010821442,\n# 'bias' : -1.871552103871495\n# }",
"_____no_output_____"
],
[
"df = df.sort_values(by='max_deaths',ascending=False)\nfor i in range(len(df)):\n row = df.iloc[i]\n # If number of deaths greater than 10\n if max(row['deaths']) > 10:\n print(row['CountyName']+' in '+row['StateNameAbbreviation'])\n plot_prediction(row)\n for v in very_important_vars:\n print(v+ ': '+str(row[v])) #+';\\t contrib: '+ str(feature_vals[v]*float(row[v])))\n print('\\n')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d051a74c4fdd43eb6ead00ada54a929fa395717c | 979,515 | ipynb | Jupyter Notebook | NT02-Bahia (NRS Sul).ipynb | pedreirajr/GeoCombatCOVID19 | 3f6c66c8553333403f4fcef949c924a0bac0cff6 | [
"MIT"
] | null | null | null | NT02-Bahia (NRS Sul).ipynb | pedreirajr/GeoCombatCOVID19 | 3f6c66c8553333403f4fcef949c924a0bac0cff6 | [
"MIT"
] | null | null | null | NT02-Bahia (NRS Sul).ipynb | pedreirajr/GeoCombatCOVID19 | 3f6c66c8553333403f4fcef949c924a0bac0cff6 | [
"MIT"
] | null | null | null | 76.530588 | 222,196 | 0.727864 | [
[
[
"# 0) Carregamento as bibliotecas",
"_____no_output_____"
]
],
[
[
"# Mostra múltiplos resultados em uma única saída:\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"\nfrom IPython.display import Math",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport geopandas as gpd\nimport os\nimport pysal\nfrom pyproj import CRS\nfrom shapely.geometry import Point, MultiPoint, Polygon, mapping\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport pickle",
"C:\\Users\\Jorge\\Anaconda3\\lib\\site-packages\\pysal\\explore\\segregation\\network\\network.py:16: UserWarning: You need pandana and urbanaccess to work with segregation's network module\nYou can install them with `pip install urbanaccess pandana` or `conda install -c udst pandana urbanaccess`\n \"You need pandana and urbanaccess to work with segregation's network module\\n\"\nC:\\Users\\Jorge\\Anaconda3\\lib\\site-packages\\pysal\\model\\spvcm\\abstracts.py:10: UserWarning: The `dill` module is required to use the sqlite backend fully.\n from .sqlite import head_to_sql, start_sql\n"
]
],
[
[
"# 1) Leitura dos Banco de Dados:",
"_____no_output_____"
],
[
"**(a) Dados SIH 2019:**",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"NT02 - Bahia/SIH/sih_17-19.csv\")\n#pickle.dump(df, open('sih_2019', 'wb'))",
"_____no_output_____"
],
[
"#df = pickle.load(open('sih_2019','rb'))\ndf.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2607955 entries, 0 to 2607954\nData columns (total 11 columns):\n # Column Dtype \n--- ------ ----- \n 0 N_AIH int64 \n 1 MES_CMPT int64 \n 2 DT_INTER int64 \n 3 DT_SAIDA int64 \n 4 MUNIC_RES int64 \n 5 CEP int64 \n 6 MUNIC_MOV int64 \n 7 DIAG_PRINC object\n 8 PROC_REA int64 \n 9 COMPLEX int64 \n 10 QT_DIARIAS int64 \ndtypes: int64(10), object(1)\nmemory usage: 218.9+ MB\n"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.rename(columns={'MES_CMPT':'Mes','DT_INTER':'DT_Inter','DT_SAIDA':'DT_Saida','MUNIC_RES':'Cod_Municipio_Res',\n 'MUNIC_MOV':'Cod_Municipio','DIAG_PRINC':'Diagnostico','PROC_REA':'Procedimento','COMPLEX':'Complexidade',\n 'QT_DIARIAS':'Quantidade Diarias'}, inplace=True)",
"_____no_output_____"
],
[
"df = df.astype({'Cod_Municipio_Res': 'str','Cod_Municipio':'str','DT_Inter':'str','DT_Saida':'str',\n 'Complexidade':'str','Procedimento':'str'})",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2607955 entries, 0 to 2607954\nData columns (total 11 columns):\n # Column Dtype \n--- ------ ----- \n 0 N_AIH int64 \n 1 Mes int64 \n 2 DT_Inter object\n 3 DT_Saida object\n 4 Cod_Municipio_Res object\n 5 CEP int64 \n 6 Cod_Municipio object\n 7 Diagnostico object\n 8 Procedimento object\n 9 Complexidade object\n 10 Quantidade Diarias int64 \ndtypes: int64(4), object(7)\nmemory usage: 218.9+ MB\n"
],
[
"df['Complexidade'] = df['Complexidade'].replace(['2','3'],['Média','Alta'])",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"* **Formatação para datas:**",
"_____no_output_____"
]
],
[
[
"from datetime import datetime",
"_____no_output_____"
],
[
"df['DT_Inter'] = df['DT_Inter'].apply(lambda x: pd.to_datetime(x, format = '%Y%m%d'))",
"_____no_output_____"
],
[
"df['DT_Saida'] = df['DT_Saida'].apply(lambda x: pd.to_datetime(x, format = '%Y%m%d'))",
"_____no_output_____"
],
[
"pickle.dump(df, open('sih', 'wb'))",
"_____no_output_____"
],
[
"df = pickle.load(open('sih','rb'))",
"_____no_output_____"
],
[
"df2 = df.drop_duplicates(subset =\"N_AIH\",keep = 'last')",
"_____no_output_____"
],
[
"len(df2) #Total de internações em hospitais baianos",
"_____no_output_____"
],
[
"len(df2[df2['Cod_Municipio_Res'].str.startswith('29')]) # Internações em hospitais baianos de indivíduos que moram na bahia",
"_____no_output_____"
],
[
"2550223/2579967",
"_____no_output_____"
]
],
[
[
"**(b) Shape municípios:**",
"_____no_output_____"
]
],
[
[
"mun = gpd.read_file(\"NT02 - Bahia/mun_br.shp\")\nmun = mun.to_crs(CRS(\"WGS84\"));\nmun.crs",
"_____no_output_____"
],
[
"mun.info()",
"<class 'geopandas.geodataframe.GeoDataFrame'>\nRangeIndex: 5570 entries, 0 to 5569\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 NOME 5570 non-null object \n 1 NOMEABREV 5570 non-null object \n 2 GEOMETRIAA 5570 non-null object \n 3 GEOCODIGO 5570 non-null object \n 4 ANODEREFER 5570 non-null int64 \n 5 LoGVADBAgr 5547 non-null float64 \n 6 LogVADBInd 5550 non-null float64 \n 7 geometry 5570 non-null geometry\ndtypes: float64(2), geometry(1), int64(1), object(4)\nmemory usage: 348.2+ KB\n"
],
[
"mun.head()",
"_____no_output_____"
],
[
"mun.plot();\nplt.show();",
"_____no_output_____"
],
[
"mun_ba = mun[mun['GEOCODIGO'].str.startswith('29')].copy()",
"_____no_output_____"
],
[
"mun_ba.head()",
"_____no_output_____"
],
[
"mun_ba[mun_ba['GEOCODIGO'].str.startswith('290160')]",
"_____no_output_____"
],
[
"mun_ba[mun_ba['NOME']=='Sítio do Quinto']\nmun_ba[mun_ba['NOME']=='Antas']",
"_____no_output_____"
],
[
"mun_ba.plot();\nplt.show();",
"_____no_output_____"
]
],
[
[
"**Adicionando a população de 2019 (IBGE):**",
"_____no_output_____"
]
],
[
[
"pop = gpd.read_file('NT02 - Bahia/IBGE - Estimativa popul 2019.shp')",
"_____no_output_____"
],
[
"pop.head()",
"_____no_output_____"
],
[
"mun_ba['Pop'] = 0\nfor i, row in mun_ba.iterrows():\n mun_ba.loc[i,'Pop'] = pop[pop['Codigo']==row['GEOCODIGO']]['p_pop_2019'].values[0]",
"_____no_output_____"
]
],
[
[
"**Adicionando Casos até 24/04:**",
"_____no_output_____"
]
],
[
[
"casos = gpd.read_file('NT02 - Bahia/Evolução/data_shape_ba_mod(1).shp')",
"_____no_output_____"
],
[
"casos.info()",
"<class 'geopandas.geodataframe.GeoDataFrame'>\nRangeIndex: 417 entries, 0 to 416\nData columns (total 53 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 MUNICIPIO 417 non-null object \n 1 Codigo 417 non-null object \n 2 2020-03-06 1 non-null float64 \n 3 2020-03-07 1 non-null float64 \n 4 2020-03-08 1 non-null float64 \n 5 2020-03-09 1 non-null float64 \n 6 2020-03-10 1 non-null float64 \n 7 2020-03-11 1 non-null float64 \n 8 2020-03-12 1 non-null float64 \n 9 2020-03-13 2 non-null float64 \n 10 2020-03-14 2 non-null float64 \n 11 2020-03-15 2 non-null float64 \n 12 2020-03-16 3 non-null float64 \n 13 2020-03-17 4 non-null float64 \n 14 2020-03-18 4 non-null float64 \n 15 2020-03-19 6 non-null float64 \n 16 2020-03-20 7 non-null float64 \n 17 2020-03-21 7 non-null float64 \n 18 2020-03-22 9 non-null float64 \n 19 2020-03-23 12 non-null float64 \n 20 2020-03-24 14 non-null float64 \n 21 2020-03-25 17 non-null float64 \n 22 2020-03-26 18 non-null float64 \n 23 2020-03-27 19 non-null float64 \n 24 2020-03-28 20 non-null float64 \n 25 2020-03-29 23 non-null float64 \n 26 2020-03-30 24 non-null float64 \n 27 2020-03-31 30 non-null float64 \n 28 2020-04-01 32 non-null float64 \n 29 2020-04-02 34 non-null float64 \n 30 2020-04-03 35 non-null float64 \n 31 2020-04-04 41 non-null float64 \n 32 2020-04-05 47 non-null float64 \n 33 2020-04-06 51 non-null float64 \n 34 2020-04-07 51 non-null float64 \n 35 2020-04-08 59 non-null float64 \n 36 2020-04-09 63 non-null float64 \n 37 2020-04-10 67 non-null float64 \n 38 2020-04-11 70 non-null float64 \n 39 2020-04-12 71 non-null float64 \n 40 2020-04-13 74 non-null float64 \n 41 2020-04-14 76 non-null float64 \n 42 2020-04-15 81 non-null float64 \n 43 2020-04-16 85 non-null float64 \n 44 2020-04-17 86 non-null float64 \n 45 2020-04-18 90 non-null float64 \n 46 2020-04-19 92 non-null float64 \n 47 2020-04-20 98 non-null float64 \n 48 2020-04-21 101 non-null float64 \n 49 2020-04-22 105 non-null float64 \n 50 2020-04-23 109 non-null float64 \n 51 2020-04-24 417 non-null int64 \n 52 geometry 417 non-null geometry\ndtypes: float64(49), geometry(1), int64(1), object(2)\nmemory usage: 172.8+ KB\n"
],
[
"mun_ba['c20200424'] = 0\nfor i, row in mun_ba.iterrows():\n mun_ba.loc[i,'c20200424'] = casos[casos['Codigo']==row['GEOCODIGO']]['2020-04-24'].values[0]",
"_____no_output_____"
],
[
"mun_ba['c20200424'] = mun_ba['c20200424'].fillna(0)",
"_____no_output_____"
]
],
[
[
"**Calculando prevalências (com base em 24/04):**",
"_____no_output_____"
]
],
[
[
"mun_ba['prev'] = (mun_ba['c20200424']/mun_ba['Pop'])*100000",
"_____no_output_____"
],
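[
"# Added note (sketch): 'prev' is prevalence in cases per 100,000 inhabitants\n# as of 04/24, i.e. prev_i = 100000 * c20200424_i / Pop_i.\nmun_ba[['NOME', 'Pop', 'c20200424', 'prev']].head()",
"_____no_output_____"
],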
[
"mun_ba.sort_values(by='prev', ascending = False)",
"_____no_output_____"
]
],
[
[
"# (2) Internações nos Hospitais BA",
"_____no_output_____"
],
[
"**(a) Quantidade de indivíduos:**",
"_____no_output_____"
]
],
[
[
"mun_ba['Qtd_Tot'] = 0\nmun_ba['Qtd_Fora'] = 0\nmun_ba['Qtd_CplxM'] = 0\nmun_ba['Qtd_CplxA'] = 0\nmun_ba['Dia_Tot'] = 0\nmun_ba['Dia_CplxM'] = 0\nmun_ba['Dia_CplxA'] = 0",
"_____no_output_____"
]
],
[
[
"**Período de 01/07/2018 a 30/06/2019:**",
"_____no_output_____"
]
],
[
[
"from datetime import date",
"_____no_output_____"
],
[
"per = pd.date_range(date(2018,7,1), periods=365).tolist()",
"_____no_output_____"
],
[
"per[0]\nper[-1]",
"_____no_output_____"
],
[
"# Entraram em alguma data até 30/06/2019 e saíram entre 01/07/2018 até 30/06/2019\ndf_BA = df2[(df2['DT_Inter'] <= per[-1]) & (df2['DT_Saida'] >= per[0]) & (df2['DT_Saida'] <= per[-1])]",
"_____no_output_____"
],
[
"#df_BA = df2[(df2['Cod_Municipio'].str.startswith('29')) & (df2['Cod_Municipio_Res'].str.startswith('29'))].copy()",
"_____no_output_____"
],
[
"df_BA.head()",
"_____no_output_____"
],
[
"for i, row in mun_ba.iterrows():\n mun_ba.loc[i,'Qtd_Tot'] = len(df_BA[df_BA['Cod_Municipio']==row['GEOCODIGO'][:-1]])\n mun_ba.loc[i,'Qtd_Fora'] = len(df_BA[(df_BA['Cod_Municipio']==row['GEOCODIGO'][:-1]) & (df2['Cod_Municipio_Res']!=row['GEOCODIGO'][:-1])])\n mun_ba.loc[i,'Qtd_CplxM'] = len(df_BA[(df_BA['Cod_Municipio']==row['GEOCODIGO'][:-1]) & \n (df_BA['Complexidade']=='Média')])\n mun_ba.loc[i,'Qtd_CplxA'] = len(df_BA[(df_BA['Cod_Municipio']==row['GEOCODIGO'][:-1]) & \n (df_BA['Complexidade']=='Alta')])\n mun_ba.loc[i,'Dia_Tot'] = df_BA[df_BA['Cod_Municipio']==row['GEOCODIGO'][:-1]]['Quantidade Diarias'].sum()\n mun_ba.loc[i,'Dia_CplxM'] = df_BA[(df_BA['Cod_Municipio']==row['GEOCODIGO'][:-1]) & \n (df_BA['Complexidade']=='Média')]['Quantidade Diarias'].sum()\n mun_ba.loc[i,'Dia_CplxA'] = df_BA[(df_BA['Cod_Municipio']==row['GEOCODIGO'][:-1]) & \n (df_BA['Complexidade']=='Alta')]['Quantidade Diarias'].sum()",
"C:\\Users\\Jorge\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:3: UserWarning: Boolean Series key will be reindexed to match DataFrame index.\n This is separate from the ipykernel package so we can avoid doing imports until\n"
],
[
"fig, ax = plt.subplots(figsize=(15,15));\nmun_ba.plot(ax = ax, column = 'Qtd_Tot');",
"_____no_output_____"
],
[
"mun_ba.to_file('NT02 - Bahia/intern_ba.shp')",
"_____no_output_____"
],
[
"mun_ba = gpd.read_file('NT02 - Bahia/intern_ba.shp')",
"_____no_output_____"
]
],
[
[
"# (3) Internações por dia em cada município",
"_____no_output_____"
]
],
[
[
"from datetime import date",
"_____no_output_____"
],
[
"datas = pd.date_range(date(2018,7,1), periods=365).tolist()",
"_____no_output_____"
],
[
"lst_mun_ba = list(mun_ba['GEOCODIGO'].apply(lambda x: x[:-1]).values)",
"_____no_output_____"
],
[
"datas[0]\ndatas[-1]",
"_____no_output_____"
],
[
"# Entraram em alguma data até 30/06/2019 e saíram entre 01/07/2018 até 30/06/2019\ndf2[(df2['DT_Inter'] <= datas[-1]) & (df2['DT_Saida'] >= datas[0]) & (df2['DT_Saida'] <= datas[-1]) & (df2['Cod_Municipio'] == '292740')]",
"_____no_output_____"
],
[
"ssa = []\nfor dt in datas:\n ssa.append(len(df2[(df2['DT_Inter'] <= dt) & (df2['DT_Saida'] >= dt) & (df2['Cod_Municipio'] == '292740')]))",
"_____no_output_____"
],
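[
"# Added note (sketch): for each date dt the loop above counts stays whose\n# interval [DT_Inter, DT_Saida] covers dt, i.e. the daily hospital census of\n# Salvador (municipality code 292740).\nlen(ssa)",
"_____no_output_____"
],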
[
"pd_ssa = pd.DataFrame(zip(ssa,datas), columns = ['intern', 'data'])",
"_____no_output_____"
],
[
"pd_ssa['datas'] = pd.to_datetime(pd_ssa['data'])",
"_____no_output_____"
],
[
"pd_ssa['intern'].plot(figsize = (20,10), style = 'o--', markersize = 5);\nplt.ylim(0,max(pd_ssa['intern'])+1000);\nplt.xlim(-1,365);\nplt.show();",
"_____no_output_____"
],
[
"max(ssa)\nmin(ssa)",
"_____no_output_____"
]
],
[
[
"* **Série temporal para todos os municípios:**",
"_____no_output_____"
]
],
[
[
"ba_int = pd.DataFrame(index=datas, columns=mun_ba['GEOCODIGO'].apply(lambda x: x[:-1]).values)",
"_____no_output_____"
],
[
"list_mun = list(mun_ba['GEOCODIGO'].apply(lambda x: x[:-1]).values)\nfor i, row in ba_int.iterrows():\n for mun in list_mun:\n row[mun] = len(df2[(df2['DT_Inter'] <= i) & (df2['DT_Saida'] >= i) & (df2['Cod_Municipio'] == mun)])",
"_____no_output_____"
],
[
"ba_int",
"_____no_output_____"
],
[
"ba_int.to_excel('NT02 - Bahia/ba_int_dia.xlsx')",
"_____no_output_____"
]
],
[
[
"# (4) Padrão Origem-Destino das Internações ",
"_____no_output_____"
]
],
[
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2607955 entries, 0 to 2607954\nData columns (total 11 columns):\n # Column Dtype \n--- ------ ----- \n 0 N_AIH int64 \n 1 Mes int64 \n 2 DT_Inter datetime64[ns]\n 3 DT_Saida datetime64[ns]\n 4 Cod_Municipio_Res object \n 5 CEP int64 \n 6 Cod_Municipio object \n 7 Diagnostico object \n 8 Procedimento object \n 9 Complexidade object \n 10 Quantidade Diarias int64 \ndtypes: datetime64[ns](2), int64(4), object(5)\nmemory usage: 218.9+ MB\n"
],
[
"per = pd.date_range(date(2018,7,1), periods=365).tolist()",
"_____no_output_____"
],
[
"per[0]\nper[-1]",
"_____no_output_____"
],
[
"# Entraram em alguma data até 30/06/2019 e saíram entre 01/07/2018 até 30/06/2019\ndf_BA = df2[(df2['DT_Inter'] <= per[-1]) & (df2['DT_Saida'] >= per[0]) & (df2['DT_Saida'] <= per[-1]) & (df2['Cod_Municipio_Res'].str.startswith('29'))]",
"_____no_output_____"
],
[
"#df_BA = df2[(df2['Cod_Municipio'].str.startswith('29')) & (df2['Cod_Municipio_Res'].str.startswith('29'))].copy()",
"_____no_output_____"
],
[
"df_BA['Quantidade'] = 1",
"C:\\Users\\Jorge\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"df_BA.groupby(['Cod_Municipio_Res','Cod_Municipio']).sum()",
"_____no_output_____"
],
[
"df_BA['Quantidade'].sum()",
"_____no_output_____"
],
[
"tab = df_BA.groupby(['Cod_Municipio_Res','Cod_Municipio']).sum()",
"_____no_output_____"
],
[
"tab_OD = pd.DataFrame(columns = ['ORI','DES','Qtd','Dia','Qtd_Dia'])",
"_____no_output_____"
],
[
"tab_OD",
"_____no_output_____"
],
[
"tab.index[0][1]",
"_____no_output_____"
],
[
"for i in np.arange(len(tab)):\n ORI = tab.index[i][0]\n DES = tab.index[i][1]\n Qtd = tab.loc[tab.index[i],'Quantidade']\n Dia = tab.loc[tab.index[i],'Quantidade Diarias']\n Qtd_Dia = tab.loc[tab.index[i],'Quantidade']*tab.loc[tab.index[i],'Quantidade Diarias']\n tab_OD.loc[i] = [ORI, DES, Qtd, Dia, Qtd_Dia]",
"_____no_output_____"
],
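[
"# Added sketch (equivalent and more idiomatic): build the same OD table from\n# the grouped frame with reset_index instead of the explicit loop above.\ntab_OD_alt = tab.reset_index()[['Cod_Municipio_Res', 'Cod_Municipio', 'Quantidade', 'Quantidade Diarias']]\ntab_OD_alt.columns = ['ORI', 'DES', 'Qtd', 'Dia']\ntab_OD_alt['Qtd_Dia'] = tab_OD_alt['Qtd'] * tab_OD_alt['Dia']\ntab_OD_alt.head()",
"_____no_output_____"
],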
[
"tab_OD",
"_____no_output_____"
],
[
"tab_OD['ORI_GC'] = 0\ntab_OD['DES_GC'] = 0",
"_____no_output_____"
],
[
"for i, el in enumerate(zip(tab_OD['ORI'],tab_OD['DES'])):\n tab_OD.loc[i,'ORI_GC'] = mun_ba[mun_ba['GEOCODIGO'].str.startswith(str(el[0]))]['GEOCODIGO'].values[0]\n tab_OD.loc[i,'DES_GC'] = mun_ba[mun_ba['GEOCODIGO'].str.startswith(str(el[1]))]['GEOCODIGO'].values[0]",
"_____no_output_____"
],
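[
"# --- Added sketch (not in the original notebook): mapping the 6-digit codes to\n# full GEOCODIGO values with a dict lookup instead of the row-by-row scan above.\ngeo_map = dict(zip(mun_ba['GEOCODIGO'].str[:-1], mun_ba['GEOCODIGO']))\ntab_OD['ORI_GC_alt'] = tab_OD['ORI'].astype(str).map(geo_map)\ntab_OD['DES_GC_alt'] = tab_OD['DES'].astype(str).map(geo_map)",
"_____no_output_____"
],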
[
"tab_OD['Qtd'] = pd.to_numeric(tab_OD['Qtd'])\ntab_OD['Dia'] = pd.to_numeric(tab_OD['Dia'])\ntab_OD['Qtd_Dia'] = pd.to_numeric(tab_OD['Qtd_Dia'])\ntab_OD.head()\ntab_OD.info()",
"_____no_output_____"
],
[
"tab_OD.to_excel('NT02 - Bahia/tab_OD.xlsx', index = False)",
"_____no_output_____"
],
[
"tab_OD = pd.read_excel('NT02 - Bahia/tab_OD.xlsx')",
"_____no_output_____"
],
[
"tab_OD_dif = tab_OD[tab_OD['ORI'] != tab_OD['DES']].copy()",
"_____no_output_____"
],
[
"tab_OD_dif.to_excel('NT02 - Bahia/tab_OD_dif.xlsx', index = False)",
"_____no_output_____"
],
[
"tab_OD_dif.sort_values(by='Qtd', ascending = False).head(20)[['ORI_GC','DES_GC','Qtd','Dia','Qtd_Dia']]",
"_____no_output_____"
]
],
[
[
"### (4.1) Principais centros de internação hospitalar (origens mais demandadas)",
"_____no_output_____"
]
],
[
[
"tab_OD.groupby(['DES_GC']).sum().sort_values(by='Qtd', ascending = False)['Qtd'].sum()",
"_____no_output_____"
],
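[
"# --- Added sketch (not in the original notebook): admissions per destination and\n# each destination's share of the state total in a single frame.\npor_destino = tab_OD.groupby('DES_GC')['Qtd'].sum().sort_values(ascending=False)\npd.concat([por_destino, (por_destino/por_destino.sum()).rename('share')], axis=1).head(20)",
"_____no_output_____"
],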
[
"tab_OD.groupby(['DES_GC']).sum().sort_values(by='Qtd', ascending = False)[:20]",
"_____no_output_____"
]
],
[
[
"Proporção:",
"_____no_output_____"
]
],
[
[
"tab_OD.groupby(['DES_GC']).sum().sort_values(by='Qtd', ascending = False)[:50]['Qtd']/tab_OD.groupby(['DES_GC']).sum().sort_values(by='Qtd', ascending = False)['Qtd'].sum()\n(tab_OD.groupby(['DES_GC']).sum().sort_values(by='Qtd', ascending = False)[:50]['Qtd']/tab_OD.groupby(['DES_GC']).sum().sort_values(by='Qtd', ascending = False)['Qtd'].sum()).sum()",
"_____no_output_____"
]
],
[
[
"### (4.2) Municípios mais atendidos pelos principais centros de internação hospitalar",
"_____no_output_____"
]
],
[
[
"mun_ba.loc[mun_ba['GEOCODIGO'].isin(tab_OD['DES_GC'].astype(str))][['NOME','NOMEABREV','geometry']]",
"_____no_output_____"
],
[
"idx = list(tab_OD.groupby(['DES_GC']).sum().sort_values(by='Qtd', ascending = False)[:10]['Qtd'].index)",
"_____no_output_____"
]
],
[
[
"20 municípios mais atendidos dos 10 maiores centros de atendimento",
"_____no_output_____"
]
],
[
[
"for k in np.arange(len(idx)):\n mun_ba[mun_ba['GEOCODIGO']==idx[k]]['NOME'].values[0] #Nome\n tab_OD[tab_OD['DES_GC']==idx[k]].sort_values(by='Qtd', ascending = False)['Qtd'].sum() #Quantidade de internações\n tab_OD[tab_OD['DES_GC']==idx[k]].sort_values(by='Qtd', ascending = False)['Qtd'][:20].sum() \\\n /tab_OD[tab_OD['DES_GC']==idx[k]].sort_values(by='Qtd', ascending = False)['Qtd'].sum() # Percentual de internações que estes 20 representam\n ",
"_____no_output_____"
],
[
"mun_ba[mun_ba['GEOCODIGO']==idx[0]]['NOME']",
"_____no_output_____"
],
[
"tab_OD[tab_OD['DES_GC']==idx[0]].sort_values(by='Qtd', ascending = False)['ORI_GC'][:20].values",
"_____no_output_____"
],
[
"atend = []\nfor k in np.arange(len(idx)):\n idx_mun = tab_OD[tab_OD['DES_GC']==idx[k]].sort_values(by='Qtd', ascending = False)['ORI_GC'][:20].values\n int_mun = tab_OD[tab_OD['DES_GC']==idx[k]].sort_values(by='Qtd', ascending = False)['Qtd'][:20].values\n nome_mun = list(map(lambda x: mun_ba[mun_ba['GEOCODIGO']==x]['NOME'].values[0], idx_mun))\n #pd.DataFrame(zip(idx_mun,nome_mun,int_mun), columns = ['Geocódigo','Município','Internações'])\n for i in idx_mun:\n atend.append(i)",
"_____no_output_____"
],
[
"len(atend)\nlen(list(set(atend)))",
"_____no_output_____"
],
[
"atend = list(set(atend))",
"_____no_output_____"
],
[
"mun_ba[mun_ba['GEOCODIGO'].isin(atend)]['Pop'].sum()\nmun_ba[mun_ba['GEOCODIGO'].isin(atend)]['Pop'].sum()/mun_ba['Pop'].sum()",
"_____no_output_____"
]
],
[
[
"### (4.3) Análise da Pandemia no NRS Sul:",
"_____no_output_____"
],
[
"**Núcleos Regionais de Saúde:**",
"_____no_output_____"
]
],
[
[
"nrs = gpd.read_file('NT02 - Bahia/Oferta Hospitalar/SESAB - NUCLEO REG SAUDE - 20190514 - SIRGAS2000.shp')",
"_____no_output_____"
],
[
"nrs = nrs.to_crs(CRS(\"WGS84\"));\nnrs.crs",
"_____no_output_____"
],
[
"mun_ba.crs == nrs.crs",
"_____no_output_____"
],
[
"nrs",
"_____no_output_____"
],
[
"mun_ba['NRS'] = 0\nfor i in list(nrs.index):\n mun_ba.loc[mun_ba['geometry'].apply(lambda x: x.centroid.within(nrs.loc[i,'geometry'])),'NRS'] = nrs.loc[i,'NM_NRS']",
"_____no_output_____"
],
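[
"# --- Added sketch (not in the original notebook): the centroid-in-polygon loop\n# above can also be written as a spatial join (geopandas >= 0.10 uses predicate=;\n# older versions use op=). The result should match mun_ba['NRS'].\ncent = mun_ba.copy()\ncent['geometry'] = cent['geometry'].centroid\njoined = gpd.sjoin(cent, nrs[['NM_NRS', 'geometry']], how='left', predicate='within')\nmun_ba['NRS_sjoin'] = joined.groupby(level=0)['NM_NRS'].first()",
"_____no_output_____"
],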
[
"mun_ba.plot(column = 'NRS');\nplt.show();",
"_____no_output_____"
]
],
[
[
"População",
"_____no_output_____"
]
],
[
[
"for i in nrs['NM_NRS'].values:\n print(i,mun_ba[mun_ba['NRS']==i]['Pop'].sum())",
"Centro-norte 829076\nOeste 967197\nLeste 4801201\nNorte 1105695\nExtremo sul 840325\nCentro-leste 2273262\nSudoeste 1816387\nSul 1689265\nNordeste 881934\n"
],
[
"mun_ba['Qtd_Tot'].sum()",
"_____no_output_____"
],
[
"nrs.to_file('NT02 - Bahia/nrs.shp')",
"_____no_output_____"
]
],
[
[
"**Municípios com maior prevalência:**",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(10,10));\nmun_ba.plot(ax = ax, column = 'prev');\nplt.show();",
"_____no_output_____"
],
[
"# 20 maiores do Estado:\nmun_ba.sort_values(by='prev', ascending = False)[['GEOCODIGO','NOME','Pop','prev','NRS']][:20]",
"_____no_output_____"
],
[
"# Quantidade de municípios no NRS Sul que já possuem casos confirmados até 24/04/2020\nlen(mun_ba[(mun_ba['NRS']=='Sul') & (mun_ba['c20200424']>0)])",
"_____no_output_____"
],
[
"# 10 maiores da Região Sul:\nmun_ba[mun_ba['NRS']=='Sul'].sort_values(by='prev', ascending = False)[['GEOCODIGO','NOME','prev']][:14]",
"_____no_output_____"
]
],
[
[
"### (4.4) Oferta Hospitalar no NRS Sul",
"_____no_output_____"
],
[
"**Leitos convencionais:**",
"_____no_output_____"
]
],
[
[
"leitos = pd.read_excel('NT02 - Bahia/Oferta Hospitalar/leitos.xlsx')",
"_____no_output_____"
],
[
"leitos.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 343 entries, 0 to 342\nData columns (total 7 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 GEOCODIGO 343 non-null int64\n 1 Cirúrgicos 343 non-null int64\n 2 Clínicos 343 non-null int64\n 3 Obstétrico 343 non-null int64\n 4 Pediátrico 343 non-null int64\n 5 Outras Especialidades 343 non-null int64\n 6 HospitalDIA 343 non-null int64\ndtypes: int64(7)\nmemory usage: 18.9 KB\n"
],
[
"leitos.head(2)",
"_____no_output_____"
]
],
[
[
"**Leitos complementares:**",
"_____no_output_____"
]
],
[
[
"leitos_c = pd.read_excel('NT02 - Bahia/Oferta Hospitalar/leitos_comp.xlsx')",
"_____no_output_____"
],
[
"leitos_c.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 119 entries, 0 to 118\nData columns (total 18 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 GEOCODIGO 119 non-null int64\n 1 Unidade isolamento 119 non-null int64\n 2 UTI adulto I 119 non-null int64\n 3 UTI adulto II 119 non-null int64\n 4 UTI adulto III 119 non-null int64\n 5 UTI pediátrica I 119 non-null int64\n 6 UTI pediátrica II 119 non-null int64\n 7 UTI pediátrica III 119 non-null int64\n 8 UTI neonatal I 119 non-null int64\n 9 UTI neonatal II 119 non-null int64\n 10 UTI neonatal III 119 non-null int64\n 11 UTI de Queimados 119 non-null int64\n 12 UTI coronariana tipo II -UCO tipo II 119 non-null int64\n 13 UTI coronariana tipo III - UCO tipo III 119 non-null int64\n 14 Unidade de cuidados intermed neonatal convencional 119 non-null int64\n 15 Unidade de cuidados intermed neonatal canguru 119 non-null int64\n 16 Unidade de cuidados intermed pediatrico 119 non-null int64\n 17 Unidade de cuidados intermed adulto 119 non-null int64\ndtypes: int64(18)\nmemory usage: 16.9 KB\n"
],
[
"leitos_c.head(2)",
"_____no_output_____"
]
],
[
[
"**Leitos adicionados pós COVID:**",
"_____no_output_____"
]
],
[
[
"leitos_add = pd.read_excel('NT02 - Bahia/Oferta Hospitalar/leitos_add.xlsx')",
"_____no_output_____"
],
[
"leitos_add.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 44 entries, 0 to 43\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 MUNICIPIO 44 non-null object\n 1 HOSPITAL 44 non-null object\n 2 L_Clin 44 non-null int64 \n 3 L_UTI_Adu 44 non-null int64 \ndtypes: int64(2), object(2)\nmemory usage: 1.5+ KB\n"
],
[
"leitos_add.head(2)",
"_____no_output_____"
]
],
[
[
"**Respiradores:**",
"_____no_output_____"
]
],
[
[
"resp = pd.read_excel('NT02 - Bahia/Oferta Hospitalar/respiradores.xlsx')",
"_____no_output_____"
],
[
"resp.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 189 entries, 0 to 188\nData columns (total 4 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 GEOCODIGO 189 non-null int64\n 1 Equipamentos_Existentes 189 non-null int64\n 2 Equipamentos_em_Uso 189 non-null int64\n 3 Estab_c/_Equip_SUS 189 non-null int64\ndtypes: int64(4)\nmemory usage: 6.0 KB\n"
],
[
"resp.head(2)",
"_____no_output_____"
]
],
[
[
"**Profissionais:**",
"_____no_output_____"
]
],
[
[
"prof = pd.read_excel('NT02 - Bahia/Oferta Hospitalar/profissionais.xlsx')",
"_____no_output_____"
],
[
"prof.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 414 entries, 0 to 413\nData columns (total 7 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 GEOCODIGO 414 non-null int64\n 1 Médico pneumologista 414 non-null int64\n 2 Médico da Família 414 non-null int64\n 3 Médico em Medicina Intensiva 414 non-null int64\n 4 Enfermeiro 414 non-null int64\n 5 Fisioterapeuta 414 non-null int64\n 6 Nutricionista 414 non-null int64\ndtypes: int64(7)\nmemory usage: 22.8 KB\n"
],
[
"prof.head(2)",
"_____no_output_____"
]
],
[
[
"**Adicionando à `mun_ba`:**",
"_____no_output_____"
]
],
[
[
"mun_ba['L_Clin'] = 0\nmun_ba['L_UTI_Adu'] = 0\nmun_ba['L_UTI_Ped'] = 0\nmun_ba['L_CInt_Adu'] = 0\nmun_ba['L_CInt_Ped'] = 0\nmun_ba['LA_Clin'] = 0\nmun_ba['LA_UTI_Adu'] = 0\nmun_ba['Resp'] = 0\nmun_ba['M_Pneumo'] = 0\nmun_ba['M_Familia'] = 0\nmun_ba['M_Intens'] = 0\nmun_ba['Enferm'] = 0\nmun_ba['Fisiot'] = 0\nmun_ba['Nutric'] = 0",
"_____no_output_____"
],
[
"for i, row in mun_ba.iterrows():\n try:\n mun_ba.loc[i,'L_Clin'] = leitos[leitos['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Clínicos'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'L_UTI_Adu'] = leitos_c[leitos_c['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['UTI adulto I'].values[0] + leitos_c[leitos_c['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['UTI adulto II'].values[0] + leitos_c[leitos_c['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['UTI adulto III'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'L_UTI_Ped'] = leitos_c[leitos_c['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['UTI pediátrica I'].values[0] + leitos_c[leitos_c['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['UTI pediátrica II'].values[0] + leitos_c[leitos_c['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['UTI pediátrica III'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'L_CInt_Adu'] = leitos_c[leitos_c['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Unidade de cuidados intermed adulto'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'L_CInt_Ped'] = leitos_c[leitos_c['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Unidade de cuidados intermed pediatrico'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'LA_Clin'] = leitos_add[leitos_add['MUNICIPIO']==row['NOME']]['L_Clin'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'LA_UTI_Adu'] = leitos_add[leitos_add['MUNICIPIO']==row['NOME']]['L_UTI_Adu'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'Resp'] = resp[resp['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Equipamentos_Existentes'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'M_Pneumo'] = prof[prof['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Médico pneumologista'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'M_Familia'] = prof[prof['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Médico da Família'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'M_Intens'] = prof[prof['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Médico em Medicina Intensiva'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'Enferm'] = prof[prof['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Enfermeiro'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'Fisiot'] = prof[prof['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Fisioterapeuta'].values[0]\n except:\n pass\n try:\n mun_ba.loc[i,'Nutric'] = prof[prof['GEOCODIGO']==int(row['GEOCODIGO'][:-1])]['Nutricionista'].values[0]\n except:\n pass",
"_____no_output_____"
],
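[
"# --- Added sketch (not in the original notebook): the try/except loop above pulls\n# each indicator into mun_ba row by row, with bare except/pass skipping missing\n# municipalities. The same join can be done with merges keyed on the 6-digit code;\n# shown here for the clinical beds only, the other sources follow the same pattern.\nmb = mun_ba.copy()\nmb['GEO6'] = mb['GEOCODIGO'].str[:-1].astype(int)\nmb = mb.merge(leitos[['GEOCODIGO', 'Clínicos']].rename(columns={'GEOCODIGO': 'GEO6'}), on='GEO6', how='left')\nmb['Clínicos'] = mb['Clínicos'].fillna(0).astype(int)",
"_____no_output_____"
],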
[
"mun_ba[mun_ba['NRS']=='Sul'].sort_values(by='prev', ascending = False)[['NOME','Pop','prev','L_Clin','LA_Clin','L_UTI_Adu','LA_UTI_Adu','Resp','M_Pneumo','M_Intens','Fisiot','Enferm']][:14]",
"_____no_output_____"
],
[
"mun_ba.to_file('NT02 - Bahia/saude_mun_ba.shp')",
"_____no_output_____"
]
],
[
[
"### (4.5) Dinâmica do Fluxo de Internaçõe no NRS Sul",
"_____no_output_____"
],
[
"**(a) Recursos:**",
"_____no_output_____"
]
],
[
[
"#.isin(mun_ba[mun_ba['NRS']=='Sul']['NOME'].values)\nnrs_rec = mun_ba[['NRS','Pop','L_Clin','L_UTI_Adu','L_UTI_Ped','L_CInt_Adu','L_CInt_Ped','LA_Clin','LA_UTI_Adu','Resp','M_Pneumo','M_Familia','M_Intens','Enferm','Fisiot','Nutric']].groupby(['NRS']).sum()",
"_____no_output_____"
],
[
"pd.DataFrame(zip(10000*nrs_rec['L_Clin']/nrs_rec['Pop'],10000*nrs_rec['L_UTI_Adu']/nrs_rec['Pop'],10000*nrs_rec['L_UTI_Ped']/nrs_rec['Pop'],\n 10000*nrs_rec['Resp']/nrs_rec['Pop'],10000*nrs_rec['M_Pneumo']/nrs_rec['Pop'],\n 10000*nrs_rec['M_Intens']/nrs_rec['Pop'],10000*nrs_rec['Fisiot']/nrs_rec['Pop'],\n 10000*nrs_rec['Enferm']/nrs_rec['Pop']),\n index = (10000*nrs_rec['Enferm']/nrs_rec['Pop']).index, columns = ['L_Clin','L_UTI_Adu','L_UTI_Ped','Resp','M_Pneumo',\n 'M_Intens','Fisiot','Enferm'])",
"_____no_output_____"
],
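[
"# --- Added sketch (not in the original notebook): the same per-10,000 rates for\n# all indicator columns at once, using DataFrame.div instead of column-by-column zips.\ncols = ['L_Clin','L_UTI_Adu','L_UTI_Ped','Resp','M_Pneumo','M_Intens','Fisiot','Enferm']\nnrs_rec[cols].div(nrs_rec['Pop'], axis=0) * 10000",
"_____no_output_____"
],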
[
"pd.DataFrame(zip(nrs_rec['L_UTI_Adu'],nrs_rec['Resp'],nrs_rec['M_Intens'],nrs_rec['Fisiot']),\n index = (100000*nrs_rec['Enferm']/nrs_rec['Pop']).index, columns = ['L_UTI_Adu','Resp','M_Intens',\n 'Fisiot'])",
"_____no_output_____"
]
],
[
[
"**(b) Internações hospitalares:**",
"_____no_output_____"
],
[
"**Interdependência entre NRS's (Matriz OD):**",
"_____no_output_____"
]
],
[
[
"nrs_names = list(nrs['NM_NRS'].values)",
"_____no_output_____"
],
[
"nrs_OD = np.zeros([len(nrs_names),len(nrs_names)])",
"_____no_output_____"
],
[
"for i, nrs_o in enumerate(nrs_names):\n muns_o = list(mun_ba[mun_ba['NRS']==nrs_o]['GEOCODIGO'].values)\n for j, nrs_d in enumerate(nrs_names):\n muns_d = list(mun_ba[mun_ba['NRS']==nrs_d]['GEOCODIGO'].values)\n nrs_OD[i,j] = tab_OD[tab_OD['ORI_GC'].isin(muns_o) & tab_OD['DES_GC'].isin(muns_d)]['Qtd'].sum()",
"_____no_output_____"
],
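[
"# --- Added sketch (not in the original notebook): the NRS-level OD matrix via a\n# GEOCODIGO -> NRS lookup and a pivot table instead of the nested loop. astype(str)\n# guards against the Excel round-trip having turned the codes into integers.\nnrs_of = dict(zip(mun_ba['GEOCODIGO'].astype(str), mun_ba['NRS']))\nt = tab_OD.assign(NRS_O=tab_OD['ORI_GC'].astype(str).map(nrs_of), NRS_D=tab_OD['DES_GC'].astype(str).map(nrs_of))\nt.pivot_table(index='NRS_O', columns='NRS_D', values='Qtd', aggfunc='sum', fill_value=0).reindex(index=nrs_names, columns=nrs_names, fill_value=0)",
"_____no_output_____"
],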
[
"nrs_od_df = pd.DataFrame(nrs_OD, columns = nrs_names, index = nrs_names).astype(int)\nnrs_od_df",
"_____no_output_____"
],
[
"from itertools import product\nnrs_tab_od = pd.DataFrame(list(product(nrs_names,nrs_names)))\nnrs_tab_od['flux'] = 0\nnrs_tab_od.rename(columns={0:'ORI',1:'DES'}, inplace = True)\nnrs_tab_od",
"_____no_output_____"
],
[
"for i, row in nrs_od_df.iterrows():\n nrs_tab_od.loc[(nrs_tab_od['ORI']==i),'flux'] = list(row.values)",
"_____no_output_____"
],
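[
"# --- Added sketch (not in the original notebook): the long-format OD table in one\n# step from the square matrix, instead of the product/loop combination above.\nnrs_od_df.stack().rename('flux').reset_index().rename(columns={'level_0': 'ORI', 'level_1': 'DES'})",
"_____no_output_____"
],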
[
"nrs_tab_od",
"_____no_output_____"
],
[
"nrs_tab_od.to_csv('NT02 - Bahia/nrs_tab_od.csv')",
"_____no_output_____"
]
],
[
[
"**P/ cada NRS:**",
"_____no_output_____"
]
],
[
[
"#Municípios de cada NRS\nfor i in list(nrs['NM_NRS'].values):\n muns = list(mun_ba[mun_ba['NRS']==i]['NOME'].values)\n muns_gc = list(mun_ba[mun_ba['NRS']==i]['GEOCODIGO'].values)\n \"NRS \"+i+\":\"\n \"Total de internações: {}\".format(tab_OD[tab_OD['DES_GC'].isin(muns_gc)]['Qtd'].sum())\n \"Proporção de internações em relação ao total de internações do estado: {:.3f}\".format(tab_OD[tab_OD['DES_GC'].isin(muns_gc)]['Qtd'].sum()/tab_OD['Qtd'].sum())\n \"Total de internações de residentes do NRS realizadas no próprio NRS: {}\".format(tab_OD[tab_OD['ORI_GC'].isin(muns_gc) & tab_OD['DES_GC'].isin(muns_gc)]['Qtd'].sum())\n \"Razão entre internações de residentes do NRS atendidas no próprio NRS e o total de internações de residentes no NRS em todo o estado: {:.3f}\".format(tab_OD[tab_OD['ORI_GC'].isin(muns_gc) & tab_OD['DES_GC'].isin(muns_gc)]['Qtd'].sum() \\\n / tab_OD[tab_OD['ORI_GC'].isin(muns_gc)]['Qtd'].sum())\n \"Total de internações no NRS de residentes fora do NRS: {}\".format(tab_OD[~tab_OD['ORI_GC'].isin(muns_gc) & tab_OD['DES_GC'].isin(muns_gc)]['Qtd'].sum())\n \"Proporção de internações no NRS de residentes fora do NRS em relação ao total de internações do NRS: {:.3f}\".format(tab_OD[~tab_OD['ORI_GC'].isin(muns_gc) & tab_OD['DES_GC'].isin(muns_gc)]['Qtd'].sum() \\\n /tab_OD[tab_OD['DES_GC'].isin(muns_gc)]['Qtd'].sum())",
"_____no_output_____"
]
],
[
[
"**Dependência do NRS Leste:**",
"_____no_output_____"
]
],
[
[
"muns = [i for i in list(nrs['NM_NRS'].values) if i!='Leste']\nfor i in muns:\n muns_gc = list(mun_ba[mun_ba['NRS']==i]['GEOCODIGO'].values)\n muns_le = list(mun_ba[mun_ba['NRS']=='Leste']['GEOCODIGO'].values) \n \"Internações de residentes do {} = {}\".format(i,tab_OD[tab_OD['ORI_GC'].isin(muns_gc) & tab_OD['DES_GC'].isin(muns_le)]['Qtd'].sum())\n \"Proporção dos atendimentos do NRS {} = {}\".format(i,tab_OD[tab_OD['ORI_GC'].isin(muns_gc) & tab_OD['DES_GC'].isin(muns_le)]['Qtd'].sum() \\\n /tab_OD[tab_OD['ORI_GC'].isin(muns_gc) & tab_OD['DES_GC'].isin(muns_gc)]['Qtd'].sum())",
"_____no_output_____"
]
],
[
[
"**Análise do NRS Sul (maior qtd de casos acumulados):**",
"_____no_output_____"
]
],
[
[
"#Municípios do NRS Sul \nmun_sul = list(mun_ba[mun_ba['NRS']=='Sul']['NOME'].values)\nmun_sul_gc = list(mun_ba[mun_ba['NRS']=='Sul']['GEOCODIGO'].values)",
"_____no_output_____"
],
[
"# Todas as internações demandadas pelos municípios do NRS Sul\ntab_OD[tab_OD['ORI_GC'].isin(mun_sul_gc)].sort_values(by='Qtd', ascending = False)",
"_____no_output_____"
],
[
"# Todas as internações demandadas pelos municípios do NRS Sul que foram atendidas no NRS Sul\ntab_OD[tab_OD['ORI_GC'].isin(mun_sul_gc) & tab_OD['DES_GC'].isin(mun_sul_gc)].sort_values(by='Qtd', ascending = False)",
"_____no_output_____"
],
[
"# Todas as internações que foram atendidas no NRS Sul de municípios que não foram do NRS Sul\ntab_OD[~tab_OD['ORI_GC'].isin(mun_sul_gc) & tab_OD['DES_GC'].isin(mun_sul_gc)].sort_values(by='Qtd', ascending = False)",
"_____no_output_____"
],
[
"# Total de internações na Bahia:\ntab_OD['Qtd'].sum()",
"_____no_output_____"
],
[
"# Total de internações no NRS Sul:\ntab_OD[tab_OD['DES_GC'].isin(mun_sul_gc)]['Qtd'].sum()",
"_____no_output_____"
],
[
"# Percentual de internações no NRS Sul em relação ao total de internações do estado:\ntab_OD[tab_OD['DES_GC'].isin(mun_sul_gc)]['Qtd'].sum()/tab_OD['Qtd'].sum()",
"_____no_output_____"
],
[
"# Total de internações no NRS Sul de municípios dentro do NRS Sul:\ntab_OD[tab_OD['ORI_GC'].isin(mun_sul_gc) & tab_OD['DES_GC'].isin(mun_sul_gc)]['Qtd'].sum()",
"_____no_output_____"
],
[
"# Razão entre internações realizadas no NRS Sul e o total demandado no NRS Sul\ntab_OD[tab_OD['ORI_GC'].isin(mun_sul_gc) & tab_OD['DES_GC'].isin(mun_sul_gc)]['Qtd'].sum() \\\n/ tab_OD[tab_OD['ORI_GC'].isin(mun_sul_gc)]['Qtd'].sum()",
"_____no_output_____"
],
[
"# Total de internações no NRS Sul de municípios fora do NRS Sul:\ntab_OD[~tab_OD['ORI_GC'].isin(mun_sul_gc) & tab_OD['DES_GC'].isin(mun_sul_gc)]['Qtd'].sum()",
"_____no_output_____"
],
[
"# Total de internações de residentes em municípios do NRS Sul realizadas fora do NRS Sul:\ntab_OD[tab_OD['ORI_GC'].isin(mun_sul_gc) & ~tab_OD['DES_GC'].isin(mun_sul_gc)]['Qtd'].sum()",
"_____no_output_____"
],
[
"#Municípios que mais atenderam internações no NRS Sul:\ntab_OD[tab_OD['DES_GC'].isin(mun_sul_gc)].sort_values(by='Qtd', ascending = False)",
"_____no_output_____"
],
[
"# Percentual das internações nos 10 primeiros:\ntab_OD[tab_OD['DES_GC'].isin(mun_sul_gc)].sort_values(by='Qtd', ascending = False)[:10]['Qtd'].sum() \\\n/ tab_OD[tab_OD['DES_GC'].isin(mun_sul_gc)]['Qtd'].sum()",
"_____no_output_____"
],
[
"muns_10sul = list(map(str,tab_OD[tab_OD['DES_GC'].isin(mun_sul_gc)].sort_values(by='Qtd', ascending = False)[:10]['DES_GC'].values))",
"_____no_output_____"
],
[
"# Recursos materiais\nmun_ba[mun_ba['GEOCODIGO'].isin(muns_10sul)][['NOME','Pop','Qtd_Tot','L_Clin','LA_Clin','L_UTI_Adu','LA_UTI_Adu','Resp']].sort_values(by = 'Qtd_Tot', ascending = False)",
"_____no_output_____"
],
[
"mun_ba[mun_ba['NOME'].isin(['Ilhéus','Itabuna','Jequié'])]['Pop'].sum() \\\n/mun_ba[mun_ba['NRS']=='Sul']['Pop'].sum()\nmun_ba[mun_ba['NRS']=='Sul']['Pop'].sum()",
"_____no_output_____"
],
[
"# Recursos materiais de Itabuna, Ilhéus e Jequié em relação a todo NRS\nmun_ba[mun_ba['GEOCODIGO'].isin(muns_10sul)][['Qtd_Tot','L_Clin','LA_Clin','L_UTI_Adu','LA_UTI_Adu','Resp']].sort_values(by = 'Qtd_Tot', ascending = False)[:3].sum() \\\n/ mun_ba[mun_ba['NRS']=='Sul'][['Qtd_Tot','L_Clin','LA_Clin','L_UTI_Adu','LA_UTI_Adu','Resp']].sum()\n",
"_____no_output_____"
],
[
"# Recursos humanos\nmun_ba[mun_ba['GEOCODIGO'].isin(muns_10sul)][['NOME','Qtd_Tot','M_Pneumo','M_Intens','Fisiot','Enferm']].sort_values(by = 'Qtd_Tot', ascending = False)",
"_____no_output_____"
],
[
"# Recursos humanos de Itabuna, Ilhéus e Jequié em relação a todo NRS\nmun_ba[mun_ba['GEOCODIGO'].isin(muns_10sul)][['Qtd_Tot','M_Pneumo','M_Intens','Fisiot','Enferm']].sort_values(by = 'Qtd_Tot', ascending = False)[:3].sum() \\\n/ mun_ba[mun_ba['NRS']=='Sul'][['Qtd_Tot','M_Pneumo','M_Intens','Fisiot','Enferm']].sum()",
"_____no_output_____"
]
],
[
[
"### (4.6) Fluxo de Internações dos 10 municípios mais prevalentes do NRS Sul:",
"_____no_output_____"
]
],
[
[
"mun_sul = list(mun_ba[mun_ba['NRS']=='Sul'].sort_values(by='prev', ascending = False)['GEOCODIGO'].values)",
"_____no_output_____"
],
[
"for i in mun_sul[:10]:\n orig = []\n lst_orig = tab_OD[tab_OD['DES_GC']==i].sort_values(by = 'Qtd', ascending = False)['ORI_GC'].values\n if len(lst_orig) == 0:\n \"{} não recebeu pacientes\".format(mun_ba[mun_ba['GEOCODIGO']==i]['NOME'].values[0])\n continue\n for k, j in enumerate(lst_orig):\n if k < len(lst_orig) - 1:\n orig.append(mun_ba[mun_ba['GEOCODIGO']==j]['NOME'].values[0])\n else:\n orig.append(mun_ba[mun_ba['GEOCODIGO']==j]['NOME'].values[0])\n print('Intenações com destino a ' + mun_ba[mun_ba['GEOCODIGO']==i]['NOME'].values[0] + ':')\n qtd = tab_OD[tab_OD['DES_GC']==i].sort_values(by = 'Qtd', ascending = False)['Qtd']\n perc = qtd/tab_OD[tab_OD['DES_GC']==i].sort_values(by = 'Qtd', ascending = False)['Qtd'].sum()\n pd.DataFrame(zip(orig,qtd,perc), columns = ['Mun_orig','Qtd','Distr_perc'])\n\n\nfor i in mun_sul[:10]:\n dest = []\n lst_dest = tab_OD[tab_OD['ORI_GC']==i].sort_values(by = 'Qtd', ascending = False)['DES_GC'].values\n if len(lst_dest) == 0:\n continue\n for k, j in enumerate(lst_dest):\n if k < len(lst_dest) - 1:\n dest.append(mun_ba[mun_ba['GEOCODIGO']==j]['NOME'].values[0])\n else:\n dest.append(mun_ba[mun_ba['GEOCODIGO']==j]['NOME'].values[0])\n print('Intenações com origem em ' + mun_ba[mun_ba['GEOCODIGO']==i]['NOME'].values[0] + ':')\n qtd = tab_OD[tab_OD['ORI_GC']==i].sort_values(by = 'Qtd', ascending = False)['Qtd']\n perc = qtd/tab_OD[tab_OD['ORI_GC']==i].sort_values(by = 'Qtd', ascending = False)['Qtd'].sum()\n pd.DataFrame(zip(dest,qtd,perc), columns = ['Mun_dest','Qtd','Distr_perc'])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d051ae06466ef1662ac6f7f016867f0792d34cae | 762,342 | ipynb | Jupyter Notebook | Group_037_SEC_3_Assignment_2_Image_Captioning.ipynb | arindamdeyofficial/Amazon_Review_Sentiment_Analysys | 247fd8daa676c98a7bdb2402237cb3f6c3845422 | [
"Apache-2.0"
] | null | null | null | Group_037_SEC_3_Assignment_2_Image_Captioning.ipynb | arindamdeyofficial/Amazon_Review_Sentiment_Analysys | 247fd8daa676c98a7bdb2402237cb3f6c3845422 | [
"Apache-2.0"
] | null | null | null | Group_037_SEC_3_Assignment_2_Image_Captioning.ipynb | arindamdeyofficial/Amazon_Review_Sentiment_Analysys | 247fd8daa676c98a7bdb2402237cb3f6c3845422 | [
"Apache-2.0"
] | null | null | null | 752.558736 | 152,765 | 0.94227 | [
[
[
"<a href=\"https://colab.research.google.com/github/arindamdeyofficial/Amazon_Review_Sentiment_Analysys/blob/main/Group_037_SEC_3_Assignment_2_Image_Captioning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Assignment 2 Set 5\nImage Captioning\n###Deep Learning (S1-21_DSECLZG524) - DL Group 037 - SEC-3\n* Arindam Dey - 2020FC04251\n* Kaushik Dubey - 2020FC04245\n* Mohammad Attaullah - 2020FC04274",
"_____no_output_____"
],
[
"1.\tImport Libraries/Dataset (0 mark) \n 1.\tImport the required libraries\n 2.\tCheck the GPU available (recommended- use free GPU provided by Google Colab)",
"_____no_output_____"
]
],
[
[
"import os\n#COLAB_GPU\n#print(os.environ )\nisCollab = os.getenv('COLAB_GPU', False) and os.getenv('OS', True)\nprint('Collab' if isCollab else 'Local')",
"Collab\n"
],
[
"#libraries\nimport numpy as np \nimport pandas as pd \nimport random\n\n# folder\nimport os\n\n# Imports packages to view data\n#pip install opencv-python\n#pip install opencv-contrib-python\nimport cv2\n\n#pip install glob2\nfrom glob2 import glob\n\n#pip install matplotlib\nimport matplotlib.pyplot as plt\nfrom PIL import Image\n\n#below only works in collab as it doesn't support imshow() directly in Google Collab\nif isCollab:\n from google.colab.patches import cv2_imshow\n\n#pip install prettytable\nfrom prettytable import PrettyTable\n\n# visu\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#pip install seaborn\nimport seaborn as sns\nplt.rc('image', cmap='gray')\n\n# sklearn\n#pip install scikit-learn\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.model_selection import train_test_split\n\n#tensorflow and keras\n#pip install tensorflow\n#pip install keras\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Input, Dense, GRU, Embedding\nfrom tensorflow.keras.applications import VGG16\nfrom tensorflow.keras.optimizers import RMSprop\nfrom tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\n\n#google drive\n#doesn't work in local\nimport pickle\nif isCollab:\n from google.colab import drive\n drive.mount('/content/drive')\n\nimport sklearn.metrics as metrics\nfrom keras.wrappers.scikit_learn import KerasClassifier\nfrom sklearn.metrics import accuracy_score, confusion_matrix\nfrom sklearn.metrics import classification_report",
"Mounted at /content/drive\n"
],
[
"print(tf.__version__)",
"2.8.0\n"
],
[
"isCollab",
"_____no_output_____"
]
],
[
[
"2.\tData Processing(1 mark) \n\n### Read the pickle file ",
"_____no_output_____"
]
],
[
[
"if isCollab:\n drivemasterpath = '/content/drive/My Drive/Colab Notebooks/AutoImageCaptioning'\nelse:\n drivemasterpath = 'D:/OneDrive/Certification/Bits Pilani Data Science/3rd Sem/Deep Learning (S1-21_DSECLZG524)/Assignment 2'\nimgDatasetPath = drivemasterpath+\"/Flicker8k_Dataset\"\npklFilePath = drivemasterpath+'/set_0.pkl'\nprint(imgDatasetPath,pklFilePath)",
"/content/drive/My Drive/Colab Notebooks/AutoImageCaptioning/Flicker8k_Dataset /content/drive/My Drive/Colab Notebooks/AutoImageCaptioning/set_0.pkl\n"
],
[
"infile = open(pklFilePath,'rb')\nbest_model = pickle.load(infile)\n\n#keep dataobj into file\n#import pickle\n# dump : put the data of the object in a file\n#pickle.dump(obj, open(file_path, \"wb\"))\n# dumps : return the object in bytes\n#data = pickle.dump(obj)",
"_____no_output_____"
]
],
[
[
"### Plot at least two samples and their captions (use matplotlib/seaborn/any other library). ",
"_____no_output_____"
]
],
[
[
"pics = os.listdir(imgDatasetPath)[25:30] # for 5 images we are showing",
"_____no_output_____"
],
[
"pic_address = [imgDatasetPath + '/' + pic for pic in pics]\npic_address",
"_____no_output_____"
],
[
"for i in range(0,5):\n # Load the images\n norm_img = Image.open(pic_address[i])\n\n #Let's plt these images\n ## plot normal picture\n f = plt.figure(figsize= (10,6))\n a1 = f.add_subplot(1,2,1)\n img_plot = plt.imshow(norm_img)\n a1.set_title(f'Normal {pics[i]}')\n",
"_____no_output_____"
],
[
"def load_image(path, size=None):\n \"\"\"\n Load the image from the given file-path and resize it\n to the given size if not None.\n \"\"\"\n\n # Load the image using PIL.\n img = Image.open(path)\n\n # Resize image if desired.\n if not size is None:\n img = img.resize(size=size, resample=Image.LANCZOS)\n\n # Convert image to numpy array.\n img = np.array(img)\n\n # Scale image-pixels so they fall between 0.0 and 1.0\n img = img / 255.0\n\n # Convert 2-dim gray-scale array to 3-dim RGB array.\n if (len(img.shape) == 2):\n img = np.repeat(img[:, :, np.newaxis], 3, axis=2)\n\n return img",
"_____no_output_____"
],
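[
"# --- Added usage sketch (not in the original notebook): load one Flicker8k image\n# with the helper above; pic_address was built a few cells earlier.\nimg = load_image(pic_address[0], size=(224, 224))\nplt.imshow(img)\nplt.title(str(img.shape))\nplt.show()",
"_____no_output_____"
],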
[
"def show_image(idx, train):\n \"\"\"\n Load and plot an image from the training- or validation-set\n with the given index.\n \"\"\"\n\n if train:\n # Use an image from the training-set.\n dir = coco.train_dir\n filename = filenames_train[idx]\n captions = captions_train[idx]\n else:\n # Use an image from the validation-set.\n dir = coco.val_dir\n filename = filenames_val[idx]\n captions = captions_val[idx]\n\n # Path for the image-file.\n path = os.path.join(dir, filename)\n\n # Print the captions for this image.\n for caption in captions:\n print(caption)\n \n # Load the image and plot it.\n img = load_image(path)\n plt.imshow(img)\n plt.show()",
"_____no_output_____"
]
],
[
[
"3.\tModel Building (4 mark) \n",
"_____no_output_____"
],
[
"1.\tUse Pretrained VGG-16 model trained on ImageNet dataset (available publicly on google) for image feature extraction.\n2.\tCreate 3 layered LSTM layer model and other relevant layers for image caption generation.\n3.\tAdd L2 regularization to all the LSTM layers. \n4.\tAdd one layer of dropout at the appropriate position and give reasons. \n5.\tChoose the appropriate activation function for all the layers. \n6.\tPrint the model summary. \n",
"_____no_output_____"
],
[
"Use Pretrained VGG-16 model trained on ImageNet dataset (available publicly on google) for image feature extraction.",
"_____no_output_____"
],
[
"VGG16 is a convolution neural net (CNN ) architecture which was used to win ILSVR(Imagenet) competition in 2014. It is considered to be one of the excellent vision model architecture till date. Most unique thing about VGG16 is that instead of having a large number of hyper-parameter they focused on having convolution layers of 3x3 filter with a stride 1 and always used same padding and maxpool layer of 2x2 filter of stride 2. It follows this arrangement of convolution and max pool layers consistently throughout the whole architecture. In the end it has 2 FC(fully connected layers) followed by a softmax for output. The 16 in VGG16 refers to it has 16 layers that have weights. This network is a pretty large network and it has about 138 million (approx) parameters.\n\n",
"_____no_output_____"
],
[
"Pre-Trained Image Model (VGG16)\nThe following creates an instance of the VGG16 model using the Keras API. This automatically downloads the required files if you don't have them already.\n\nThe VGG16 model was pre-trained on the ImageNet data-set for classifying images. The VGG16 model contains a convolutional part and a fully-connected (or dense) part which is used for the image classification.\n\nIf include_top=True then the whole VGG16 model is downloaded which is about 528 MB. If include_top=False then only the convolutional part of the VGG16 model is downloaded which is just 57 MB.\n\nWe will use some of the fully-connected layers in this pre-trained model, so we have to download the full model, but if you have a slow internet connection, then you can try and modify the code below to use the smaller pre-trained model without the classification layers.",
"_____no_output_____"
]
],
[
[
"image_model = VGG16(include_top=True, weights='imagenet')",
"Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels.h5\n553467904/553467096 [==============================] - 3s 0us/step\n553476096/553467096 [==============================] - 3s 0us/step\n"
],
[
"image_model.summary()",
"Model: \"vgg16\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_1 (InputLayer) [(None, 224, 224, 3)] 0 \n \n block1_conv1 (Conv2D) (None, 224, 224, 64) 1792 \n \n block1_conv2 (Conv2D) (None, 224, 224, 64) 36928 \n \n block1_pool (MaxPooling2D) (None, 112, 112, 64) 0 \n \n block2_conv1 (Conv2D) (None, 112, 112, 128) 73856 \n \n block2_conv2 (Conv2D) (None, 112, 112, 128) 147584 \n \n block2_pool (MaxPooling2D) (None, 56, 56, 128) 0 \n \n block3_conv1 (Conv2D) (None, 56, 56, 256) 295168 \n \n block3_conv2 (Conv2D) (None, 56, 56, 256) 590080 \n \n block3_conv3 (Conv2D) (None, 56, 56, 256) 590080 \n \n block3_pool (MaxPooling2D) (None, 28, 28, 256) 0 \n \n block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 \n \n block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 \n \n block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 \n \n block4_pool (MaxPooling2D) (None, 14, 14, 512) 0 \n \n block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 \n \n block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808 \n \n block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 \n \n block5_pool (MaxPooling2D) (None, 7, 7, 512) 0 \n \n flatten (Flatten) (None, 25088) 0 \n \n fc1 (Dense) (None, 4096) 102764544 \n \n fc2 (Dense) (None, 4096) 16781312 \n \n predictions (Dense) (None, 1000) 4097000 \n \n=================================================================\nTotal params: 138,357,544\nTrainable params: 138,357,544\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"transfer_layer = image_model.get_layer('fc2')",
"_____no_output_____"
],
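[
"# --- Added sketch (not in the original notebook): transfer-values for a single\n# image through the truncated model (224x224 is VGG16's input size; the pixel\n# scaling follows the load_image helper above).\nimg = load_image(pic_address[0], size=(224, 224))\ntransfer_values_one = image_model_transfer.predict(np.expand_dims(img, axis=0))\nprint(transfer_values_one.shape) # expected (1, 4096)",
"_____no_output_____"
],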
[
"image_model_transfer = Model(inputs=image_model.input,\n outputs=transfer_layer.output)",
"_____no_output_____"
]
],
[
[
"The model expects input images to be of this size:",
"_____no_output_____"
]
],
[
[
"img_size = K.int_shape(image_model.input)[1:3]\nimg_size",
"_____no_output_____"
],
[
"transfer_values_size = K.int_shape(transfer_layer.output)[1]\ntransfer_values_size",
"_____no_output_____"
]
],
[
[
"Process All Images\nWe now make functions for processing all images in the data-set using the pre-trained image-model and saving the transfer-values in a cache-file so they can be reloaded quickly.\n\nWe effectively create a new data-set of the transfer-values. This is because it takes a long time to process an image in the VGG16 model. We will not be changing all the parameters of the VGG16 model, so every time it processes an image, it gives the exact same result. We need the transfer-values to train the image-captioning model for many epochs, so we save a lot of time by calculating the transfer-values once and saving them in a cache-file.\n\nThis is a helper-function for printing the progress.",
"_____no_output_____"
]
],
[
[
"import keras,os\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Conv2D, MaxPool2D , Flatten\nfrom keras.preprocessing.image import ImageDataGenerator\nimport numpy as np\n\ntrdata = ImageDataGenerator()\ntraindata = trdata.flow_from_directory(directory=\"data\",target_size=(224,224))\ntsdata = ImageDataGenerator()\ntestdata = tsdata.flow_from_directory(directory=\"test\", target_size=(224,224))\n\nmodel = Sequential()\nmodel.add(Conv2D(input_shape=(224,224,3),filters=64,kernel_size=(3,3),padding=\"same\", activation=\"relu\"))\nmodel.add(Conv2D(filters=64,kernel_size=(3,3),padding=\"same\", activation=\"relu\"))\nmodel.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))\nmodel.add(Conv2D(filters=128, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(Conv2D(filters=128, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))\nmodel.add(Conv2D(filters=256, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(Conv2D(filters=256, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(Conv2D(filters=256, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))\nmodel.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))\nmodel.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))\nmodel.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))",
"_____no_output_____"
],
[
"def print_progress(count, max_count):\n # Percentage completion.\n pct_complete = count / max_count\n\n # Status-message. Note the \\r which means the line should\n # overwrite itself.\n msg = \"\\r- Progress: {0:.1%}\".format(pct_complete)\n\n # Print it.\n sys.stdout.write(msg)\n sys.stdout.flush()",
"_____no_output_____"
]
],
[
[
"This is the function for processing the given files using the VGG16-model and returning their transfer-values.",
"_____no_output_____"
]
],
[
[
"def process_images(data_dir, filenames, batch_size=32):\n \"\"\"\n Process all the given files in the given data_dir using the\n pre-trained image-model and return their transfer-values.\n \n Note that we process the images in batches to save\n memory and improve efficiency on the GPU.\n \"\"\"\n \n # Number of images to process.\n num_images = len(filenames)\n\n # Pre-allocate input-batch-array for images.\n shape = (batch_size,) + img_size + (3,)\n image_batch = np.zeros(shape=shape, dtype=np.float16)\n\n # Pre-allocate output-array for transfer-values.\n # Note that we use 16-bit floating-points to save memory.\n shape = (num_images, transfer_values_size)\n transfer_values = np.zeros(shape=shape, dtype=np.float16)\n\n # Initialize index into the filenames.\n start_index = 0\n\n # Process batches of image-files.\n while start_index < num_images:\n # Print the percentage-progress.\n print_progress(count=start_index, max_count=num_images)\n\n # End-index for this batch.\n end_index = start_index + batch_size\n\n # Ensure end-index is within bounds.\n if end_index > num_images:\n end_index = num_images\n\n # The last batch may have a different batch-size.\n current_batch_size = end_index - start_index\n\n # Load all the images in the batch.\n for i, filename in enumerate(filenames[start_index:end_index]):\n # Path for the image-file.\n path = os.path.join(data_dir, filename)\n\n # Load and resize the image.\n # This returns the image as a numpy-array.\n img = load_image(path, size=img_size)\n\n # Save the image for later use.\n image_batch[i] = img\n\n # Use the pre-trained image-model to process the image.\n # Note that the last batch may have a different size,\n # so we only use the relevant images.\n transfer_values_batch = \\\n image_model_transfer.predict(image_batch[0:current_batch_size])\n\n # Save the transfer-values in the pre-allocated array.\n transfer_values[start_index:end_index] = \\\n transfer_values_batch[0:current_batch_size]\n\n # Increase the index for the next loop-iteration.\n start_index = end_index\n\n # Print newline.\n print()\n\n return transfer_values",
"_____no_output_____"
]
],
[
[
"Helper-function for processing all images in the training-set. This saves the transfer-values in a cache-file for fast reloading.",
"_____no_output_____"
]
],
[
[
"def process_images_train():\n print(\"Processing {0} images in training-set ...\".format(len(filenames_train)))\n\n # Path for the cache-file.\n cache_path = os.path.join(coco.data_dir,\n \"transfer_values_train.pkl\")\n\n # If the cache-file already exists then reload it,\n # otherwise process all images and save their transfer-values\n # to the cache-file so it can be reloaded quickly.\n transfer_values = cache(cache_path=cache_path,\n fn=process_images,\n data_dir=coco.train_dir,\n filenames=filenames_train)\n\n return transfer_values",
"_____no_output_____"
]
],
[
[
"Helper-function for processing all images in the validation-set.",
"_____no_output_____"
]
],
[
[
"def process_images_val():\n print(\"Processing {0} images in validation-set ...\".format(len(filenames_val)))\n\n # Path for the cache-file.\n cache_path = os.path.join(coco.data_dir, \"transfer_values_val.pkl\")\n\n # If the cache-file already exists then reload it,\n # otherwise process all images and save their transfer-values\n # to the cache-file so it can be reloaded quickly.\n transfer_values = cache(cache_path=cache_path,\n fn=process_images,\n data_dir=coco.val_dir,\n filenames=filenames_val)\n\n return transfer_values",
"_____no_output_____"
]
],
[
[
"Process all images in the training-set and save the transfer-values to a cache-file. This took about 30 minutes to process on a GTX 1070 GPU.",
"_____no_output_____"
]
],
[
[
"%%time\ntransfer_values_train = process_images_train()\nprint(\"dtype:\", transfer_values_train.dtype)\nprint(\"shape:\", transfer_values_train.shape)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d051bd1b0e8262639308c6c602a94577c13c55b3 | 42,675 | ipynb | Jupyter Notebook | jupyter/BLOOMBERG/SektorEksternal/script/SektorEksternal2_1.ipynb | langpp/bappenas | f780607192bb99b9bc8fbe29412b4c6c49bf15ae | [
"Apache-2.0"
] | 1 | 2021-03-17T03:10:49.000Z | 2021-03-17T03:10:49.000Z | jupyter/BLOOMBERG/SektorEksternal/script/SektorEksternal2_1.ipynb | langpp/bappenas | f780607192bb99b9bc8fbe29412b4c6c49bf15ae | [
"Apache-2.0"
] | null | null | null | jupyter/BLOOMBERG/SektorEksternal/script/SektorEksternal2_1.ipynb | langpp/bappenas | f780607192bb99b9bc8fbe29412b4c6c49bf15ae | [
"Apache-2.0"
] | 1 | 2021-03-17T03:12:34.000Z | 2021-03-17T03:12:34.000Z | 85.00996 | 2,221 | 0.645272 | [
[
[
"#IMPORT SEMUA LIBARARY",
"_____no_output_____"
],
[
"#IMPORT LIBRARY PANDAS\nimport pandas as pd\n#IMPORT LIBRARY UNTUK POSTGRE\nfrom sqlalchemy import create_engine\nimport psycopg2\n#IMPORT LIBRARY CHART\nfrom matplotlib import pyplot as plt\nfrom matplotlib import style\n#IMPORT LIBRARY BASE PATH\nimport os\nimport io\n#IMPORT LIBARARY PDF\nfrom fpdf import FPDF\n#IMPORT LIBARARY CHART KE BASE64\nimport base64\n#IMPORT LIBARARY EXCEL\nimport xlsxwriter ",
"_____no_output_____"
],
[
"#FUNGSI UNTUK MENGUPLOAD DATA DARI CSV KE POSTGRESQL",
"_____no_output_____"
],
[
"def uploadToPSQL(columns, table, filePath, engine):\n #FUNGSI UNTUK MEMBACA CSV\n df = pd.read_csv(\n os.path.abspath(filePath),\n names=columns,\n keep_default_na=False\n )\n #APABILA ADA FIELD KOSONG DISINI DIFILTER\n df.fillna('')\n #MENGHAPUS COLUMN YANG TIDAK DIGUNAKAN\n del df['kategori']\n del df['jenis']\n del df['pengiriman']\n del df['satuan']\n \n #MEMINDAHKAN DATA DARI CSV KE POSTGRESQL\n df.to_sql(\n table, \n engine,\n if_exists='replace'\n )\n \n #DIHITUNG APABILA DATA YANG DIUPLOAD BERHASIL, MAKA AKAN MENGEMBALIKAN KELUARAN TRUE(BENAR) DAN SEBALIKNYA\n if len(df) == 0:\n return False\n else:\n return True",
"_____no_output_____"
],
[
"#FUNGSI UNTUK MEMBUAT CHART, DATA YANG DIAMBIL DARI DATABASE DENGAN MENGGUNAKAN ORDER DARI TANGGAL DAN JUGA LIMIT\n#DISINI JUGA MEMANGGIL FUNGSI MAKEEXCEL DAN MAKEPDF",
"_____no_output_____"
],
[
"def makeChart(host, username, password, db, port, table, judul, columns, filePath, name, subjudul, limit, negara, basePath):\n #TEST KONEKSI DATABASE\n try:\n #KONEKSI KE DATABASE\n connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=db)\n cursor = connection.cursor()\n #MENGAMBL DATA DARI TABLE YANG DIDEFINISIKAN DIBAWAH, DAN DIORDER DARI TANGGAL TERAKHIR\n #BISA DITAMBAHKAN LIMIT SUPAYA DATA YANG DIAMBIL TIDAK TERLALU BANYAK DAN BERAT\n postgreSQL_select_Query = \"SELECT * FROM \"+table+\" ORDER BY tanggal ASC LIMIT \" + str(limit)\n \n cursor.execute(postgreSQL_select_Query)\n mobile_records = cursor.fetchall() \n uid = []\n lengthx = []\n lengthy = []\n #MELAKUKAN LOOPING ATAU PERULANGAN DARI DATA YANG SUDAH DIAMBIL\n #KEMUDIAN DATA TERSEBUT DITEMPELKAN KE VARIABLE DIATAS INI\n for row in mobile_records:\n uid.append(row[0])\n lengthx.append(row[1])\n if row[2] == \"\":\n lengthy.append(float(0))\n else:\n lengthy.append(float(row[2]))\n\n #FUNGSI UNTUK MEMBUAT CHART\n #bar\n style.use('ggplot')\n \n fig, ax = plt.subplots()\n #MASUKAN DATA ID DARI DATABASE, DAN JUGA DATA TANGGAL\n ax.bar(uid, lengthy, align='center')\n #UNTUK JUDUL CHARTNYA\n ax.set_title(judul)\n ax.set_ylabel('Total')\n ax.set_xlabel('Tanggal')\n \n ax.set_xticks(uid)\n #TOTAL DATA YANG DIAMBIL DARI DATABASE, DIMASUKAN DISINI\n ax.set_xticklabels((lengthx))\n b = io.BytesIO()\n #CHART DISIMPAN KE FORMAT PNG\n plt.savefig(b, format='png', bbox_inches=\"tight\")\n #CHART YANG SUDAH DIJADIKAN PNG, DISINI DICONVERT KE BASE64\n barChart = base64.b64encode(b.getvalue()).decode(\"utf-8\").replace(\"\\n\", \"\")\n #CHART DITAMPILKAN\n plt.show()\n \n #line\n #MASUKAN DATA DARI DATABASE\n plt.plot(lengthx, lengthy)\n plt.xlabel('Tanggal')\n plt.ylabel('Total')\n #UNTUK JUDUL CHARTNYA\n plt.title(judul)\n plt.grid(True)\n l = io.BytesIO()\n #CHART DISIMPAN KE FORMAT PNG\n plt.savefig(l, format='png', bbox_inches=\"tight\")\n #CHART YANG SUDAH DIJADIKAN PNG, DISINI DICONVERT KE BASE64\n lineChart = base64.b64encode(l.getvalue()).decode(\"utf-8\").replace(\"\\n\", \"\")\n #CHART DITAMPILKAN\n plt.show()\n \n #pie\n #UNTUK JUDUL CHARTNYA\n plt.title(judul)\n #MASUKAN DATA DARI DATABASE\n plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%', \n shadow=True, startangle=180)\n \n plt.axis('equal')\n p = io.BytesIO()\n #CHART DISIMPAN KE FORMAT PNG\n plt.savefig(p, format='png', bbox_inches=\"tight\")\n #CHART YANG SUDAH DIJADIKAN PNG, DISINI DICONVERT KE BASE64\n pieChart = base64.b64encode(p.getvalue()).decode(\"utf-8\").replace(\"\\n\", \"\")\n #CHART DITAMPILKAN\n plt.show()\n \n #MENGAMBIL DATA DARI CSV YANG DIGUNAKAN SEBAGAI HEADER DARI TABLE UNTUK EXCEL DAN JUGA PDF\n header = pd.read_csv(\n os.path.abspath(filePath),\n names=columns,\n keep_default_na=False\n )\n #MENGHAPUS COLUMN YANG TIDAK DIGUNAKAN\n header.fillna('')\n del header['tanggal']\n del header['total']\n #MEMANGGIL FUNGSI EXCEL\n makeExcel(mobile_records, header, name, limit, basePath)\n #MEMANGGIL FUNGSI PDF\n makePDF(mobile_records, header, judul, barChart, lineChart, pieChart, name, subjudul, limit, basePath) \n \n #JIKA GAGAL KONEKSI KE DATABASE, MASUK KESINI UNTUK MENAMPILKAN ERRORNYA\n except (Exception, psycopg2.Error) as error :\n print (error)\n\n #KONEKSI DITUTUP\n finally:\n if(connection):\n cursor.close()\n connection.close()",
"_____no_output_____"
],
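[
"# --- Added sketch (not in the original notebook): the LIMIT in makeChart can be\n# bound as a query parameter instead of concatenated into the SQL string; table\n# names cannot be parameterized in psycopg2, so only the value moves.\ndef fetch_rows(cursor, table, limit):\n cursor.execute('SELECT * FROM ' + table + ' ORDER BY tanggal ASC LIMIT %s', (int(limit),))\n return cursor.fetchall()",
"_____no_output_____"
],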
[
"#FUNGSI MAKEEXCEL GUNANYA UNTUK MEMBUAT DATA YANG BERASAL DARI DATABASE DIJADIKAN FORMAT EXCEL TABLE F2\n#PLUGIN YANG DIGUNAKAN ADALAH XLSXWRITER",
"_____no_output_____"
],
[
"def makeExcel(datarow, dataheader, name, limit, basePath):\n #MEMBUAT FILE EXCEL\n workbook = xlsxwriter.Workbook(basePath+'jupyter/BLOOMBERG/SektorEksternal/excel/'+name+'.xlsx')\n #MENAMBAHKAN WORKSHEET PADA FILE EXCEL TERSEBUT\n worksheet = workbook.add_worksheet('sheet1')\n #SETINGAN AGAR DIBERIKAN BORDER DAN FONT MENJADI BOLD\n row1 = workbook.add_format({'border': 2, 'bold': 1})\n row2 = workbook.add_format({'border': 2})\n #MENJADIKAN DATA MENJADI ARRAY\n data=list(datarow)\n isihead=list(dataheader.values)\n header = []\n body = []\n \n #LOOPING ATAU PERULANGAN, KEMUDIAN DATA DITAMPUNG PADA VARIABLE DIATAS\n for rowhead in dataheader:\n header.append(str(rowhead))\n \n for rowhead2 in datarow:\n header.append(str(rowhead2[1]))\n \n for rowbody in isihead[1]:\n body.append(str(rowbody))\n \n for rowbody2 in data:\n body.append(str(rowbody2[2]))\n \n #MEMASUKAN DATA DARI VARIABLE DIATAS KE DALAM COLUMN DAN ROW EXCEL\n for col_num, data in enumerate(header):\n worksheet.write(0, col_num, data, row1)\n \n for col_num, data in enumerate(body):\n worksheet.write(1, col_num, data, row2)\n \n #FILE EXCEL DITUTUP\n workbook.close()",
"_____no_output_____"
],
[
"#FUNGSI UNTUK MEMBUAT PDF YANG DATANYA BERASAL DARI DATABASE DIJADIKAN FORMAT EXCEL TABLE F2\n#PLUGIN YANG DIGUNAKAN ADALAH FPDF",
"_____no_output_____"
],
[
"def makePDF(datarow, dataheader, judul, bar, line, pie, name, subjudul, lengthPDF, basePath):\n #FUNGSI UNTUK MENGATUR UKURAN KERTAS, DISINI MENGGUNAKAN UKURAN A4 DENGAN POSISI LANDSCAPE\n pdf = FPDF('L', 'mm', [210,297])\n #MENAMBAHKAN HALAMAN PADA PDF\n pdf.add_page()\n #PENGATURAN UNTUK JARAK PADDING DAN JUGA UKURAN FONT\n pdf.set_font('helvetica', 'B', 20.0)\n pdf.set_xy(145.0, 15.0)\n #MEMASUKAN JUDUL KE DALAM PDF\n pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)\n \n #PENGATURAN UNTUK UKURAN FONT DAN JUGA JARAK PADDING\n pdf.set_font('arial', '', 14.0)\n pdf.set_xy(145.0, 25.0)\n #MEMASUKAN SUB JUDUL KE PDF\n pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)\n #MEMBUAT GARIS DI BAWAH SUB JUDUL\n pdf.line(10.0, 30.0, 287.0, 30.0)\n pdf.set_font('times', '', 10.0)\n pdf.set_xy(17.0, 37.0)\n \n #PENGATURAN UNTUK UKURAN FONT DAN JUGA JARAK PADDING\n pdf.set_font('Times','',10.0) \n #MENGAMBIL DATA HEADER PDF YANG SEBELUMNYA SUDAH DIDEFINISIKAN DIATAS\n datahead=list(dataheader.values)\n pdf.set_font('Times','B',12.0) \n pdf.ln(0.5)\n \n th1 = pdf.font_size\n \n #MEMBUAT TABLE PADA PDF, DAN MENAMPILKAN DATA DARI VARIABLE YANG SUDAH DIKIRIM\n pdf.cell(100, 2*th1, \"Kategori\", border=1, align='C')\n pdf.cell(177, 2*th1, datahead[0][0], border=1, align='C')\n pdf.ln(2*th1)\n pdf.cell(100, 2*th1, \"Jenis\", border=1, align='C')\n pdf.cell(177, 2*th1, datahead[0][1], border=1, align='C')\n pdf.ln(2*th1)\n pdf.cell(100, 2*th1, \"Pengiriman\", border=1, align='C')\n pdf.cell(177, 2*th1, datahead[0][2], border=1, align='C')\n pdf.ln(2*th1)\n pdf.cell(100, 2*th1, \"Satuan\", border=1, align='C')\n pdf.cell(177, 2*th1, datahead[0][3], border=1, align='C')\n pdf.ln(2*th1)\n \n #PENGATURAN PADDING\n pdf.set_xy(17.0, 75.0)\n \n #PENGATURAN UNTUK UKURAN FONT DAN JUGA JARAK PADDING\n pdf.set_font('Times','B',11.0) \n data=list(datarow)\n epw = pdf.w - 2*pdf.l_margin\n col_width = epw/(lengthPDF+1)\n \n #PENGATURAN UNTUK JARAK PADDING\n pdf.ln(0.5)\n th = pdf.font_size\n \n #MEMASUKAN DATA HEADER YANG DIKIRIM DARI VARIABLE DIATAS KE DALAM PDF\n pdf.cell(50, 2*th, str(\"Negara\"), border=1, align='C')\n for row in data:\n pdf.cell(40, 2*th, str(row[1]), border=1, align='C')\n pdf.ln(2*th)\n \n #MEMASUKAN DATA ISI YANG DIKIRIM DARI VARIABLE DIATAS KE DALAM PDF\n pdf.set_font('Times','B',10.0)\n pdf.set_font('Arial','',9)\n pdf.cell(50, 2*th, negara, border=1, align='C')\n for row in data:\n pdf.cell(40, 2*th, str(row[2]), border=1, align='C')\n pdf.ln(2*th)\n \n #MENGAMBIL DATA CHART, KEMUDIAN CHART TERSEBUT DIJADIKAN PNG DAN DISIMPAN PADA DIRECTORY DIBAWAH INI\n #BAR CHART\n bardata = base64.b64decode(bar)\n barname = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-bar.png'\n with open(barname, 'wb') as f:\n f.write(bardata)\n \n #LINE CHART\n linedata = base64.b64decode(line)\n linename = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-line.png'\n with open(linename, 'wb') as f:\n f.write(linedata)\n \n #PIE CHART\n piedata = base64.b64decode(pie)\n piename = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-pie.png'\n with open(piename, 'wb') as f:\n f.write(piedata)\n \n #PENGATURAN UNTUK UKURAN FONT DAN JUGA JARAK PADDING\n pdf.set_xy(17.0, 75.0)\n col = pdf.w - 2*pdf.l_margin\n widthcol = col/3\n #MEMANGGIL DATA GAMBAR DARI DIREKTORY DIATAS\n pdf.image(barname, link='', type='',x=8, y=100, w=widthcol)\n pdf.set_xy(17.0, 75.0)\n col = pdf.w - 2*pdf.l_margin\n pdf.image(linename, link='', type='',x=103, y=100, w=widthcol)\n pdf.set_xy(17.0, 
75.0)\n col = pdf.w - 2*pdf.l_margin\n pdf.image(piename, link='', type='',x=195, y=100, w=widthcol)\n pdf.ln(2*th)\n \n #MEMBUAT FILE PDF\n pdf.output(basePath+'jupyter/BLOOMBERG/SektorEksternal/pdf/'+name+'.pdf', 'F')",
"_____no_output_____"
],
[
"#DISINI TEMPAT AWAL UNTUK MENDEFINISIKAN VARIABEL VARIABEL SEBELUM NANTINYA DIKIRIM KE FUNGSI\n#PERTAMA MANGGIL FUNGSI UPLOADTOPSQL DULU, KALAU SUKSES BARU MANGGIL FUNGSI MAKECHART\n#DAN DI MAKECHART MANGGIL FUNGSI MAKEEXCEL DAN MAKEPDF",
"_____no_output_____"
],
[
"#DEFINISIKAN COLUMN BERDASARKAN FIELD CSV\ncolumns = [\n \"kategori\",\n \"jenis\",\n \"tanggal\",\n \"total\",\n \"pengiriman\",\n \"satuan\",\n]\n\n#UNTUK NAMA FILE\nname = \"SektorEksternal2_1\"\n#VARIABLE UNTUK KONEKSI KE DATABASE\nhost = \"localhost\"\nusername = \"postgres\"\npassword = \"1234567890\"\nport = \"5432\"\ndatabase = \"bloomberg_sektoreksternal\"\ntable = name.lower()\n#JUDUL PADA PDF DAN EXCEL\njudul = \"Data Sektor Eksternal\"\nsubjudul = \"Badan Perencanaan Pembangunan Nasional\"\n#LIMIT DATA UNTUK SELECT DI DATABASE\nlimitdata = int(8)\n#NAMA NEGARA UNTUK DITAMPILKAN DI EXCEL DAN PDF\nnegara = \"Indonesia\"\n#BASE PATH DIRECTORY\nbasePath = 'C:/Users/ASUS/Documents/bappenas/'\n#FILE CSV\nfilePath = basePath+ 'data mentah/BLOOMBERG/SektorEksternal/' +name+'.csv';\n#KONEKSI KE DATABASE\nengine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)\n\n#MEMANGGIL FUNGSI UPLOAD TO PSQL\ncheckUpload = uploadToPSQL(columns, table, filePath, engine)\n#MENGECEK FUNGSI DARI UPLOAD PSQL, JIKA BERHASIL LANJUT MEMBUAT FUNGSI CHART, JIKA GAGAL AKAN MENAMPILKAN PESAN ERROR\nif checkUpload == True:\n makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)\nelse:\n print(\"Error When Upload CSV\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d051c588d8587ce7c0585df5eb5be069892c83d1 | 666,934 | ipynb | Jupyter Notebook | Locomotion dynamcis/1_Kinemtaics_201102.ipynb | AlbertLordsun/Physical_measurement | 3db6714e0d1042dbb029c335f2aa7002cadcd4c5 | [
"Unlicense"
] | null | null | null | Locomotion dynamcis/1_Kinemtaics_201102.ipynb | AlbertLordsun/Physical_measurement | 3db6714e0d1042dbb029c335f2aa7002cadcd4c5 | [
"Unlicense"
] | null | null | null | Locomotion dynamcis/1_Kinemtaics_201102.ipynb | AlbertLordsun/Physical_measurement | 3db6714e0d1042dbb029c335f2aa7002cadcd4c5 | [
"Unlicense"
] | null | null | null | 234.094068 | 78,040 | 0.906874 | [
[
[
"import numpy as np\nimport pandas as pd\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\n#src_name = [\"Results1-5-116.csv\", \"Results1-54-109-20.csv\", \"Results2-125-215-20.csv\", \"Results2-160-210.csv\",\n# \"Results3-1-74-20.csv\", \"Results3-75-120.csv\", \"Results4-60-100.csv\", \"Results4-248-370-20.csv\",\n# \"Results5-1-100-20.csv\", \"Results6-380-485-20.csv\", \"Results7-250-310-20.csv\", \"Results8-1-105-20.csv\",\n# \"Results9-464-555-20.csv\", \"Results10-665-733-20.csv\", \"Results11-249-315-20.csv\"]\n\n# 20 points = the anterior edges of T3, and A1-A9\nsrc_name = [\"Results1-54-109-20.csv\", \"Results2-125-215-20.csv\", \"Results3-1-74-20.csv\", \"Results4-248-370-20.csv\",\n \"Results5-1-100-20.csv\", \"Results6-380-485-20.csv\", \"Results7-250-310-20.csv\", \"Results8-1-105-20.csv\",\n \"Results9-464-555-20.csv\", \"Results10-665-733-20.csv\", \"Results11-249-315-20.csv\"]\n\n\nsrc = []\nfor elem in src_name:\n src.append(pd.read_csv(src_path + elem))\n \nprint(\"Flie number:\", len(src))\nprint(\"Frames:\")\nfor i in range(len(src)):\n print(\"file{0:2d}: {1:d}\".format(i, int(len(src[i])/20)))",
"Flie number: 11\nFrames:\nfile 0: 56\nfile 1: 91\nfile 2: 74\nfile 3: 123\nfile 4: 100\nfile 5: 106\nfile 6: 61\nfile 7: 105\nfile 8: 92\nfile 9: 69\nfile10: 67\n"
],
[
"print(src[0].iloc[0])\nprint(src[0].iloc[0,1])\nprint(src[0].iloc[0,2])\nprint(src[0].iloc[18,1])\nprint(src[0].iloc[18,2])",
" 1\nX 434.242\nY 150.48\nSlice 54\nUnnamed: 4 NaN\nUnnamed: 5 seg0_midx\nUnnamed: 6 444.377\nName: 0, dtype: object\n434.24199999999996\n150.48\n197.774\n196.545\n"
],
[
"# xy coordinates of all\n\nxy_all = []\nlabel_num = 20\n\nfor src_dat in src:\n xy = []\n if len(src_dat)%label_num != 0:\n print(\"Invalid data.\")\n else:\n for frame in range(len(src_dat)//label_num):\n xy0 = []\n for segment in range(label_num//2):\n xy00 = []\n xy00_LR = []\n xy00_LR.append([src_dat.iloc[frame*label_num + segment*2, 1],\n src_dat.iloc[frame*label_num + segment*2, 2]] )\n xy00_LR.append([src_dat.iloc[frame*label_num + segment*2+1, 1],\n src_dat.iloc[frame*label_num + segment*2+1, 2]] )\n xy0.append(xy00_LR)\n xy.append(xy0)\n xy = np.array(xy)\n xy_all.append(xy)\n\nprint(\"file:\", len(xy_all))\nprint(\"frames:\", len(xy_all[0]))\nprint(\"segments:\", len(xy_all[0][0]))\nprint(\"LR:\", len(xy_all[0][0][0]))\nprint(\"xy:\", len(xy_all[0][0][0][0]))\nprint(\"shape of xy_all[0]:\", xy_all[0].shape)",
"file: 11\nframes: 56\nsegments: 10\nLR: 2\nxy: 2\nshape of xy_all[0]: (56, 10, 2, 2)\n"
],
[
"import matplotlib.pyplot as plt\n\nfile = 0\nseg = 0 # 0: A9, 9: T3\nLR = 0 # 0: right, 1: left\nplt.plot(xy_all[0][:,seg,LR,0], xy_all[0][:,seg,LR,1])\nplt.plot(xy_all[0][:,seg,LR+1,0], xy_all[0][:,seg,LR+1,1])\nplt.plot(xy_all[0][:,seg+9,LR,0], xy_all[0][:,seg+9,LR,1])\nplt.plot(xy_all[0][:,seg+9,LR+1,0], xy_all[0][:,seg+9,LR+1,1])\nplt.show()\n\nframe = 0\nprint(\"seg0_Right\")\nprint(\"x:\", xy_all[0][frame,seg,LR,0])\nprint(\"y:\", xy_all[0][frame,seg,LR,1])\nprint(\"seg0_Left\")\nprint(\"x:\", xy_all[0][frame,seg,LR+1,0])\nprint(\"y:\", xy_all[0][frame,seg,LR+1,1])\nseg0_mid_x = (xy_all[0][frame,seg,LR,0] + xy_all[0][frame,seg,LR+1,0])/2\nseg0_mid_y = (xy_all[0][frame,seg,LR,1] + xy_all[0][frame,seg,LR+1,1])/2\n\nprint(\"seg9_Right\")\nprint(\"x:\", xy_all[0][frame,seg+9,LR,0])\nprint(\"y:\", xy_all[0][frame,seg+9,LR,1])\nprint(\"seg9_Left\")\nprint(\"x:\", xy_all[0][frame,seg+9,LR+1,0])\nprint(\"y:\", xy_all[0][frame,seg+9,LR+1,1])\n\nseg9_mid_x = (xy_all[0][frame,seg+9,LR,0] + xy_all[0][frame,seg+9,LR+1,0])/2\nseg9_mid_y = (xy_all[0][frame,seg+9,LR,1] + xy_all[0][frame,seg+9,LR+1,1])/2\n\n\nmm_per_pixel = 0.011\nv0 = np.array([seg0_mid_x, seg0_mid_y])\nv1 = np.array([seg9_mid_x, seg9_mid_y])\nprint(v0)\nprint(v1)\nd = np.linalg.norm(v0-v1)\nprint(\"Distance between seg0_mid and seg9_mid, pixel:\", d, \"mm:\", d*mm_per_pixel)",
"_____no_output_____"
],
[
"xy_all_mid = []\nfor i in range(len(xy_all)):\n xy_mid0 = []\n for frame in range(len(xy_all[i])):\n xy_mid00 = []\n for seg in range(len(xy_all[i][0])):\n midx = (xy_all[i][frame,seg,0,0] + xy_all[i][frame,seg,1,0])/2\n midy = (xy_all[i][frame,seg,0,1] + xy_all[i][frame,seg,1,1])/2\n xy_mid00.append([midx, midy])\n xy_mid0.append(xy_mid00)\n xy_mid0 = np.array(xy_mid0)\n xy_all_mid.append(xy_mid0)\nprint(\"file:\", len(xy_all_mid))\nprint(\"xy_all_mid[0].shape (frame, seg, xy):\", xy_all_mid[0].shape)",
"file: 11\nxy_all_mid[0].shape (frame, seg, xy): (56, 10, 2)\n"
],
[
"initial_disp_all = []\nfor file_id in range(len(xy_all_mid)):\n initial_disp = []\n dat = xy_all_mid[file_id]\n for seg in range(10):\n v0 = dat[0,0,:]\n v1 = dat[0,seg,:]\n initial_disp.append(np.linalg.norm(v0-v1)*mm_per_pixel)\n initial_disp_all.append(initial_disp)\ninitial_disp_all = np.array(initial_disp_all)\nprint(initial_disp_all[:,-1])",
"[2.75021475 2.80903219 2.80012268 3.14081984 3.20833499 3.28546156\n 3.16081372 3.2977763 3.55264693 3.11243426 3.33153275]\n"
],
[
"i = 0\nfor elm in range(10):\n plt.plot(xy_all_mid[i][:,elm,0], xy_all_mid[i][:,elm,1])\nplt.title(src_name[i])\nplt.xlabel(\"x axis (pixel)\")\nplt.ylabel(\"y axis (pixel)\")\nplt.show()",
"_____no_output_____"
],
[
"for i in range(len(xy_all_mid)):\n for elm in range(10):\n plt.plot(xy_all_mid[i][:,elm,0], xy_all_mid[i][:,elm,1])\n plt.title(src_name[i])\n plt.xlabel(\"x axis (pixel)\")\n plt.ylabel(\"y axis (pixel)\")\n plt.savefig(src_path + \"img/201102_midpoint_plot_\" + src_name[i] + \".png\")\n plt.close()",
"_____no_output_____"
],
[
"print(\"file:\", len(xy_all_mid))\nprint(\"xy_all_mid[0].shape (frame, seg, xy):\", xy_all_mid[0].shape)",
"file: 11\nxy_all_mid[0].shape (frame, seg, xy): (56, 10, 2)\n"
],
[
"# constants\nmm_per_pixel = 0.011\nsec_per_frame = 0.03333\n\ninitial_disp_all = []\ndisp_rel_all = []\ndisp_abs_all = []\nseg_len_all = []\nbody_len_all = []\n\nfor file_id in range(len(xy_all_mid)):\n\n # initial position\n\n initial_disp = []\n dat = xy_all_mid[file_id]\n for seg in range(10):\n v0 = dat[0,0,:]\n v1 = dat[0,seg,:]\n initial_disp.append(np.linalg.norm(v0-v1)*mm_per_pixel)\n initial_disp_all.append(initial_disp)\n\n # displacement_rel\n\n disp_rel = []\n dat = xy_all_mid[file_id]\n for seg in range(10):\n disp_seg = []\n for frame in range(len(dat)):\n t = frame * sec_per_frame\n v0 = dat[0,seg,:]\n v1 = dat[frame,seg,:]\n disp_seg.append([t, np.linalg.norm(v0-v1)*mm_per_pixel])\n disp_rel.append(disp_seg)\n disp_rel = np.array(disp_rel)\n disp_rel_all.append(disp_rel)\n\n # displacement_abs\n\n disp_abs = []\n for seg in range(10):\n disp_abs0 = []\n for frame in range(len(disp_rel[0])):\n t = disp_rel[seg,frame,0]\n disp_abs00 = disp_rel[seg,frame,1] + initial_disp[seg]\n disp_abs0.append([t, disp_abs00])\n disp_abs.append(disp_abs0)\n disp_abs = np.array(disp_abs)\n disp_abs_all.append(disp_abs)\n\n # segment length\n\n seg_len = []\n dat = xy_all_mid[file_id]\n\n for seg in range(9):\n seg_len0 = []\n for frame in range(len(dat)):\n t = frame * sec_per_frame\n v0 = dat[frame,seg,:]\n v1 = dat[frame,seg+1,:]\n seg_len0.append([t, np.linalg.norm(v0-v1)*mm_per_pixel])\n seg_len.append(seg_len0)\n seg_len = np.array(seg_len)\n seg_len_all.append(seg_len)\n\n # body length\n \n body_len = []\n dat = xy_all_mid[file_id]\n \n for frame in range(len(dat)):\n t = frame * sec_per_frame\n v0 = dat[frame,0,:] # posterior end\n v1 = dat[frame,9,:] # anterior end\n body_len.append([t, np.linalg.norm(v0-v1)*mm_per_pixel])\n body_len_all.append(np.array(body_len))\n \nprint(\"len(initial_disp_all):\", len(initial_disp_all))\nprint(\"len(initial_disp_all[0]) (seg number):\", len(initial_disp_all[0]))\nprint(\"len(disp_rel_all):\", len(disp_rel_all))\nprint(\"disp_rel_all[0].shape:\", disp_rel_all[0].shape)\nprint(\"len(disp_abs_all):\", len(disp_abs_all))\nprint(\"disp_abs_all[0].shape:\", disp_abs_all[0].shape)\nprint(\"len(seg_len_all):\", len(seg_len_all))\nprint(\"seg_len_all[0].shape:\", seg_len_all[0].shape)\nprint(\"len(body_len_all):\", len(body_len_all))\nprint(\"body_len_all[0].shape:\", body_len_all[0].shape)",
"len(initial_disp_all): 11\nlen(initial_disp_all[0]) (seg number): 10\nlen(disp_rel_all): 11\ndisp_rel_all[0].shape: (10, 56, 2)\nlen(disp_abs_all): 11\ndisp_abs_all[0].shape: (10, 56, 2)\nlen(seg_len_all): 11\nseg_len_all[0].shape: (9, 56, 2)\nlen(body_len_all): 11\nbody_len_all[0].shape: (56, 2)\n"
],
[
"print(initial_disp_all)",
"[[0.0, 0.22359935947862222, 0.5773429952480588, 0.9692658764948082, 1.3795665381046325, 1.7733360203258148, 2.1273042848859793, 2.3803578558260936, 2.546249111003134, 2.750214751792348], [0.0, 0.2230786738804274, 0.5464283827248082, 0.9268210390799565, 1.3312578773480555, 1.7202411243640383, 2.1074577385096593, 2.42506915739165, 2.623275426483559, 2.8090321894692662], [0.0, 0.2510038905325968, 0.5944321516853291, 1.0063354599263852, 1.4072837055315104, 1.807004543182031, 2.2049807106803674, 2.489752172930169, 2.6564115732974245, 2.800122679028769], [0.0, 0.29943462071210025, 0.7278943907863629, 1.1849826755959345, 1.6541930119309687, 2.038126648730765, 2.2950620983592693, 2.5509808715457862, 2.802999655710646, 3.1408198398734206], [0.0, 0.22981243049332686, 0.45615613831724283, 0.8314563807067682, 1.2346817454041539, 1.6895927991383, 2.106328129167972, 2.532776779549522, 2.8908122880056553, 3.2083349945680926], [0.0, 0.25606319851991655, 0.6487714396010359, 1.0714155323193943, 1.5418599746115893, 1.9995290189142674, 2.452327962892096, 2.846303597367338, 3.0958509833792873, 3.2854615645671843], [0.0, 0.26565060511318317, 0.5510066211083589, 0.9230783477837894, 1.3348086849560126, 1.7652611448605136, 2.228134049608832, 2.65316613052957, 2.9588084138158135, 3.1608137226646247], [0.0, 0.2859413396572802, 0.6834274857349519, 1.13041986098551, 1.6162026424809786, 2.070655749183094, 2.5119340703442536, 2.7785623427352224, 3.0524486266442707, 3.297776300647832], [0.0, 0.3074323446406541, 0.751959246324227, 1.2693953987221636, 1.7555378019018135, 2.21977542032167, 2.6113257772560665, 2.921801015475557, 3.2216487344665623, 3.552646931395519], [0.0, 0.19064510813616475, 0.4612057327923187, 0.7499719623175919, 1.1406365728544952, 1.5858914382386007, 1.9920922123522546, 2.4218969002701676, 2.7830648276416854, 3.1124342595268177], [0.0, 0.24828913966432375, 0.585909858202821, 0.9802183178583226, 1.4012372340318575, 1.860044966546911, 2.277007696280471, 2.6890891500894867, 3.069943019966413, 3.3315327546660103]]\n"
],
[
"for file_id in range(11):\n for seg in range(10):\n plt.plot(disp_abs_all[file_id][seg,:,0], disp_abs_all[file_id][seg,:,1])\n plt.title(\"Displacement of file {0}\".format(src_name[file_id]))\n plt.xlabel(\"Time (sec)\")\n plt.ylabel(\"Displacement (mm)\")\n plt.xlim([0,4.2])\n plt.ylim([0,6.2])\n plt.xticks([0,1,2,3,4])\n plt.savefig(src_path + \"img/201102_displacement_plot_\" + src_name[file_id] + \".png\")\n plt.close()",
"_____no_output_____"
],
[
"file_id = 0\nfor seg in range(10):\n plt.plot(disp_abs_all[file_id][seg,:,0], disp_abs_all[file_id][seg,:,1])\nplt.title(\"Displacement of file {0}\".format(src_name[file_id]))\nplt.xlabel(\"Time (sec)\")\nplt.ylabel(\"Displacement (mm)\")\nplt.xlim([0,4.2])\nplt.ylim([0,6.2])\nplt.xticks([0,1,2,3,4])\nplt.show()",
"_____no_output_____"
],
[
"for file_id in range(11):\n plt.figure(figsize = (10,6))\n for seg in range(9):\n plt.plot(seg_len_all[file_id][seg,:,0], seg_len_all[file_id][seg,:,1])\n plt.title(\"Segment length of file {0}\".format(src_name[file_id]))\n plt.xlabel(\"Time (sec)\")\n plt.ylabel(\"Segment length (mm)\")\n plt.xlim([0,4.2])\n plt.ylim([0,0.6])\n plt.xticks([0,1,2,3,4])\n plt.savefig(src_path + \"img/201102_segment_length_plot_\" + src_name[file_id] + \".png\")\n plt.close()",
"_____no_output_____"
],
[
"file_id = 0\nplt.figure(figsize = (10,6))\nfor seg in range(9):\n plt.plot(seg_len_all[file_id][seg,:,0], seg_len_all[file_id][seg,:,1])\nplt.title(\"Segment length of file {0}\".format(src_name[file_id]))\nplt.xlabel(\"Time (sec)\")\nplt.ylabel(\"Segment length (mm)\")\nplt.xlim([0,4.2])\nplt.ylim([0,0.6])\nplt.xticks([0,1,2,3,4])\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nfor file_id in range(len(body_len_all)):\n plt.figure(figsize = (10,6))\n plt.plot(body_len_all[file_id][:,0], body_len_all[file_id][:,1])\n plt.title(\"Body length of file {0}\".format(src_name[file_id]))\n plt.xlabel(\"Time (sec)\")\n plt.ylabel(\"Segment length (mm)\")\n plt.xlim([0,4.2])\n plt.ylim([2,4])\n plt.xticks([0,1,2,3,4])\n plt.savefig(src_path + \"img/201104_body_length_plot_\" + src_name[file_id] + \".png\")\n plt.close()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nfile_id = 0\nplt.figure(figsize = (10,6))\nplt.plot(body_len_all[file_id][:,0], body_len_all[file_id][:,1])\nplt.title(\"Body length of file {0}\".format(src_name[file_id]))\nplt.xlabel(\"Time (sec)\")\nplt.ylabel(\"Segment length (mm)\")\nplt.xlim([0,4.2])\nplt.ylim([2,4])\nplt.xticks([0,1,2,3,4])\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Parameter extraction",
"_____no_output_____"
]
],
[
[
"# Stride length and stride duration\n\nprint(\"len(disp_abs_all):\", len(disp_abs_all))\nprint(\"disp_abs_all[0].shape:\", disp_abs_all[0].shape)",
"len(disp_abs_all): 11\ndisp_abs_all[0].shape: (10, 56, 2)\n"
],
[
"import copy\nfrom scipy import signal\n\ndisp_abs_all_savgol = copy.deepcopy(disp_abs_all)\nfile_id = 0\nseg = 0\ndisp_abs_all_savgol[file_id][seg][:,1] = signal.savgol_filter(disp_abs_all[file_id][seg][:,1], 11,2)\n\nplt.figure()\nplt.plot(disp_abs_all[file_id][seg,:,0], disp_abs_all[file_id][seg,:,1], color='g')\nplt.plot(disp_abs_all_savgol[file_id][seg,:,0], disp_abs_all_savgol[file_id][seg,:,1], color='m')\nplt.show()",
"_____no_output_____"
],
[
"import copy\nfrom scipy import signal\n\ndisp_abs_all_savgol = copy.deepcopy(disp_abs_all)\n\nfor file_id in range(len(disp_abs_all)):\n savgol0 = []\n for seg in range(len(disp_abs_all[0])):\n disp_abs_all_savgol[file_id][seg][:,1] = signal.savgol_filter(disp_abs_all[file_id][seg][:,1], 11,2)\nplt.figure()\nplt.plot(disp_abs_all[file_id][seg,:,0], disp_abs_all[file_id][seg,:,1], color='g')\nplt.plot(disp_abs_all_savgol[file_id][seg,:,0], disp_abs_all_savgol[file_id][seg,:,1], color='m')\nplt.show()",
"_____no_output_____"
],
[
"import peakutils\nfrom scipy.signal import argrelmax\n\nxmin = 0\nxmax = 6\nbins = 120\nwidth = (xmax-xmin)/bins\n\nstride_all = []\n\nfor file_id in range(len(disp_abs_all)):\n stride_seg = []\n for seg in range(10):\n stride_seg0 = []\n hist_dat = np.histogram(disp_abs_all_savgol[file_id][seg,:,1], bins=120,range=(0,6))\n #peaks = hist_dat[1][argrelmax(hist_dat[0], order=4)]\n peaks_id = peakutils.indexes(hist_dat[0], thres=0.2, min_dist=5)\n peaks_id = np.sort(peaks_id)\n peaks = hist_dat[1][peaks_id]\n for peak_id in range(len(peaks)):\n dat0 = disp_abs_all[file_id][seg]\n disp_peak = [dat0[i,1] for i in range(len(dat0)) \n if dat0[i,1] > peaks[peak_id] and dat0[i,1] < peaks[peak_id] + width]\n time_peak = [dat0[i,0] for i in range(len(dat0)) \n if dat0[i,1] > peaks[peak_id] and dat0[i,1] < peaks[peak_id] + width]\n disp_peak_med = np.median(disp_peak)\n time_peak_med = np.median(time_peak)\n stride_seg0.append([time_peak_med, disp_peak_med])\n stride_seg.append(np.array(stride_seg0))\n\n stride_all.append(stride_seg)\n\n plt.figure()\n for seg in range(10):\n plt.plot(disp_abs_all[file_id][seg,:,0], disp_abs_all[file_id][seg,:,1])\n plt.plot(stride_all[file_id][seg][:,0], stride_all[file_id][seg][:,1], 'o')\n\n plt.title(\"Displacement of file {0}\".format(src_name[file_id]))\n plt.xlabel(\"Time (sec)\")\n plt.ylabel(\"Displacement (mm)\")\n plt.xlim([0,4.2])\n plt.ylim([0,6.2])\n plt.xticks([0,1,2,3,4])\n plt.savefig(src_path + \"img/201102_stride_length_detection_\" + src_name[file_id] + \".png\")\n plt.close()\n",
"_____no_output_____"
],
[
"import pickle\n\nwith open(src_path + \"pickle/initial_disp_all_201102.pickle\", \"wb\") as f1:\n pickle.dump(initial_disp_all, f1)\nwith open(src_path + \"pickle/disp_rel_all_201102.pickle\", \"wb\") as f2:\n pickle.dump(disp_rel_all, f2)\nwith open(src_path + \"pickle/disp_abs_all_201102.pickle\", \"wb\") as f3:\n pickle.dump(disp_abs_all, f3)\nwith open(src_path + \"pickle/seg_len_all_201102.pickle\", \"wb\") as f4:\n pickle.dump(seg_len_all, f4)\nwith open(src_path + \"pickle/stride_all_201102.pickle\", \"wb\") as f5:\n pickle.dump(stride_all, f5)\nwith open(src_path + \"pickle/body_len_all_201104.pickle\", \"wb\") as f6:\n pickle.dump(body_len_all, f6)",
"_____no_output_____"
],
[
"print(\"len(initial_disp_all):\", len(initial_disp_all))\nprint(\"len(initial_disp_all[0]) (seg number):\", len(initial_disp_all[0]))\nprint(\"len(disp_rel_all):\", len(disp_rel_all))\nprint(\"disp_rel_all[0].shape:\", disp_rel_all[0].shape)\nprint(\"len(disp_abs_all):\", len(disp_abs_all))\nprint(\"disp_abs_all[0].shape:\", disp_abs_all[0].shape)\nprint(\"len(seg_len_all):\", len(seg_len_all))\nprint(\"seg_len_all[0].shape:\", seg_len_all[0].shape)\nprint(\"len(stride_all)(movie number):\", len(stride_all))\nprint(\"len(stride_all[0])(seg number):\", len(stride_all[0]))\nprint(\"len(stride_all[0][0])(peak number):\", len(stride_all[0][0]))\nprint(\"len(stride_all[0][0][0])(time, displacement):\", len(stride_all[0][0][0]))",
"len(initial_disp_all): 11\nlen(initial_disp_all[0]) (seg number): 10\nlen(disp_rel_all): 11\ndisp_rel_all[0].shape: (10, 56, 2)\nlen(disp_abs_all): 11\ndisp_abs_all[0].shape: (10, 56, 2)\nlen(seg_len_all): 11\nseg_len_all[0].shape: (9, 56, 2)\nlen(stride_all)(movie number): 11\nlen(stride_all[0])(seg number): 10\nlen(stride_all[0][0])(peak number): 2\nlen(stride_all[0][0][0])(time, displacement): 2\n"
],
[
"import pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nwith open(src_path + \"pickle/stride_all_201102.pickle\", \"rb\") as f5:\n stride_all = pickle.load(f5)",
"_____no_output_____"
],
[
"import numpy as np\n\nstride_length_all = []\nfor mov_id in range(len(stride_all)):\n dst1 = []\n for seg_id in range(10):\n dat_stride = stride_all[mov_id][seg_id]\n dst0 = []\n for i in range(len(dat_stride)-1):\n dst0.append(dat_stride[i+1,1]-dat_stride[i,1])\n dst1.append(np.median(dst0))\n stride_length_all.append(dst1)\nprint(stride_length_all)",
"[[0.6911924962899594, 0.6919708751678539, 0.7220980959696078, 0.7489331914225484, 0.7262781951747381, 0.7121084927849997, 0.725317340142472, 0.7690803304916778, 0.7601284630539773, 0.7423975493033774], [0.7243578673624755, 0.7040242359907471, 0.5588638051751833, 0.54303724894798, 0.7365530152809071, 0.7524941406039366, 0.7562378468986575, 0.7589913040482938, 0.7683735130055431, 0.7647331529866126], [0.7638177303457273, 0.7812880756887456, 0.5513965700671024, 0.6797462177366717, 0.6827611771711533, 0.6789696612975935, 0.6757636632697168, 0.6493251315836592, 0.6495614323120988, 0.6278338521645455], [0.6298624571029399, 0.600858184125618, 0.6927980778262661, 0.6928475389645223, 0.6891729905099142, 0.6792435969835195, 0.681014412675073, 0.6763933574184153, 0.6478991250733683, 0.6495949721786616], [0.7379154009077686, 0.7524459817038709, 0.7548183390025156, 0.7691113269483887, 0.7689950430796604, 0.7430074576562662, 0.7509369807907023, 0.732285491104697, 0.7506894130006461, 0.7625829108119708], [0.7603830898190878, 0.7667671153576608, 0.7187741797084246, 0.7857775128769102, 0.7882727288232798, 0.7852771109519416, 0.796096196480077, 0.8006233181009828, 0.8195487200807152, 0.8178436495493444], [0.7253713434879722, 0.7945692655522432, 0.7880333814996487, 0.801664549405158, 0.8369261784335862, 0.8081924117110076, 0.7924753463757088, 0.7809756418657277, 0.8186456895822154, 0.8098585299219376], [0.6081258761845001, 0.6151965914083279, 0.6080816488482957, 0.6216909724532476, 0.6276371081888796, 0.6358670684503762, 0.6875134740262134, 0.6955600396725974, 0.6942405790276844, 0.6846507943929219], [0.7026844670264516, 0.7074859762843387, 0.7102631460203871, 0.6841058698061984, 0.6766354421822676, 0.6739600196524751, 0.6908387407209411, 0.703364531602684, 0.7052506083845922, 0.7048100565828375], [0.7214719848109008, 0.7647329927288214, 0.7879532568999966, 0.7536681970979773, 0.6975898630697808, 0.7019558763566905, 0.7022466106222438, 0.6780463019201275, 0.6916829823067783, 0.7107180566786317], [0.7737044865397655, 0.6286317130759453, 0.6846325664250807, 0.6952161963036537, 0.697868489553797, 0.6974817248438289, 0.7214770222534888, 0.6279836089923214, 0.690840797612541, 0.7488504009560595]]\n"
],
[
"import pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nwith open(src_path + \"pickle/stride_length_all_201104.pickle\", \"wb\") as f7:\n pickle.dump(stride_length_all, f7)",
"_____no_output_____"
],
[
"import numpy as np\nimport pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nwith open(src_path + \"pickle/stride_length_all_201104.pickle\", \"rb\") as f6:\n stride_length_all = np.array(pickle.load(f6))",
"_____no_output_____"
],
[
"print(\"stride_length_all.shape\", stride_length_all.shape)",
"stride_length_all.shape (11, 10)\n"
],
[
"stride_len_med = []\nfor i in range(len(stride_length_all)):\n stride_len_med.append(np.median(stride_length_all[i]))\n print(\"median stride length of movie{0}: {1:3f}\".format(i, np.median(stride_length_all[i])))",
"median stride length of movie0: 0.725798\nmedian stride length of movie1: 0.744524\nmedian stride length of movie2: 0.677367\nmedian stride length of movie3: 0.677818\nmedian stride length of movie4: 0.751691\nmedian stride length of movie5: 0.787025\nmedian stride length of movie6: 0.798117\nmedian stride length of movie7: 0.631752\nmedian stride length of movie8: 0.703024\nmedian stride length of movie9: 0.706482\nmedian stride length of movie10: 0.696349\n"
],
[
"with open(src_path + \"pickle/body_len_all_201104.pickle\", \"rb\") as f6:\n body_len_all = pickle.load(f6)",
"_____no_output_____"
],
[
"body_len_max = []\n\nfor file_id in range(len(body_len_all)):\n body_len_max.append(body_len_all[file_id][:,1].max())\n \nprint(\"body_len_max:\", body_len_max)\nprint(\"stride_length_med:\", stride_len_med)\n",
"body_len_max: [2.9060666772957906, 2.865987481121035, 2.89964021516403, 3.5829399300064315, 3.4107561719472486, 3.4504843320582195, 3.251682577404581, 3.4507534626365293, 3.7120708042955277, 3.2801522383540753, 3.3908765664837315]\nstride_length_med: [0.725797767658605, 0.7445235779424219, 0.6773666622836552, 0.6778184772009674, 0.7516914812472866, 0.787025120850095, 0.7981169074787006, 0.6317520883196279, 0.7030244993145678, 0.7064823336504378, 0.6963489605737413]\n"
],
[
"import matplotlib.pyplot as plt\nfrom scipy import stats\n\nplt.plot(body_len_max, stride_len_med, 'go')\nplt.xlim([2,5])\nplt.xlabel(\"Body length (mm)\")\nplt.ylim([0.5,1.0])\nplt.ylabel(\"Stride length (mm)\")\nplt.show()\n\nprint(\"Body length average (mm):{0:4.2f}±{1:4.2f}\".format(np.mean(body_len_max), stats.sem(body_len_max)))\nprint(\"Stride length average (mm):{0:4.2f}±{1:4.2f}\".format(np.mean(stride_len_med), stats.sem(stride_len_med)))",
"_____no_output_____"
],
[
"print(\"len(seg_len_all):\", len(seg_len_all))\nprint(\"seg_len_all[0].shape: (seg, frame, time/length)\", seg_len_all[0].shape)",
"len(seg_len_all): 11\nseg_len_all[0].shape: (seg, frame, time/length) (9, 56, 2)\n"
],
[
"import copy\nimport matplotlib.pyplot as plt\nimport peakutils\nfrom scipy import signal\n\nseg_len_savgol = []\nseg_len_peaks = []\n\nfor file_id in range(len(seg_len_all)):\n seg_len_savgol0 = []\n seg_len_peaks0 = []\n for seg in range(len(seg_len_all[file_id])):\n dat = seg_len_all[file_id][seg]\n dat_savgol = copy.deepcopy(dat)\n dat_savgol[:,1] = signal.savgol_filter(dat[:,1],11,2)\n peaks_id_p = peakutils.indexes(dat_savgol[:,1], thres=0.2, min_dist=20)\n peaks_id_n = peakutils.indexes(-dat_savgol[:,1], thres=0.2, min_dist=20)\n seg_len_savgol0.append(dat_savgol)\n seg_len_peaks0.append([peaks_id_p, peaks_id_n])\n seg_len_savgol.append(seg_len_savgol0)\n seg_len_peaks.append(seg_len_peaks0)\n \nfile_id = 0\nseg = 0\ndat_src = seg_len_all[file_id][seg]\ndat_sav = seg_len_savgol[file_id][seg]\ndat_peaks = seg_len_peaks[file_id][seg]\nplt.plot(dat_src[:,0], dat_src[:,1])\nplt.plot(dat_sav[:,0], dat_sav[:,1])\nplt.plot(dat_sav[dat_peaks[0],0], dat_sav[dat_peaks[0],1], 'go')\nplt.plot(dat_sav[dat_peaks[1],0], dat_sav[dat_peaks[1],1], 'mo')\nplt.savefig(src_path + \"img/201104_segment_length_{0}_seg{1}.png\".format(src_name[file_id], seg))\nplt.show()",
"_____no_output_____"
],
[
"seg_len_range_all = []\n\nfor file_id in range(len(seg_len_all)):\n dst = []\n for seg in range(len(seg_len_all[file_id])):\n dat_src = seg_len_all[file_id][seg]\n dat_sav = seg_len_savgol[file_id][seg]\n dat_peaks = seg_len_peaks[file_id][seg]\n\n dst_p = [dat_sav[dat_peaks[0],0], dat_sav[dat_peaks[0],1]]\n dst_n = [dat_sav[dat_peaks[1],0], dat_sav[dat_peaks[1],1]]\n dst.append([dst_p, dst_n])\n \n plt.plot(dat_src[:,0], dat_src[:,1])\n plt.plot(dat_sav[:,0], dat_sav[:,1])\n plt.plot(dat_sav[dat_peaks[0],0], dat_sav[dat_peaks[0],1], 'go')\n plt.plot(dat_sav[dat_peaks[1],0], dat_sav[dat_peaks[1],1], 'mo')\n plt.savefig(src_path + \"img/201104_segment_length_{0}_seg{1}.png\".format(src_name[file_id], seg))\n plt.close()\n seg_len_range_all.append(dst)",
"_____no_output_____"
],
[
"import pickle\n\nwith open(src_path + \"pickle/seg_len_range_all_201104.pickle\", \"wb\") as f:\n pickle.dump(seg_len_range_all, f)",
"_____no_output_____"
],
[
"import pickle\n\nwith open(src_path + \"pickle/seg_len_range_all_201104.pickle\", \"rb\") as f:\n seg_len_range_all = pickle.load(f)",
"_____no_output_____"
],
[
"print(\"len(seg_len_range_all) (file_id):\", len(seg_len_range_all))\nprint(\"len(seg_len_range_all[0])(seg):\", len(seg_len_range_all[0]))\nprint(\"len(seg_len_range_all[0][0])(peak/valley)\", len(seg_len_range_all[0][0]))\nprint(\"len(seg_len_range_all[0][0][0])(time/length)\", len(seg_len_range_all[0][0][0]))\n\nfile_id = 0\nseg_id = 0\npeak = 0\nvalley = 1\nprint(\"seg_len_range_all[file_id][seg][peak]:(time/length)\", seg_len_range_all[file_id][seg_id][peak])\nprint(\"seg_len_range_all[file_id][seg][valley]:(time/length)\", seg_len_range_all[file_id][seg_id][valley])",
"len(seg_len_range_all) (file_id): 11\nlen(seg_len_range_all[0])(seg): 9\nlen(seg_len_range_all[0][0])(peak/valley) 2\nlen(seg_len_range_all[0][0][0])(time/length) 2\nseg_len_range_all[file_id][seg][peak]:(time/length) [array([0.69993, 1.69983]), array([0.235319 , 0.24612954])]\nseg_len_range_all[file_id][seg][valley]:(time/length) [array([0.29997, 1.36653]), array([0.13288642, 0.13586834])]\n"
],
[
"import numpy as np\nimport peakutils\n\n# signal:\n\nseg0 = 0\nseg1 = 4\n\nsig0 = seg_len_savgol[0][seg0][:,1]\nsig1 = seg_len_savgol[0][seg1][:,1]\n\n# centralization\nsig0 = sig0 - sig0.mean()\nsig1 = sig1 - sig1.mean()\ncorr = np.correlate(sig1, sig0, \"full\")\npeaks_id = peakutils.indexes(corr[len(corr)-len(sig0):], thres=0.2, min_dist=20)\nestimated_delay = peaks_id[0]\nprint(\"estimated delay is {}\".format(estimated_delay))\nprint(peaks_id)\n\nfig, ax = plt.subplots(2,1, figsize = (10,8))\nax[0].plot(sig0, label=\"sig0\")\nax[0].plot(sig1, label=\"sig1\")\nax[0].legend()\nax[1].set_ylabel(\"corr\")\nax[1].plot(np.arange(len(corr))-len(sig0)+1, corr)\nax[1].plot(peaks_id, corr[peaks_id+len(sig0)-1], 'ro')\nax[1].set_xlim([0, len(sig1)])\nplt.show()\nprint(len(corr))",
"estimated delay is 10\n[10 42]\n"
],
[
"import numpy as np\nimport peakutils\n\nfig_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/img/correlation/\"\n\n# segmental delay\nseg_len_delay_all = []\n\nfor file_id in range(len(seg_len_savgol)):\n dst0 = []\n for seg_id in range(len(seg_len_savgol[file_id])-1):\n\n sig0 = seg_len_savgol[file_id][seg_id][:,1]\n sig1 = seg_len_savgol[file_id][seg_id+1][:,1]\n\n # centralization\n sig0 = sig0 - sig0.mean()\n sig1 = sig1 - sig1.mean()\n corr = np.correlate(sig1, sig0, \"full\")\n t_margin = 2\n peaks_id = peakutils.indexes(corr[len(corr)-len(sig0)-t_margin:], thres=0.2, min_dist=20)\n peaks_id = peaks_id - t_margin\n estimated_delay = peaks_id[0]\n dst0.append(estimated_delay)\n \n fig, ax = plt.subplots(2,1, figsize = (10,8))\n ax[0].plot(sig0, label=\"sig0\")\n ax[0].plot(sig1, label=\"sig1\")\n ax[0].legend()\n ax[1].set_ylabel(\"corr\")\n ax[1].plot(np.arange(len(corr))-len(sig0)+1, corr)\n ax[1].plot(peaks_id, corr[peaks_id+len(sig0)-1], 'ro')\n ax[1].set_xlim([0, len(sig1)])\n plt.savefig(fig_path + \"intersegmental_corr_{0}_seg{1}.png\".format(src_name[file_id], seg_id))\n plt.close()\n seg_len_delay_all.append(dst0)\n \n# stride duration\nstride_duration_all = []\n\nfor file_id in range(len(seg_len_savgol)):\n dst0 = []\n for seg_id in range(len(seg_len_savgol[file_id])):\n\n sig0 = seg_len_savgol[file_id][seg_id][:,1]\n sig1 = seg_len_savgol[file_id][seg_id][:,1]\n\n # centralization\n sig0 = sig0 - sig0.mean()\n sig1 = sig1 - sig1.mean()\n corr = np.correlate(sig1, sig0, \"full\")\n peaks_id = peakutils.indexes(corr[len(corr)-len(sig0):], thres=0.2, min_dist=20)\n estimated_delay = peaks_id[0]\n dst0.append(estimated_delay)\n \n fig, ax = plt.subplots(2,1, figsize = (10,8))\n ax[0].plot(sig0, label=\"sig0\")\n ax[0].plot(sig1, label=\"sig1\")\n ax[0].legend()\n ax[1].set_ylabel(\"corr\")\n ax[1].plot(np.arange(len(corr))-len(sig0)+1, corr)\n ax[1].plot(peaks_id, corr[peaks_id+len(sig0)-1], 'ro')\n ax[1].set_xlim([0, len(sig1)])\n plt.savefig(fig_path + \"auto_corr_{0}_seg{1}.png\".format(src_name[file_id], seg_id))\n plt.close()\n stride_duration_all.append(dst0)",
"_____no_output_____"
],
[
"import pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nwith open(src_path + \"pickle/seg_len_delay_all_201104.pickle\", \"wb\") as f8:\n pickle.dump(seg_len_delay_all, f8)\nwith open(src_path + \"pickle/stride_duration_all_201104.pickle\", \"wb\") as f9:\n pickle.dump(stride_duration_all, f9)",
"_____no_output_____"
],
[
"import pickle\n\nwith open(src_path + \"pickle/seg_len_delay_all_201104.pickle\", \"rb\") as f8:\n seg_len_delay_all = pickle.load(f8)\nwith open(src_path + \"pickle/stride_duration_all_201104.pickle\", \"rb\") as f9:\n stride_duration_all = pickle.load(f9)",
"_____no_output_____"
],
[
"print(\"From cross-correlation\")\nprint(\"len(seg_len_delay_all):\", len(seg_len_delay_all))\nprint(\"len(seg_len_delay_all[0])(seg):\", len(seg_len_delay_all[0]))\nprint(\"seg_len_delay_all[0]:\", seg_len_delay_all[0])\n\nprint(\"From auto-correlation\")\nprint(\"len(stride_duration_all):\", len(stride_duration_all))\nprint(\"len(stride_duration_all[0])(seg):\", len(stride_duration_all[0]))\nprint(\"stride_duration_all[0]:\", stride_duration_all[0])",
"From cross-correlation\nlen(seg_len_delay_all): 11\nlen(seg_len_delay_all[0])(seg): 8\nseg_len_delay_all[0]: [1, 3, 3, 2, 3, 3, 2, 2]\nFrom auto-correlation\nlen(stride_duration_all): 11\nlen(stride_duration_all[0])(seg): 9\nstride_duration_all[0]: [31, 31, 31, 30, 31, 30, 29, 28, 30]\n"
],
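[
"# Editor's note: this cell is an added sketch, not part of the original analysis.\n# It converts the frame-based delays and durations printed above into seconds,\n# using the sec_per_frame constant defined earlier in the notebook.\nimport numpy as np\n\nsec_per_frame = 0.03333\nseg_len_delay_sec = np.array(seg_len_delay_all) * sec_per_frame\nstride_duration_sec = np.array(stride_duration_all) * sec_per_frame\nprint(\"Intersegmental delay (sec), file 0:\", seg_len_delay_sec[0])\nprint(\"Stride duration (sec), file 0:\", stride_duration_sec[0])",
"_____no_output_____"
],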
[
"# boundary stride duration 201119\n\nimport pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nwith open(src_path + \"pickle/disp_abs_all_201102.pickle\", \"rb\") as f:\n disp_abs_all = pickle.load(f)",
"_____no_output_____"
],
[
"import copy\nfrom scipy import signal\n\ndisp_abs_all_savgol = copy.deepcopy(disp_abs_all)\n\nfor file_id in range(len(disp_abs_all)):\n savgol0 = []\n for seg in range(len(disp_abs_all[0])):\n disp_abs_all_savgol[file_id][seg][:,1] = signal.savgol_filter(disp_abs_all[file_id][seg][:,1], 11,2)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nfile_id = 0\nseg = 0\n\nplt.figure()\nplt.plot(disp_abs_all[file_id][seg,:,0], disp_abs_all[file_id][seg,:,1], color='g')\nplt.plot(disp_abs_all_savgol[file_id][seg,:,0], disp_abs_all_savgol[file_id][seg,:,1], color='m')\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\n\ndiff = np.diff(disp_abs_all_savgol[file_id][seg,:,1])\n\nplt.plot(diff)\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\nimport peakutils\n\n# signal:\n\nsig0 = diff\nsig1 = diff\n\n# centralization\nsig0 = sig0 - sig0.mean()\nsig1 = sig1 - sig1.mean()\ncorr = np.correlate(sig1, sig0, \"full\")\npeaks_id = peakutils.indexes(corr[len(corr)-len(sig0):], thres=0.2, min_dist=20)\nestimated_delay = peaks_id[0]\nprint(\"estimated delay is {}\".format(estimated_delay))\nprint(peaks_id)\n\nfig, ax = plt.subplots(2,1, figsize = (10,8))\nax[0].plot(sig0, label=\"sig0\")\nax[0].plot(sig1, label=\"sig1\")\nax[0].legend()\nax[1].set_ylabel(\"corr\")\nax[1].plot(np.arange(len(corr))-len(sig0)+1, corr)\nax[1].plot(peaks_id, corr[peaks_id+len(sig0)-1], 'ro')\nax[1].set_xlim([0, len(sig1)])\nplt.show()\nprint(len(corr))",
"estimated delay is 32\n[32]\n"
],
[
"import copy\nfrom scipy import signal\n\ndisp_abs_all_savgol = copy.deepcopy(disp_abs_all)\n\nfor file_id in range(len(disp_abs_all)):\n savgol0 = []\n for seg in range(len(disp_abs_all[0])):\n disp_abs_all_savgol[file_id][seg][:,1] = signal.savgol_filter(disp_abs_all[file_id][seg][:,1], 11,2)",
"_____no_output_____"
],
[
"import numpy as np\n\ndiff = np.diff(disp_abs_all_savgol[file_id][seg,:,1])\n\nplt.plot(diff)\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\nimport peakutils\n\n# source: disp_abs_all_savgol\nfig_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/img/correlation/\"\nsrc_name = [\"Results1-54-109-20.csv\", \"Results2-125-215-20.csv\", \"Results3-1-74-20.csv\", \"Results4-248-370-20.csv\",\n \"Results5-1-100-20.csv\", \"Results6-380-485-20.csv\", \"Results7-250-310-20.csv\", \"Results8-1-105-20.csv\",\n \"Results9-464-555-20.csv\", \"Results10-665-733-20.csv\", \"Results11-249-315-20.csv\"]\n\n# bounday motion delay\nboundary_motion_delay_all = []\n\nfor file_id in range(len(disp_abs_all_savgol)):\n dst0 = []\n for seg_id in range(len(disp_abs_all_savgol[file_id])-1):\n\n sig0 = np.diff(disp_abs_all_savgol[file_id][seg_id][:,1])\n sig1 = np.diff(disp_abs_all_savgol[file_id][seg_id+1][:,1])\n\n # centralization\n sig0 = sig0 - sig0.mean()\n sig1 = sig1 - sig1.mean()\n corr = np.correlate(sig1, sig0, \"full\")\n t_margin = 2\n peaks_id = peakutils.indexes(corr[len(corr)-len(sig0)-t_margin:], thres=0.2, min_dist=20)\n peaks_id = peaks_id - t_margin\n estimated_delay = peaks_id[0]\n dst0.append(estimated_delay)\n \n fig, ax = plt.subplots(2,1, figsize = (10,8))\n ax[0].plot(sig0, label=\"sig0\")\n ax[0].plot(sig1, label=\"sig1\")\n ax[0].legend()\n ax[1].set_ylabel(\"corr\")\n ax[1].plot(np.arange(len(corr))-len(sig0)+1, corr)\n ax[1].plot(peaks_id, corr[peaks_id+len(sig0)-1], 'ro')\n ax[1].set_xlim([0, len(sig1)])\n plt.savefig(fig_path + \"201119_boundary_motion_interseg_corr_{0}_seg{1}.png\".format(src_name[file_id], seg_id))\n plt.close()\n boundary_motion_delay_all.append(dst0)\n \n# boundary stride duration\nboundary_stride_duration_all = []\n\nfor file_id in range(len(disp_abs_all_savgol)):\n dst0 = []\n for seg_id in range(len(disp_abs_all_savgol[file_id])):\n\n sig0 = np.diff(disp_abs_all_savgol[file_id][seg_id][:,1])\n sig1 = np.diff(disp_abs_all_savgol[file_id][seg_id][:,1])\n\n # centralization\n sig0 = sig0 - sig0.mean()\n sig1 = sig1 - sig1.mean()\n corr = np.correlate(sig1, sig0, \"full\")\n peaks_id = peakutils.indexes(corr[len(corr)-len(sig0):], thres=0.2, min_dist=20)\n estimated_delay = peaks_id[0]\n dst0.append(estimated_delay)\n \n fig, ax = plt.subplots(2,1, figsize = (10,8))\n ax[0].plot(sig0, label=\"sig0\")\n ax[0].plot(sig1, label=\"sig1\")\n ax[0].legend()\n ax[1].set_ylabel(\"corr\")\n ax[1].plot(np.arange(len(corr))-len(sig0)+1, corr)\n ax[1].plot(peaks_id, corr[peaks_id+len(sig0)-1], 'ro')\n ax[1].set_xlim([0, len(sig1)])\n plt.savefig(fig_path + \"201119_boundary_auto_corr_{0}_seg{1}.png\".format(src_name[file_id], seg_id))\n plt.close()\n boundary_stride_duration_all.append(dst0)",
"_____no_output_____"
],
[
"import pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nwith open(src_path + \"pickle/boundary_motion_delay_all_201119.pickle\", \"wb\") as f1:\n pickle.dump(boundary_motion_delay_all, f1)\nwith open(src_path + \"pickle/boundary_stride_duration_all_201119.pickle\", \"wb\") as f2:\n pickle.dump(boundary_stride_duration_all, f2)",
"_____no_output_____"
],
[
"boundary_stride_duration_all = np.array(boundary_stride_duration_all)\nprint(\"boundary_stride_duration_all\", boundary_stride_duration_all.shape)\nprint(boundary_stride_duration_all)",
"boundary_stride_duration_all (11, 10)\n[[32 32 31 31 31 30 29 28 29 29]\n [31 31 31 31 31 32 32 31 31 31]\n [35 36 34 33 33 33 33 33 32 32]\n [27 27 27 27 27 27 28 27 27 27]\n [30 30 31 30 31 31 31 30 31 30]\n [41 40 40 41 41 40 41 42 42 39]\n [25 25 25 25 24 24 24 24 25 25]\n [35 35 36 36 35 35 36 36 35 34]\n [34 35 35 34 34 35 35 35 35 35]\n [27 28 28 28 28 28 29 28 28 29]\n [30 33 33 33 33 33 33 33 34 33]]\n"
],
[
"boundary_motion_delay_all = np.array(boundary_motion_delay_all)\nprint(\"boundary_motion_delay_all\", boundary_motion_delay_all.shape)\nprint(boundary_motion_delay_all)",
"boundary_motion_delay_all (11, 9)\n[[1 4 3 3 2 2 2 2 2]\n [0 5 5 3 3 2 2 2 3]\n [0 5 3 3 3 3 2 3 2]\n [0 2 3 3 3 3 2 2 1]\n [1 3 3 3 3 2 2 2 2]\n [0 5 5 4 3 3 3 2 1]\n [0 3 2 2 2 2 2 1 1]\n [0 4 4 3 3 3 3 2 1]\n [1 3 4 4 4 3 2 1 1]\n [1 3 3 2 3 3 3 2 2]\n [1 3 2 2 3 2 2 2 1]]\n"
],
[
"# Calculate speed\n\nimport copy\nfrom scipy import signal\nimport pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nwith open(src_path + \"pickle/disp_abs_all_201102.pickle\", \"rb\") as f:\n disp_abs_all = pickle.load(f)\n\ndisp_abs_all_savgol = copy.deepcopy(disp_abs_all)\n\nfor file_id in range(len(disp_abs_all)):\n savgol0 = []\n for seg in range(len(disp_abs_all[0])):\n disp_abs_all_savgol[file_id][seg][:,1] = signal.savgol_filter(disp_abs_all[file_id][seg][:,1], 11,2)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nfile_id = 0\nseg = 0\n\nplt.figure()\nplt.plot(disp_abs_all[file_id][seg,:,0], disp_abs_all[file_id][seg,:,1], color='g')\nplt.plot(disp_abs_all_savgol[file_id][seg,:,0], disp_abs_all_savgol[file_id][seg,:,1], color='m')\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\n\nlr = LinearRegression()\n\nfile_id = 0\nseg = 0\n\nX = disp_abs_all_savgol[file_id][seg,:,0].reshape(-1,1)\nY = disp_abs_all_savgol[file_id][seg,:,1].reshape(-1,1)\nlr.fit(X, Y)\n\nplt.scatter(X, Y, color='green')\nplt.plot(X, lr.predict(X), color='magenta')\nplt.show()\n\nprint(\"coefficient:\", lr.coef_[0])",
"_____no_output_____"
],
[
"print(X)\nprint(Y)\nprint(Y.reshape(-1,1))",
"[[0. ]\n [0.03333]\n [0.06666]\n [0.09999]\n [0.13332]\n [0.16665]\n [0.19998]\n [0.23331]\n [0.26664]\n [0.29997]\n [0.3333 ]\n [0.36663]\n [0.39996]\n [0.43329]\n [0.46662]\n [0.49995]\n [0.53328]\n [0.56661]\n [0.59994]\n [0.63327]\n [0.6666 ]\n [0.69993]\n [0.73326]\n [0.76659]\n [0.79992]\n [0.83325]\n [0.86658]\n [0.89991]\n [0.93324]\n [0.96657]\n [0.9999 ]\n [1.03323]\n [1.06656]\n [1.09989]\n [1.13322]\n [1.16655]\n [1.19988]\n [1.23321]\n [1.26654]\n [1.29987]\n [1.3332 ]\n [1.36653]\n [1.39986]\n [1.43319]\n [1.46652]\n [1.49985]\n [1.53318]\n [1.56651]\n [1.59984]\n [1.63317]\n [1.6665 ]\n [1.69983]\n [1.73316]\n [1.76649]\n [1.79982]\n [1.83315]\n [1.86648]\n [1.89981]\n [1.93314]\n [1.96647]\n [1.9998 ]\n [2.03313]\n [2.06646]\n [2.09979]\n [2.13312]\n [2.16645]\n [2.19978]]\n[[3.33050139]\n [3.36068808]\n [3.38590638]\n [3.4061563 ]\n [3.42143783]\n [3.43175098]\n [3.43594485]\n [3.43201092]\n [3.42591012]\n [3.42171509]\n [3.41417138]\n [3.41039934]\n [3.4061706 ]\n [3.40399084]\n [3.40133379]\n [3.39849517]\n [3.40274719]\n [3.40841089]\n [3.41791242]\n [3.42924995]\n [3.44742043]\n [3.47991835]\n [3.51684073]\n [3.5668647 ]\n [3.62137365]\n [3.67656376]\n [3.73615125]\n [3.79136827]\n [3.8399514 ]\n [3.8779827 ]\n [3.91606446]\n [3.9553968 ]\n [3.99753938]\n [4.04792024]\n [4.09668975]\n [4.13861294]\n [4.17319518]\n [4.19553592]\n [4.2025096 ]\n [4.1983975 ]\n [4.18948911]\n [4.17419071]\n [4.16273211]\n [4.15909859]\n [4.15747547]\n [4.15609204]\n [4.15486265]\n [4.15592174]\n [4.15649301]\n [4.15908958]\n [4.16016094]\n [4.16178812]\n [4.17068704]\n [4.18335698]\n [4.20502844]\n [4.22846566]\n [4.26229461]\n [4.31121607]\n [4.36810477]\n [4.42913515]\n [4.49543864]\n [4.56185902]\n [4.61654865]\n [4.66815448]\n [4.71667652]\n [4.76211476]\n [4.80446921]]\n[[3.33050139]\n [3.36068808]\n [3.38590638]\n [3.4061563 ]\n [3.42143783]\n [3.43175098]\n [3.43594485]\n [3.43201092]\n [3.42591012]\n [3.42171509]\n [3.41417138]\n [3.41039934]\n [3.4061706 ]\n [3.40399084]\n [3.40133379]\n [3.39849517]\n [3.40274719]\n [3.40841089]\n [3.41791242]\n [3.42924995]\n [3.44742043]\n [3.47991835]\n [3.51684073]\n [3.5668647 ]\n [3.62137365]\n [3.67656376]\n [3.73615125]\n [3.79136827]\n [3.8399514 ]\n [3.8779827 ]\n [3.91606446]\n [3.9553968 ]\n [3.99753938]\n [4.04792024]\n [4.09668975]\n [4.13861294]\n [4.17319518]\n [4.19553592]\n [4.2025096 ]\n [4.1983975 ]\n [4.18948911]\n [4.17419071]\n [4.16273211]\n [4.15909859]\n [4.15747547]\n [4.15609204]\n [4.15486265]\n [4.15592174]\n [4.15649301]\n [4.15908958]\n [4.16016094]\n [4.16178812]\n [4.17068704]\n [4.18335698]\n [4.20502844]\n [4.22846566]\n [4.26229461]\n [4.31121607]\n [4.36810477]\n [4.42913515]\n [4.49543864]\n [4.56185902]\n [4.61654865]\n [4.66815448]\n [4.71667652]\n [4.76211476]\n [4.80446921]]\n"
],
[
"# Calculate all speed\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\n\nfig_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/img/\"\nsrc_name = [\"Results1-54-109-20.csv\", \"Results2-125-215-20.csv\", \"Results3-1-74-20.csv\", \"Results4-248-370-20.csv\",\n \"Results5-1-100-20.csv\", \"Results6-380-485-20.csv\", \"Results7-250-310-20.csv\", \"Results8-1-105-20.csv\",\n \"Results9-464-555-20.csv\", \"Results10-665-733-20.csv\", \"Results11-249-315-20.csv\"]\n\nspeed_all = []\n\nfor file_id in range(len(disp_abs_all_savgol)):\n dst = []\n for seg_id in range(len(disp_abs_all_savgol[file_id])):\n lr = LinearRegression()\n X = disp_abs_all_savgol[file_id][seg_id,:,0].reshape(-1,1)\n Y = disp_abs_all_savgol[file_id][seg_id,:,1].reshape(-1,1)\n lr.fit(X, Y)\n\n plt.plot(X, Y, color='green')\n plt.plot(X, lr.predict(X), color='magenta')\n plt.savefig(fig_path + \"201120_speed_{0}_seg{1}.png\".format(src_name[file_id], seg_id))\n plt.close()\n \n dst.append(lr.coef_[0][0])\n speed_all.append(dst)\nspeed_all = np.array(speed_all)",
"_____no_output_____"
],
[
"print(\"speed_all.shape:\", speed_all.shape)\nprint(speed_all)",
"speed_all.shape: (11, 10)\n[[0.72358412 0.74048056 0.77457536 0.77574102 0.7395547 0.68519954\n 0.64985784 0.6391332 0.65684233 0.6763895 ]\n [0.68985929 0.69224301 0.70962817 0.71691718 0.7182924 0.71070182\n 0.69246827 0.68239684 0.68542712 0.69996554]\n [0.59551097 0.59389868 0.61889584 0.64082278 0.64979074 0.6426337\n 0.61260394 0.60139046 0.58861966 0.57042359]\n [0.66444439 0.66587924 0.64633112 0.6187796 0.59198994 0.5786897\n 0.56700881 0.55952508 0.5531017 0.5554273 ]\n [0.74367424 0.73717394 0.73506102 0.73456618 0.73383897 0.74148238\n 0.75169298 0.75017398 0.75673096 0.75666022]\n [0.57795839 0.58519977 0.59662172 0.59387284 0.57577984 0.56352802\n 0.56120865 0.56967158 0.58574055 0.59681394]\n [0.91560158 0.9139225 0.91486787 0.93173559 0.95137229 0.96872657\n 0.97126536 0.96262731 0.97516405 0.95809279]\n [0.52140529 0.52673424 0.55156252 0.57315093 0.57860443 0.57425692\n 0.56694025 0.56433861 0.55943501 0.55951531]\n [0.62867601 0.62354836 0.61447548 0.59273131 0.56998204 0.56152234\n 0.573462 0.57716454 0.58142747 0.58573374]\n [0.78077487 0.77886751 0.77402443 0.76830015 0.75377773 0.7492469\n 0.75716867 0.74088717 0.74505999 0.73987756]\n [0.66547553 0.67020163 0.70551421 0.71190814 0.71468099 0.68107419\n 0.67430019 0.65629581 0.64335396 0.63670902]]\n"
],
[
"import pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\n#with open(src_path + \"pickle/speed_all_201120.pickle\", \"wb\") as f:\n# pickle.dump(speed_all, f)",
"_____no_output_____"
],
[
"import pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\n\nwith open(src_path + \"pickle/speed_all_201120.pickle\", \"rb\") as f:\n speed_all = pickle.load(f)",
"_____no_output_____"
],
[
"speed_larvae = speed_all.mean(axis=1)\nprint(\"speed_larvae.shape:\", speed_larvae.shape)\nprint(speed_larvae)",
"speed_larvae.shape: (11,)\n[0.70613582 0.69978996 0.61145903 0.60011769 0.74410549 0.58063953\n 0.94633759 0.55759435 0.59087233 0.7587985 0.67595137]\n"
],
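[
"# Editor's note: added sketch, not in the original notebook. Summarise the\n# per-larva speeds as mean ± s.e.m., matching the style used for the body\n# length and stride length averages earlier.\nfrom scipy import stats\n\nprint(\"Speed average (mm/sec):{0:4.2f}±{1:4.2f}\".format(np.mean(speed_larvae), stats.sem(speed_larvae)))",
"_____no_output_____"
],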
[
"# Scatter plot of speed vs stride duration/length\n\n# data of speed: speed_all\n# data of stride duration: boundary_stride_duration_all\n# data of stride length: stride_length_all\n\nimport numpy as np\nimport pickle\n\nsrc_path = \"C:/Users/h1006/Documents/Research/Sun/Data/1_Kinematics/\"\nsec_per_frame = 0.03333\n\nwith open(src_path + \"pickle/speed_all_201120.pickle\", \"rb\") as f1:\n speed_all = pickle.load(f1)\nwith open(src_path + \"pickle/boundary_stride_duration_all_201119.pickle\", \"rb\") as f2:\n stride_duration_all = pickle.load(f2)\n stride_duration_all = np.array(stride_duration_all) * sec_per_frame\nwith open(src_path + \"pickle/stride_length_all_201104.pickle\", \"rb\") as f3:\n stride_length_all = pickle.load(f3)\n stride_length_all = np.array(stride_length_all)\n\nprint(\"speed_all:\", speed_all.shape)\nprint(\"stride_duration_all:\", stride_duration_all.shape)\nprint(\"stride_length_all:\", stride_length_all.shape)",
"speed_all: (11, 10)\nstride_duration_all: (11, 10)\nstride_length_all: (11, 10)\n"
],
[
"import matplotlib.pyplot as plt\n\ndst_path = \"C:/Users/h1006/Documents/Research/Sun/Images/\"\n\nspeed = speed_all.reshape(11*10)\nduration = stride_duration_all.reshape(11*10)\nlength = stride_length_all.reshape(11*10)\n\nplt.figure(figsize = (8,9))\nax = plt.gca()\nplt.plot(duration, speed, 'o', color = \"k\", markersize = 10)\nplt.xlim([0.7, 1.45])\nplt.ylim([0.45, 1.0])\nplt.xlabel(\"Stride duration (sec)\", fontsize = 28)\nplt.ylabel(\"Speed (mm/sec)\", fontsize = 28)\nplt.xticks([0.7,0.8,0.9,1.0,1.1,1.2,1.3,1.4],fontsize = 20)\nplt.yticks([0.5,0.6,0.7,0.8,0.9,1.0], fontsize = 20)\nax.spines[\"top\"].set_color(\"none\")\nax.spines[\"right\"].set_color(\"none\")\nplt.savefig(dst_path + \"Speed_vs_stride_duration_201120.png\", bbox_inches = \"tight\", facecolor=\"white\")\nplt.show()\nplt.close()\n\nplt.figure(figsize = (8,9))\nax = plt.gca()\nplt.plot(length, speed, 'o', color = \"k\", markersize = 10)\nplt.xlim([0.5, 0.9])\nplt.ylim([0.45, 1.0])\nplt.xlabel(\"Stride length (mm)\", fontsize = 28)\nplt.ylabel(\"Speed (mm/sec)\", fontsize = 28)\nplt.xticks([0.5,0.6,0.7,0.8,0.9], fontsize = 20)\nplt.yticks([0.5,0.6,0.7,0.8,0.9,1.0], fontsize = 20)\nax.spines[\"top\"].set_color(\"none\")\nax.spines[\"right\"].set_color(\"none\")\nplt.savefig(dst_path + \"Speed_vs_stride_length_201120.png\", bbox_inches = \"tight\", facecolor=\"white\")\nplt.show()\nplt.close()",
"_____no_output_____"
],
[
"import pandas as pd\n\nspeed_series = pd.Series(speed)\nduration_series = pd.Series(duration)\nlength_series = pd.Series(length)\n\nCorr_duration = speed_series.corr(duration_series)\nCorr_length = speed_series.corr(length_series)\n\nprint(\"Correlation speed vs duration:\", Corr_duration)\nprint(\"Correlation speed vs length:\", Corr_length)",
"Correlation speed vs duration: -0.6965493143889503\nCorrelation speed vs length: 0.46108346419600316\n"
],
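[
"# Editor's note: added sketch, not in the original notebook. Recompute the\n# correlations above with scipy.stats.pearsonr to also obtain two-sided p-values.\nfrom scipy import stats\n\nr_dur, p_dur = stats.pearsonr(speed, duration)\nr_len, p_len = stats.pearsonr(speed, length)\nprint(\"speed vs duration: r = {0:.3f}, p = {1:.3g}\".format(r_dur, p_dur))\nprint(\"speed vs length:   r = {0:.3f}, p = {1:.3g}\".format(r_len, p_len))",
"_____no_output_____"
],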
[
"# Calculate maximum and minimum segment length\n# seg_len_all: file_id, seg_id, frame [time, length]; 11 x 9 x frames x 2\n# seg_len_range_all: file_id, seg_id, peak/valley, point number: 11 x 9 x 2 x point number\n\nimport pickle\n\nwith open(src_path + \"pickle/seg_len_range_all_201104.pickle\", \"rb\") as f1:\n seg_len_range_all = pickle.load(f1)\nwith open(src_path + \"pickle/seg_len_all_201102.pickle\", \"rb\") as f2:\n seg_len_all = pickle.load(f2)",
"_____no_output_____"
],
[
"file_id = 0\nseg_id = 4\n\ndat = seg_len_range_all[file_id][seg_id]\nseg_max = dat[0][1].max()\nseg_min = dat[1][1].min()\nprint(\"seg_len_range_all[file_id][seg_Id]:\", dat)\nprint(\"dat[0][1].max():\", dat[0][1].max())\nprint(\"dat[1][1].min():\", dat[1][1].min())",
"seg_len_range_all[file_id][seg_Id]: [[array([1.16655]), array([0.40902885])], [array([0.6666 , 1.69983]), array([0.19407685, 0.21234417])]]\ndat[0][1].max(): 0.4090288544219175\ndat[1][1].min(): 0.1940768519309351\n"
],
[
"import numpy as np\n\nmax_len_all = []\nmin_len_all = []\n\nfor file_id in range(len(seg_len_range_all)):\n dst_max = []\n dst_min = []\n for seg_id in range(len(seg_len_range_all[file_id])):\n dat = seg_len_range_all[file_id][seg_id]\n dst_max.append(dat[0][1].max())\n dst_min.append(dat[1][1].min())\n max_len_all.append(dst_max)\n min_len_all.append(dst_min)\nmax_len_all = np.array(max_len_all)\nmin_len_all = np.array(min_len_all)\n\nprint(max_len_all)\nprint(min_len_all) \n \n \n ",
"[[0.24612954 0.39983541 0.41606516 0.42485425 0.40902885 0.38830499\n 0.37890813 0.35684893 0.37453103]\n [0.21946098 0.38907729 0.42915035 0.40868285 0.41866977 0.40230883\n 0.39106389 0.395345 0.360308 ]\n [0.25090474 0.4071161 0.43442165 0.43034873 0.41831378 0.40992582\n 0.38582637 0.35976147 0.34836371]\n [0.29304348 0.4296776 0.49556904 0.48661495 0.47435819 0.44510565\n 0.43530369 0.41746348 0.38401554]\n [0.29448635 0.45215348 0.48001293 0.48102385 0.47306214 0.44286106\n 0.42986447 0.39829252 0.40449167]\n [0.2843436 0.47029213 0.48497507 0.49114751 0.47102291 0.47266627\n 0.44291683 0.44709602 0.39339547]\n [0.29365698 0.46725816 0.46814229 0.48121428 0.45565346 0.45770351\n 0.42365774 0.44132293 0.3825695 ]\n [0.30186217 0.45821373 0.47712606 0.47748258 0.46469974 0.46550317\n 0.44021167 0.42850584 0.38132717]\n [0.32888918 0.46955889 0.51063956 0.50049768 0.48707817 0.48492159\n 0.46566349 0.45023725 0.42563956]\n [0.2877723 0.43705709 0.45789529 0.45471536 0.46874922 0.45426015\n 0.41974555 0.40966122 0.38977 ]\n [0.29064395 0.43146581 0.45495716 0.47015453 0.46523297 0.45394166\n 0.44148631 0.44736578 0.39343418]]\n[[0.13288642 0.16411013 0.19664784 0.22107007 0.19407685 0.17137297\n 0.18505402 0.18517835 0.18285494]\n [0.1438258 0.17725776 0.22269165 0.23276359 0.22013309 0.19536925\n 0.17698225 0.1922264 0.16941954]\n [0.13534232 0.18087596 0.21822451 0.22850905 0.20413463 0.20827451\n 0.20050398 0.18007084 0.13919975]\n [0.20456354 0.21593598 0.2466842 0.26517595 0.23894603 0.20731342\n 0.20872429 0.21874911 0.2416392 ]\n [0.18841764 0.2230217 0.24059294 0.25078934 0.24291623 0.21320154\n 0.21281923 0.20820029 0.20982894]\n [0.17162663 0.18191734 0.22786179 0.21864871 0.22297943 0.21631068\n 0.21041957 0.20972481 0.15173238]\n [0.2302662 0.22229025 0.2434732 0.25382331 0.25484403 0.25233731\n 0.21585097 0.24082316 0.17702175]\n [0.21864761 0.21926876 0.2428133 0.24130876 0.21270388 0.22110874\n 0.2364266 0.24200302 0.21031827]\n [0.2557107 0.2423046 0.26127137 0.23243783 0.22672278 0.23472735\n 0.25631678 0.28144177 0.27010116]\n [0.18283456 0.24131849 0.26327548 0.26799215 0.24873798 0.2241374\n 0.20354519 0.19968334 0.1877139 ]\n [0.15633944 0.20882418 0.26007316 0.2543786 0.23168261 0.25702492\n 0.25886996 0.23902871 0.22553558]]\n"
],
[
"import matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n\nplt.figure(0, figsize=(6,10))\nplot_shift = 0.5\n\nfor seg in range(9):\n plt.plot(max_len_all[:,seg],[seg+plot_shift]*11, color=cm.jet((seg+1)/10), marker='^', linestyle='None', markersize=15)\n plt.plot(min_len_all[:,seg],[seg]*11, color=cm.jet((seg+1)/10), marker='v', linestyle='None', markersize=15)\n plt.plot([max_len_all[:,seg], min_len_all[:,seg]], [seg+plot_shift, seg], color=cm.jet((seg+1)/10), linewidth=1, linestyle=\"dotted\")\n \nplt.title(\"Segment length range\")\nplt.xlabel(\"Segment length (mm)\", fontsize=30)\n\nplt.xlim([0,0.6])\n#plt.ylim([0,6])\n#plt.xticks([0,1,2,3])\nplt.yticks([])\nplt.tick_params(labelsize=24)\nax = plt.gca()\nax.spines['right'].set_color('none')\nax.spines['top'].set_color('none')\n#plt.legend()\nplt.savefig(dst_path + \"Segment_length_range_201120.png\", facecolor=\"white\", bbox_inches = \"tight\")\nplt.show() ",
"_____no_output_____"
],
[
"import pickle\n\nwith open(src_path + \"pickle/max_len_all_201120.pickle\", \"wb\") as f1:\n #pickle.dump(max_len_all, f1)\nwith open(src_path + \"pickle/min_len_all_201120.pickle\", \"wb\") as f2:\n #pickle.dump(min_len_all, f2)",
"_____no_output_____"
],
[
"# Calculate contraction duration\n\nimport pickle\n\nwith open(src_path + \"pickle/seg_len_range_all_201104.pickle\", \"rb\") as f1:\n seg_len_range_all = pickle.load(f1)\nwith open(src_path + \"pickle/seg_len_all_201102.pickle\", \"rb\") as f2:\n seg_len_all = pickle.load(f2)\nwith open(src_path + \"pickle/max_len_all_201120.pickle\", \"rb\") as f3:\n max_len_all = pickle.load(f3)\nwith open(src_path + \"pickle/min_len_all_201120.pickle\", \"rb\") as f4:\n min_len_all = pickle.load(f4)",
"_____no_output_____"
],
[
"# Check max and min in segment length data\n# seg0 (A8) - seg8 (T3)\n# select valleys\n# Result1: 1,1,0,0,0,0,0,0,0\n# Result2: 1,1,1,1,1,1,1,1,1\n# Result3: 1,1,1,1,1,1,0,0,0\n# Result4: 3,2,2,2,2,2,2,2,3\n# Result5: 2,2,2,2,2,2,2,2,2\n# Result6: 0,1,1,1,1,1,1,1,1\n# Result7: 1,1,1,1,1,1,1,1,1\n# Result8: 1,1,1,1,1,1,1,1,1\n# Result9: 1,1,1,1,1,1,1,1,1\n# Result10: 1,1,1,1,1,1,1,1,1\n# Result11: 1,1,1,1,1,0,0,0,0\n\nvalleys = np.array([[1,1,0,0,0,0,0,0,1],\n [1,1,1,1,1,1,1,1,1],\n [1,1,1,1,1,1,0,0,1],\n [3,2,2,2,2,2,2,2,3],\n [2,2,2,2,2,2,2,2,2],\n [0,1,1,1,1,1,1,1,1],\n [1,1,1,1,1,1,1,1,1],\n [1,1,1,1,1,1,1,1,1],\n [1,1,1,1,1,1,1,1,1],\n [1,1,1,1,1,1,1,1,1],\n [1,1,1,1,1,0,0,0,0]])",
"_____no_output_____"
],
[
"# Calculate contraction duration\n\n# seg_len_all: file_id, seg_id, frame [time, length]; 11 x 9 x frames x 2\n# seg_len_range_all: file_id, seg_id, peak/valley, point number: 11 x 9 x 2 x point number\n\n\nimport matplotlib.pyplot as plt\nfrom scipy import signal\n\nfile_id = 0\nseg_id = 2\n\nt = seg_len_all[file_id][seg_id][:,0]\nlength = signal.savgol_filter(seg_len_all[file_id][seg_id][:,1], 11, 2)\npeaks = seg_len_range_all[file_id][seg_id]\n\nplt.plot(t, length)\nplt.plot(peaks[0][0], peaks[0][1], 'go')\nplt.plot(peaks[1][0], peaks[1][1], 'mo')\n\nplt.show()",
"_____no_output_____"
],
[
"from scipy import signal\n\nfile_id = 0\nseg_id = 2\n\ndat_t = seg_len_all[file_id][seg_id][:,0]\ndat_l = signal.savgol_filter(seg_len_all[file_id][seg_id][:,1],11,2)\nvalley_point = seg_len_range_all[file_id][seg_id][1][0][valleys[file_id][seg_id]]\nidx = np.where(dat_t == valley_point)[0]\nthrd = (max_len_all[file_id][seg_id] - min_len_all[file_id][seg_id])*0.5 + min_len_all[file_id][seg_id]\n\n# search for left idx\nleft_ = 0\nwhile(dat_l[idx-left_]<thrd):\n left_ += 1\nidx_left = idx - left_\n\n# search for right idx\nright_ = 0\nwhile(dat_l[idx+right_]<thrd):\n right_ += 1\nidx_right = idx + right_\n\ntime_left = dat_t[idx_left]\ntime_right = dat_t[idx_right]\n\ndst0 = [[time_left, time_right], [idx_left, idx_right]]\n\nprint(dst0)\nplt.plot(dat_t, dat_l)\nplt.plot(dat_t[idx_left], dat_l[idx_left], \"go\")\nplt.plot(dat_t[idx_right], dat_l[idx_right], \"go\")\nplt.show()\n\nprint(\"thrd:\", thrd)\nprint(\"left side:\", dat_l[idx_left-1], dat_l[idx_left], dat_l[idx_left+1])\nprint(\"right side:\", dat_l[idx_right-1], dat_l[idx_right], dat_l[idx_right+1])",
"[[array([0.19998]), array([0.6666])], [array([6], dtype=int64), array([20], dtype=int64)]]\n"
],
[
"# Calculate contraction duration\n\nfrom scipy import signal\n\nFWHM_segment_length_all = []\n\nfor file_id in range(11):\n dst = []\n for seg_id in range(9):\n\n dat_t = seg_len_all[file_id][seg_id][:,0]\n dat_l = signal.savgol_filter(seg_len_all[file_id][seg_id][:,1],11,2)\n valley_point = seg_len_range_all[file_id][seg_id][1][0][valleys[file_id][seg_id]]\n idx = np.where(dat_t == valley_point)[0]\n thrd = (max_len_all[file_id][seg_id] - min_len_all[file_id][seg_id])*0.5 + min_len_all[file_id][seg_id]\n\n # search for left idx\n left_ = 0\n while(dat_l[idx-left_]<thrd):\n left_ += 1\n idx_left = idx - left_\n\n # search for right idx\n right_ = 0\n while(dat_l[idx+right_]<thrd):\n right_ += 1\n idx_right = idx + right_\n\n time_left = dat_t[idx_left]\n time_right = dat_t[idx_right]\n\n dst0 = [[time_left[0], time_right[0]], [int(idx_left[0]), int(idx_right[0])]]\n\n dst.append(dst0)\n FWHM_segment_length_all.append(dst)\nFWHM_segment_length_all = np.array(FWHM_segment_length_all)",
"_____no_output_____"
],
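[
"# Optional (added sketch): persist the crossing-window results alongside the\n# other pickles; the dated filename below is illustrative, following the\n# notebook's naming convention.\n\nwith open(src_path + \"pickle/FWHM_segment_length_all_201120.pickle\", \"wb\") as f:\n    pickle.dump(FWHM_segment_length_all, f)",
"_____no_output_____"
],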
[
"FWHM_segment_length_all.shape",
"_____no_output_____"
],
[
"contraction_duration_all = []\nfor file_id in range(11):\n dst = []\n for seg_id in range(9):\n dat = FWHM_segment_length_all[file_id][seg_id]\n dst.append(dat[0,1] - dat[0,0])\n contraction_duration_all.append(dst)\ncontraction_duration_all = np.array(contraction_duration_all)\nprint(\"contraction_duration_all\", contraction_duration_all)",
"contraction_duration_all [[0.36663 0.36663 0.46662 0.46662 0.43329 0.36663 0.43329 0.43329 0.49995]\n [0.36663 0.39996 0.46662 0.39996 0.43329 0.39996 0.36663 0.46662 0.46662]\n [0.49995 0.46662 0.53328 0.46662 0.39996 0.43329 0.39996 0.39996 0.39996]\n [0.26664 0.43329 0.39996 0.43329 0.39996 0.36663 0.39996 0.43329 0.3333 ]\n [0.3333 0.43329 0.43329 0.43329 0.39996 0.39996 0.36663 0.39996 0.39996]\n [0.3333 0.56661 0.53328 0.49995 0.49995 0.46662 0.36663 0.43329 0.49995]\n [0.43329 0.39996 0.39996 0.43329 0.43329 0.39996 0.36663 0.49995 0.53328]\n [0.59994 0.46662 0.46662 0.49995 0.36663 0.3333 0.36663 0.43329 0.43329]\n [0.56661 0.46662 0.53328 0.53328 0.43329 0.43329 0.39996 0.36663 0.46662]\n [0.43329 0.43329 0.49995 0.43329 0.46662 0.39996 0.36663 0.43329 0.39996]\n [0.53328 0.49995 0.49995 0.43329 0.46662 0.39996 0.36663 0.46662 0.46662]]\n"
],
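[
"# Added sketch: per-segment mean and standard deviation of contraction\n# duration across the 11 recordings, derived from contraction_duration_all.\n\nduration_mean = contraction_duration_all.mean(axis=0)\nduration_std = contraction_duration_all.std(axis=0)\nfor seg_id in range(9):\n    print(f\"seg{seg_id}: {duration_mean[seg_id]:.3f} +/- {duration_std[seg_id]:.3f} sec\")",
"_____no_output_____"
],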
[
"import matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n\nplt.figure(0, figsize=(6,10))\nplot_shift = 0.5\n\nfor seg in range(1,9):\n plt.plot(contraction_duration_all[:,seg], np.array([seg-1]*11) + np.random.randn(11)*0.07, color=cm.jet((seg+1)/10), \n marker='o', linestyle='None', markersize=10)\n plt.plot([0,0.7], [seg-1, seg-1], color=cm.jet((seg+1)/10), linestyle='dotted')\n \nplt.title(\"Contraction duration\")\nplt.xlabel(\"Contraction duration (sec)\", fontsize=30)\n\nplt.xlim([0,0.7])\n#plt.ylim([0,6])\nplt.xticks([0,0.2, 0.4, 0.6])\nplt.yticks([])\nplt.tick_params(labelsize=24)\nax = plt.gca()\nax.spines['right'].set_color('none')\nax.spines['top'].set_color('none')\n#plt.legend()\nplt.savefig(dst_path + \"Contraction_duration_201120.png\", facecolor=\"white\", bbox_inches = \"tight\")\nplt.show() ",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d051dc63ac205d9aa821815c1c2f5e9bd648ee4c | 8,436 | ipynb | Jupyter Notebook | gs_quant/documentation/10_one_delta/reports/Thematic Report.ipynb | daniel-schreier/gs-quant | abc5670a35874f2ce701418c9e1da7987092b4f7 | [
"Apache-2.0"
] | null | null | null | gs_quant/documentation/10_one_delta/reports/Thematic Report.ipynb | daniel-schreier/gs-quant | abc5670a35874f2ce701418c9e1da7987092b4f7 | [
"Apache-2.0"
] | null | null | null | gs_quant/documentation/10_one_delta/reports/Thematic Report.ipynb | daniel-schreier/gs-quant | abc5670a35874f2ce701418c9e1da7987092b4f7 | [
"Apache-2.0"
] | null | null | null | 30.021352 | 249 | 0.600996 | [
[
[
"# Thematic Reports\n\nThematic reports run historical analyses on the exposure of a portfolio to various Goldman Sachs Flagship Thematic baskets over a specified date range.\n\n### Prerequisite\n\nTo execute all the code in this tutorial, you will need the following application scopes:\n- **read_product_data**\n- **read_financial_data**\n- **modify_financial_data** (must be requested)\n- **run_analytics** (must be requested)\n\nIf you are not yet permissioned for these scopes, please request them on your [My Applications Page](https://developer.gs.com/go/apps/view).\nIf you have any other questions please reach out to the [Marquee sales team](mailto:[email protected]).\n\n## Step 1: Authenticate and Initialize Your Session\n\nFirst you will import the necessary modules and add your client id and client secret.",
"_____no_output_____"
]
],
[
[
"import datetime as dt\nfrom time import sleep\n\nfrom gs_quant.markets.baskets import Basket\nfrom gs_quant.markets.report import ThematicReport\nfrom gs_quant.session import GsSession, Environment\n\nclient = None\nsecret = None\nscopes = None\n\n## External users must fill in their client ID and secret below and comment out the line below\n\n#client = 'ENTER CLIENT ID'\n#secret = 'ENTER CLIENT SECRET'\n#scopes = ('read_product_data read_financial_data modify_financial_data run_analytics',)\n\nGsSession.use(\n Environment.PROD,\n client_id=client,\n client_secret=secret,\n scopes=scopes\n)\n\nprint('GS Session initialized.')",
"_____no_output_____"
]
],
[
[
"## Step 2: Create a New Thematic Report\n\n#### Already have a thematic report?\n\n<i>If you want to skip creating a new report and continue this tutorial with an existing thematic report, run the following and skip to Step 3:</i>",
"_____no_output_____"
]
],
[
[
"thematic_report_id = 'ENTER THEMATIC REPORT ID'\n\nthematic_report = ThematicReport.get(thematic_report_id)",
"_____no_output_____"
]
],
[
[
"The only parameter necessary in creating a new thematic report is the unique Marquee identifier of the portfolio on which you would like to run thematic analytics.",
"_____no_output_____"
]
],
[
[
"portfolio_id = 'ENTER PORTFOLIO ID'\n\nthematic_report = ThematicReport(position_source_id=portfolio_id)\nthematic_report.save()\n\nprint(f'A new thematic report for portfolio \"{portfolio_id}\" has been made with ID \"{thematic_report.id}\".')",
"_____no_output_____"
]
],
[
[
"## Step 3: Schedule the Report\n\nWhen scheduling reports, you have two options:\n- Backcast the report: Take the earliest date with positions in the portfolio / basket and run the report on the positions held then with a start date before the earliest position date and an end date\n of the earliest position date\n- Do not backcast the report: Set the start date as a date that has positions in the portfolio or basket and an end date after that (best practice is to set it to T-1). In this case the\n report will run on positions held as of each day in the date range\n\nIn this case, let's try scheduling the report without backcasting:",
"_____no_output_____"
]
],
[
[
"start_date = dt.date(2021, 1, 4)\nend_date = dt.date(2021, 8, 4)\n\nthematic_report.schedule(\n start_date=start_date,\n end_date=end_date,\n backcast=False\n)\n\nprint(f'Report \"{thematic_report.id}\" has been scheduled.')",
"_____no_output_____"
]
],
[
[
"## Alternative Step 3: Run the Report\n\nDepending on the size of your portfolio and the length of the schedule range, it usually takes anywhere from a couple seconds to half a minute for your report to finish executing.\nOnly after that can you successfully pull the results from that report. If you would rather run the report and pull the results immediately after they are ready, you can leverage the `run`\nfunction.\n\nYou can run a report synchronously or asynchronously.\n- Synchronous: the Python script will stall at the `run` function line and wait for the report to finish. The `run` function will then return a dataframe with the report results\n- Asynchronously: the Python script will not stall at the `run` function line. The `run` function will return a `ReportJobFuture` object that will contain the report results when they are ready.\n\nIn this example, let's run the report asynchronously and wait for the results:",
"_____no_output_____"
]
],
[
[
"start_date = dt.date(2021, 1, 4)\nend_date = dt.date(2021, 8, 4)\n\nreport_result_future = thematic_report.run(\n start_date=start_date,\n end_date=end_date,\n backcast=False,\n is_async=True\n)\n\nwhile not report_result_future.done():\n print('Waiting for report results...')\n sleep(5)\n\nprint('\\nReport results done! Here they are...')\nprint(report_result_future.result())",
"_____no_output_____"
]
],
[
[
"### Step 3: Pull Report Results\n\nNow that we have our factor risk report, we can leverage the unique functionalities of the `ThematicReport` class to pull exposure and PnL data. Let's get the historical changes in thematic exposure and beta to the GS Asia Stay at Home basket:",
"_____no_output_____"
]
],
[
[
"basket = Basket.get('GSXASTAY')\nthematic_exposures = thematic_report.get_thematic_data(\n start_date=start_date,\n end_date=end_date,\n basket_ids=[basket.get_marquee_id()]\n)\n\nprint(f'Thematic Exposures: \\n{thematic_exposures.__str__()}')\nthematic_exposures.plot(title='Thematic Data Breakdown')",
"_____no_output_____"
]
],
[
[
"### You're all set; Congrats!\n\n*Other questions? Reach out to the [Portfolio Analytics team](mailto:[email protected])!*",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |