hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses, 1 value) | lang (stringclasses, 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (list) | max_stars_count (int64 1-191k, ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24, ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24, ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (list) | max_issues_count (int64 1-67k, ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24, ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24, ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (list) | max_forks_count (int64 1-105k, ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24, ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24, ⌀) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (list) | cell_types (list) | cell_type_groups (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ecc3e7e6a1785a25f643c0153faae9ff70c66833 | 1,101 | ipynb | Jupyter Notebook | Modulo_Publica_Terremotos.ipynb | YanSym/Amazonia_Azul_Reporter | 3d425ba73f225a9171dd7673ca2ab10d13166467 | [
"MIT"
] | 2 | 2021-11-24T12:03:23.000Z | 2021-12-05T15:48:22.000Z | Modulo_Publica_Terremotos.ipynb | YanSym/Amazonia_Azul_Reporter | 3d425ba73f225a9171dd7673ca2ab10d13166467 | [
"MIT"
] | null | null | null | Modulo_Publica_Terremotos.ipynb | YanSym/Amazonia_Azul_Reporter | 3d425ba73f225a9171dd7673ca2ab10d13166467 | [
"MIT"
] | 1 | 2022-03-25T14:17:57.000Z | 2022-03-25T14:17:57.000Z | 18.982759 | 66 | 0.546776 | [
[
[
"### Notebook para publicação de imagens de terremotos",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n\nfrom metodos_auxiliares_terremotos import TerremotosClass\n\n# instancia o objeto\nobjeto_publicacao = TerremotosClass()\n\n# publicação\nobjeto_publicacao.publica_conteudo()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
ecc3ea66087385d80dc14ed70f2cc4f3de800a4b | 4,817 | ipynb | Jupyter Notebook | docs/tutorials/1_tutorial_vqe.ipynb | Morcu/qiskit-braket-plugin | f486dc542204d54699673835a9c0bdc61ae34cca | [
"Apache-2.0"
] | 1 | 2022-03-22T21:07:00.000Z | 2022-03-22T21:07:00.000Z | docs/tutorials/1_tutorial_vqe.ipynb | Morcu/qiskit-braket-plugin | f486dc542204d54699673835a9c0bdc61ae34cca | [
"Apache-2.0"
] | 11 | 2022-02-22T14:39:26.000Z | 2022-03-23T20:18:09.000Z | docs/tutorials/1_tutorial_vqe.ipynb | Morcu/qiskit-braket-plugin | f486dc542204d54699673835a9c0bdc61ae34cca | [
"Apache-2.0"
] | 1 | 2022-03-13T12:41:28.000Z | 2022-03-13T12:41:28.000Z | 27.369318 | 150 | 0.523355 | [
[
[
"# Tutorial: Runing VQE on Braket backend",
"_____no_output_____"
]
],
[
[
"from qiskit.algorithms import VQE\nfrom qiskit.opflow import (\n I,\n X,\n Z,\n)\nfrom qiskit.algorithms.optimizers import SLSQP\nfrom qiskit.circuit.library import TwoLocal\nfrom qiskit.utils import QuantumInstance, algorithm_globals\n\nfrom qiskit_braket_provider import AWSBraketProvider, BraketLocalBackend\n\nseed = 50\nalgorithm_globals.random_seed = seed",
"_____no_output_____"
]
],
[
[
"Get backend to run VQE with",
"_____no_output_____"
]
],
[
[
"provider = AWSBraketProvider()\nlocal_simulator = BraketLocalBackend()\nlocal_simulator",
"_____no_output_____"
],
[
"state_vector_simulator_backend = provider.get_backend(\"SV1\")\nstate_vector_simulator_backend",
"_____no_output_____"
]
],
[
[
"Running VQE\n\n\n\nMore docs on VQE and algorithms https://qiskit.org/documentation/tutorials/algorithms/01_algorithms_introduction.html#A-complete-working-example",
"_____no_output_____"
]
],
[
[
"H2_op = (\n (-1.052373245772859 * I ^ I)\n + (0.39793742484318045 * I ^ Z)\n + (-0.39793742484318045 * Z ^ I)\n + (-0.01128010425623538 * Z ^ Z)\n + (0.18093119978423156 * X ^ X)\n)\n\nqi = QuantumInstance(\n state_vector_simulator_backend, seed_transpiler=seed, seed_simulator=seed\n)\nansatz = TwoLocal(rotation_blocks=\"ry\", entanglement_blocks=\"cz\")\nslsqp = SLSQP(maxiter=1)\n\nvqe = VQE(ansatz, optimizer=slsqp, quantum_instance=qi)\n\nresult = vqe.compute_minimum_eigenvalue(H2_op)\nprint(result)",
"{ 'aux_operator_eigenvalues': None,\n 'cost_function_evals': 9,\n 'eigenstate': { '00': 0.7824990015968072,\n '01': 0.489139870078079,\n '10': 0.3814548629916782,\n '11': 0.05412658773652741},\n 'eigenvalue': (-1.0606393547183097+0j),\n 'optimal_parameters': { ParameterVectorElement(θ[1]): 4.19301252102391,\n ParameterVectorElement(θ[0]): 3.611860069224077,\n ParameterVectorElement(θ[2]): 0.6019852007557844,\n ParameterVectorElement(θ[3]): 5.949536809130025,\n ParameterVectorElement(θ[4]): -3.3070470445355764,\n ParameterVectorElement(θ[6]): -5.466043598406607,\n ParameterVectorElement(θ[7]): 0.6984088030463615,\n ParameterVectorElement(θ[5]): 1.8462931831829383},\n 'optimal_point': array([ 3.61186007, 4.19301252, 0.6019852 , 5.94953681, -3.30704704,\n 1.84629318, -5.4660436 , 0.6984088 ]),\n 'optimal_value': -1.0606393547183097,\n 'optimizer_evals': None,\n 'optimizer_time': 54.37140512466431}\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc3f212991066f45b362a73375084850ab069b2 | 4,744 | ipynb | Jupyter Notebook | index.ipynb | kingablgh/PySprint | c3e76fbf1287d18d78699145f5301593aff47ba0 | [
"MIT"
] | null | null | null | index.ipynb | kingablgh/PySprint | c3e76fbf1287d18d78699145f5301593aff47ba0 | [
"MIT"
] | null | null | null | index.ipynb | kingablgh/PySprint | c3e76fbf1287d18d78699145f5301593aff47ba0 | [
"MIT"
] | null | null | null | 29.104294 | 603 | 0.551012 | [
[
[
"## PySprint - Spectrally resolved interferometry in Python\n\nPySprint package is an open source Python package dedicated to evaluate measurements relying on spectrally and spatially resolved interferometry. The package aims to provide a fluent and smooth interface which takes away the burden of managing complex data processing tasks, however it can hand over the full control to the user when necessary. PySprint is integrated well in Jupyter Notebooks with rich HTML representation of objects. In the future there will be graphical interfaces added for the Jupyter Notebook using the [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/) library. \n\nThe supported methods are:\n\n* [Stationary phase point method](doc/hu_spp.ipynb) with:\n * Eager execution\n * Powerful caching\n * 3 different ways to record and evaluate data, including an interactive *matplotlib* editor\n* [Cosine function fit method](doc/hu_intro.ipynb) with:\n * Successive optimizer\n * ... and also regular fitting\n* [Fourier-transform method](doc/hu_fft.ipynb) with:\n * Automatic evaluation*\n * Pulse shape and phase retrieval\n* [Windowed Fourier-transform method](doc/hu_wft.ipynb) with:\n * Smart caching of intermediate results\n * Parallel computation\n * Automated detection of the *GD* ridge\\**\n * Phase retrieval\n* [Minima–maxima method](doc/hu_minmax.ipynb) with:\n * Phase retrieval of an interferogram having arbitrary complex SPP and extremal point setup\n * Interactive *matplotlib* editor for extremal point recording\n * Built-in tools to aid extremal point detection\n\nTo see the full picture you might want to read about internals:\n* [Dataset](doc/hu_dataset.ipynb) - The class representing a dataset and its miscancellous properties and functions.\n* [Phase](doc/hu_phase.ipynb) - The class representing a phase obtained from various methods.\n\n\\* *Work in progress.*\n\n\\** *Manual control will be added in the future releases.*",
"_____no_output_____"
]
],
[
[
"import pysprint as ps",
"_____no_output_____"
],
[
"ps.__version__",
"_____no_output_____"
],
[
"ps.__author__",
"_____no_output_____"
],
[
"ps.print_info()",
"\nPYSPRINT ANALYSIS TOOL\n\n SYSTEM\n----------------------\npython : 3.7.6.final.0\npython-bits: 64\nOS : Windows\nOS-release : 10\nVersion : 10.0.18362\nmachine : AMD64\nprocessor : Intel64 Family 6 Model 158 Stepping 9, GenuineIntel\nbyteorder : little\n\n DEPENDENCY\n----------------------\npysprint : 0.12.5\nnumpy : 1.18.4\nscipy : 1.4.1\nmatplotlib : 3.3.1.post878.dev0+g543f1891b\npandas : 1.1.1\npytest : 5.4.2\nlmfit : 1.0.1\nnumba : 0.50.1\nIPython : 7.12.0\njinja2 : 2.11.2\ndask : 2.21.0\n\n ADDITIONAL\n----------------------\nConda-env : False\nIPython : True\nSpyder : False\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecc3f921fce4c90533a118e87d636353da81fe83 | 13,673 | ipynb | Jupyter Notebook | 02. Intro to Python 2/Python_EDA.ipynb | sciDelta/DataSci | 1ed7c828630a80ba566c3806efd2cb5fd43d6d0f | [
"MIT"
] | null | null | null | 02. Intro to Python 2/Python_EDA.ipynb | sciDelta/DataSci | 1ed7c828630a80ba566c3806efd2cb5fd43d6d0f | [
"MIT"
] | null | null | null | 02. Intro to Python 2/Python_EDA.ipynb | sciDelta/DataSci | 1ed7c828630a80ba566c3806efd2cb5fd43d6d0f | [
"MIT"
] | null | null | null | 35.42228 | 918 | 0.54955 | [
[
[
"### Exploration of Hacker News Posts\n\nIn this project, we'll compare two different types of posts from Hacker News, a popular site where technology related stories (or 'posts') are voted and commented upon. The two types of posts we'll explore begin with either Ask HN or Show HN.\n\nUsers submit Ask HN posts to ask the Hacker News community a specific question, such as \"What is the best online course you've ever taken?\" Likewise, users submit Show HN posts to show the Hacker News community a project, product, or just generally something interesting.\n\nWe'll specifically compare these two types of posts to determine the following:\n\nDo Ask HN or Show HN receive more comments on average?\nDo posts created at a certain time receive more comments on average?\nIt should be noted that the data set we're working with was reduced from almost 300,000 rows to approximately 20,000 rows by removing all submissions that did not receive any comments, and then randomly sampling from the remaining submissions.",
"_____no_output_____"
]
],
[
[
"#Import data and create a working list\n\nfrom csv import reader\nopened_file = open('hacker_news.csv')\nread_file = reader(opened_file)\nhn = list(read_file)\n\nhn[:5]",
"_____no_output_____"
],
[
"#Split the header from the data\nheaders = hn[0]\nprint(headers)\nhn = hn[1:]\nprint(hn[:5])",
"['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at']\n[['12224879', 'Interactive Dynamic Video', 'http://www.interactivedynamicvideo.com/', '386', '52', 'ne0phyte', '8/4/2016 11:52'], ['10975351', 'How to Use Open Source and Shut the Fuck Up at the Same Time', 'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/', '39', '10', 'josep2', '1/26/2016 19:30'], ['11964716', \"Florida DJs May Face Felony for April Fools' Water Joke\", 'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/', '2', '1', 'vezycash', '6/23/2016 22:20'], ['11919867', 'Technology ventures: From Idea to Enterprise', 'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429', '3', '1', 'hswarna', '6/17/2016 0:01'], ['10301696', 'Note by Note: The Making of Steinway L1037 (2007)', 'http://www.nytimes.com/2007/11/07/movies/07stein.html?_r=0', '8', '2', 'walterbell', '9/30/2015 4:12']]\n"
],
[
"#Extracting the 'Ask HN' and 'Show'HN'data\nask_posts = []\nshow_posts = []\nother_posts = []\n\nfor row in hn:\n title = row[1]\n if title.lower().startswith('ask hn'):\n ask_posts.append(row)\n elif title.lower().startswith('show hn'):\n show_posts.append(row)\n else:\n other_posts.append(row)\n \nprint('Number of ask posts: ', len(ask_posts))\nprint('Number of show posts: ', len(show_posts))\nprint('Number of other posts: ', len(other_posts))",
"Number of ask posts: 1744\nNumber of show posts: 1162\nNumber of other posts: 17194\n"
],
[
"#Find the count and average of each set\n\n#Ask List\ntotal_ask_comments = 0\n\nfor post in ask_posts:\n total_ask_comments += int(post[4])\n\navg_ask_comments = total_ask_comments / len(ask_posts)\nprint('Average Ask Posts: ', round(avg_ask_comments,1))\n\n#Show list\ntotal_show_comments = 0\n\nfor post in show_posts:\n total_show_comments += int(post[4])\n\navg_show_comments = total_show_comments / len(show_posts)\nprint('Average Show Posts: ', round(avg_show_comments, 1))",
"Average Ask Posts: 14.0\nAverage Show Posts: 10.3\n"
]
],
[
[
"The analysis is indicating that ask posts recieve more comments on average.",
"_____no_output_____"
]
],
[
[
"#Analysing the 'Ask Posts' and the time data\nimport datetime as dt\n\nresults_list = []\n\nfor row in ask_posts:\n results_list.append([row[6], int(row[4])])\n \ncomments_by_hour = {}\ncounts_by_hour = {}\ndate_format = \"%m/%d/%Y %H:%M\"\n\nfor row in results_list:\n date = row[0]\n comment = row[1]\n time = dt.datetime.strptime(date, date_format).strftime(\"%H\")\n \n if time in counts_by_hour:\n counts_by_hour[time] += 1\n comments_by_hour[time] += comment\n else:\n counts_by_hour[time] = 1\n comments_by_hour[time] = comment\n\nprint('Counts: ', counts_by_hour)\nprint('Comments: ', comments_by_hour)\n",
"Counts: {'03': 54, '22': 71, '06': 44, '23': 68, '12': 73, '17': 100, '09': 45, '21': 109, '20': 80, '02': 58, '14': 107, '00': 55, '15': 116, '19': 110, '11': 58, '13': 85, '04': 47, '08': 48, '05': 46, '18': 109, '10': 59, '07': 34, '01': 60, '16': 108}\nComments: {'03': 421, '22': 479, '06': 397, '23': 543, '12': 687, '17': 1146, '09': 251, '21': 1745, '20': 1722, '02': 1381, '14': 1416, '00': 447, '15': 4477, '19': 1188, '11': 641, '13': 1253, '04': 337, '08': 492, '05': 464, '18': 1439, '10': 793, '07': 267, '01': 683, '16': 1814}\n"
],
[
"#Average number of posts per hour of the day\n\navg_hour = []\n\nfor hour in comments_by_hour:\n avg_hour.append([hour, comments_by_hour[hour] /\n counts_by_hour[hour]])\n \navg_hour",
"_____no_output_____"
],
[
"#Organise the data\nswap_avg = []\n\nfor row in avg_hour:\n swap_avg.append([row[1], row[0]])\nprint(swap_avg)\n\nsorted_swap = sorted(swap_avg, reverse = True)\nprint(sorted_swap)",
"[[7.796296296296297, '03'], [6.746478873239437, '22'], [9.022727272727273, '06'], [7.985294117647059, '23'], [9.41095890410959, '12'], [11.46, '17'], [5.5777777777777775, '09'], [16.009174311926607, '21'], [21.525, '20'], [23.810344827586206, '02'], [13.233644859813085, '14'], [8.127272727272727, '00'], [38.5948275862069, '15'], [10.8, '19'], [11.051724137931034, '11'], [14.741176470588234, '13'], [7.170212765957447, '04'], [10.25, '08'], [10.08695652173913, '05'], [13.20183486238532, '18'], [13.440677966101696, '10'], [7.852941176470588, '07'], [11.383333333333333, '01'], [16.796296296296298, '16']]\n[[38.5948275862069, '15'], [23.810344827586206, '02'], [21.525, '20'], [16.796296296296298, '16'], [16.009174311926607, '21'], [14.741176470588234, '13'], [13.440677966101696, '10'], [13.233644859813085, '14'], [13.20183486238532, '18'], [11.46, '17'], [11.383333333333333, '01'], [11.051724137931034, '11'], [10.8, '19'], [10.25, '08'], [10.08695652173913, '05'], [9.41095890410959, '12'], [9.022727272727273, '06'], [8.127272727272727, '00'], [7.985294117647059, '23'], [7.852941176470588, '07'], [7.796296296296297, '03'], [7.170212765957447, '04'], [6.746478873239437, '22'], [5.5777777777777775, '09']]\n"
],
[
"# Sort the values and print the the 5 hours with the highest average comments.\n\nprint(\"Top 5 Hours for 'Ask HN' Comments\")\nfor avg, hr in sorted_swap[:5]:\n print(\n \"{}: {:.2f} average comments per post\".format(\n dt.datetime.strptime(hr, \"%H\").strftime(\"%H:%M\"),avg\n )\n )",
"Top 5 Hours for 'Ask HN' Comments\n15:00: 38.59 average comments per post\n02:00: 23.81 average comments per post\n20:00: 21.52 average comments per post\n16:00: 16.80 average comments per post\n21:00: 16.01 average comments per post\n"
]
],
[
[
"The hour that receives the most comments per post on average is 15:00, with an average of 38.59 comments per post. There's about a 60% increase in the number of comments between the hours with the highest and second highest average number of comments.",
"_____no_output_____"
],
[
"### Conclusion\nIn this project we determined which type of post and time receive the most comments on average. \n\nBased on the analysis it was an 'ask post' and created between 15:00 and 16:00 (3:00 pm est - 4:00 pm est). It should be noted that the data set we analyzed excluded posts without any comments. \n\nTherefore it's more accurate to say that of the posts that received comments, ask posts received more comments on average and ask posts created between 15:00 and 16:00 (3:00 pm est - 4:00 pm est) received the most comments on average.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ecc4005783f441c0a0609c2c527009aa56a037e7 | 30,122 | ipynb | Jupyter Notebook | tv-script-generation/dlnd_tv_script_generation.ipynb | zjh1943/cn-deep-learning | 3ed11fc3d86b44a8d247f2d0b48d877e6c897bc6 | [
"MIT"
] | 474 | 2017-05-09T02:03:34.000Z | 2022-03-22T03:07:32.000Z | tv-script-generation/dlnd_tv_script_generation.ipynb | zjh1943/cn-deep-learning | 3ed11fc3d86b44a8d247f2d0b48d877e6c897bc6 | [
"MIT"
] | 116 | 2019-12-16T20:57:14.000Z | 2022-03-11T23:23:48.000Z | tv-script-generation/dlnd_tv_script_generation.ipynb | zjh1943/cn-deep-learning | 3ed11fc3d86b44a8d247f2d0b48d877e6c897bc6 | [
"MIT"
] | 600 | 2017-05-20T22:49:52.000Z | 2022-03-20T12:00:57.000Z | 31.31185 | 556 | 0.568754 | [
[
[
"# TV Script Generation\nIn this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern).\n## Get the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"_____no_output_____"
]
],
[
[
"## Explore the Data\nPlay around with `view_sentence_range` to view different parts of the data.",
"_____no_output_____"
]
],
[
[
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"_____no_output_____"
]
],
[
[
"## Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\n\n### Lookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call `vocab_to_int`\n- Dictionary to go from the id to word, we'll call `int_to_vocab`\n\nReturn these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"_____no_output_____"
]
],
[
[
"### Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\n\nImplement the function `token_lookup` to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\n\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"_____no_output_____"
]
],
[
[
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n return None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"_____no_output_____"
]
],
[
[
"## Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"_____no_output_____"
]
],
[
[
"# Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"_____no_output_____"
]
],
[
[
"## Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\n\n### Check the Version of TensorFlow and Access to GPU",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"_____no_output_____"
]
],
[
[
"### Input\nImplement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.\n- Targets placeholder\n- Learning Rate placeholder\n\nReturn the placeholders in the following tuple `(Input, Targets, LearningRate)`",
"_____no_output_____"
]
],
[
[
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n return None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"_____no_output_____"
]
],
[
[
"### Build RNN Cell and Initialize\nStack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).\n- The Rnn size should be set using `rnn_size`\n- Initalize Cell State using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function\n - Apply the name \"initial_state\" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)\n\nReturn the cell and initial state in the following tuple `(Cell, InitialState)`",
"_____no_output_____"
]
],
[
[
"def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"_____no_output_____"
]
],
[
[
"### Word Embedding\nApply embedding to `input_data` using TensorFlow. Return the embedded sequence.",
"_____no_output_____"
]
],
[
[
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"_____no_output_____"
]
],
[
[
"### Build RNN\nYou created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN.\n- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)\n - Apply the name \"final_state\" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)\n\nReturn the outputs and final_state state in the following tuple `(Outputs, FinalState)` ",
"_____no_output_____"
]
],
[
[
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"_____no_output_____"
]
],
[
[
"### Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function.\n- Build RNN using `cell` and your `build_rnn(cell, inputs)` function.\n- Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs.\n\nReturn the logits and final state in the following tuple (Logits, FinalState) ",
"_____no_output_____"
]
],
[
[
"def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"_____no_output_____"
]
],
[
[
"### Batches\nImplement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements:\n- The first element is a single batch of **input** with the shape `[batch size, sequence length]`\n- The second element is a single batch of **targets** with the shape `[batch size, sequence length]`\n\nIf you can't fill the last batch with enough data, drop the last batch.\n\nFor exmple, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)` would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n \n # Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```",
"_____no_output_____"
]
],
[
[
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"_____no_output_____"
]
],
[
[
"## Neural Network Training\n### Hyperparameters\nTune the following parameters:\n\n- Set `num_epochs` to the number of epochs.\n- Set `batch_size` to the batch size.\n- Set `rnn_size` to the size of the RNNs.\n- Set `embed_dim` to the size of the embedding.\n- Set `seq_length` to the length of sequence.\n- Set `learning_rate` to the learning rate.\n- Set `show_every_n_batches` to the number of batches the neural network should print progress.",
"_____no_output_____"
]
],
[
[
"# Number of Epochs\nnum_epochs = None\n# Batch Size\nbatch_size = None\n# RNN Size\nrnn_size = None\n# Embedding Dimension Size\nembed_dim = None\n# Sequence Length\nseq_length = None\n# Learning Rate\nlearning_rate = None\n# Show stats for every n number of batches\nshow_every_n_batches = None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"_____no_output_____"
]
],
[
[
"### Build the Graph\nBuild the graph using the neural network you implemented.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"_____no_output_____"
]
],
[
[
"## Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forms](https://discussions.udacity.com/) to see if anyone is having the same problem.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"_____no_output_____"
]
],
[
[
"## Save Parameters\nSave `seq_length` and `save_dir` for generating a new TV script.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"_____no_output_____"
]
],
[
[
"# Checkpoint",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"_____no_output_____"
]
],
[
[
"## Implement Generate Functions\n### Get Tensors\nGet tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\n\nReturn the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)` ",
"_____no_output_____"
]
],
[
[
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n return None, None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"_____no_output_____"
]
],
[
[
"### Choose Word\nImplement the `pick_word()` function to select the next word using `probabilities`.",
"_____no_output_____"
]
],
[
[
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"_____no_output_____"
]
],
[
[
"## Generate TV Script\nThis will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.",
"_____no_output_____"
]
],
[
[
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"_____no_output_____"
]
],
[
[
"# The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of [another dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data). We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\n# Submitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecc40e3df7677f7bfa105600c90e88740117784e | 11,498 | ipynb | Jupyter Notebook | longcovid.ipynb | geneasyura/cov19-hm | 1dba1b0198ab90c263fd75cd1d423131388bb836 | [
"Apache-2.0"
] | 1 | 2020-12-13T14:17:48.000Z | 2020-12-13T14:17:48.000Z | longcovid.ipynb | geneasyura/cov19-hm | 1dba1b0198ab90c263fd75cd1d423131388bb836 | [
"Apache-2.0"
] | null | null | null | longcovid.ipynb | geneasyura/cov19-hm | 1dba1b0198ab90c263fd75cd1d423131388bb836 | [
"Apache-2.0"
] | null | null | null | 33.917404 | 122 | 0.521569 | [
[
[
"#!/usr/bin/python3\n# coding: utf-8",
"_____no_output_____"
],
[
"from datetime import datetime as dt\nimport sys\nimport numpy as np\nimport os\nimport pandas as pd\nimport plotly\nimport plotly.express as px\nimport plotly.graph_objects as go\nimport plotly.offline as offline\nimport sys\nif \"ipy\" in sys.argv[0]:\n offline.init_notebook_mode()\nfrom cov19utils import create_basic_plot_figure, \\\n show_and_clear, moving_average, \\\n blank2zero, csv2array, \\\n get_twitter, tweet_with_image, \\\n get_gpr_predict, FONT_NAME, DT_OFFSET, \\\n download_if_needed, json2nparr, code2int, age2int, \\\n get_populations, get_os_idx_of_arr, dump_val_in_arr, \\\n calc_last1w2w_dif, create_basic_scatter_figure, \\\n show_and_save_plotly",
"_____no_output_____"
],
[
"from reportlab.lib.units import cm, mm\nfrom reportlab.lib.pagesizes import A4, landscape\nfrom reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle\nfrom reportlab.lib import colors\nfrom reportlab.pdfgen import canvas\nfrom reportlab.pdfbase import pdfmetrics\nfrom reportlab.pdfbase.cidfonts import UnicodeCIDFont\nfrom reportlab.platypus import Paragraph, Frame, Image, Table, TableStyle, LongTable\nfrom reportlab.platypus import PageBreak, SimpleDocTemplate, BaseDocTemplate",
"_____no_output_____"
],
[
"today_str = dt.now().isoformat()[:16].replace('T', ' ')\nupdate_map = False",
"_____no_output_____"
],
[
"with open(\"csv/longcovid.tmp\", \"rt\") as f:\n prev = int(f.read())\nimgname = \"longcovid-map.jpg\"\ntw_body = '新型コロナ 後遺症外来マップ (' + today_str + ')'\ndf=pd.read_csv(\"csv/longcovid.csv\", encoding='shift-jis', header=0)\ndf.to_excel('csv/longcovid.xlsx', encoding='utf-8', index=False)",
"_____no_output_____"
],
[
"if len(df) > prev:\n update_map = True\n with open(\"csv/longcovid.tmp\", \"wt\") as f:\n f.write(\"{}\".format(len(df)))",
"_____no_output_____"
],
[
"#update_map = True # TODO: AAWA",
"_____no_output_____"
],
[
"fig = px.scatter_mapbox(df,\n title=tw_body,\n lat=\"lat\", lon=\"lon\", color=\"可\", size=\"可\",\n color_continuous_scale=plotly.colors.sequential.Bluered,\n hover_name=\"施設名称\", \n hover_data=[\"オンライン\", \"紹介状\", \"条件\"],\n labels={'lat':'緯度', 'lon':'経度'},\n size_max=12, zoom=5, height=550)\nfig.update_layout(mapbox_style=\"open-street-map\")\nfig.update_layout(margin={\"r\":0,\"t\":40,\"l\":0,\"b\":0})\nfig.update_layout(template='plotly_dark')\nif update_map:\n show_and_save_plotly(fig, imgname, js=False, show=True, image=False, html=True)\n appended = len(df) - prev\n tw_body += \"\\n{}件追加 \".format(appended)\n tw_body += \"\\nExcel: https://raw.githubusercontent.com/geneasyura/cov19-hm/master/csv/longcovid.xlsx\"\n tw_body += \"\\nPDF: https://raw.githubusercontent.com/geneasyura/cov19-hm/master/docs/images/longcovid.pdf\"\n tw_body += \"\\n地図: https://geneasyura.github.io/cov19-hm/longcovid.html \"\n tw = get_twitter()\n tweet_with_image(tw, \"docs/images/{}\".format(imgname), tw_body)\n print(tw_body)\nelse:\n print(\"nothin to tweet about long-covid hospital map.\")",
"_____no_output_____"
],
[
"def df_to_html(filename):\n with open(filename, \"w\", encoding='utf-8') as f:\n f.write('<style>\\n')\n f.write('.MapsTblColNo { width: 70px; }\\n')\n f.write('.MapsTblColSt { width: 90px; }\\n')\n f.write('.MapsTblColNm { width: 280px; }\\n')\n f.write('.MapsTblColEn { width: 70px; }\\n')\n f.write('.MapsTblColOL { width: 70px; }\\n')\n f.write('.MapsTblColRS { width: 70px; }\\n')\n f.write('.MapsTblColCd { width: 160px; }\\n')\n f.write('.MapsTblColHP { width: 50px; }\\n')\n f.write('</style>\\n')\n\n f.write('<table border=\"0\">\\n')\n f.write(' <thead style=\"display: block;\" bgcolor=\"#000000\">\\n')\n f.write(' <tr style=\"text-align: left;\">\\n')\n f.write(' <th class=\"MapsTblColNo\">八地方</th>\\n')\n f.write(' <th class=\"MapsTblColSt\">都道府県</th>\\n')\n f.write(' <th class=\"MapsTblColNm\">施設名</th>\\n')\n f.write(' <th class=\"MapsTblColEn\">診療可</th>\\n')\n f.write(' <th class=\"MapsTblColOL\">Web</th>\\n')\n f.write(' <th class=\"MapsTblColRS\">紹介状</th>\\n')\n f.write(' <th class=\"MapsTblColCd\">条件</th>\\n')\n f.write(' <th class=\"MapsTblColHP\">HP</th>\\n')\n f.write(' </tr>\\n')\n f.write(' </thead>\\n')\n f.write(' <tbody style=\"display: block; overflow-x: hidden; overflow-y: scroll; height: 300px;\">\\n')\n for i, r in df.iterrows():\n f.write('<tr bgcolor=\"{}\">'.format([\"#606060\", \"#000000\"][i % 2]))\n f.write('<td class=\"MapsTblColNo\">{}</td>'.format(r['八地方区分']))\n f.write('<td class=\"MapsTblColSt\">{}</td>'.format(r['都道府県']))\n f.write('<td class=\"MapsTblColNm\">{}</td>'.format(r['施設名称']))\n f.write('<td class=\"MapsTblColEn\">{}</td>'.format(r['可']))\n f.write('<td class=\"MapsTblColOL\">{}</td>'.format(r['オンライン']))\n f.write('<td class=\"MapsTblColRS\">{}</td>'.format(r['紹介状']))\n f.write('<td class=\"MapsTblColCd\">{}</td>'.format(r['条件']))\n f.write('<td class=\"MapsTblColHP\"><a href=\"{}\">HP</a></td>'.format(r['HP']))\n f.write('</tr>\\n')\n f.write('</tbody></table>\\n')",
"_____no_output_____"
],
[
"df_to_html('docs/_includes/longcovid-table.html')",
"_____no_output_____"
],
[
"if not update_map:\n if \"ipy\" in sys.argv[0]:\n pass#exit()\n else:\n sys.exit()",
"_____no_output_____"
],
[
"font_name = 'HeiseiKakuGo-W5'\npage_size = landscape(A4)\npdfFile = SimpleDocTemplate(\n 'docs/images/longcovid.pdf',\n #showBoundary=1,\n pagesize=page_size,\n title=\"新型コロナ 後遺症外来一覧 by 遺伝子組換え阿修羅\",\n leftMargin=11*mm, rightMargin=11*mm,\n topMargin=11*mm, bottomMargin=11*mm)\npdfmetrics.registerFont(UnicodeCIDFont(font_name))",
"_____no_output_____"
],
[
"style_dict = {\"name\":\"normal\", \"fontName\":font_name, \"fontSize\":9}\nbody_style = ParagraphStyle(**style_dict)\nstyle_dict = {\"name\":\"header\", \"fontName\":font_name, \"fontSize\":12, \"leading\":16}\nhead_style = ParagraphStyle(**style_dict)\nstyle_dict = {\"name\":\"normal\", \"fontName\":font_name, \"fontSize\":9, \"textColor\":colors.blue}\nlink_style = ParagraphStyle(**style_dict)",
"_____no_output_____"
],
[
"story = []\nlink = '<link href=\"{}\">遺伝子組換え阿修羅 @zwiYWJqourMuEh7</link>'.format(\"https://twitter.com/zwiYWJqourMuEh7\")\nstory.append(Paragraph(\"新型コロナ 後遺症外来一覧 by \" + link + \" \" + today_str, head_style))\nstory.append(Paragraph(\"※1:診療方法、費用、処方箋、診断書発行料などは医療機関により異なります。\", body_style))\nstory.append(Paragraph(\"※2:保険診療が適用されない場合、費用が高額になる場合があります。\", body_style))",
"_____no_output_____"
],
[
"pdfFile.author = \"遺伝子組換え阿修羅 @zwiYWJqourMuEh7\"\npdfFile.subject = \"新型コロナ 後遺症外来一覧\"\npdfFile.creator = \"python3 with reportlab\"\npdfFile.keywords = \"COVID19, LongCovid, SARS-Cov-2, Japan\"",
"_____no_output_____"
],
[
"headers = ['No.', '都道府県', '施設名称', '住所', 'オンライン', '紹介状', '条件', 'HP']\nrows = []\nrows.append(headers)\nfor i, r in df.iterrows():\n columns = [\n r['No.'], r['都道府県'], r['施設名称'], \n Paragraph('<link href=\"https://www.google.co.jp/maps/@{},{},18z?hl=ja\">{}</link>'.format(\n r['lat'], r['lon'], r['所在地']), link_style),\n r['オンライン'], r['紹介状'], r['条件'],\n Paragraph('<link href=\"{}\">Link</link>'.format(r['HP']), link_style)\n ]\n rows.append(columns)\n\ntable = LongTable(\n rows, colWidths=(10*mm, 15*mm, 75*mm, 55*mm, 20*mm, 15*mm, 60*mm, 15*mm),\n repeatRows=1)\ntable.setStyle(TableStyle([\n ('FONT', (0, 0), (-1, -1), font_name, 9),\n ('BOX', (0, 0), (-1, -1), 0.5, colors.black),\n ('INNERGRID', (0, 0), (-1, -1), 0.25, colors.black),\n ('VALIGN', (0, 0), (-1, -1), 'BOTTOM'),\n ]))",
"_____no_output_____"
],
[
"story.append(table)",
"_____no_output_____"
],
[
"pdfFile.multiBuild(story)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc4105fb69d09207308cea3204445744e7dbd7e | 4,349 | ipynb | Jupyter Notebook | data-collection.ipynb | daylight-lab/alexa | 2417b20467ca3dec9a640b552b76fceb9b9d1127 | [
"BSD-3-Clause"
] | null | null | null | data-collection.ipynb | daylight-lab/alexa | 2417b20467ca3dec9a640b552b76fceb9b9d1127 | [
"BSD-3-Clause"
] | null | null | null | data-collection.ipynb | daylight-lab/alexa | 2417b20467ca3dec9a640b552b76fceb9b9d1127 | [
"BSD-3-Clause"
] | null | null | null | 22.769634 | 108 | 0.502644 | [
[
[
"import requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nfrom time import sleep",
"_____no_output_____"
]
],
[
[
"# Get the rankings for each country\n\n## Get Alexa rankings for one country",
"_____no_output_____"
]
],
[
[
"from funcy import compose\n\ndef get_rankings (country_ranking_url, country_name):\n \n def parse_ranking (country_ranking_url):\n '''Takes a country ranking URL and returns a list of (bs4 parsed) HTML ranking listings.'''\n rankings_html = requests.get(country_ranking_url).content\n soup = BeautifulSoup(rankings_html, 'html.parser')\n listings = soup.find_all(\"div\", {\"class\": \"site-listing\"})\n return listings\n\n def extract_info (site_listing_html):\n '''Takes HTML of an Alexa site listing and returns JSON.'''\n attrs = site_listing_html.find_all(\"div\", {\"class\":\"td\"})\n rank = int(attrs[0].text)\n url = attrs[1].a.text\n site_info = attrs[1].a['href']\n return {\n 'rank': rank,\n 'url': url,\n 'site_info': site_info,\n 'country_name': country_name,\n }\n\n return [extract_info(ranking) for\n ranking in parse_ranking(country_ranking_url)] \n \n\nget_rankings('https://www.alexa.com/topsites/countries/AF', 'Afghanistan')\n",
"_____no_output_____"
]
],
[
[
"## Get name and URL for all the countries",
"_____no_output_____"
]
],
[
[
"top = get_rankings('https://www.alexa.com/topsites', '')",
"_____no_output_____"
],
[
"top_urls = [s['url'] for s in top]",
"_____no_output_____"
],
[
"tlds = [u.split('.')[-1] for u in top_urls]",
"_____no_output_____"
],
[
"providers = pd.read_csv('providers_labeled - providers_unlabeled.csv')\ndef get_juris (tld):\n return providers[providers['name']==tld]['country (alpha2)'].values[0]\nget_juris('.com') ",
"_____no_output_____"
],
[
"tld_jurisdictions = [get_juris(f'.{tld}') for tld in tlds]",
"_____no_output_____"
],
[
"len([j for j in tld_jurisdictions if j == 'US'])/len(tld_jurisdictions)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc41fc82187f301b62f7e8d7f39adec002e6372 | 15,162 | ipynb | Jupyter Notebook | session3/session3.ipynb | willwooten/pb-exercises | 8d380438e57dc998dba66f80cef5f284a4e33aa4 | [
"BSD-2-Clause"
] | null | null | null | session3/session3.ipynb | willwooten/pb-exercises | 8d380438e57dc998dba66f80cef5f284a4e33aa4 | [
"BSD-2-Clause"
] | null | null | null | session3/session3.ipynb | willwooten/pb-exercises | 8d380438e57dc998dba66f80cef5f284a4e33aa4 | [
"BSD-2-Clause"
] | null | null | null | 31.719665 | 1,361 | 0.629139 | [
[
[
"############## PLEASE RUN THIS CELL FIRST! ###################\n\n# import everything and define a test runner function\nfrom importlib import reload\nfrom helper import run\nimport ecc, helper, tx, script",
"_____no_output_____"
],
[
"# Signing Example\nfrom ecc import G, N # G = generator point , N = Public point\nfrom helper import hash256\nsecret = 1800555555518005555555\nz = int.from_bytes(hash256(b'ECDSA is awesome!'), 'big')\nk = 12345\nr = (k*G).x.num\ns = (z+r*secret) * pow(k, -1, N) % N\nprint(hex(z), hex(r), hex(s))\nprint(secret*G)",
"0xcf6304e0ed625dc13713ad8b330ca764325f013fe7a3057dbe6a2053135abeb4 0xf01d6b9018ab421dd410404cb869072065522bf85734008f105cf385a023a80f 0xf10c07e197e8b0e717108d0703d874357424ece31237c864621ac7acb0b9394c\nS256Point(4519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574,82b51eab8c27c66e26c858a079bcdf4f1ada34cec420cafc7eac1a42216fb6c4)\n"
],
[
"# Verification Example\nfrom ecc import S256Point, G, N\nz = 0xbc62d4b80d9e36da29c16c5d4d9f11731f36052c72401a76c23c0fb5a9b74423\nr = 0x37206a0610995c58074999cb9767b87af4c4978db68c06e8e6e81d282047a7c6\ns = 0x8ca63759c1157ebeaec0d03cecca119fc9a75bf8e6d0fa65c841c8e2738cdaec\npoint = S256Point(0x04519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574,\n 0x82b51eab8c27c66e26c858a079bcdf4f1ada34cec420cafc7eac1a42216fb6c4)\nu = z * pow(s, -1, N) % N\nv = r * pow(s, -1, N) % N\nprint((u*G + v*point).x.num == r)",
"True\n"
]
],
[
[
"### Exercise 1\nWhich sigs are valid?\n\n```\nP = (887387e452b8eacc4acfde10d9aaf7f6d9a0f975aabb10d006e4da568744d06c,\n61de6d95231cd89026e286df3b6ae4a894a3378e393e93a0f45b666329a0ae34)\nz, r, s = ec208baa0fc1c19f708a9ca96fdeff3ac3f230bb4a7ba4aede4942ad003c0f60,\nac8d1c87e51d0d441be8b3dd5b05c8795b48875dffe00b7ffcfac23010d3a395,\n68342ceff8935ededd102dd876ffd6ba72d6a427a3edb13d26eb0781cb423c4\nz, r, s = 7c076ff316692a3d7eb3c3bb0f8b1488cf72e1afcd929e29307032997a838a3d,\neff69ef2b1bd93a66ed5219add4fb51e11a840f404876325a1e8ffe0529a2c,\nc7207fee197d27c618aea621406f6bf5ef6fca38681d82b2f06fddbdce6feab6\n```\n",
"_____no_output_____"
]
],
[
[
"# Exercise 1\n\nfrom ecc import S256Point, G, N\npx = 0x887387e452b8eacc4acfde10d9aaf7f6d9a0f975aabb10d006e4da568744d06c\npy = 0x61de6d95231cd89026e286df3b6ae4a894a3378e393e93a0f45b666329a0ae34\nsignatures = (\n # (z, r, s)\n (0xec208baa0fc1c19f708a9ca96fdeff3ac3f230bb4a7ba4aede4942ad003c0f60,\n 0xac8d1c87e51d0d441be8b3dd5b05c8795b48875dffe00b7ffcfac23010d3a395,\n 0x68342ceff8935ededd102dd876ffd6ba72d6a427a3edb13d26eb0781cb423c4),\n (0x7c076ff316692a3d7eb3c3bb0f8b1488cf72e1afcd929e29307032997a838a3d,\n 0xeff69ef2b1bd93a66ed5219add4fb51e11a840f404876325a1e8ffe0529a2c,\n 0xc7207fee197d27c618aea621406f6bf5ef6fca38681d82b2f06fddbdce6feab6),\n)\n# initialize the public point\n# use: S256Point(x-coordinate, y-coordinate)\npoint = S256Point(px,py)\n# iterate over signatures\nfor z, r, s in signatures:\n # u = z / s, v = r / s\n u = z * pow(s, -1, N) % N\n v = r * pow(s, -1, N) % N\n print((u*G + v*point).x.num == r)\n\n # finally, uG+vP should have the x-coordinate equal to r\n",
"True\nTrue\n"
]
],
[
[
"### Exercise 2\n\n\n\n\n#### Make [this test](/edit/session3/ecc.py) pass: `ecc.py:S256Test:test_verify`",
"_____no_output_____"
]
],
[
[
"# Exercise 2\n\nreload(ecc)\nrun(ecc.S256Test('test_verify'))",
".\n----------------------------------------------------------------------\nRan 1 test in 0.544s\n\nOK\n"
]
],
[
[
"### Exercise 3\n\n\n\n\n#### Make [this test](/edit/session3/ecc.py) pass: `ecc.py:PrivateKeyTest:test_sign`",
"_____no_output_____"
]
],
[
[
"# Exercise 3\n\nreload(ecc)\nrun(ecc.PrivateKeyTest('test_sign'))",
".\n----------------------------------------------------------------------\nRan 1 test in 0.530s\n\nOK\n"
]
],
[
[
"### Exercise 4\nVerify the DER signature for the hash of \"ECDSA is awesome!\" for the given SEC pubkey\n\n`z = int.from_bytes(hash256('ECDSA is awesome!'), 'big')`\n\nPublic Key in SEC Format:\n0204519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574\n\nSignature in DER Format: 304402201f62993ee03fca342fcb45929993fa6ee885e00ddad8de154f268d98f083991402201e1ca12ad140c04e0e022c38f7ce31da426b8009d02832f0b44f39a6b178b7a1\n",
"_____no_output_____"
]
],
[
[
"# Exercise 4\n\nfrom ecc import S256Point, Signature\nfrom helper import hash256\nder = bytes.fromhex('304402201f62993ee03fca342fcb45929993fa6ee885e00ddad8de154f268d98f083991402201e1ca12ad140c04e0e022c38f7ce31da426b8009d02832f0b44f39a6b178b7a1')\nsec = bytes.fromhex('0204519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574')\n# message is the hash256 of the message \"ECDSA is awesome!\"\nz = int.from_bytes(hash256(b'ECDSA is awesome!'), 'big')\n# parse the der format to get the signature\nsig = Signature.parse(der)\n# parse the sec format to get the public key\npoint = S256Point.parse(sec)\n# use the verify method on S256Point to validate the signature\nprint(point.verify(z, sig))\n",
"True\n"
]
],
[
[
"### Exercise 5\n\n\n\n\n#### Make [this test](/edit/session3/tx.py) pass: `tx.py:TxTest:test_parse_version`",
"_____no_output_____"
]
],
[
[
"# Exercise 5\n\nreload(tx)\nrun(tx.TxTest('test_parse_version'))",
".\n----------------------------------------------------------------------\nRan 1 test in 0.002s\n\nOK\n"
]
],
[
[
"### Exercise 6\n\n\n\n\n#### Make [this test](/edit/session3/tx.py) pass: `tx.py:TxTest:test_parse_inputs`",
"_____no_output_____"
]
],
[
[
"# Exercise 6\n\nreload(tx)\nrun(tx.TxTest('test_parse_inputs'))",
".\n----------------------------------------------------------------------\nRan 1 test in 0.002s\n\nOK\n"
]
],
[
[
"### Exercise 7\n\n\n\n\n#### Make [this test](/edit/session3/tx.py) pass: `tx.py:TxTest:test_parse_outputs`",
"_____no_output_____"
]
],
[
[
"# Exercise 7\n\nreload(tx)\nrun(tx.TxTest('test_parse_outputs'))",
".\n----------------------------------------------------------------------\nRan 1 test in 0.002s\n\nOK\n"
]
],
[
[
"### Exercise 8\n\n\n\n\n#### Make [this test](/edit/session3/tx.py) pass: `tx.py:TxTest:test_parse_locktime`",
"_____no_output_____"
]
],
[
[
"# Exercise 8\n\nreload(tx)\nrun(tx.TxTest('test_parse_locktime'))",
".\n----------------------------------------------------------------------\nRan 1 test in 0.002s\n\nOK\n"
]
],
[
[
"### Exercise 9\nWhat is the scriptSig from the second input in this tx? What is the scriptPubKey and amount of the first output in this tx? What is the amount for the second output?\n\n```\n010000000456919960ac691763688d3d3bcea9ad6ecaf875df5339e148a1fc61c6ed7a069e010000006a47304402204585bcdef85e6b1c6af5c2669d4830ff86e42dd205c0e089bc2a821657e951c002201024a10366077f87d6bce1f7100ad8cfa8a064b39d4e8fe4ea13a7b71aa8180f012102f0da57e85eec2934a82a585ea337ce2f4998b50ae699dd79f5880e253dafafb7feffffffeb8f51f4038dc17e6313cf831d4f02281c2a468bde0fafd37f1bf882729e7fd3000000006a47304402207899531a52d59a6de200179928ca900254a36b8dff8bb75f5f5d71b1cdc26125022008b422690b8461cb52c3cc30330b23d574351872b7c361e9aae3649071c1a7160121035d5c93d9ac96881f19ba1f686f15f009ded7c62efe85a872e6a19b43c15a2937feffffff567bf40595119d1bb8a3037c356efd56170b64cbcc160fb028fa10704b45d775000000006a47304402204c7c7818424c7f7911da6cddc59655a70af1cb5eaf17c69dadbfc74ffa0b662f02207599e08bc8023693ad4e9527dc42c34210f7a7d1d1ddfc8492b654a11e7620a0012102158b46fbdff65d0172b7989aec8850aa0dae49abfb84c81ae6e5b251a58ace5cfeffffffd63a5e6c16e620f86f375925b21cabaf736c779f88fd04dcad51d26690f7f345010000006a47304402200633ea0d3314bea0d95b3cd8dadb2ef79ea8331ffe1e61f762c0f6daea0fabde022029f23b3e9c30f080446150b23852028751635dcee2be669c2a1686a4b5edf304012103ffd6f4a67e94aba353a00882e563ff2722eb4cff0ad6006e86ee20dfe7520d55feffffff0251430f00000000001976a914ab0c0b2e98b1ab6dbf67d4750b0a56244948a87988ac005a6202000000001976a9143c82d7df364eb6c75be8c80df2b3eda8db57397088ac46430600\n```\n",
"_____no_output_____"
]
],
[
[
"# Exercise 9\n\nfrom io import BytesIO\nfrom tx import Tx\nhex_transaction = '010000000456919960ac691763688d3d3bcea9ad6ecaf875df5339e148a1fc61c6ed7a069e010000006a47304402204585bcdef85e6b1c6af5c2669d4830ff86e42dd205c0e089bc2a821657e951c002201024a10366077f87d6bce1f7100ad8cfa8a064b39d4e8fe4ea13a7b71aa8180f012102f0da57e85eec2934a82a585ea337ce2f4998b50ae699dd79f5880e253dafafb7feffffffeb8f51f4038dc17e6313cf831d4f02281c2a468bde0fafd37f1bf882729e7fd3000000006a47304402207899531a52d59a6de200179928ca900254a36b8dff8bb75f5f5d71b1cdc26125022008b422690b8461cb52c3cc30330b23d574351872b7c361e9aae3649071c1a7160121035d5c93d9ac96881f19ba1f686f15f009ded7c62efe85a872e6a19b43c15a2937feffffff567bf40595119d1bb8a3037c356efd56170b64cbcc160fb028fa10704b45d775000000006a47304402204c7c7818424c7f7911da6cddc59655a70af1cb5eaf17c69dadbfc74ffa0b662f02207599e08bc8023693ad4e9527dc42c34210f7a7d1d1ddfc8492b654a11e7620a0012102158b46fbdff65d0172b7989aec8850aa0dae49abfb84c81ae6e5b251a58ace5cfeffffffd63a5e6c16e620f86f375925b21cabaf736c779f88fd04dcad51d26690f7f345010000006a47304402200633ea0d3314bea0d95b3cd8dadb2ef79ea8331ffe1e61f762c0f6daea0fabde022029f23b3e9c30f080446150b23852028751635dcee2be669c2a1686a4b5edf304012103ffd6f4a67e94aba353a00882e563ff2722eb4cff0ad6006e86ee20dfe7520d55feffffff0251430f00000000001976a914ab0c0b2e98b1ab6dbf67d4750b0a56244948a87988ac005a6202000000001976a9143c82d7df364eb6c75be8c80df2b3eda8db57397088ac46430600'\n# bytes.fromhex to get the binary representation\nraw_tx = bytes.fromhex(hex_transaction)\n# create a stream using BytesIO()\nstream = BytesIO(raw_tx)\n# Tx.parse() the stream\ntx = Tx.parse(stream)\n# print tx's second input's scriptSig\nprint(tx.tx_ins[1].script_sig)\n# print tx's first output's scriptPubKey\nprint(tx.tx_outs[0].script_pubkey)\n# print tx's second output's amount\nprint(tx.tx_outs[1].amount)",
"304402207899531a52d59a6de200179928ca900254a36b8dff8bb75f5f5d71b1cdc26125022008b422690b8461cb52c3cc30330b23d574351872b7c361e9aae3649071c1a71601 035d5c93d9ac96881f19ba1f686f15f009ded7c62efe85a872e6a19b43c15a2937\nOP_DUP OP_HASH160 ab0c0b2e98b1ab6dbf67d4750b0a56244948a879 OP_EQUALVERIFY OP_CHECKSIG\n40000000\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc42dd73eba2f14068eecd71be552754196bcc0 | 257,797 | ipynb | Jupyter Notebook | Cognoma_discarded/WIP-MultilayerPerceptron-KT12.ipynb | KT12/Training | ac4de382a1387ccfe51404eb3302cc518762a781 | [
"MIT"
] | 1 | 2017-08-17T04:44:53.000Z | 2017-08-17T04:44:53.000Z | Cognoma_discarded/WIP-MultilayerPerceptron-KT12.ipynb | KT12/training | ac4de382a1387ccfe51404eb3302cc518762a781 | [
"MIT"
] | null | null | null | Cognoma_discarded/WIP-MultilayerPerceptron-KT12.ipynb | KT12/training | ac4de382a1387ccfe51404eb3302cc518762a781 | [
"MIT"
] | null | null | null | 230.587657 | 79,702 | 0.891981 | [
[
[
"### Thank you for reviewing. The multilayer perceptron in sklearn does not lend itself to simple AUC analysis. Will attempt in keras/tensorflow with softmax output layer to generate more analysis.",
"_____no_output_____"
],
[
"# Create a multilayer perceptron classifier to predict TP53 mutation from gene expression data in TCGA",
"_____no_output_____"
]
],
[
[
"import os\nimport urllib\nimport random\nimport warnings\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn import preprocessing, grid_search\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import roc_auc_score, roc_curve\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.feature_selection import SelectKBest\nfrom statsmodels.robust.scale import mad",
"/home/ktt/anaconda2/envs/cognoma-machine-learning/lib/python3.5/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n/home/ktt/anaconda2/envs/cognoma-machine-learning/lib/python3.5/site-packages/sklearn/grid_search.py:43: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.\n DeprecationWarning)\n"
],
[
"%matplotlib inline\nplt.style.use('seaborn-notebook')",
"_____no_output_____"
]
],
[
[
"*Please look at the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html) regarding the hyperparameters and modules of the MLP Classifier*\n\n*Here is some [documentation](http://scikit-learn.org/stable/modules/neural_networks_supervised.html#multi-layer-perceptron) regarding the implementation and some examples*\n\n*Here is some [information](https://ghr.nlm.nih.gov/gene/TP53) about TP53*",
"_____no_output_____"
],
[
"## Load Data",
"_____no_output_____"
]
],
[
[
"if not os.path.exists('data'):\n os.makedirs('data')",
"_____no_output_____"
],
[
"url_to_path = {\n # X matrix\n 'https://ndownloader.figshare.com/files/5514386':\n os.path.join('data', 'expression.tsv.bz2'),\n # Y Matrix\n 'https://ndownloader.figshare.com/files/5514389':\n os.path.join('data', 'mutation-matrix.tsv.bz2'),\n}\n\nfor url, path in url_to_path.items():\n if not os.path.exists(path):\n urllib.request.urlretrieve(url, path)",
"_____no_output_____"
],
[
"%%time\npath = os.path.join('data', 'expression.tsv.bz2')\nX = pd.read_table(path, index_col=0)",
"CPU times: user 2min 17s, sys: 204 ms, total: 2min 17s\nWall time: 2min 17s\n"
],
[
"%%time\nX.describe()",
"CPU times: user 39 s, sys: 100 ms, total: 39.1 s\nWall time: 39.1 s\n"
],
[
"%%time\npath = os.path.join('data', 'mutation-matrix.tsv.bz2')\nY = pd.read_table(path, index_col=0)",
"CPU times: user 2min 2s, sys: 1.32 s, total: 2min 3s\nWall time: 2min 3s\n"
],
[
"# We're going to be building a 'TP53' classifier \nGENE = 'TP53'",
"_____no_output_____"
],
[
"y = Y[GENE]",
"_____no_output_____"
],
[
"# The Series now holds TP53 Mutation Status for each Sample\ny.head(6)",
"_____no_output_____"
],
[
"# Here are the percentage of tumors with NF1\ny.value_counts(True)",
"_____no_output_____"
]
],
[
[
"## Specify model configuration",
"_____no_output_____"
]
],
[
[
"# Parameter Sweep for Hyperparameters\nn_feature_kept = 500\nparam_fixed = {\n 'solver': 'adam',\n 'random_state' : 0,\n 'activation': 'tanh'\n}\nparam_grid = {\n 'alpha': [10 ** x for x in range(-5, 3)],\n 'learning_rate_init': [0.001, 0.0025, 0.005, 0.01, 0.025, 0.05],\n}",
"_____no_output_____"
]
],
[
[
"## Set aside 10% of the data for testing",
"_____no_output_____"
]
],
[
[
"# Typically, this can only be done where the number of mutations is large enough\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)\n'Size: {:,} features, {:,} training samples, {:,} testing samples'.format(len(X.columns), len(X_train), len(X_test))",
"_____no_output_____"
]
],
[
[
"## Median absolute deviation feature selection",
"_____no_output_____"
]
],
[
[
"def fs_mad(x, y):\n \"\"\" \n Get the median absolute deviation (MAD) for each column of x\n \"\"\"\n scores = mad(x) \n return scores, np.array([np.NaN]*len(scores))\n\n# select the top features with the highest MAD\nfeature_select = SelectKBest(fs_mad, k=n_feature_kept)",
"_____no_output_____"
]
],
[
[
"## Define pipeline and Cross validation model fitting",
"_____no_output_____"
],
[
"# NB\n\nDue to my inability to get the 'roc_auc' score to properly chart, will change scoring to 'f1_micro'\nPlease see [documentation](http://scikit-learn.org/stable/modules/model_evaluation.html)",
"_____no_output_____"
]
],
[
[
"# Include loss='log' in param_grid doesn't work with pipeline somehow\nclf = MLPClassifier(solver=param_fixed['solver'], activation=param_fixed['activation'],\n hidden_layer_sizes=[64, 8], random_state=param_fixed['random_state'])\n\n# joblib is used to cross-validate in parallel by setting `n_jobs=-1` in GridSearchCV\n# Supress joblib warning. See https://github.com/scikit-learn/scikit-learn/issues/6370\nwarnings.filterwarnings('ignore', message='Changing the shape of non-C contiguous array')\nclf_grid = grid_search.GridSearchCV(estimator=clf, param_grid=param_grid, n_jobs=-1, scoring='f1_micro')\npipeline = make_pipeline(\n feature_select, # Feature selection\n StandardScaler(), # Feature scaling\n clf_grid)",
"_____no_output_____"
],
[
"%%time\n# Fit the model (the computationally intensive part)\npipeline.fit(X=X_train, y=y_train)\nbest_clf = clf_grid.best_estimator_\nfeature_mask = feature_select.get_support() # Get a boolean array indicating the selected features",
"CPU times: user 23.9 s, sys: 3.98 s, total: 27.9 s\nWall time: 5min 27s\n"
],
[
"clf_grid.best_params_",
"_____no_output_____"
]
],
[
[
"# History of best parameters/testing",
"_____no_output_____"
]
],
[
[
"# Tanh activation, large network MLP [250, 128, 64]\n# {'alpha': 0.01, 'learning_rate_init': 0.05}\n# {'alpha': 0.1, 'learning_rate_init': 0.05}\n\n# Add relu and sigmoid activation functions to grid search\n# {'activation': 'relu', 'alpha': 0.001, 'learning_rate_init': 0.05}\n\n# Designated relu as sole activation function\n# {'alpha': 0.1, 'learning_rate_init': 0.01}\n\n# Change activation to tanh to increase chance of convergence, reduced number of nodes [64, 8]\n# {'alpha': 1, 'learning_rate_init': 0.005}\n\n# Change in scoring to F1 micro\n# {'alpha': 0.001, 'learning_rate_init': 0.01}\n\n# Expanded parameters for grid search\n#{'alpha': 0.0001, 'learning_rate_init': 0.0025}\n\n# Expanded parameters for grid search, again\n#{'alpha': 0.0001, 'learning_rate_init': 0.0025}",
"_____no_output_____"
],
[
"best_clf",
"_____no_output_____"
]
],
[
[
"## Visualize hyperparameters performance",
"_____no_output_____"
]
],
[
[
"def grid_scores_to_df(grid_scores):\n \"\"\"\n Convert a sklearn.grid_search.GridSearchCV.grid_scores_ attribute to \n a tidy pandas DataFrame where each row is a hyperparameter-fold combinatination.\n \"\"\"\n rows = list()\n for grid_score in grid_scores:\n for fold, score in enumerate(grid_score.cv_validation_scores):\n row = grid_score.parameters.copy()\n row['fold'] = fold\n row['score'] = score\n rows.append(row)\n df = pd.DataFrame(rows)\n return df",
"_____no_output_____"
]
],
[
[
"## Process Mutation Matrix",
"_____no_output_____"
]
],
[
[
"cv_score_df = grid_scores_to_df(clf_grid.grid_scores_)\ncv_score_df.head(2)",
"_____no_output_____"
],
[
"# Cross-validated performance distribution\nfacet_grid = sns.factorplot(x='alpha', y='score', col='learning_rate_init',\n data=cv_score_df, kind='violin', size=4, aspect=1)\nfacet_grid.set_ylabels('F1 Micro Score');",
"_____no_output_____"
],
[
"# Cross-validated performance heatmap\ncv_score_mat = pd.pivot_table(cv_score_df, values='score', index='alpha', columns='learning_rate_init')\nax = sns.heatmap(cv_score_mat, annot=True, fmt='.1%')\nax.set_xlabel('Learning Rate')\nax.set_ylabel('L2 Regularization Parameter (alpha)');",
"_____no_output_____"
]
],
[
[
"# WARNING!\n## Errors below!",
"_____no_output_____"
],
[
"## Use Optimal Hyperparameters to Output ROC Curve",
"_____no_output_____"
],
[
"MLPClassifier has no method called `decision_function()`, so need to use `pipeline.predict()`.",
"_____no_output_____"
]
],
[
[
"y_pred_train = pipeline.predict(X_train)\ny_pred_test = pipeline.predict(X_test)\n\ndef get_threshold_metrics(y_true, y_pred):\n roc_columns = ['fpr', 'tpr', 'threshold']\n roc_items = zip(roc_columns, roc_curve(y_true, y_pred))\n roc_df = pd.DataFrame.from_items(roc_items)\n auroc = roc_auc_score(y_true, y_pred)\n return {'auroc': auroc, 'roc_df': roc_df}\n\nmetrics_train = get_threshold_metrics(y_train, y_pred_train)\nmetrics_test = get_threshold_metrics(y_test, y_pred_test)",
"_____no_output_____"
],
[
"# Plot ROC\nplt.figure()\nfor label, metrics in ('Training', metrics_train), ('Testing', metrics_test):\n roc_df = metrics['roc_df']\n plt.plot(roc_df.fpr, roc_df.tpr,\n label='{} (AUROC = {:.1%})'.format(label, metrics['auroc']))\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('Predicting TP53 mutation from gene expression (ROC curves)')\nplt.legend(loc='lower right');",
"_____no_output_____"
]
],
[
[
"Above does not look right.\nWill attempt again with `best_clf.predict()`, `feature_select.transform()`.\n\nhttp://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html#sklearn.feature_selection.SelectKBest.transform",
"_____no_output_____"
]
],
[
[
"y_pred_train = best_clf.predict(feature_select.transform(X_train))\ny_pred_test = best_clf.predict(feature_select.transform(X_test))\n\ndef get_threshold_metrics(y_true, y_pred):\n roc_columns = ['fpr', 'tpr', 'threshold']\n roc_items = zip(roc_columns, roc_curve(y_true, y_pred))\n roc_df = pd.DataFrame.from_items(roc_items)\n auroc = roc_auc_score(y_true, y_pred)\n return {'auroc': auroc, 'roc_df': roc_df}\n\nmetrics_train = get_threshold_metrics(y_train, y_pred_train)\nmetrics_test = get_threshold_metrics(y_test, y_pred_test)",
"_____no_output_____"
],
[
"# Plot ROC\nplt.figure()\nfor label, metrics in ('Training', metrics_train), ('Testing', metrics_test):\n roc_df = metrics['roc_df']\n plt.plot(roc_df.fpr, roc_df.tpr,\n label='{} (AUROC = {:.1%})'.format(label, metrics['auroc']))\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('Predicting TP53 mutation from gene expression (ROC curves)')\nplt.legend(loc='lower right');",
"_____no_output_____"
]
],
[
[
"## What are the classifier coefficients?",
"_____no_output_____"
],
[
"In a MLP, the classifier coefficients are not as directly interpretable.",
"_____no_output_____"
],
[
"## Investigate the predictions",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
ecc448fda5302c9ca20097b9e2770794df41d37a | 7,722 | ipynb | Jupyter Notebook | COVID-19-python/COVID-19.ipynb | richross/blog | 73478b4576f45accd9a20f80a6242271adf01fe8 | [
"MIT"
] | 10 | 2019-12-05T23:11:14.000Z | 2022-03-28T06:17:02.000Z | COVID-19-python/COVID-19.ipynb | richross/blog | 73478b4576f45accd9a20f80a6242271adf01fe8 | [
"MIT"
] | null | null | null | COVID-19-python/COVID-19.ipynb | richross/blog | 73478b4576f45accd9a20f80a6242271adf01fe8 | [
"MIT"
] | 11 | 2019-10-16T02:22:05.000Z | 2022-03-25T13:45:57.000Z | 34.017621 | 360 | 0.538332 | [
[
[
"%matplotlib inline\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd",
"_____no_output_____"
],
[
"cases = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_US.csv')\ndeaths = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_US.csv')\n",
"_____no_output_____"
],
[
"print(cases.head())\nprint(deaths.head())",
"_____no_output_____"
],
[
"cases_CA = cases[cases[\"Province_State\"] == \"California\"]",
"_____no_output_____"
],
[
"cases_CA_indexed = cases_CA.set_index(\"Admin2\")\ncases_CA_T = cases_CA_indexed.T",
"_____no_output_____"
],
[
"cases_clean = cases_CA_T.drop(['UID','iso2','iso3','code3','FIPS','Province_State','Country_Region','Lat','Long_','Combined_Key'])",
"_____no_output_____"
],
[
"deaths_clean = deaths[deaths[\"Province_State\"] == \"California\"].set_index(\"Admin2\").T.drop(['UID','iso2','iso3','code3','FIPS','Province_State','Country_Region','Lat','Long_','Combined_Key']).drop(\"Population\",axis=0)",
"_____no_output_____"
],
[
"counties = ['Alameda',\n 'San Francisco',\n 'San Mateo',\n 'Santa Clara']",
"_____no_output_____"
],
[
"plot = cases_clean[counties].plot()\nplot.set_title(\"COVID-19 cases in Bay Area Counties\")",
"_____no_output_____"
],
[
"plot = deaths_clean[counties].plot(figsize=(20,10))\nplot.set_title(\"COVID-19 deaths in Bay Area Counties\")\n",
"_____no_output_____"
],
[
"cases_diff = cases_clean.diff().rolling(window=7).mean()\ndeaths_diff = deaths_clean.diff().rolling(window=7).mean()",
"_____no_output_____"
],
[
"pop = pd.read_csv('https://gist.githubusercontent.com/NillsF/7923a8c7f27ca98ec75b7e1529f259bb/raw/3bedefbe2e242addba3fb47cbcd239fbed16cd54/california.csv')\npop[\"CTYNAME\"] = pop[\"CTYNAME\"].str.replace(\" County\", \"\")\n",
"_____no_output_____"
],
[
"pop2 = pop.drop('GrowthRate',axis=1).set_index('CTYNAME')",
"_____no_output_____"
],
[
"cases_pm = cases_clean.copy()\nfor c in pop2.index.tolist():\n cases_pm[c] = cases_pm[c]/pop2.loc[c , : ]['Pop']\ncases_pm = cases_pm*1000000\n\ndeaths_pm = deaths_clean.copy()\nfor c in pop2.index.tolist():\n deaths_pm[c] = deaths_pm[c]/pop2.loc[c , : ]['Pop']\ndeaths_pm = deaths_pm*1000000",
"_____no_output_____"
],
[
"cases_pm_diff = cases_pm.diff().rolling(window=7).mean()\ndeaths_pm_diff = deaths_pm.diff().rolling(window=7).mean()",
"_____no_output_____"
],
[
"plot = cases_diff[counties].plot(figsize=(20,10))\nplot.set_title(\"7 day moving avg of new COVID-19 cases \")",
"_____no_output_____"
],
[
"plot = cases_pm_diff[counties].plot(figsize=(20,10))\nplot.set_title(\"7 day moving avg of new COVID-19 cases per million inhabitants \")",
"_____no_output_____"
],
[
"plot = deaths_pm_diff[counties].plot(figsize=(20,10))\nplot.set_title(\"7 day moving avg of daily COVID-19 deaths per million inhabitants \")",
"_____no_output_____"
],
[
"plot = cases_pm.sort_values(axis=1,by='7/20/20',ascending=False).iloc[:, : 10].plot(figsize=(20,10))\nplot.set_title(\"Top 10 counties by COVID-19 cases per million inhabitants\")",
"_____no_output_____"
],
[
"plot = deaths_pm.sort_values(axis=1,by='7/20/20',ascending=False).iloc[:, : 10].plot(figsize=(20,10))\nplot.set_title(\"Top 10 counties by COVID-19 deaths per million inhabitants\")",
"_____no_output_____"
],
[
"plot = cases_pm_diff.sort_values(axis=1,by='7/20/20',ascending=False).iloc[:, : 10].plot(figsize=(20,10))\nplot.set_title(\"Top 10 counties by 7 day rolling avg COVID-19 case increases per million inhabitants\")",
"_____no_output_____"
],
[
"plot = deaths_pm_diff.sort_values(axis=1,by='7/20/20',ascending=False).iloc[:, : 10].plot(figsize=(20,10))\nplot.set_title(\"Top 10 counties by 7 day rolling avg COVID-19 daily deaths per million inhabitants\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc46be06853b1ec670d12ab35236b9524c7aea8 | 23,943 | ipynb | Jupyter Notebook | homework/HM4/HM4.ipynb | pipiku915/CS583-2019F | 2fdfed8eb250678a4427151e28933352fb53541b | [
"MIT"
] | null | null | null | homework/HM4/HM4.ipynb | pipiku915/CS583-2019F | 2fdfed8eb250678a4427151e28933352fb53541b | [
"MIT"
] | null | null | null | homework/HM4/HM4.ipynb | pipiku915/CS583-2019F | 2fdfed8eb250678a4427151e28933352fb53541b | [
"MIT"
] | null | null | null | 30.735558 | 171 | 0.558326 | [
[
[
"# Home 4: Build a seq2seq model for machine translation.\n\n### Name: [Your-Name?]\n\n### Task: Translate English to [what-language?]",
"_____no_output_____"
],
[
"## 0. You will do the following:\n\n1. Read and run my code.\n2. Complete the code in Section 1.1 and Section 4.2.\n\n * Translation English to **German** is not acceptable!!! Try another language.\n \n3. **Make improvements.** Directly modify the code in Section 3. Do at least one of the followings. By doing more, you will get up to 2 bonus scores to the total.\n\n * Bi-LSTM instead of LSTM\n \n * Multi-task learning (e.g., both English to French and English to Spanish)\n \n * Attention\n \n4. Evaluate the translation using the BLEU score. \n\n * Optional. Up to 1 bonus scores to the total.\n \n5. Convert the notebook to .HTML file. \n\n * The HTML file must contain the code and the output after execution.\n\n6. Put the .HTML file in your own Github repo. \n\n7. Submit the link to the HTML file to Canvas. \n",
"_____no_output_____"
],
[
"### Hint: \n\nTo implement ```Bi-LSTM```, you will need the following code to build the encoder; the decoder won't be much different.",
"_____no_output_____"
]
],
[
[
"from keras.layers import Bidirectional, Concatenate\n\nencoder_bilstm = Bidirectional(LSTM(latent_dim, return_state=True, \n dropout=0.5, name='encoder_lstm'))\n_, forward_h, forward_c, backward_h, backward_c = encoder_bilstm(encoder_inputs)\n\nstate_h = Concatenate()([forward_h, backward_h])\nstate_c = Concatenate()([forward_c, backward_c])",
"_____no_output_____"
]
],
[
[
"### Hint: \n\nTo implement multi-task training, you can refer to ```Section 7.1.3 Multi-output models``` of the textbook, ```Deep Learning with Python```.",
"_____no_output_____"
],
[
"## 1. Data preparation\n\n1. Download data (e.g., \"deu-eng.zip\") from http://www.manythings.org/anki/\n2. Unzip the .ZIP file.\n3. Put the .TXT file (e.g., \"deu.txt\") in the directory \"./Data/\".",
"_____no_output_____"
],
[
"### 1.1. Load and clean text\n",
"_____no_output_____"
]
],
[
[
"import re\nimport string\nfrom unicodedata import normalize\nimport numpy\n\n# load doc into memory\ndef load_doc(filename):\n # open the file as read only\n file = open(filename, mode='rt', encoding='utf-8')\n # read all text\n text = file.read()\n # close the file\n file.close()\n return text\n\n\n# split a loaded document into sentences\ndef to_pairs(doc):\n lines = doc.strip().split('\\n')\n pairs = [line.split('\\t') for line in lines]\n return pairs\n\ndef clean_data(lines):\n cleaned = list()\n # prepare regex for char filtering\n re_print = re.compile('[^%s]' % re.escape(string.printable))\n # prepare translation table for removing punctuation\n table = str.maketrans('', '', string.punctuation)\n for pair in lines:\n clean_pair = list()\n for line in pair:\n # normalize unicode characters\n line = normalize('NFD', line).encode('ascii', 'ignore')\n line = line.decode('UTF-8')\n # tokenize on white space\n line = line.split()\n # convert to lowercase\n line = [word.lower() for word in line]\n # remove punctuation from each token\n line = [word.translate(table) for word in line]\n # remove non-printable chars form each token\n line = [re_print.sub('', w) for w in line]\n # remove tokens with numbers in them\n line = [word for word in line if word.isalpha()]\n # store as string\n clean_pair.append(' '.join(line))\n cleaned.append(clean_pair)\n return numpy.array(cleaned)",
"_____no_output_____"
]
],
[
[
"#### Fill the following blanks:",
"_____no_output_____"
]
],
[
[
"# e.g., filename = 'Data/deu.txt'\nfilename = <what is your file name?>\n\n# e.g., n_train = 20000\nn_train = <how many sentences are you going to use for training?>",
"_____no_output_____"
],
[
"# load dataset\ndoc = load_doc(filename)\n\n# split into Language1-Language2 pairs\npairs = to_pairs(doc)\n\n# clean sentences\nclean_pairs = clean_data(pairs)[0:n_train, :]",
"_____no_output_____"
],
[
"for i in range(3000, 3010):\n print('[' + clean_pairs[i, 0] + '] => [' + clean_pairs[i, 1] + ']')",
"_____no_output_____"
],
[
"input_texts = clean_pairs[:, 0]\ntarget_texts = ['\\t' + text + '\\n' for text in clean_pairs[:, 1]]\n\nprint('Length of input_texts: ' + str(input_texts.shape))\nprint('Length of target_texts: ' + str(input_texts.shape))",
"_____no_output_____"
],
[
"max_encoder_seq_length = max(len(line) for line in input_texts)\nmax_decoder_seq_length = max(len(line) for line in target_texts)\n\nprint('max length of input sentences: %d' % (max_encoder_seq_length))\nprint('max length of target sentences: %d' % (max_decoder_seq_length))",
"_____no_output_____"
]
],
[
[
"**Remark:** To this end, you have two lists of sentences: input_texts and target_texts",
"_____no_output_____"
],
[
"## 2. Text processing\n\n### 2.1. Convert texts to sequences\n\n- Input: A list of $n$ sentences (with max length $t$).\n- It is represented by a $n\\times t$ matrix after the tokenization and zero-padding.",
"_____no_output_____"
]
],
[
[
"from keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences\n\n# encode and pad sequences\ndef text2sequences(max_len, lines):\n tokenizer = Tokenizer(char_level=True, filters='')\n tokenizer.fit_on_texts(lines)\n seqs = tokenizer.texts_to_sequences(lines)\n seqs_pad = pad_sequences(seqs, maxlen=max_len, padding='post')\n return seqs_pad, tokenizer.word_index\n\n\nencoder_input_seq, input_token_index = text2sequences(max_encoder_seq_length, \n input_texts)\ndecoder_input_seq, target_token_index = text2sequences(max_decoder_seq_length, \n target_texts)\n\nprint('shape of encoder_input_seq: ' + str(encoder_input_seq.shape))\nprint('shape of input_token_index: ' + str(len(input_token_index)))\nprint('shape of decoder_input_seq: ' + str(decoder_input_seq.shape))\nprint('shape of target_token_index: ' + str(len(target_token_index)))",
"_____no_output_____"
],
[
"num_encoder_tokens = len(input_token_index) + 1\nnum_decoder_tokens = len(target_token_index) + 1\n\nprint('num_encoder_tokens: ' + str(num_encoder_tokens))\nprint('num_decoder_tokens: ' + str(num_decoder_tokens))",
"_____no_output_____"
]
],
[
[
"**Remark:** To this end, the input language and target language texts are converted to 2 matrices. \n\n- Their number of rows are both n_train.\n- Their number of columns are respective max_encoder_seq_length and max_decoder_seq_length.",
"_____no_output_____"
],
[
"The followings print a sentence and its representation as a sequence.",
"_____no_output_____"
]
],
[
[
"target_texts[100]",
"_____no_output_____"
],
[
"decoder_input_seq[100, :]",
"_____no_output_____"
]
],
[
[
"## 2.2. One-hot encode\n\n- Input: A list of $n$ sentences (with max length $t$).\n- It is represented by a $n\\times t$ matrix after the tokenization and zero-padding.\n- It is represented by a $n\\times t \\times v$ tensor ($t$ is the number of unique chars) after the one-hot encoding.",
"_____no_output_____"
]
],
[
[
"from keras.utils import to_categorical\n\n# one hot encode target sequence\ndef onehot_encode(sequences, max_len, vocab_size):\n n = len(sequences)\n data = numpy.zeros((n, max_len, vocab_size))\n for i in range(n):\n data[i, :, :] = to_categorical(sequences[i], num_classes=vocab_size)\n return data\n\nencoder_input_data = onehot_encode(encoder_input_seq, max_encoder_seq_length, num_encoder_tokens)\ndecoder_input_data = onehot_encode(decoder_input_seq, max_decoder_seq_length, num_decoder_tokens)\n\ndecoder_target_seq = numpy.zeros(decoder_input_seq.shape)\ndecoder_target_seq[:, 0:-1] = decoder_input_seq[:, 1:]\ndecoder_target_data = onehot_encode(decoder_target_seq, \n max_decoder_seq_length, \n num_decoder_tokens)\n\nprint(encoder_input_data.shape)\nprint(decoder_input_data.shape)",
"_____no_output_____"
]
],
[
[
"## 3. Build the networks (for training)\n\n- Build encoder, decoder, and connect the two modules to get \"model\". \n\n- Fit the model on the bilingual data to train the parameters in the encoder and decoder.",
"_____no_output_____"
],
[
"### 3.1. Encoder network\n\n- Input: one-hot encode of the input language\n\n- Return: \n\n -- output (all the hidden states $h_1, \\cdots , h_t$) are always discarded\n \n -- the final hidden state $h_t$\n \n -- the final conveyor belt $c_t$",
"_____no_output_____"
]
],
[
[
"from keras.layers import Input, LSTM\nfrom keras.models import Model\n\nlatent_dim = 256\n\n# inputs of the encoder network\nencoder_inputs = Input(shape=(None, num_encoder_tokens), \n name='encoder_inputs')\n\n# set the LSTM layer\nencoder_lstm = LSTM(latent_dim, return_state=True, \n dropout=0.5, name='encoder_lstm')\n_, state_h, state_c = encoder_lstm(encoder_inputs)\n\n# build the encoder network model\nencoder_model = Model(inputs=encoder_inputs, \n outputs=[state_h, state_c],\n name='encoder')",
"_____no_output_____"
]
],
[
[
"Print a summary and save the encoder network structure to \"./encoder.pdf\"",
"_____no_output_____"
]
],
[
[
"from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot, plot_model\n\nSVG(model_to_dot(encoder_model, show_shapes=False).create(prog='dot', format='svg'))\n\nplot_model(\n model=encoder_model, show_shapes=False,\n to_file='encoder.pdf'\n)\n\nencoder_model.summary()",
"_____no_output_____"
]
],
[
[
"### 3.2. Decoder network\n\n- Inputs: \n\n -- one-hot encode of the target language\n \n -- The initial hidden state $h_t$ \n \n -- The initial conveyor belt $c_t$ \n\n- Return: \n\n -- output (all the hidden states) $h_1, \\cdots , h_t$\n\n -- the final hidden state $h_t$ (discarded in the training and used in the prediction)\n \n -- the final conveyor belt $c_t$ (discarded in the training and used in the prediction)",
"_____no_output_____"
]
],
[
[
"from keras.layers import Input, LSTM, Dense\nfrom keras.models import Model\n\n# inputs of the decoder network\ndecoder_input_h = Input(shape=(latent_dim,), name='decoder_input_h')\ndecoder_input_c = Input(shape=(latent_dim,), name='decoder_input_c')\ndecoder_input_x = Input(shape=(None, num_decoder_tokens), name='decoder_input_x')\n\n# set the LSTM layer\ndecoder_lstm = LSTM(latent_dim, return_sequences=True, \n return_state=True, dropout=0.5, name='decoder_lstm')\ndecoder_lstm_outputs, state_h, state_c = decoder_lstm(decoder_input_x, \n initial_state=[decoder_input_h, decoder_input_c])\n\n# set the dense layer\ndecoder_dense = Dense(num_decoder_tokens, activation='softmax', name='decoder_dense')\ndecoder_outputs = decoder_dense(decoder_lstm_outputs)\n\n# build the decoder network model\ndecoder_model = Model(inputs=[decoder_input_x, decoder_input_h, decoder_input_c],\n outputs=[decoder_outputs, state_h, state_c],\n name='decoder')",
"_____no_output_____"
]
],
[
[
"Print a summary and save the encoder network structure to \"./decoder.pdf\"",
"_____no_output_____"
]
],
[
[
"from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot, plot_model\n\nSVG(model_to_dot(decoder_model, show_shapes=False).create(prog='dot', format='svg'))\n\nplot_model(\n model=decoder_model, show_shapes=False,\n to_file='decoder.pdf'\n)\n\ndecoder_model.summary()",
"_____no_output_____"
]
],
[
[
"### 3.3. Connect the encoder and decoder",
"_____no_output_____"
]
],
[
[
"# input layers\nencoder_input_x = Input(shape=(None, num_encoder_tokens), name='encoder_input_x')\ndecoder_input_x = Input(shape=(None, num_decoder_tokens), name='decoder_input_x')\n\n# connect encoder to decoder\nencoder_final_states = encoder_model([encoder_input_x])\ndecoder_lstm_output, _, _ = decoder_lstm(decoder_input_x, initial_state=encoder_final_states)\ndecoder_pred = decoder_dense(decoder_lstm_output)\n\nmodel = Model(inputs=[encoder_input_x, decoder_input_x], \n outputs=decoder_pred, \n name='model_training')",
"_____no_output_____"
],
[
"print(state_h)\nprint(decoder_input_h)",
"_____no_output_____"
],
[
"from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot, plot_model\n\nSVG(model_to_dot(model, show_shapes=False).create(prog='dot', format='svg'))\n\nplot_model(\n model=model, show_shapes=False,\n to_file='model_training.pdf'\n)\n\nmodel.summary()",
"_____no_output_____"
]
],
[
[
"### 3.5. Fit the model on the bilingual dataset\n\n- encoder_input_data: one-hot encode of the input language\n\n- decoder_input_data: one-hot encode of the input language\n\n- decoder_target_data: labels (left shift of decoder_input_data)\n\n- tune the hyper-parameters\n\n- stop when the validation loss stop decreasing.",
"_____no_output_____"
]
],
[
[
"print('shape of encoder_input_data' + str(encoder_input_data.shape))\nprint('shape of decoder_input_data' + str(decoder_input_data.shape))\nprint('shape of decoder_target_data' + str(decoder_target_data.shape))",
"_____no_output_____"
],
[
"model.compile(optimizer='rmsprop', loss='categorical_crossentropy')\n\nmodel.fit([encoder_input_data, decoder_input_data], # training data\n decoder_target_data, # labels (left shift of the target sequences)\n batch_size=64, epochs=50, validation_split=0.2)\n\nmodel.save('seq2seq.h5')",
"_____no_output_____"
]
],
[
[
"## 4. Make predictions\n\n\n### 4.1. Translate English to XXX\n\n1. Encoder read a sentence (source language) and output its final states, $h_t$ and $c_t$.\n2. Take the [star] sign \"\\t\" and the final state $h_t$ and $c_t$ as input and run the decoder.\n3. Get the new states and predicted probability distribution.\n4. sample a char from the predicted probability distribution\n5. take the sampled char and the new states as input and repeat the process (stop if reach the [stop] sign \"\\n\").",
"_____no_output_____"
]
],
[
[
"# Reverse-lookup token index to decode sequences back to something readable.\nreverse_input_char_index = dict((i, char) for char, i in input_token_index.items())\nreverse_target_char_index = dict((i, char) for char, i in target_token_index.items())",
"_____no_output_____"
],
[
"def decode_sequence(input_seq):\n states_value = encoder_model.predict(input_seq)\n\n target_seq = numpy.zeros((1, 1, num_decoder_tokens))\n target_seq[0, 0, target_token_index['\\t']] = 1.\n\n stop_condition = False\n decoded_sentence = ''\n while not stop_condition:\n output_tokens, h, c = decoder_model.predict([target_seq] + states_value)\n\n # this line of code is greedy selection\n # try to use multinomial sampling instead (with temperature)\n sampled_token_index = numpy.argmax(output_tokens[0, -1, :])\n \n sampled_char = reverse_target_char_index[sampled_token_index]\n decoded_sentence += sampled_char\n\n if (sampled_char == '\\n' or\n len(decoded_sentence) > max_decoder_seq_length):\n stop_condition = True\n\n target_seq = numpy.zeros((1, 1, num_decoder_tokens))\n target_seq[0, 0, sampled_token_index] = 1.\n\n states_value = [h, c]\n\n return decoded_sentence\n",
"_____no_output_____"
],
[
"for seq_index in range(2100, 2120):\n # Take one sequence (part of the training set)\n # for trying out decoding.\n input_seq = encoder_input_data[seq_index: seq_index + 1]\n decoded_sentence = decode_sequence(input_seq)\n print('-')\n print('English: ', input_texts[seq_index])\n print('German (true): ', target_texts[seq_index][1:-1])\n print('German (pred): ', decoded_sentence[0:-1])\n",
"_____no_output_____"
]
],
[
[
"### 4.2. Translate an English sentence to the target language\n\n1. Tokenization\n2. One-hot encode\n3. Translate",
"_____no_output_____"
]
],
[
[
"input_sentence = 'why is that'\n\ninput_sequence = <do tokenization...>\n\ninput_x = <do one-hot encode...>\n\ntranslated_sentence = <do translation...>\n\nprint('source sentence is: ' + input_sentence)\nprint('translated sentence is: ' + translated_sentence)",
"_____no_output_____"
]
],
[
[
"## 5. Evaluate the translation using BLEU score\n\nReference: \n- https://machinelearningmastery.com/calculate-bleu-score-for-text-python/\n- https://en.wikipedia.org/wiki/BLEU\n\n\n**Hint:** \n\n- Randomly partition the dataset to training, validation, and test. \n\n- Evaluate the BLEU score using the test set. Report the average.\n\n- A reasonable BLEU score should be 0.1 ~ 0.3.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecc47cb80bdf8a0eb63f6d1d50ea1511e6053106 | 173,812 | ipynb | Jupyter Notebook | Oefening_6b_BERT_Semantic_Search.ipynb | HansHenseler/lcdstrack | 64b12e82a37bde502bc76acac7f4491adcf49bc0 | [
"Apache-2.0"
] | null | null | null | Oefening_6b_BERT_Semantic_Search.ipynb | HansHenseler/lcdstrack | 64b12e82a37bde502bc76acac7f4491adcf49bc0 | [
"Apache-2.0"
] | null | null | null | Oefening_6b_BERT_Semantic_Search.ipynb | HansHenseler/lcdstrack | 64b12e82a37bde502bc76acac7f4491adcf49bc0 | [
"Apache-2.0"
] | null | null | null | 37.1076 | 392 | 0.501197 | [
[
[
"<a href=\"https://colab.research.google.com/github/HansHenseler/lcdstrack/blob/main/Oefening_6b_BERT_Semantic_Search.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Sentence Embeddings using Siamese BERT-Networks\n---\nThis Google Colab Notebook illustrates using the Sentence Transformer python library to quickly create BERT embeddings for sentences and perform fast semantic searches.\n\nThe Sentence Transformer library is available on [pypi](https://pypi.org/project/sentence-transformers/) and [github](https://github.com/UKPLab/sentence-transformers). The library implements code from the ACL 2019 paper entitled \"[Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://www.aclweb.org/anthology/D19-1410.pdf)\" by Nils Reimers and Iryna Gurevych.\n",
"_____no_output_____"
],
[
"## Install Sentence Transformer Library",
"_____no_output_____"
]
],
[
[
"# Install the library using pip\n!pip install sentence-transformers",
"Collecting sentence-transformers\n Downloading sentence-transformers-2.1.0.tar.gz (78 kB)\n\u001b[K |████████████████████████████████| 78 kB 3.6 MB/s \n\u001b[?25hCollecting transformers<5.0.0,>=4.6.0\n Downloading transformers-4.12.3-py3-none-any.whl (3.1 MB)\n\u001b[K |████████████████████████████████| 3.1 MB 10.9 MB/s \n\u001b[?25hCollecting tokenizers>=0.10.3\n Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)\n\u001b[K |████████████████████████████████| 3.3 MB 35.2 MB/s \n\u001b[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (4.62.3)\nRequirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.10.0+cu111)\nRequirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.11.1+cu111)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.19.5)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.22.2.post1)\nRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.4.1)\nRequirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (3.2.5)\nCollecting sentencepiece\n Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)\n\u001b[K |████████████████████████████████| 1.2 MB 35.3 MB/s \n\u001b[?25hCollecting huggingface-hub\n Downloading huggingface_hub-0.1.2-py3-none-any.whl (59 kB)\n\u001b[K |████████████████████████████████| 59 kB 6.3 MB/s \n\u001b[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.6.0->sentence-transformers) (3.10.0.2)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (3.3.2)\nCollecting pyyaml>=5.1\n Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)\n\u001b[K |████████████████████████████████| 596 kB 46.7 MB/s \n\u001b[?25hRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (21.2)\nRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (2.23.0)\nCollecting sacremoses\n Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)\n\u001b[K |████████████████████████████████| 895 kB 36.9 MB/s \n\u001b[?25hRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (2019.12.20)\nRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (4.8.2)\nRequirement already satisfied: pyparsing<3,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers<5.0.0,>=4.6.0->sentence-transformers) (2.4.7)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers<5.0.0,>=4.6.0->sentence-transformers) (3.6.0)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from 
nltk->sentence-transformers) (1.15.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (2021.10.8)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers<5.0.0,>=4.6.0->sentence-transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers<5.0.0,>=4.6.0->sentence-transformers) (1.1.0)\nRequirement already satisfied: pillow!=8.3.0,>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision->sentence-transformers) (7.1.2)\nBuilding wheels for collected packages: sentence-transformers\n Building wheel for sentence-transformers (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for sentence-transformers: filename=sentence_transformers-2.1.0-py3-none-any.whl size=121000 sha256=a1ca8dfcb54fa8b5af815fb741e3657f51b253d0dac3a7dca6975bb9c1c5a1a9\n Stored in directory: /root/.cache/pip/wheels/90/f0/bb/ed1add84da70092ea526466eadc2bfb197c4bcb8d4fa5f7bad\nSuccessfully built sentence-transformers\nInstalling collected packages: pyyaml, tokenizers, sacremoses, huggingface-hub, transformers, sentencepiece, sentence-transformers\n Attempting uninstall: pyyaml\n Found existing installation: PyYAML 3.13\n Uninstalling PyYAML-3.13:\n Successfully uninstalled PyYAML-3.13\nSuccessfully installed huggingface-hub-0.1.2 pyyaml-6.0 sacremoses-0.0.46 sentence-transformers-2.1.0 sentencepiece-0.1.96 tokenizers-0.10.3 transformers-4.12.3\n"
]
],
[
[
"## Load the BERT Model",
"_____no_output_____"
]
],
[
[
"from sentence_transformers import SentenceTransformer\n\n# Load the BERT model. Various models trained on Natural Language Inference (NLI) https://github.com/UKPLab/sentence-transformers/blob/master/docs/pretrained-models/nli-models.md and \n# Semantic Textual Similarity are available https://github.com/UKPLab/sentence-transformers/blob/master/docs/pretrained-models/sts-models.md\n\nmodel = SentenceTransformer('sentence-transformers/stsb-xlm-r-multilingual')\n",
"_____no_output_____"
]
],
[
[
"## Setup a Corpus",
"_____no_output_____"
]
],
[
[
"# A corpus is a list with documents split by sentences.\n\nsentences = ['A man is eating food.',\n 'A man is eating a piece of bread.',\n 'Apple sold fewer iPhones this quarter.',\n 'Apple pie is delicious.',\n 'Dombo is an elephant',\n 'A man is riding a horse'\n ]\n\n# Each sentence is encoded as a 1-D vector with 78 columns\nsentence_embeddings = model.encode(sentences)\n\nprint('Sample BERT embedding vector - length', len(sentence_embeddings[0]))",
"Sample BERT embedding vector - length 768\n"
]
],
[
[
"## Perform Semantic Search",
"_____no_output_____"
]
],
[
[
"import scipy\n\nquery = 'Donald duck' #@param {type: 'string'}\n\nqueries = [query]\nquery_embeddings = model.encode(queries)\n\n# Find the closest 5 sentences of the corpus for each query sentence based on cosine similarity\nnumber_top_matches = 5 #@param {type: \"number\"}\n\nprint(\"Semantic Search Results\")\n\nfor query, query_embedding in zip(queries, query_embeddings):\n distances = scipy.spatial.distance.cdist([query_embedding], sentence_embeddings, \"cosine\")[0]\n\n results = zip(range(len(distances)), distances)\n results = sorted(results, key=lambda x: x[1])\n\n print(\"\\n\\n======================\\n\\n\")\n print(\"Query:\", query)\n print(\"\\nTop 5 most similar sentences in corpus:\")\n\n for idx, distance in results[0:number_top_matches]:\n print(sentences[idx].strip(), \"(Cosine Score: %.4f)\" % (1-distance))",
"Semantic Search Results\n\n\n======================\n\n\nQuery: Donald duck\n\nTop 5 most similar sentences in corpus:\nDombo is an elephant (Cosine Score: 0.3147)\nA man is riding a horse (Cosine Score: 0.1038)\nApple pie is delicious. (Cosine Score: 0.1030)\nApple sold fewer iPhones this quarter. (Cosine Score: 0.0728)\nA man is eating food. (Cosine Score: -0.0254)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc491139c4c149d23561801b346299bcf676804 | 17,831 | ipynb | Jupyter Notebook | notebook/StudyCases/ReadCase2_Results.ipynb | quexiang/STWR | e06bd4415f85de7f8543dfb7bef446f38b069349 | [
"BSD-3-Clause"
] | 27 | 2019-10-02T05:22:43.000Z | 2022-02-28T08:36:08.000Z | notebook/StudyCases/ReadCase2_Results.ipynb | quexiang/STWR | e06bd4415f85de7f8543dfb7bef446f38b069349 | [
"BSD-3-Clause"
] | null | null | null | notebook/StudyCases/ReadCase2_Results.ipynb | quexiang/STWR | e06bd4415f85de7f8543dfb7bef446f38b069349 | [
"BSD-3-Clause"
] | 11 | 2020-06-11T12:43:41.000Z | 2022-02-03T13:24:40.000Z | 46.800525 | 155 | 0.327688 | [
[
[
"import numpy as np\nimport libpysal as ps\nfrom stwr.gwr import GWR, MGWR,STWR\nfrom stwr.sel_bw import *\nfrom stwr.utils import shift_colormap, truncate_colormap\nimport geopandas as gp\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport pandas as pd\nimport math\nfrom matplotlib.gridspec import GridSpec\nimport time\nimport csv \nimport copy ",
"_____no_output_____"
],
[
"import rasterio\nimport rasterio.plot\nimport rasterio.features\nimport rasterio.warp\nimport pyproj",
"_____no_output_____"
],
[
"#读入数据需要有这些\ncal_coords_list =[]\ncal_y_list =[]\ncal_X_list =[]\ndelt_stwr_intervel =[0.0]\ncsvFile = open(\"./Data_STWR/SimulatedData/case2/outfile_tol_adj.csv\", \"r\")\ndf = pd.read_csv(csvFile,header = 0, names=['','cal_y','cal_x1','cal_x2','cal_coordsX','cal_coordsY','time_stamps'],\n dtype = {\"\" : \"float64\",\"cal_y\":\"float64\",\n \"cal_x1\":\"float64\",\"cal_x2\":\"float64\",\"cal_coordsX\":\"float64\",\"cal_coordsY\":\"float64\",\n \"time_stamps\":\"float64\"},\n skip_blank_lines = True,\n keep_default_na = False)\n\ndf.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 333 entries, 0 to 332\nData columns (total 7 columns):\n 333 non-null float64\ncal_y 333 non-null float64\ncal_x1 333 non-null float64\ncal_x2 333 non-null float64\ncal_coordsX 333 non-null float64\ncal_coordsY 333 non-null float64\ntime_stamps 333 non-null float64\ndtypes: float64(7)\nmemory usage: 18.3 KB\n"
],
[
"df = df.sort_values(by=['time_stamps']) \nall_data = df.values\ntick_time = all_data[0,-1]\ncal_coord_tick = []\ncal_X_tick =[]\ncal_y_tick =[]\ntime_tol = 1.0e-7\nlensdata = len(all_data)\nfor row in range(lensdata):\n cur_time = all_data[row,-1]\n if(abs(cur_time-tick_time)>time_tol):\n cal_coords_list.append(np.asarray(cal_coord_tick))\n cal_X_list.append(np.asarray(cal_X_tick))\n cal_y_list.append(np.asarray(cal_y_tick))\n delt_t = cur_time - tick_time\n delt_stwr_intervel.append(delt_t) \n tick_time =cur_time\n cal_coord_tick = []\n cal_X_tick =[]\n cal_y_tick =[]\n coords_tick = np.array([all_data[row,4],all_data[row,5]])\n cal_coord_tick.append(coords_tick)\n\n x_tick = np.array([all_data[row,2],all_data[row,3]])\n cal_X_tick.append(x_tick)\n y_tick = np.array([all_data[row,1]])\n cal_y_tick.append(y_tick)\n#最后在放一次\n#gwr解出最后一期 \ncal_cord_gwr = np.asarray(cal_coord_tick)\ncal_X_gwr = np.asarray(cal_X_tick)\ncal_y_gwr = np.asarray(cal_y_tick) \ncal_coords_list.append(np.asarray(cal_coord_tick))\ncal_X_list.append(np.asarray(cal_X_tick))\ncal_y_list.append(np.asarray(cal_y_tick))",
"_____no_output_____"
],
[
"#stwr \nstwr_selector_ = Sel_Spt_BW(cal_coords_list, cal_y_list, cal_X_list,#gwr_bw0,\n delt_stwr_intervel)\noptalpha,optsita,opt_btticks,opt_gwr_bw0 = stwr_selector_.search() \nstwr_model = STWR(cal_coords_list,cal_y_list,cal_X_list,delt_stwr_intervel,optsita,opt_gwr_bw0,tick_nums=opt_btticks+1,alpha =optalpha,recorded=1)\n\n",
"_____no_output_____"
],
[
"stwr_results = stwr_model.fit()\nprint(stwr_results.summary())",
"===========================================================================\nModel type Gaussian\nNumber of observations: 333\nNumber of covariates: 3\n\nGlobal Regression Results\n---------------------------------------------------------------------------\nResidual sum of squares: 5085961.816\nLog-likelihood: -464.977\nAIC: 935.954\nAICc: 938.610\nBIC: 5085697.867\nR2: 0.494\nAdj. R2: 0.478\n\nVariable Est. SE t(Est/SE) p-value\n------------------------------- ---------- ---------- ---------- ----------\nX0 732.637 435.360 1.683 0.092\nX1 1.970 0.252 7.829 0.000\nX2 -0.168 2.107 -0.080 0.937\n\nSpatiotemporal Weighted Regression (STWR) Results\n---------------------------------------------------------------------------\nSpatial kernel: Adaptive spt_bisquare\nModel sita used: 0.000\nModel alpha used: 0.000\nInit Bandwidth used: 4.000\nModel Ticktimes used: 5.000\nModel Ticktimes Intervels: 79.700\n\nDiagnostic information\n---------------------------------------------------------------------------\nResidual sum of squares: 52688.545\nEffective number of parameters (trace(S)): 35.090\nDegree of freedom (n - trace(S)): 297.910\nSigma estimate: 13.299\nLog-likelihood: -314.172\nAIC: 700.526\nAICc: 709.573\nBIC: 837.963\nR2: 0.995\nAdj. alpha (95%): 0.004\nAdj. critical t value (95%): 2.877\n\nSummary Statistics For STWR Parameter Estimates\n---------------------------------------------------------------------------\nVariable Mean STD Min Median Max\n-------------------- ---------- ---------- ---------- ---------- ----------\nX0 -1.897 64.883 -198.752 3.411 164.384\nX1 2.915 0.652 1.319 2.904 4.501\nX2 2.734 1.215 0.893 2.597 4.910\n===========================================================================\n\n===========================================================================\nModel type Gaussian\nNumber of observations: 333\nNumber of covariates: 3\n\nGlobal Regression Results\n---------------------------------------------------------------------------\nResidual sum of squares: 5085961.816\nLog-likelihood: -464.977\nAIC: 935.954\nAICc: 938.610\nBIC: 5085697.867\nR2: 0.494\nAdj. R2: 0.478\n\nVariable Est. SE t(Est/SE) p-value\n------------------------------- ---------- ---------- ---------- ----------\nX0 732.637 435.360 1.683 0.092\nX1 1.970 0.252 7.829 0.000\nX2 -0.168 2.107 -0.080 0.937\n\nSpatiotemporal Weighted Regression (STWR) Results\n---------------------------------------------------------------------------\nSpatial kernel: Adaptive spt_bisquare\nModel sita used: 0.000\nModel alpha used: 0.000\nInit Bandwidth used: 4.000\nModel Ticktimes used: 5.000\nModel Ticktimes Intervels: 79.700\n\nDiagnostic information\n---------------------------------------------------------------------------\nResidual sum of squares: 52688.545\nEffective number of parameters (trace(S)): 35.090\nDegree of freedom (n - trace(S)): 297.910\nSigma estimate: 13.299\nLog-likelihood: -314.172\nAIC: 700.526\nAICc: 709.573\nBIC: 837.963\nR2: 0.995\nAdj. alpha (95%): 0.004\nAdj. critical t value (95%): 2.877\n\nSummary Statistics For STWR Parameter Estimates\n---------------------------------------------------------------------------\nVariable Mean STD Min Median Max\n-------------------- ---------- ---------- ---------- ---------- ----------\nX0 -1.897 64.883 -198.752 3.411 164.384\nX1 2.915 0.652 1.319 2.904 4.501\nX2 2.734 1.215 0.893 2.597 4.910\n===========================================================================\n\n"
],
[
"gwr_selector = Sel_BW(cal_cord_gwr, cal_y_gwr, cal_X_gwr)\ngwr_bw= gwr_selector.search(bw_min=2)\ngwr_model = GWR(cal_cord_gwr, cal_y_gwr, cal_X_gwr, gwr_bw)\ngwr_results = gwr_model.fit()\nprint(gwr_results.summary())\ngw_rscale = gwr_results.scale \ngwr_residuals = gwr_results.resid_response",
"===========================================================================\nModel type Gaussian\nNumber of observations: 66\nNumber of covariates: 3\n\nGlobal Regression Results\n---------------------------------------------------------------------------\nResidual sum of squares: 5085961.816\nLog-likelihood: -464.977\nAIC: 935.954\nAICc: 938.610\nBIC: 5085697.867\nR2: 0.494\nAdj. R2: 0.478\n\nVariable Est. SE t(Est/SE) p-value\n------------------------------- ---------- ---------- ---------- ----------\nX0 732.637 435.360 1.683 0.092\nX1 1.970 0.252 7.829 0.000\nX2 -0.168 2.107 -0.080 0.937\n\nGeographically Weighted Regression (GWR) Results\n---------------------------------------------------------------------------\nSpatial kernel: Adaptive bisquare\nBandwidth used: 15.000\n\nDiagnostic information\n---------------------------------------------------------------------------\nResidual sum of squares: 300088.969\nEffective number of parameters (trace(S)): 26.535\nDegree of freedom (n - trace(S)): 39.465\nSigma estimate: 87.201\nLog-likelihood: -371.582\nAIC: 798.234\nAICc: 840.178\nBIC: 858.526\nR2: 0.970\nAdj. alpha (95%): 0.006\nAdj. critical t value (95%): 2.862\n\nSummary Statistics For GWR Parameter Estimates\n---------------------------------------------------------------------------\nVariable Mean STD Min Median Max\n-------------------- ---------- ---------- ---------- ---------- ----------\nX0 -8.204 884.981 -1593.295 -62.845 1980.045\nX1 2.918 1.185 0.279 2.738 5.112\nX2 2.567 4.142 -6.512 2.604 10.784\n===========================================================================\n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc49bb94524ca55c076c21ff123b4b3b4de425b | 8,806 | ipynb | Jupyter Notebook | fdm-devito-notebooks/02_wave/staggered.ipynb | devitocodes/devito_book | 30405c3d440a1f89df69594fd0704f69650c1ded | [
"CC-BY-4.0"
] | 7 | 2020-07-17T13:19:15.000Z | 2021-03-27T05:21:09.000Z | fdm-devito-notebooks/02_wave/staggered.ipynb | devitocodes/devito_book | 30405c3d440a1f89df69594fd0704f69650c1ded | [
"CC-BY-4.0"
] | 73 | 2020-07-14T15:38:52.000Z | 2020-09-25T11:54:59.000Z | fdm-devito-notebooks/02_wave/staggered.ipynb | devitocodes/devito_book | 30405c3d440a1f89df69594fd0704f69650c1ded | [
"CC-BY-4.0"
] | 1 | 2021-03-27T05:21:14.000Z | 2021-03-27T05:21:14.000Z | 28.967105 | 177 | 0.534749 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecc4bce98de67c0f8ed1f825a9aa00006042ae63 | 29,139 | ipynb | Jupyter Notebook | Python_for_Data_Science/Request_Library.ipynb | PawelRosikiewicz/MachineLearning | a119ff3327f77937101a70ea12a66652b9f42f2a | [
"MIT"
] | 1 | 2022-03-25T14:24:57.000Z | 2022-03-25T14:24:57.000Z | Python_for_Data_Science/Request_Library.ipynb | PawelRosikiewicz/MachineLearning | a119ff3327f77937101a70ea12a66652b9f42f2a | [
"MIT"
] | null | null | null | Python_for_Data_Science/Request_Library.ipynb | PawelRosikiewicz/MachineLearning | a119ff3327f77937101a70ea12a66652b9f42f2a | [
"MIT"
] | null | null | null | 30.071207 | 385 | 0.514499 | [
[
[
"# REQUEST LIBRARY\nhttps://realpython.com/python-requests/#other-http-methods\n",
"_____no_output_____"
],
[
"## ----------- BASICS --------------",
"_____no_output_____"
],
[
"## GET RESPONSE OBJECT, \n## - - - CHECK SATUS CODE, \n## - - - - - - RISE EXCEPTIONS",
"_____no_output_____"
]
],
[
[
"\"\"\"\n GET & RESPONSE OBJECT \n\"\"\"\nimport requests\n\n# GET request to GitHub’s Root REST API,\nresponse = requests.get('https://api.github.com')\nresponse.status_code # 200 is ok",
"_____no_output_____"
],
[
"\"\"\"\n STATUS CODE\n \n # 200 - ok\n # 204 - no content\n # 404 - not found\n # 1xx informational response – the request was received, continuing process\n # 2xx successful – the request was successfully received, understood and accepted\n # 3xx redirection – further action needs to be taken in order to complete the request\n # 4xx client error – the request contains bad syntax or cannot be fulfilled\n # 5xx server error – the server failed to fulfill an apparently valid request\n \n\"\"\" \n\n# --------------------------------------------------------------------\n# satatus code\nprint(response.status_code,\"\\n\")\n\n \n# --------------------------------------------------------------------\n# Use if/else to test status code\n\n# if/else returns True is status_code is between 200 and 400, i.e. any workable material!\nif response:\n print('Success!')\nelse:\n print('An error has occurred.')\n\n # Why if else returns True/False with return object?\n # __bool__() is an overloaded method on Response.\n # ie. the default behavior of Response has been redefined \n # to take the status code into account when determining the truth value of the object.",
"_____no_output_____"
],
[
"\"\"\"\n RAISE EXCEPTIONS WITH TRY/EXCEPT\n \n * Try/except/finaly/else code blocks\n Why to use that approach?\n - try: ... are used to test the code or a function\n - this approach allows running block of code with errors and cathes err message to display, \n without crashing the whole program\n - it also allows to test you what type of error you got (except errorType), and write custome messages, or functions \n that will help you understang what happened wiht your code, especially when dealing with much larger programs.\n\"\"\"\n\nimport requests\nfrom requests.exceptions import HTTPError\n\nfor i, url in enumerate(['https://api.github.com', 'https://api.github.com/invalid']):\n \n # Try your coce/function\n try:\n response = requests.get(url)\n\n # If the response was successful, no Exception will be raised\n response.raise_for_status()\n \n # except; will catch err message and allows to test what type of error was made\n except HTTPError as http_err:\n print(f'url {i}:{url}: HTTP error occurred: {http_err}') # you can test specific error types separately\n except Exception as err:\n print(f'url {i}:{url}: Other error occurred: {err}') # or you can raise any exception generated with the system\n \n # else; is executed if no errors raised in try, after code in try was already executed !\n else:\n print(f'url {i}:{url}: Success!')\n \n # finally is executed irrespectively whether error was rised or not\n finally:\n print(\"\\n\",f':::::::::::: code testing for url nr {i} was done properly ::::::::::::',\"\\n\")",
"url 0:https://api.github.com: Success!\n\n :::::::::::: code testing for url nr 0 was done properly :::::::::::: \n\nurl 1:https://api.github.com/invalid: HTTP error occurred: 404 Client Error: Not Found for url: https://api.github.com/invalid\n\n :::::::::::: code testing for url nr 1 was done properly :::::::::::: \n\n"
]
],
[
[
"## INSPECT RESPONSE CONTENT\n## - - - DESERIALIZE THE CONTENT\n## - - - - - - QUERY STRING PARAMETERS & HEADERS",
"_____no_output_____"
]
],
[
[
"\"\"\"\n INSPECT CONTENT/PAYLOAD\n\"\"\"\n\n\nimport requests\nresponse = requests.get('https://api.github.com')\nprint(response.status_code)\n\n\n# --------------------------------------------------------------\n# See response's content\n\n# in bytes (ie. small int. 0 to 255)\nprint(\"\\n content (byte): \", response.content[0:100])\n\n# as string (utf-8) \nprint(\"\\n text (uts-8, str): \", response.text[0:100])\n\n\n\n# --------------------------------------------------------------\n# content encoding; requests will try to guess the encoding based on headers\n\n# Set encoding to utf-8 (optional)\nresponse.encoding = 'utf-8'\nprint(\"\\n set: utf-8 encoding: \", response.text[0:100])\n\n\n\n# --------------------------------------------------------------\n# See metadata; ie. data on your data\n#. - stored in response's headers\n#. - eg: content type, time limit on how long to cache the response \n#. - IMPORTANT: the below function creates dct with CASE INSENSITIVE keys access headers\n\n# see headers;\nresponse.headers # function returns dct-like obj, keys are case insensitive (not like in dct)\n\n# access one header; like normal dct, case insensitive\nprint(\"\\n\", list(response.headers)[1], \": \",response.headers[list(response.headers)[1]])",
"200\n\n content (byte): b'{\"current_user_url\":\"https://api.github.com/user\",\"current_user_authorizations_html_url\":\"https://gi'\n\n text (uts-8, str): {\"current_user_url\":\"https://api.github.com/user\",\"current_user_authorizations_html_url\":\"https://gi\n\n set: utf-8 encoding: {\"current_user_url\":\"https://api.github.com/user\",\"current_user_authorizations_html_url\":\"https://gi\n\n Content-Type : application/json; charset=utf-8\n"
],
[
"\"\"\"\n DESERILIZE THE CONTENT\n \n * What is deserialization why You may need that?\n \n - Typically, request payload is a serialized JSON object:\n - When transmitting data or storing them in a file, the data are required \n to be byte strings but they are rarely in that format\n thus, SERIALIZATION can convert these complex objects into byte strings\n - After the byte strings are transmitted, \n the receiver will have to recover the original object from the byte string. \n This is known as DESERIALIZATION\n \n - pickle library (pickle & unpickle methods) can be used\n - eg:\n dct: {foo: [1, 4, 7, 10], bar: \"baz\"}\n JSON: '{\"foo\":[1,4,7,10],\"bar\":\"baz\"}'\n\n\"\"\"\n\n# get some content\nimport requests\nresponse = requests.get('https://api.github.com')\nprint(response.status_code)\n# 200\n\n# create dct with deserialized json in response\ndct = response.json()\nprint(\"\\n Deserialized JSON, keys:\",list(dct)[0:2]) # dct.keys() # keys: no \"index\", thus we use list(dct) to get the keys\nprint(\"Value example:\", list(dct)[0],\":\",dct[list(dct)[0]])",
"200\n\n Deserialized JSON, keys: ['current_user_url', 'current_user_authorizations_html_url']\nValue example: current_user_url : https://api.github.com/user\n"
],
[
"\"\"\"\n QUERY STRING PARAMETERS & HEADERS\n \n * Query String; part of a uniform resource locator (URL),\n which assigns values to specified parameters\n * get() params, used to create query string in automatic way\n\"\"\"\n\n\n# --------------------------------------------------------------\n# Parameters used to build a query\n\n# Our Example: Search GitHub's repositories for requests library in python\nulr = 'https://api.github.com/search/repositories'\nresponse = requests.get(ulr,\n params={'q': 'requests+language:python'},\n )\n # Other formats are:\n #. - bytes: params=b'q=requests+language:python'\n #. - list with tuples: params=[('q', 'requests+language:python')]\n\n## or ##\n\n# generate url with ulrlib\nfrom urllib.parse import urlencode\n\noath_params = {'q': 'requests+language:python'}\nresponse = requests.get(f'{ulr}?{urlencode(oath_params)}')\n\n\n# --------------------------------------------------------------\n# Inspect attributes of the `requests` repository\n\n# deserialize\njson_response = response.json() # deserialized json reponse\nrepository = json_response['items'][0] # get list with your items\n\nprint(f'Repository name: {repository[\"name\"]}') # \nprint(f'Repository description: {repository[\"description\"]}')\n\n\n\n# --------------------------------------------------------------\n# Heraders; used to specify search parameters by giving \n\n# Specify media-type as text, using Accept header.\n# THIS FUNCTION DIDN'T WORK BUT I PLACED IT HERE TO KNOW ON THAT HEADERS IN GET \n\"\"\"\nulr = 'https://api.github.com/search/repositories'\nresponse = requests.get(\n url,\n params={'q': 'requests+language:python'},\n headers={'Accept': 'application/vnd.github.v3.text-match+json'},\n)\n\napplication/vnd.github.v3.text-match+json; A proprietary GitHub Accept header \nwhere the content is a special JSON format - it was probably chnaged\n\"\"\"",
"Repository name: grequests\nRepository description: Requests + Gevent = <3\n"
]
],
[
[
"## ----------- OTHER METHODS --------------",
"_____no_output_____"
],
[
"## OTHER HTTP METHODS\n## - - - MESSAGE BODY USED IN SOME FUNCTIONS\n## - - - - - - INSPECTING REQUEST\n\n\n### httpbin.org\n * created by the author of requests, Kenneth Reitz\n * service that accepts test requests and responds with data about the requests.",
"_____no_output_____"
]
],
[
[
"\"\"\"\n OTHER HTTP METHODS\n \n >>> requests.post('https://httpbin.org/post', data={'key':'value'})\n >>> requests.put('https://httpbin.org/put', data={'key':'value'})\n >>> requests.delete('https://httpbin.org/delete')\n >>> requests.head('https://httpbin.org/get')\n >>> requests.patch('https://httpbin.org/patch', data={'key':'value'})\n >>> requests.options('https://httpbin.org/get')\n\"\"\"\n\n# You can inspect their responses in the same way you did before\nresponse = requests.head('https://httpbin.org/get')\nprint(response.headers['Content-Type'])\n\nresponse = requests.delete('https://httpbin.org/delete')\njson_response = response.json()\nprint(json_response['args'])",
"application/json\n{}\n"
],
[
"\"\"\"\n MESSAGE BODY USED IN SOME FUNCTIONS\n \n * used by POST, PUT, PATCH methods only!\n * the data are passed through the message body, not parameters in the query string\n * input formats: dictionary, a list of tuples, bytes, or a file-like object\n >>> requests.post('url', data={'key':'value'} # dct\n >>> requests.post('url', data=[('key', 'value')]) # list of tupples\n >>>. requests.post('url', json={'key':'value'} # json that is atumatically dserialized\n\"\"\"\n\n# sedn some message to httpbin.org\nresponse = requests.post('https://httpbin.org/post', json={'key':'value'})\njson_response = response.json()\nprint(json_response['data'])\nprint(json_response['headers']['Content-Type'])",
"{\"key\": \"value\"}\napplication/json\n"
],
[
"\"\"\"\n INSPECTING REQUEST\n \n Why?\n the requests library prepares the request before sending it to destination server. \n eg: headers validation and JSON serialization.\n\"\"\"\n\n# See prepared request by accessing .request\nresponse = requests.post('https://httpbin.org/post', json={'key':'value'})\n\n# see headers:\nresponse.request.headers['Content-Type']\n\n# see url\nresponse.request.url\n\n# see body\nresponse.request.body",
"_____no_output_____"
]
],
[
[
"## ----------- SECURITY --------------",
"_____no_output_____"
],
[
"## AUTHENTICATION\n## - - - SET CUSTOME AUTHENTICATION SCHEME\n## - - - - - - SSL CERTIFICATE VERIFICATION\n## - - - - - - - - - THE SESSION OBJECT",
"_____no_output_____"
]
],
[
[
"\"\"\"\n AUTHENTICATION\n \n * helps a service understand who you are,\n * you can provide your login and password with that method\n * Typically, you provide your credentials to a server by passing data through the Authorization header \n or a custom header defined by the service\n \n * auth; parameter in all requests methods used to pass your credentials\n \n * 401; Unauthorized; status code for get with no proper credentials\n \n\"\"\"\n\n# ------------------------------\n# login to GitHub\n\n# use getpass function to hide your password\nfrom getpass import getpass\n\nuser_name = \"PawelRosikiewicz\"\nrequests.get('https://api.github.com/user', auth=(user_name, getpass()))\n#:)",
" ···········\n"
],
[
"\"\"\"\n SET AUTHENTICATION SCHEME\n \n * When you pass your username and password in a tuple to the auth parameter, \n requests is applying the credentials using HTTP’s Basic access authentication scheme under the hood.\n Therefore, you could make the same request by passing explicit Basic authentication \n credentials using HTTPBasicAuth:\n \n * requests provides other methods of authentication: \n - HTTPDigestAuth\n - HTTPProxyAuth\n \n * to make your your own authentication mechanism with; from requests.auth import AuthBase\n visit: https://realpython.com/python-requests/#author\n\"\"\"\n\n# use explicitly HTTPBasicAuth for auth\n\nfrom requests.auth import HTTPBasicAuth\nfrom getpass import getpass\n\nuser_name = \"PawelRosikiewicz\"\nrequests.get(\n 'https://api.github.com/user',\n auth=HTTPBasicAuth(user_name, getpass())\n )",
" ···········\n"
],
[
"\"\"\"\n SSL CERTIFICATE VERIFICATION\n \n * by defaults is on!\n\"\"\"\n\n# disable ssl, to connect to siters without it!\nrequests.get('https://api.github.com', verify=False)\n\n\n\"\"\"\n !!! IMPORTANRT !!!\n\n certifi PACKAGE to provide Certificate Authorities in Python\n Update certifi frequently to keep your connections as secure as possible.\n\n\"\"\"\n",
"/Users/pawel/anaconda3/envs/exts-ml/lib/python3.6/site-packages/urllib3/connectionpool.py:857: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\n"
],
[
"\"\"\"\n THE SESSION OBJECT\n \n * used to persist parameters across requests\n - eg: if you want to use the same authentication across multiple requests\n \n * it creates persistent connection between clinets and the server\n -> When your app makes a connection to a server using a Session, \n it keeps that connection around in a connection pool, i.e.\n it will reuse a connection from the pool rather than establishing a new one\n -> improves the performance of your requests\n\"\"\"\n\n# imports\nimport requests\nfrom getpass import getpass\n\n# GitHub data:\nuser_name = \"PawelRosikiewicz\"\n\n# By using a context manager, you can ensure the resources used by\n# the session will be released after use\nwith requests.Session() as session:\n session.auth = (user_name, getpass())\n\n # Instead of requests.get(), you'll use session.get()\n response = session.get('https://api.github.com/user')\n\n# You can inspect the response just like you did before\nprint(response.headers)\nprint(response.json())# create dct with deserialized json in response",
"_____no_output_____"
]
],
[
[
"## ----------- PERFORMANCE --------------",
"_____no_output_____"
],
[
"## TIMEOUTS\n## - - - RISE TIMEOUT ECXCEPTION\n## - - - - - - MAX RETRIES",
"_____no_output_____"
]
],
[
[
"\"\"\"\n TIMEOUTS\n \n * When you make an inline request to an external service, \n your system will need to wait upon the response before moving on\n If your application waits too long for that response, requests to your service \n could back up, your user experience could suffer, or your background jobs could hang.\n \n * BY DEFAULT THERE IS NO TIMEOUT IN REQUESTS\n ie. the function will waith indefiniately\n\"\"\"\n\nimport requests\n\n# Specify timeout,\nrequests.get('https://api.github.com', timeout=3.05) # 3.05 sec\n\n# timout for in/out\nrequests.get('https://api.github.com', timeout=(2, 5)) \n # request must establish a connection within 2 sec. and receives data within 5 sec.",
"_____no_output_____"
],
[
"\"\"\"\n RISE TIMEOUT ECXCEPTION\n\"\"\"\n\n# Your program can catch the Timeout exception and respond accordingly.\nimport requests\nfrom requests.exceptions import Timeout\n\ntry:\n response = requests.get('https://api.github.com', timeout=1)\nexcept Timeout:\n print('The request timed out')\nelse:\n print('The request did not time out')",
"The request did not time out\n"
],
[
"\"\"\"\n MAX RETRIES\n \n * if request fails, you wish to try again, but you dont want to do that to many times\n it is a criminal offence to block someones webpage!!!!\n \n * You may want to set up transport adapters, a module of request liobrary that specify\n how clinet communicate with the server: https://2.python-requests.org//en/master/user/advanced/#transport-adapters\n - Transport Adapters let you define a set of configurations per service you’re interacting with\n \n * let’s say you want all requests to https://api.github.com \n to retry three times before finally raising a ConnectionError\n\"\"\"\n\n# imports\nimport requests\nfrom requests.adapters import HTTPAdapter\nfrom requests.exceptions import ConnectionError\n\n# choose tranposrt adapter and set up max_retires\ngithub_adapter = HTTPAdapter(max_retries=3) # set up max_retries !!!!! IMPORTANT\n\n# create a session\nsession = requests.Session()\n\n# Use `github_adapter` with selected transport ad.. for all requests \n#. to endpoints that start with this URL\nsession.mount('https://api.github.com', github_adapter)\n\ntry:\n session.get('https://api.github.com')\n \nexcept ConnectionError as ce:\n print(ce)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecc4bf015d6bfe41a2d0c93cbbc9d3a744cd0451 | 4,728 | ipynb | Jupyter Notebook | notebooks/Conversion_between_lon-lat-depth_and_x-y-z.ipynb | kasra-hosseini/geotree | d202c5e86a2c384f651120880160740581d9835c | [
"MIT"
] | 4 | 2021-05-28T00:04:54.000Z | 2021-05-29T10:08:54.000Z | notebooks/Conversion_between_lon-lat-depth_and_x-y-z.ipynb | kasra-hosseini/geotree | d202c5e86a2c384f651120880160740581d9835c | [
"MIT"
] | null | null | null | notebooks/Conversion_between_lon-lat-depth_and_x-y-z.ipynb | kasra-hosseini/geotree | d202c5e86a2c384f651120880160740581d9835c | [
"MIT"
] | null | null | null | 26.711864 | 117 | 0.526015 | [
[
[
"# Conversion between lon/lat/depth and x/y/z",
"_____no_output_____"
]
],
[
[
"# solve issue with autocomplete\n%config Completer.use_jedi = False\n\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"`geotree` can read lon/lat/depth or x/y/z as inputs. Here is a list of relevant functions:\n- `add_lonlatdep` (depth should be in meters; positive depths specify points inside the Earth.)\n- `add_lonlatdep_query` (same as above except for queries)\n- `add_xyz` (in meters)\n- `add_xyz_q` (for queries, in meters)\n\nIn this section, we show two functions in geotree: `lonlatdep2xyz_spherical` and `xyz2lonlatdep_spherical`. \nThese are used internally to convert between lon/lat/dep and x/y/z.",
"_____no_output_____"
]
],
[
[
"from geotree import convert as geoconvert\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## Define a set of lons/lats/depths",
"_____no_output_____"
]
],
[
[
"npoints = 100\nlons = np.random.randint(-180, 180, npoints)\nlats = np.random.randint(-90, 90, npoints)\ndepths = np.zeros(npoints)",
"_____no_output_____"
]
],
[
[
"## lons/lats/depths ---> x/y/z",
"_____no_output_____"
]
],
[
[
"# Here, we use geoconvert.lonlatdep2xyz_spherical to convert lons/lats/depths ---> x/y/z (in meters)\n# Note that We set depths to zeros, i.e., all points are on a sphere with a radius of 6371000 meters.\nx, y, z = geoconvert.lonlatdep2xyz_spherical(lons, \n lats, \n depths, \n return_one_arr=False)",
"_____no_output_____"
],
[
"# Left pabel: in geographic coordinate, right panel: x/y/z in meters (on a sphere with radius of 6371000m)\nfig = plt.figure(figsize=(12, 5))\n\nplt.subplot(1, 2, 1)\n\nplt.scatter(lons, lats,\n c=\"k\", \n marker=\"o\",\n zorder=100)\n\nplt.xlabel(\"lons\", size=20)\nplt.ylabel(\"lats\", size=20)\nplt.xticks(size=14); plt.yticks(size=14)\nplt.xlim(-180, 180); plt.ylim(-90, 90)\nplt.grid()\n\n# ---\nax = fig.add_subplot(1, 2, 2, projection='3d')\n\nax.scatter3D(x, y, z, c=\"k\", marker=\"o\");\n\nax.set_xlabel('X (m)', size=16)\nax.set_ylabel('Y (m)', size=16)\nax.set_zlabel('Z (m)', size=16)\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## x/y/z ---> lons/lats/depths",
"_____no_output_____"
]
],
[
[
"# Just as test, we now use geoconvert.xyz2lonlatdep_spherical to convert x/y/z back to lons/lats/depths\nlons_conv, lats_conv, depths_conv = geoconvert.xyz2lonlatdep_spherical(x, y, z, \n return_one_arr=False)",
"_____no_output_____"
],
[
"# and, we measure the L1 error between original lons/lats/depths and the ones computed above:\nprint(max(abs(lons - lons_conv)))\nprint(max(abs(lats - lats_conv)))\nprint(max(abs(depths - depths_conv)))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecc4e2362a606eaa4493982b80cf6a120f190288 | 241,037 | ipynb | Jupyter Notebook | Python/1. Python Basics/Notebooks/4. Generators/4c. Modular Approach to Simulation using Python Generators.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | null | null | null | Python/1. Python Basics/Notebooks/4. Generators/4c. Modular Approach to Simulation using Python Generators.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | null | null | null | Python/1. Python Basics/Notebooks/4. Generators/4c. Modular Approach to Simulation using Python Generators.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | 2 | 2022-02-09T15:41:33.000Z | 2022-02-11T07:47:40.000Z | 186.41686 | 42,240 | 0.865867 | [
[
[
"<!--NOTEBOOK_HEADER-->\n*This notebook contains material from [CBE30338](https://jckantor.github.io/CBE30338);\ncontent is available [on Github](https://github.com/jckantor/CBE30338.git).*\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"# A.2 Modular Simulation using Python Generators\n\nThis notebook show how to use Python generators for creating system simulation. This technique implements simulation blocks as Python generators, then pieces blocks together to create more complex systems. This is an advanced technique that may be useful in control projects and a convenient alternative to block diagram simulators.",
"_____no_output_____"
],
[
"## A.2.1 Simulation using scipy.integrate.odeint()",
"_____no_output_____"
],
[
"### A.2.1.1 Typical Usage\n\nThe SciPy library provides a convenient and familiar means of simulating systems modeled by systems of ordinary differential equations. As demonstrated in other notebooks, the straightforward approach consists of several common steps\n\n1. Initialize graphics and import libraries\n2. Fix parameter values\n3. Write a function to evaluate RHS of the differential equations\n4. Choose initial conditions and time grid\n5. Perform the simulation by numerical solution of the differential equations\n6. Prepare visualizations and post-processing\n\nHere we demonstrate this approach for a two gravity-drained tanks connected in series with constant inflow.",
"_____no_output_____"
]
],
[
[
"# 1. Initialize graphics and import libraries\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.integrate import odeint\n\n# 2. Fix parameter values\nA = 0.2\nCv = 0.5\nqin = 0.4\n\n# 3. Write a function to evaluate RHS of the differential equations\ndef deriv(X,t,qin=0):\n h1,h2 = X\n dh1 = (qin - Cv*np.sqrt(h1))/A\n dh2 = (Cv*np.sqrt(h1) - Cv*np.sqrt(h2))/A \n return [dh1,dh2]\n\n# 4. Choose initial conditions and time grid\nIC = [0,0]\nt = np.linspace(0,8,500)\n\n# 5. Perform the simulation by numerical solution of the differential equations\nsol = odeint(deriv,IC,t,args=(qin,))\n\n# 6. Prepare visualizations and post-processing\nplt.plot(t,sol)\nplt.legend(['Tank 1','Tank 2'])\nplt.xlabel('Time')\nplt.ylabel('Height [m]')\nplt.title('Simulation of Two Gravity-Drained Tanks in Series')\nplt.grid()",
"_____no_output_____"
]
],
[
[
"## A.2.2 What's Wrong with That?\n\nIf direct simulation as outlined above meets the needs of your project, then be satisfied and move on. This is how these tools are intended to be used.\n\nHowever, as written above, simulation with scipy.integrate.odeint requires you to write a function that calculates the right hand side of a system of differential equations. This can be challenging for complex system. For example, you may have multiple PID controllers, each implementing logic for anti-reset windup. Or you may have components in the process that exhibit hysterisis, time-delay, or other difficult-to-model dynamics. These cases call for a more modular approach to modeling and simulation.\n\nIn these cases we'd like to combine the continous time dynamics modeled by differential equations with more complex logic executed at discrete points in the time.",
"_____no_output_____"
],
[
"## A.2.3 Python Generators",
"_____no_output_____"
],
[
"### A.2.3.1 Yield Statement\n\nOne of the more advanced and often overlooked features of Python is the use of [generators and iterators](http://nvie.com/posts/iterators-vs-generators/) for performing operations on sequences of information. In particular, a generator is a function that returns information to via the `yield` statement rather then the more commonly encountered return statement. When called again, the generator picks right up at the point of the yield statement.\n\nLet's demonsrate this by writing a generator of Fibonacci numbers. This generator returns all Fibonacci numbers less or equal to a given number $n$.",
"_____no_output_____"
]
],
[
[
"def fib(n):\n i = 0\n j = 1\n while j <= n:\n yield j\n i,j = j,i+j",
"_____no_output_____"
]
],
[
[
"Here's a typical usage. What are the Fibonacci numbers less than or equal to 100?",
"_____no_output_____"
]
],
[
[
"for k in fib(100):\n print(k)",
"1\n1\n2\n3\n5\n8\n13\n21\n34\n55\n89\n"
]
],
[
[
"The generator can also be used inside list comprehensions.",
"_____no_output_____"
]
],
[
[
"[k for k in fib(1000)]",
"_____no_output_____"
]
],
[
[
"### A.2.3.2 Iterators\n\nWhen called, a generator function creates an intermediate function called an iterator. Here we construct the iterator and use it within a loop to find the first 10 Fibonacci numbers.",
"_____no_output_____"
]
],
[
[
"f = fib(500)\nfor k in range(0,10):\n print(next(f))",
"1\n1\n2\n3\n5\n8\n13\n21\n34\n55\n"
]
],
[
[
"Using `next` on an iterator returns the next value. ",
"_____no_output_____"
],
[
"### A.2.3.3 Two-way communcation with Generators using Send\n\nSo far we have demonstrated the use of `yield` as a way to communicate information from the generator to the calling program. Which is fine if all you need is one-way communication. But for the modular simulation of processes, we need to be able to send information both ways. A feedback control module, for example, will need to obtain current values of the process variable in order to update its internal state to provide an update of the manipulated variable to calling programm.\n\nHere's the definition of a generator for negative feedback proportional control where the control gain $K_p$ and setpoint $SP$ are specified constants.",
"_____no_output_____"
]
],
[
[
"def proportionalControl(Kp,SP):\n MV = None\n while True:\n PV = yield MV\n MV = Kp*(SP-PV)",
"_____no_output_____"
]
],
[
[
"The `yield` statement is now doing double duty. When first called it sends the value of MV back to the calling program, then stops and waits. It is waiting for the calling program to send a value of PV using the `.send()` method. Execution resumes until the yield statement is encountered again and the new value of MV returned to the calling program.\n\nWith this behavior in mind, gettting the generator ready for use is a two step process. The first step is to create an instance (i.e., an iterator). The second step is to initialize the instance by issuing `.send(None)` command. This is will halt execution at the first `yield` statement. At that point the generator instance will be ready to go for subsequent simulation.\n\nHere's the initialization of a new instance of proportional control with $K_p = 2.5$ and $SP = 2$.",
"_____no_output_____"
]
],
[
[
"pc = proportionalControl(2.5,2)\npc.send(None)",
"_____no_output_____"
]
],
[
[
"This shows it in use.",
"_____no_output_____"
]
],
[
[
"for PV in range(0,5):\n print(PV, pc.send(PV))",
"0 5.0\n1 2.5\n2 0.0\n3 -2.5\n4 -5.0\n"
]
],
[
[
"You can verify that these results satisfy the proportional control relationship.",
"_____no_output_____"
],
[
"## A.2.4 Example Application: Modeling Gravity-Drained Tanks with Python Generators",
"_____no_output_____"
],
[
"The first step in using a Python generator for simulation is to write the generator. It will be used to create instances of the dynamical process being modeled by the generator. Parameters should include a sample time `dt` and any other model parameters you choose to specify a particular instance of the process. The yield statement should provide time plus any other relevant process data. The yield statement will produce new values of process inputs valid for the next time step.",
"_____no_output_____"
],
[
"### A.2.4.1 Generator for a Gravity-Drained Tank",
"_____no_output_____"
]
],
[
[
"# generator for a gravity-drained tank model\n\ndef gravtank_generator(dt, A=1, Cv=1, IC=0):\n \n def qout(h):\n return Cv*np.sqrt(float(h))\n \n def deriv(h,t):\n dh = (qin - qout(h))/A\n return dh\n \n h = IC\n t = 0\n while True:\n qin = yield t,qout(h),float(h)\n h = odeint(deriv,h,[t,t+dt])[-1]\n t += dt",
"_____no_output_____"
]
],
[
[
"### A.2.4.2 Simulation of a Single Tank with Constant Inflow\n\nNext we show how to use the generator to create a simulation consisting of a single gravity drained tank with constant inflow.\n\n1. Choose a sample time for the simulation.\n2. Create instances of the processes to be used in your simulation.\n3. The first call to an instance is f.send(None). This will return the initial condition.\n4. Subsequent calls to the instance should be f.send(u) where u is variable, tuple, or other data time being passed to the process. The return value will be a tuple contaning the next value of time plus other process data.\n",
"_____no_output_____"
]
],
[
[
"# 1. select sample time\ndt = 0.02\n\n# 2. create a process instance\ntank = gravtank_generator(dt, A=0.2, Cv=.5)\n\n# 3. get initial condition\ny = [tank.send(None)]\n\n# 4. append subsequent states \ny += [tank.send(0.5) for t in np.arange(0,10,dt)]\n\n# 5. extract information into numpy arrays for plotting\nt,q,h = np.asarray(y).transpose()\nplt.plot(t,q,t,h)\nplt.xlabel('Time')\nplt.legend(['Outlet Flow','Level'])\nplt.grid()",
"_____no_output_____"
]
],
[
[
"### A.2.4.3 Simulation of Two Tanks in Series",
"_____no_output_____"
]
],
[
[
"dt = 0.02\n\ntank1 = gravtank_generator(dt, A=0.2, Cv=.5)\ntank2 = gravtank_generator(dt, A=0.2, Cv=.5)\n\ny1 = [tank1.send(None)]\ny2 = [tank2.send(None)]\n\nfor t in np.arange(dt,10,dt):\n t1,q1,h1 = tank1.send(0.5)\n t2,q2,h2 = tank2.send(q1)\n \n y1.append([t1,q1,h1])\n y2.append([t2,q2,h2])\n \nt1,q1,h1 = np.asarray(y1).transpose()\nt2,q2,h2 = np.asarray(y2).transpose()\nplt.plot(t1,q1,t1,h1)\nplt.plot(t2,q2,t2,h2)",
"_____no_output_____"
]
],
[
[
"### A.2.4.4 Simulation of Two Tanks in Series with PI Level Control on the Second Tank",
"_____no_output_____"
]
],
[
[
"dt = 0.02\n\ntank1 = gravtank_generator(dt, A=0.2, Cv=.5)\ntank2 = gravtank_generator(dt, A=0.2, Cv=.5)\n\ny1 = [tank1.send(None)]\ny2 = [tank2.send(None)]\n\nu = 0.0\nr2 = 1.5\nKp = .6\nKi = .6\necurr = 0\nulog = [u]\n\nfor t in np.arange(dt,10,dt):\n t1,q1,h1 = tank1.send(u)\n t2,q2,h2 = tank2.send(q1)\n \n eprev,ecurr = ecurr,r2-h2\n u += Kp*(ecurr-eprev) + Ki*ecurr*dt\n u = max(0,min(1,u))\n \n y1.append([t1,q1,h1])\n y2.append([t2,q2,h2])\n ulog.append(u)\n \nt1,q1,h1 = np.asarray(y1).transpose()\nt2,q2,h2 = np.asarray(y2).transpose()\nplt.plot(t1,q1,t1,h1)\nplt.plot(t2,q2,t2,h2)\nplt.plot(t1,ulog)",
"_____no_output_____"
]
],
[
[
"### A.2.4.5 Adding a PI Control Generator",
"_____no_output_____"
]
],
[
[
"def PI_generator(dt, Kp, Ki, MVmin = 0, MVmax = np.Inf):\n\n ecurr = 0\n eprev = 0\n t = 0\n u = MVmin\n \n while True:\n r,y,u = yield t,u\n eprev,ecurr = ecurr,r-y\n u += Kp*(ecurr - eprev) + Ki*ecurr*dt\n u = max(MVmin,min(MVmax,u))\n t += dt",
"_____no_output_____"
],
[
"dt = 0.02\n\ntank1 = gravtank_generator(dt, A=0.2, Cv=.5)\ntank2 = gravtank_generator(dt, A=0.2, Cv=.5)\npi = PI_generator(dt, Kp = 0.6, Ki = 0.6, MVmin = 0, MVmax = 1)\n\ny1 = [tank1.send(None)]\ny2 = [tank2.send(None)]\nulog = [pi.send(None)[1]]\n\nu = 0\n\nfor t in np.arange(dt,10,dt):\n t1,q1,h1 = tank1.send(u)\n t2,q2,h2 = tank2.send(q1)\n t3,u = pi.send((r2,h2,u))\n \n y1.append([t,q1,h1])\n y2.append([t,q2,h2])\n ulog.append(u)\n \nt1,q1,h1 = np.asarray(y1).transpose()\nt2,q2,h2 = np.asarray(y2).transpose()\n\nplt.plot(t1,q1,t1,h1)\nplt.plot(t2,q2,t2,h2)\nplt.plot(t1,ulog)",
"_____no_output_____"
]
],
[
[
"### A.2.4.6 Implementing Cascade Control for Two Tanks in Series with Unmeasured Disturbance",
"_____no_output_____"
]
],
[
[
"# disturbance function\ndef d(t):\n if t > 10:\n return 0.1\n else:\n return 0\n\n# simulation\ndt = 0.02\n\ntank1 = gravtank_generator(dt, A=0.2, Cv=.5)\ntank2 = gravtank_generator(dt, A=0.2, Cv=.5)\n\n# level control for tank 1. \npi1 = PI_generator(dt, Kp = 1, Ki = 0.6, MVmin = 0, MVmax = 1)\n\n# cascade level control for tank 2. Manipulated variable is the setpoint to pi1\npi2 = PI_generator(dt, Kp = 0.6, Ki = 0.6, MVmin = 0, MVmax = 2)\n\ny1 = [tank1.send(None)]\ny2 = [tank2.send(None)]\nulog = [pi1.send(None)[1]]\npi2.send(None)\n\nu = 0\nr1 = 0\nr2 = 1.3\n\nfor t in np.arange(dt,20,dt):\n t1,q1,h1 = tank1.send(u)\n t2,q2,h2 = tank2.send(q1 + d(t))\n t3,r1 = pi2.send((r2,h2,r1))\n t4,u = pi1.send((r1,h1,u))\n \n y1.append([t,q1,h1])\n y2.append([t,q2,h2])\n ulog.append(u)\n \nt1,q1,h1 = np.asarray(y1).transpose()\nt2,q2,h2 = np.asarray(y2).transpose()\nplt.plot(t1,q1,t1,h1)\nplt.plot(t2,q2,t2,h2)\nplt.plot(t1,ulog)",
"_____no_output_____"
]
],
[
[
"## A.2.5 Enhancing Modularity with Class Definitions for Process Units\n\nOne of the key goals of a modular approach to simulation is to implement process specific behavior within the definitions of the process, and separate from the organization of information flow among units that takes place in the main simulation loop.\n\nBelow we define two examples of class definitions demonstrating how this can be done. The class definitions add features for defining names and parameters for instances of each class, and functions to log and plot data gathered in the course of simulations.",
"_____no_output_____"
],
[
"### A.2.5.1 Gravity-Drained Tank Class",
"_____no_output_____"
]
],
[
[
"class gravtank():\n \n def __init__(self, name='', A=1, Cv=1):\n self.name = name\n self.A = A\n self.Cv = Cv\n self._log = []\n self.qin = 0\n \n def qout(self,h):\n return self.Cv*np.sqrt(float(h))\n \n def deriv(self,h,t):\n dh = (self.qin - self.qout(h))/self.A\n return dh\n \n def plot(self):\n t,qout,h = np.asarray(self._log).transpose()\n plt.plot(t,qout,label=self.name + ' qout')\n plt.plot(t,h,label=self.name + ' h')\n plt.legend()\n \n def generator(self,dt,IC = 0):\n h = IC\n while True:\n t,self.qin = yield self.qout(h),float(h)\n h = odeint(self.deriv,h,[t,t+dt])[-1]\n self._log.append([t,self.qout(h),float(h)])\n t += dt",
"_____no_output_____"
]
],
[
[
"### A.2.5.2 PI Controller Class",
"_____no_output_____"
]
],
[
[
"class PI():\n \n def __init__(self, name='', Kp = 0, Ki = 0, MVmin = 0, MVmax = np.Inf):\n self.name = name\n self.Kp = Kp\n self.Ki = Ki\n self.MVmin = MVmin\n self.MVmax = MVmax\n self._log = []\n \n def plot(self):\n t,r,y,u = np.asarray(self._log).transpose()\n plt.subplot(1,2,1)\n p = plt.plot(t,y,label=self.name + ' PV')\n plt.plot(t,r,'--',color=p[-1].get_color(),label=self.name + ' SP')\n plt.legend()\n plt.title('Process Variable and Setpoint')\n plt.subplot(1,2,2)\n plt.plot(t,u,label=self.name + ' MV')\n plt.title('Manipulated Variable')\n plt.legend()\n plt.tight_layout()\n \n def generator(self,dt):\n ecurr = 0\n eprev = 0\n u = self.MVmin\n while True:\n t,r,y,u = yield u\n self._log.append([t,r,y,u]) \n eprev,ecurr = ecurr,r-y\n u += Kp*(ecurr - eprev) + Ki*ecurr*dt\n u = max(self.MVmin,min(self.MVmax,u))\n t += dt",
"_____no_output_____"
]
],
[
[
"### A.2.5.3 Modular Simulation of Cascade Control for Two Tanks in Series\n\nThe following simulation shows how to use the class definitions in a simulation. Each process instance used in the simulation requires three actions:\n\n1. Create an instance of the process. This is the step at which you can provide an instance name, parameters specific to the process and instance. Methods associated with the instance will be used to examine simulation logs and plot simulation results.\n\n2. Create a generator. A call to the generator function for each process instance creates an associated iterator. A sample time must be specified.\n\n3. An initial call to the iterator with an argument of `None` is needed to advance execution to the first `yield` statement.",
"_____no_output_____"
]
],
[
[
"# disturbance function\ndef d(t):\n if t > 10:\n return 0.1\n else:\n return 0\n\n# sample time\ndt = 0.1\n\n# create and initialize tank1\ntank1_obj = gravtank(name='Tank 1',A=0.2, Cv=.5)\ntank1 = tank1_obj.generator(dt)\ntank1.send(None)\n\n# create and initailize tank2\ntank2_obj = gravtank(name='Tank 2',A=0.2, Cv=0.5)\ntank2 = tank2_obj.generator(dt)\ntank2.send(None)\n\n# level control for tank 1. \npi1_obj = PI('Tank 1',Kp = 1, Ki = 0.6, MVmin = 0, MVmax = 1)\npi1 = pi1_obj.generator(dt)\npi1.send(None)\n\n# cascade level control for tank 2. Manipulated variable is the setpoint to for pi1\npi2_obj = PI('Tank 2',Kp = 0.6, Ki = 0.6, MVmin = 0, MVmax = 2)\npi2 = pi2_obj.generator(dt)\npi2.send(None)\n\n# initial signals\nu, r1 = 0, 0\n\n# setpoint for tank 2 level\nr2 = 1.3\n\nfor t in np.arange(0,20,dt):\n qout1,h1 = tank1.send((t,u))\n qout2,h2 = tank2.send((t,qout1 + d(t)))\n r1 = pi2.send((t,r2,h2,r1))\n u = pi1.send((t,r1,h1,u)) \n\nplt.figure()\ntank1_obj.plot()\ntank2_obj.plot()\n\nplt.figure(figsize=(11,4))\npi1_obj.plot()\npi2_obj.plot()",
"_____no_output_____"
]
],
[
[
"<!--NAVIGATION-->\n< [A.1 Python Library for CBE 30338](https://jckantor.github.io/CBE30338/A.01-Python-Library-for-CBE30338.html) | [Contents](toc.html) | [Tag Index](tag_index.html) | [A.3 Animation in Jupyter Notebooks](https://jckantor.github.io/CBE30338/A.03-Animation-in-Jupyter-Notebooks.html) ><p><a href=\"https://colab.research.google.com/github/jckantor/CBE30338/blob/master/docs/A.02-Modular-Approach-to-Simulation-using-Python-Generators.ipynb\"> <img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open in Google Colaboratory\"></a><p><a href=\"https://jckantor.github.io/CBE30338/A.02-Modular-Approach-to-Simulation-using-Python-Generators.ipynb\"> <img align=\"left\" src=\"https://img.shields.io/badge/Github-Download-blue.svg\" alt=\"Download\" title=\"Download Notebook\"></a>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecc4e60ae0a9ad4ec7b5cbc1246e7c29849edff7 | 442,069 | ipynb | Jupyter Notebook | Fertility Rate Per Woman, Education Year and GDP Per Capita Model 2013.ipynb | kaiicheng/Fertility-Rate-and-Male-Years-of-schooling | 15cf5de8bfd8d58a9a8eaa9c1d0f484012257c11 | [
"MIT"
] | null | null | null | Fertility Rate Per Woman, Education Year and GDP Per Capita Model 2013.ipynb | kaiicheng/Fertility-Rate-and-Male-Years-of-schooling | 15cf5de8bfd8d58a9a8eaa9c1d0f484012257c11 | [
"MIT"
] | null | null | null | Fertility Rate Per Woman, Education Year and GDP Per Capita Model 2013.ipynb | kaiicheng/Fertility-Rate-and-Male-Years-of-schooling | 15cf5de8bfd8d58a9a8eaa9c1d0f484012257c11 | [
"MIT"
] | null | null | null | 177.823411 | 165,704 | 0.818259 | [
[
[
"import statsmodels.api as sm\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"df = pd.read_excel(\"Fertility Rate Per Woman and Education Relationship Data 2013.xlsx\")\ndf.head(15)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"data = np.array(df)\ndata = data[:,2:]\ndata",
"_____no_output_____"
],
[
"print(df['Fertility rate, total (births per woman) 2013'])",
"0 5.359\n1 1.690\n2 2.990\n3 2.322\n4 1.728\n ... \n159 1.978\n160 4.326\n161 5.132\n162 4.030\n163 1.070\nName: Fertility rate, total (births per woman) 2013, Length: 164, dtype: float64\n"
],
[
"print(df['GDP per capita (current US$) 2013'])",
"0 637.165523\n1 4413.060861\n2 5499.581487\n3 13080.254732\n4 3838.185801\n ... \n159 1886.671896\n160 1607.152365\n161 1878.907001\n162 1430.000818\n163 21973.000000\nName: GDP per capita (current US$) 2013, Length: 164, dtype: float64\n"
],
[
"X = df['GDP per capita (current US$) 2013']\ny = df['Fertility rate, total (births per woman) 2013']",
"_____no_output_____"
],
[
"length = len(df['Country Name'])",
"_____no_output_____"
],
[
"country_name = df['Country Name']",
"_____no_output_____"
],
[
"# Create a plot.\nplt.figure(figsize = (20, 15))\n\nplt.scatter(X, y)\nplt.title(\"2013\")\nplt.xlabel(\"GDP per capita (current US$) 2013\")\nplt.ylabel(\"Fertility rate, total (births per woman)\")\n\n# Add country name tag.\nfor i in range(length):\n if country_name[i] == \"Taiwan\":\n plt.text(X[i], y[i]*1.02, country_name[i], fontsize=10, color = \"green\", style = \"italic\", weight = \"light\", verticalalignment='center', horizontalalignment='right',rotation=0)\n else:\n plt.text(X[i], y[i]*1.02, country_name[i], fontsize=10, color = \"r\", style = \"italic\", weight = \"light\", verticalalignment='center', horizontalalignment='right',rotation=0)\n\nplt.show()",
"_____no_output_____"
],
[
"X = df['Mean years of schooling (males aged 25 years and above) (years) 2013']\ny = df['Fertility rate, total (births per woman) 2013']",
"_____no_output_____"
],
[
"# Create a plot.\nplt.figure(figsize = (20, 15))\n\nplt.scatter(X, y)\nplt.title(\"2013\")\nplt.xlabel(\"Mean years of schooling (males aged 25 years and above) (years)\")\nplt.ylabel(\"Fertility rate, total (births per woman)\")\n\n# Add country name tag.\nfor i in range(length):\n if country_name[i] == \"Taiwan\":\n plt.text(X[i], y[i]*1.02, country_name[i], fontsize=10, color = \"green\", style = \"italic\", weight = \"light\", verticalalignment='center', horizontalalignment='right',rotation=0)\n else:\n plt.text(X[i], y[i]*1.02, country_name[i], fontsize=10, color = \"r\", style = \"italic\", weight = \"light\", verticalalignment='center', horizontalalignment='right',rotation=0)\n\nplt.show()",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"data = np.array(df)",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data = data[:,3:5]",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data = np.array(data, dtype=float)",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data = sm.add_constant(data, prepend = True)",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"y = np.array(y)",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
],
[
"print(len(X))",
"164\n"
],
[
"print(len(data))",
"164\n"
],
[
"print(len(y))",
"164\n"
],
[
"# Ordinary least square method.\nmod = sm.OLS(y, data)\nres = mod.fit()\nprint(res.summary())",
" OLS Regression Results \n==============================================================================\nDep. Variable: y R-squared: 0.597\nModel: OLS Adj. R-squared: 0.592\nMethod: Least Squares F-statistic: 119.4\nDate: Tue, 01 Sep 2020 Prob (F-statistic): 1.61e-32\nTime: 23:37:51 Log-Likelihood: -214.18\nNo. Observations: 164 AIC: 434.4\nDf Residuals: 161 BIC: 443.7\nDf Model: 2 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nconst 5.9332 0.235 25.208 0.000 5.468 6.398\nx1 -0.3654 0.030 -12.081 0.000 -0.425 -0.306\nx2 -3.776e-06 4.15e-06 -0.909 0.365 -1.2e-05 4.43e-06\n==============================================================================\nOmnibus: 0.458 Durbin-Watson: 1.939\nProb(Omnibus): 0.795 Jarque-Bera (JB): 0.562\nSkew: 0.117 Prob(JB): 0.755\nKurtosis: 2.835 Cond. No. 8.63e+04\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 8.63e+04. This might indicate that there are\nstrong multicollinearity or other numerical problems.\n"
],
[
"# Replace GDP Per Capita with log(GDP Per Capita)",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data[:,2] = np.log(data[:,2])",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"# Ordinary least square method.\nmod = sm.OLS(y, data)\nres = mod.fit()\nprint(res.summary())",
" OLS Regression Results \n==============================================================================\nDep. Variable: y R-squared: 0.658\nModel: OLS Adj. R-squared: 0.654\nMethod: Least Squares F-statistic: 154.7\nDate: Tue, 01 Sep 2020 Prob (F-statistic): 3.25e-38\nTime: 23:37:51 Log-Likelihood: -200.82\nNo. Observations: 164 AIC: 407.6\nDf Residuals: 161 BIC: 416.9\nDf Model: 2 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nconst 8.1159 0.438 18.541 0.000 7.252 8.980\nx1 -0.2126 0.039 -5.520 0.000 -0.289 -0.137\nx2 -0.4059 0.075 -5.428 0.000 -0.554 -0.258\n==============================================================================\nOmnibus: 0.597 Durbin-Watson: 2.008\nProb(Omnibus): 0.742 Jarque-Bera (JB): 0.279\nSkew: -0.012 Prob(JB): 0.870\nKurtosis: 3.200 Cond. No. 85.2\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc4f7a0eefe93692b22519f33adf3829de6a401 | 118,254 | ipynb | Jupyter Notebook | Untitled.ipynb | zedian/esm | 9d2b50cd96753e8a703ca810e875c9e887047ed9 | [
"MIT"
] | null | null | null | Untitled.ipynb | zedian/esm | 9d2b50cd96753e8a703ca810e875c9e887047ed9 | [
"MIT"
] | null | null | null | Untitled.ipynb | zedian/esm | 9d2b50cd96753e8a703ca810e875c9e887047ed9 | [
"MIT"
] | null | null | null | 41.726888 | 164 | 0.608115 | [
[
[
"import torch\nimport pathlib\nimport pandas as pd\nimport numpy as np\n\nfrom torch.utils import data\nfrom torch import nn\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\n\nfrom time import time\nimport copy\nfrom tqdm import tqdm\nfrom sklearn.metrics import roc_auc_score, average_precision_score, f1_score, roc_curve, confusion_matrix, precision_score, recall_score, auc\nfrom sklearn.model_selection import KFold\ntorch.manual_seed(1) # reproducible torch:2 np:3\nnp.random.seed(1)\n\nfrom esm import Alphabet, FastaBatchedDataset, ProteinBertModel, pretrained\n\nimport sys\nPATH_TO_REPO = \"/home/zdx/Documents/gitlab/meta-rl/esm\"\nsys.path.append(PATH_TO_REPO)",
"_____no_output_____"
],
[
"from models import Embeddings, Encoder_MultipleLayers, Encoder\nfrom stream import BIN_Data_Encoder, BIN_combined_encoder",
"_____no_output_____"
],
[
"model_location = \"esm1_t6_43M_UR50S\"\nfasta_file = \"./examples/P62593.fasta\"\noutput_dir = pathlib.Path(\"./examples/P62593_reprs/\")\ntoks_per_batch = 4096\nrepr_layers = [-1]\ninclude = [\"per_tok\"]\ntruncate = True\nnogpu = True",
"_____no_output_____"
],
[
"model, alphabet = pretrained.load_model_and_alphabet(model_location)\nmodel.eval()\nif torch.cuda.is_available() and not nogpu:\n model = model.cuda()\n print(\"Transferred model to GPU\")\n\ndataset = FastaBatchedDataset.from_file(fasta_file)\nbatches = dataset.get_batch_indices(toks_per_batch, extra_toks_per_seq=1)\ndata_loader = torch.utils.data.DataLoader(\n dataset, collate_fn=alphabet.get_batch_converter(), batch_sampler=batches\n)\nprint(f\"Read {fasta_file} with {len(dataset)} sequences\")\n\noutput_dir.mkdir(parents=True, exist_ok=True)\nreturn_contacts = \"contacts\" in include\n\nassert all(-(model.num_layers + 1) <= i <= model.num_layers for i in repr_layers)\nrepr_layers = [(i + model.num_layers + 1) % (model.num_layers + 1) for i in repr_layers]\n\nwith torch.no_grad():\n for batch_idx, (labels, strs, toks) in enumerate(data_loader):\n print(\n f\"Processing {batch_idx + 1} of {len(batches)} batches ({toks.size(0)} sequences)\"\n )\n print(toks.shape)\n print(len(strs))\n if torch.cuda.is_available() and not nogpu:\n toks = toks.to(device=\"cuda\", non_blocking=True)\n\n # The model is trained on truncated sequences and passing longer ones in at\n # infernce will cause an error. See https://github.com/facebookresearch/esm/issues/21\n if truncate:\n toks = toks[:, :1022]\n \n print(toks.dtype)\n out = model(toks, repr_layers=repr_layers, return_contacts=return_contacts)\n\n logits = out[\"logits\"].to(device=\"cpu\")\n representations = {\n layer: t.to(device=\"cpu\") for layer, t in out[\"representations\"].items()\n }\n \n print(representations[6].shape)\n break\n# if return_contacts:\n# contacts = out[\"contacts\"].to(device=\"cpu\")\n\n# for i, label in enumerate(labels):\n# output_file = output_dir / f\"{label}.pt\"\n# output_file.parent.mkdir(parents=True, exist_ok=True)\n# result = {\"label\": label}\n# # Call clone on tensors to ensure tensors are not views into a larger representation\n# # See https://github.com/pytorch/pytorch/issues/1995\n# if \"per_tok\" in include:\n# result[\"representations\"] = {\n# layer: t[i, 1 : len(strs[i]) + 1].clone()\n# for layer, t in representations.items()\n# }\n# if \"mean\" in include:\n# result[\"mean_representations\"] = {\n# layer: t[i, 1 : len(strs[i]) + 1].mean(0).clone()\n# for layer, t in representations.items()\n# }\n# if \"bos\" in include:\n# result[\"bos_representations\"] = {\n# layer: t[i, 0].clone() for layer, t in representations.items()\n# }\n# if return_contacts:\n# result[\"contacts\"] = contacts[i, : len(strs[i]), : len(strs[i])].clone()\n\n# torch.save(\n# result,\n# output_file,\n# )",
"Read ./examples/P62593.fasta with 5397 sequences\nProcessing 1 of 386 batches (14 sequences)\ntorch.Size([14, 287])\n14\ntorch.int64\ntorch.Size([14, 287, 768])\n"
],
[
"repr_dir = pathlib.Path(\"./embeddings/P62593_reprs/0|beta-lactamase_P20P|1.581033423.pt\")",
"_____no_output_____"
],
[
"beta_lactamase = torch.load(repr_dir)\nbeta_lactamase[\"mean_representations\"][34]",
"_____no_output_____"
],
[
"fasta_to_idx = {'<null_0>': 0, '<pad>': 1, '<eos>': 2, '<unk>': 3,\n 'L': 4, 'A': 5, 'G': 6, 'V': 7, 'S': 8, 'E': 9, 'R': 10, 'T': 11,\n 'I': 12, 'D': 13, 'P': 14, 'K': 15, 'Q': 16, 'N':17, 'F':18, 'Y': 19,\n 'M': 20, 'H': 21, 'W': 22, 'C': 23, 'X': 24,'B': 25,'U': 26,\n 'Z': 27, 'O': 28, '.': 29, '-': 30, '<null_1>': 31, '<cls>': 32,\n '<mask>': 33, '<sep>': 34}\n\nsmile_to_idx = {'#': 35,'%': 36,'(': 38,')': 37,'+': 39,'-': 40,'.': 41,\n '0': 43,'1': 42,'2': 45,'3': 44,'4': 47,'5': 46,'6': 49,'7': 48,'8': 51,\n '9': 50,'=': 52,'A': 53,'B': 55,'C': 54,'D': 57,'E': 56,'F': 59,'G': 58,\n 'H': 61,'I': 60,'K': 62,'L': 64,'M': 63,'N': 66,'O': 65,'P': 67,'R': 69,\n 'S': 68,'T': 71,'U': 70,'V': 73,'W': 72,'Y': 74,'Z': 76,'[': 75,']': 77,\n '_': 78,'a': 79,'b': 81,'c': 80,'d': 83,'e': 82,'f': 85,'g': 84,'h': 87,\n 'i': 86,'l': 89,'m': 88,'n': 91,'o': 90,'r': 93,'s': 92,'t': 95,'u': 94,'y': 96}\n",
"_____no_output_____"
],
[
"class Simple_Protein_Drug_Transformer(nn.Sequential):\n def __init__(self):\n super(Simple_Protein_Drug_Transformer, self).__init__()\n self.max_d = 50\n self.max_p = 545\n self.dropout_rate = 0.1\n self.emb_size = 768\n self.hidden_size = 768\n self.input_dim = 16693\n self.intermediate_size = 1536\n self.num_attention_heads = 8\n self.attention_probs_dropout_prob = 0.1\n self.hidden_dropout_prob = 0.1\n self.vocab_size = 596\n self.n_layer = 2\n self.batch_size = 2\n \n self.flatten_dim = 78192\n \n # specialized embedding with positional one DRUG\n self.emb = Embeddings(self.input_dim, self.emb_size, \n self.max_d, self.dropout_rate)\n \n self.encoder = Encoder_MultipleLayers(Encoder, self.n_layer, \n self.hidden_size, \n self.intermediate_size, \n self.num_attention_heads, \n self.attention_probs_dropout_prob, \n self.hidden_dropout_prob)\n \n self.model, _ = pretrained.load_model_and_alphabet(model_location)\n \n self.model = self.model.cuda()\n self.icnn = nn.Conv2d(1, 3, 3, padding = 0)\n self.decoder = nn.Sequential(\n nn.Linear(self.flatten_dim, 512),\n nn.ReLU(True),\n \n nn.BatchNorm1d(512),\n nn.Linear(512, 64),\n nn.ReLU(True),\n \n nn.BatchNorm1d(64),\n nn.Linear(64, 32),\n nn.ReLU(True),\n \n #output layer\n nn.Linear(32, 1)\n )\n def forward(self, drug, protein, drug_mask, protein_mask):\n ex_d_mask = drug_mask.unsqueeze(1).unsqueeze(2)\n ex_p_mask = protein_mask.unsqueeze(1).unsqueeze(2)\n \n ex_d_mask = (1.0 - ex_d_mask) * -10000.0\n ex_p_mask = (1.0 - ex_p_mask) * -10000.0\n\n d_emb = self.emb(drug) # batch_size x seq_length x embed_size\n d_encoded_layers = self.encoder(d_emb.float(), ex_d_mask.float())\n p_repr = self.model(protein, repr_layers=[6], \n return_contacts=False)[\"representations\"][6]\n\n \n# print(d_encoded_layers.shape)\n# print(p_repr.shape)\n \n d_aug = torch.unsqueeze(d_encoded_layers, 2).repeat(1, 1, self.max_p, 1) # repeat along protein size\n p_aug = torch.unsqueeze(p_repr, 1).repeat(1, self.max_d, 1, 1) # repeat along drug size\n i = d_aug * p_aug # interaction\n \n# print(\"Interaction shape: \", i.shape)\n# if self.gpus != 0:\n# i_v = i.view(int(self.batch_size/self.gpus), -1, self.max_d, self.max_p)\n# else:\n i_v = i.view(self.batch_size, -1, self.max_d, self.max_p)\n# print(i_v.shape)\n # batch_size x embed size x max_drug_seq_len x max_protein_seq_len\n i_v = torch.sum(i_v, dim = 1)\n# print(i_v.shape)\n i_v = torch.unsqueeze(i_v, 1)\n# print(i_v.shape)\n \n i_v = F.dropout(i_v, p = self.dropout_rate) \n \n #f = self.icnn2(self.icnn1(i_v))\n f = self.icnn(i_v)\n \n #print(f.shape)\n \n #f = self.dense_net(f)\n #print(f.shape)\n \n# f = f.view(int(self.batch_size/self.gpus), -1)\n f = f.view(self.batch_size, -1)\n# print(f.shape)\n \n #f_encode = torch.cat((d_encoded_layers[:,-1], p_encoded_layers[:,-1]), dim = 1)\n \n #score = self.decoder(torch.cat((f, f_encode), dim = 1))\n score = self.decoder(f)\n return score ",
"_____no_output_____"
],
[
"lr = 1e-5\nBATCH_SIZE = 2\ntrain_epoch = 10\nuse_cuda = True\nloss_history = []\n\nmodel = Simple_Protein_Drug_Transformer()\nif use_cuda:\n model = model.cuda()\n\nif torch.cuda.device_count() > 1:\n print(\"Let's use\", torch.cuda.device_count(), \"GPUs!\")\n model = nn.DataParallel(model, dim = 0)\n\nopt = torch.optim.Adam(model.parameters(), lr = lr)\n#opt = torch.optim.SGD(model.parameters(), lr = lr, momentum=0.9)\n\nprint('--- Data Preparation ---')\n\nparams = {'batch_size': BATCH_SIZE,\n 'shuffle': True,\n 'num_workers': 6, \n 'drop_last': True}\n\ndataFolder = '../MolTrans/dataset/BindingDB'\ndf_train = pd.read_csv(dataFolder + '/train.csv')\ndf_val = pd.read_csv(dataFolder + '/val.csv')\ndf_test = pd.read_csv(dataFolder + '/test.csv')",
"--- Data Preparation ---\n"
],
[
"train_set = BIN_combined_encoder(df_train.index.values, df_train.Label.values, df_train, sep=True)\ntrain_gen = data.DataLoader(train_set, **params)\n\nvalid_set = BIN_combined_encoder(df_val.index.values, df_val.Label.values, df_val, sep=True)\nvalid_gen = data.DataLoader(valid_set, **params)\n\ntest_set = BIN_combined_encoder(df_test.index.values, df_test.Label.values, df_test, sep=True)\ntest_gen = data.DataLoader(test_set, **params)",
"_____no_output_____"
],
[
"def test(data_generator, model):\n y_pred = []\n y_label = []\n model.eval()\n loss_accumulate = 0.0\n count = 0.0\n for i, (d, p, d_mask, p_mask, label) in enumerate(data_generator):\n score = model(d.long().cuda(), p.long().cuda(), d_mask.long().cuda(), p_mask.long().cuda())\n# score = model(d.long(), p.long(), d_mask.long(), p_mask.long())\n m = torch.nn.Sigmoid()\n logits = torch.squeeze(m(score))\n \n loss_fct = torch.nn.BCELoss() \n \n label = Variable(torch.from_numpy(np.array(label)).float()).cuda()\n# label = Variable(torch.from_numpy(np.array(label)).float())\n\n loss = loss_fct(logits, label)\n \n loss_accumulate += loss\n count += 1\n \n logits = logits.detach().cpu().numpy()\n \n label_ids = label.to('cpu').numpy()\n y_label = y_label + label_ids.flatten().tolist()\n y_pred = y_pred + logits.flatten().tolist()\n \n# for i, (feature, mask, label) in enumerate(data_generator):\n# score = model(feature.long().cuda(), mask.long().cuda())\n# # score = model(feature.long(), mask.long())\n# m = torch.nn.Sigmoid()\n# logits = torch.squeeze(m(score))\n \n# loss_fct = torch.nn.BCELoss() \n \n# # label = Variable(torch.from_numpy(np.array(label)).float()).cuda()\n# label = Variable(torch.from_numpy(np.array(label)).float()).cuda()\n\n# loss = loss_fct(logits, label)\n \n# loss_accumulate += loss\n# count += 1\n \n# logits = logits.detach().cpu().numpy()\n \n# label_ids = label.to('cpu').numpy()\n# y_label = y_label + label_ids.flatten().tolist()\n# y_pred = y_pred + logits.flatten().tolist()\n \n loss = loss_accumulate/count\n \n fpr, tpr, thresholds = roc_curve(y_label, y_pred)\n\n precision = tpr / (tpr + fpr)\n\n f1 = 2 * precision * tpr / (tpr + precision + 0.00001)\n\n thred_optim = thresholds[5:][np.argmax(f1[5:])]\n\n print(\"optimal threshold: \" + str(thred_optim))\n\n y_pred_s = [1 if i else 0 for i in (y_pred >= thred_optim)]\n\n auc_k = auc(fpr, tpr)\n print(\"AUROC:\" + str(auc_k))\n print(\"AUPRC: \"+ str(average_precision_score(y_label, y_pred)))\n\n cm1 = confusion_matrix(y_label, y_pred_s)\n print('Confusion Matrix : \\n', cm1)\n print('Recall : ', recall_score(y_label, y_pred_s))\n print('Precision : ', precision_score(y_label, y_pred_s))\n\n total1=sum(sum(cm1))\n #####from confusion matrix calculate accuracy\n accuracy1=(cm1[0,0]+cm1[1,1])/total1\n print ('Accuracy : ', accuracy1)\n\n sensitivity1 = cm1[0,0]/(cm1[0,0]+cm1[0,1])\n print('Sensitivity : ', sensitivity1 )\n\n specificity1 = cm1[1,1]/(cm1[1,0]+cm1[1,1])\n print('Specificity : ', specificity1)\n\n outputs = np.asarray([1 if i else 0 for i in (np.asarray(y_pred) >= 0.5)])\n return roc_auc_score(y_label, y_pred), average_precision_score(y_label, y_pred), f1_score(y_label, outputs), y_pred, loss.item()\n\nmax_auc = 0\nmodel_max = copy.deepcopy(model)\n\nfor epo in range(train_epoch):\n model.train()\n\n for i, (d, p, d_mask, p_mask, label) in enumerate(train_gen):\n score = model(d.long().cuda(), p.long().cuda(), d_mask.long().cuda(), p_mask.long().cuda())\n\n# score = model(d.long(), p.long(), d_mask.long(), p_mask.long())\n label = Variable(torch.from_numpy(np.array(label)).float()).cuda()\n# label = Variable(torch.from_numpy(np.array(label)).float())\n loss_fct = torch.nn.BCELoss()\n m = torch.nn.Sigmoid()\n n = torch.squeeze(m(score))\n\n loss = loss_fct(n, label)\n loss_history.append(loss)\n\n opt.zero_grad()\n loss.backward()\n opt.step()\n\n if (i % 100 == 0):\n print('Training at Epoch ' + str(epo + 1) + ' iteration ' + str(i) + ' with loss ' + str(loss.cpu().detach().numpy()))\n \n if (i % 1000 == 
0):\n # every epoch test\n with torch.set_grad_enabled(False):\n aucc, auprc, f1, logits, loss = test(valid_gen, model)\n if aucc > max_auc:\n model_max = copy.deepcopy(model)\n max_auc = aucc\n\n print('Validation at Epoch '+ str(epo + 1) + ' , AUROC: '+ str(auc) + ' , AUPRC: ' + str(auprc) + ' , F1: '+str(f1))\n \n with torch.set_grad_enabled(False):\n aucc, auprc, f1, logits, loss = test(valid_gen, model)\n if aucc > max_auc:\n model_max = copy.deepcopy(model)\n max_auc = aucc\n \nprint('--- Go for Testing ---')\ntry:\n with torch.set_grad_enabled(False):\n aucc, auprc, f1, logits, loss = test(test_gen, model_max)\n print('Testing AUROC: ' + str(aucc) + ' , AUPRC: ' + str(auprc) + ' , F1: '+str(f1) + ' , Test loss: '+str(loss))\nexcept:\n print('testing failed')",
"Training at Epoch 1 iteration 0 with loss 0.7095413\n"
],
[
"# BindingDB\n# optimal threshold: 0.21793487668037415\n# AUROC:0.9167200191559495\n# AUPRC: 0.6239972299961172\n# Confusion Matrix : \n# [[4444 1273]\n# [ 81 846]]\n# Recall : 0.912621359223301\n# Precision : 0.3992449268522888\n# Accuracy : 0.796207104154124\n# Sensitivity : 0.7773307678852545\n# Specificity : 0.912621359223301",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc5049fdffe9f035706d65644cf21af19255ab5 | 9,671 | ipynb | Jupyter Notebook | textgeneration.ipynb | git-vinit/mycaptain-AI | da55e30422743b5b66286d10c92187a5a64ccd7e | [
"MIT"
] | null | null | null | textgeneration.ipynb | git-vinit/mycaptain-AI | da55e30422743b5b66286d10c92187a5a64ccd7e | [
"MIT"
] | null | null | null | textgeneration.ipynb | git-vinit/mycaptain-AI | da55e30422743b5b66286d10c92187a5a64ccd7e | [
"MIT"
] | null | null | null | 28.868657 | 1,008 | 0.587737 | [
[
[
"import numpy\nimport sys\nimport nltk\nnltk.download('stopwords')\nfrom nltk.tokenize import RegexpTokenizer\nfrom nltk.corpus import stopwords\nfrom keras.models import Sequential\nfrom keras.layers import Dense , Dropout , LSTM\nfrom keras.utils import np_utils\nfrom keras.callbacks import ModelCheckpoint",
"[nltk_data] Downloading package stopwords to\n[nltk_data] C:\\Users\\vinit\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n"
],
[
"# load data\nfile = open(\"data.txt\").read()",
"_____no_output_____"
],
[
"# tokenization - process of breaking a stream of text up into words , phrases , symbols or other meaningful elements \n# standarization\ndef tokenize_words(input):\n input = input.lower()\n # instantiating the tokenizer\n tokenizer = RegexpTokenizer(r'\\w+')\n # tokenizing the text into tokens\n tokens = tokenizer.tokenize(input)\n # filtering the stopwords using lambda\n filtered = filter(lambda token: token not in stopwords.words('english'), tokens)\n return \"\".join(filtered)\n# preprocessing the input data\nprocessed_inputs = tokenize_words(file)",
"_____no_output_____"
],
[
"# chars to numbers\nchars = sorted(list(set(processed_inputs)))\nchar_to_num = dict((c,i) for i , c in enumerate(chars))",
"_____no_output_____"
],
[
"\n#check if words to chars or chars to num(?!) has worked?\ninput_len = len(processed_inputs)\nvocab_len = len(chars)\nprint (\" Total number of characters:\" , input_len)\nprint(\"Toatl vocab:\" , vocab_len)",
" Total number of characters: 23999\nToatl vocab: 35\n"
],
[
"# seq length\nseq_length = 100\nx_data = []\ny_data = []",
"_____no_output_____"
],
[
"# loop through the sequence\nfor i in range (0 , input_len - seq_length , 1):\n in_seq = processed_inputs[i:i + seq_length]\n out_seq = processed_inputs[i + seq_length]\n x_data.append([char_to_num[char] for char in in_seq])\n y_data.append(char_to_num[out_seq])\n \nn_patterns = len(x_data)\nprint(\"Total Patterns:\" , n_patterns)",
"Total Patterns: 23899\n"
],
[
"# conert input sequence into np array and so on\nX = numpy.reshape(x_data , (n_patterns , seq_length , 1))\nX = X/float(vocab_len)",
"_____no_output_____"
],
[
"\n# one hot encoding\ny=np_utils.to_categorical(y_data)",
"_____no_output_____"
],
[
"# creating the model\nmodel = Sequential()\nmodel.add(LSTM(256, input_shape = (X.shape[1], X.shape[2]), return_sequences = True))\nmodel.add(Dropout(0.2))\nmodel.add(LSTM(256 , return_sequences =True))\nmodel.add(Dropout(0.2))\nmodel.add(LSTM(128))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(y.shape[1] , activation ='softmax'))",
"_____no_output_____"
],
[
"# compile the model\nmodel.compile(loss='categorical_crossentropy', optimizer = 'adam')",
"_____no_output_____"
],
[
"# saving weights\nfilepath =\"model_weights_saved.hdf5\"\ncheckpoint = ModelCheckpoint(filepath , monitor = 'loss' , verbose =1 , save_best_only = True , mode = 'min')\ndesired_callbacks = [checkpoint]",
"_____no_output_____"
],
[
"model.fit(X, y , epochs =4 , batch_size = 256 , callbacks = desired_callbacks)",
"Epoch 1/4\n94/94 [==============================] - 398s 4s/step - loss: 2.9843\n\nEpoch 00001: loss improved from inf to 2.98430, saving model to model_weights_saved.hdf5\nEpoch 2/4\n94/94 [==============================] - 556s 6s/step - loss: 2.9250\n\nEpoch 00002: loss improved from 2.98430 to 2.92501, saving model to model_weights_saved.hdf5\nEpoch 3/4\n94/94 [==============================] - 730s 8s/step - loss: 2.9179\n\nEpoch 00003: loss improved from 2.92501 to 2.91787, saving model to model_weights_saved.hdf5\nEpoch 4/4\n94/94 [==============================] - 557s 6s/step - loss: 2.9132\n\nEpoch 00004: loss improved from 2.91787 to 2.91323, saving model to model_weights_saved.hdf5\n"
],
[
"# recompile model with the saved weights\nfilename = \"model_weights_saved.hdf5\"\nmodel.load_weights(filename)\nmodel.compile(loss = 'categorical_crossentropy',optimizer = 'adam')",
"_____no_output_____"
],
[
"\n#output of the model back into characters\nnum_to_char = dict((i,c) for i , c in enumerate(chars))",
"_____no_output_____"
],
[
"# random seed to help generate\nstart = numpy.random.randint(0, len(x_data) - 1)\npattern = x_data[start]\nprint(\"Random Seed:\")\nprint(\"\\\"\", ''.join([num_to_char[value] for value in pattern]), \"\\\"\")",
"Random Seed:\n\" apparitionsoonexplainedpermissionmotherprevailedrusticguardiansyieldchargefondsweetorphanpresencesee \"\n"
],
[
"\nfor i in range(1000):\n x = numpy.reshape(pattern, (1, len(pattern), 1))\n x = x / float(vocab_len)\n prediction = model.predict(x, verbose=0)\n index = numpy.argmax(prediction)\n result = num_to_char[index]\n\n sys.stdout.write(result)\n\n pattern.append(index)\n pattern = pattern[1:len(pattern)]",
"eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc50d4c4033c951af0f28749c9e538f8100d296 | 468,215 | ipynb | Jupyter Notebook | 6. Modular Network.ipynb | kambliketan/DataForScience_DeepLearning | fa4b4ee7b722c78c9c584fcc1d13d9c821b8dff2 | [
"MIT"
] | 1 | 2020-03-31T22:07:39.000Z | 2020-03-31T22:07:39.000Z | 6. Modular Network.ipynb | aleipf/MachineLearningORielly | fa4b4ee7b722c78c9c584fcc1d13d9c821b8dff2 | [
"MIT"
] | null | null | null | 6. Modular Network.ipynb | aleipf/MachineLearningORielly | fa4b4ee7b722c78c9c584fcc1d13d9c821b8dff2 | [
"MIT"
] | null | null | null | 776.475954 | 286,400 | 0.954651 | [
[
[
"<div style=\"width: 100%; overflow: hidden;\">\n <div style=\"width: 150px; float: left;\"> <img src=\"data/D4Sci_logo_ball.png\" alt=\"Data For Science, Inc\" align=\"left\" border=\"0\"> </div>\n <div style=\"float: left; margin-left: 10px;\"> <h1>Deep Learning From Scratch</h1>\n<h2>Modular Network</h2>\n <p>Bruno Gonçalves<br/>\n <a href=\"http://www.data4sci.com/\">www.data4sci.com</a><br/>\n @bgoncalves, @data4sci</p></div>\n</div>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport watermark\n\n%load_ext watermark\n%matplotlib inline",
"_____no_output_____"
],
[
"%watermark -i -n -v -m -g -iv",
"numpy 1.16.2\nseaborn 0.9.0\nwatermark 1.8.1\npandas 0.24.2\nmatplotlib 3.1.0\nSun Dec 01 2019 2019-12-01T23:21:36-04:00\n\nCPython 3.7.3\nIPython 6.2.1\n\ncompiler : Clang 4.0.1 (tags/RELEASE_401/final)\nsystem : Darwin\nrelease : 19.0.0\nmachine : x86_64\nprocessor : i386\nCPU cores : 8\ninterpreter: 64bit\nGit hash : 1aa3ac8e019f5a80d5859a063dba4cefad42fbaf\n"
],
[
"plt.style.use('./d4sci.mplstyle')",
"_____no_output_____"
]
],
[
[
"# Load Dataset",
"_____no_output_____"
]
],
[
[
"X_train = np.load('input/X_train.npy')\nX_test = np.load('input/X_test.npy')\ny_train = np.load('input/y_train.npy')\ny_test = np.load('input/y_test.npy')",
"_____no_output_____"
]
],
[
[
"Preprocessing",
"_____no_output_____"
]
],
[
[
"input_layer_size = X_train.shape[1]\n\nX_train /= 255.\nX_test /= 255.",
"_____no_output_____"
]
],
[
[
"## Initialize weights",
"_____no_output_____"
],
[
"We define the initializatino function as we'll have to call it more than once",
"_____no_output_____"
]
],
[
[
"def init_weights(L_in, L_out):\n epsilon = 0.12\n\n return 2*np.random.rand(L_out, L_in+1)*epsilon - epsilon",
"_____no_output_____"
]
],
[
[
"Set the layer sizes we'll be using",
"_____no_output_____"
]
],
[
[
"hidden_layer_size = 50\nnum_labels = 10",
"_____no_output_____"
]
],
[
[
"Initialize the weights. In this case we use a array of weight matrices so that we can easily add/remove layers",
"_____no_output_____"
]
],
[
[
"Thetas = []\nThetas.append(init_weights(input_layer_size, hidden_layer_size))\nThetas.append(init_weights(hidden_layer_size, num_labels))",
"_____no_output_____"
]
],
[
[
"## Utility functions",
"_____no_output_____"
],
[
"One-hot encoding to define the labels",
"_____no_output_____"
]
],
[
[
"def one_hot(K, pos):\n y0 = np.zeros(K)\n y0[pos] = 1\n\n return y0",
"_____no_output_____"
]
],
[
[
"Activation function base class. Here we must provide an interface to both the activation function and its derivative",
"_____no_output_____"
]
],
[
[
"class Activation(object):\n def f(z):\n pass\n\n def df(z):\n pass",
"_____no_output_____"
]
],
[
[
"The various activation functions simply extend the base class",
"_____no_output_____"
]
],
[
[
"class Linear(Activation):\n def f(z):\n return z\n\n def df(z):\n return np.ones(z.shape)\n\nclass ReLu(Activation):\n def f(z):\n return np.where(z > 0, z, 0)\n\n def df(z):\n return np.where(z > 0, 1, 0)\n\nclass Sigmoid(Activation):\n def f(z):\n return 1./(1+np.exp(-z))\n \n def df(z):\n h = Sigmoid.f(z)\n return h*(1-h)\n\nclass TanH(Activation):\n def f(z):\n return np.tanh(z)\n\n def df(z):\n return 1-np.power(np.tanh(z), 2.0)",
"_____no_output_____"
]
],
[
[
"## Forward Propagation and Prediction",
"_____no_output_____"
],
[
"The forward and predict functions are also generalized",
"_____no_output_____"
]
],
[
[
"def forward(Theta, X, active):\n N = X.shape[0]\n\n # Add the bias column\n X_ = np.concatenate((np.ones((N, 1)), X), 1)\n\n # Multiply by the weights\n z = np.dot(X_, Theta.T)\n\n # Apply the activation function\n a = active.f(z)\n\n return a",
"_____no_output_____"
]
],
[
[
"The predict function now takes the entire model as input and it must loop over the various layers",
"_____no_output_____"
]
],
[
[
"def predict(model, X):\n h = X.copy()\n\n for i in range(0, len(model), 2):\n theta = model[i]\n activation = model[i+1]\n\n h = forward(theta, h, activation)\n\n return np.argmax(h, 1)",
"_____no_output_____"
]
],
[
[
"The accuracy function is just the same as before",
"_____no_output_____"
]
],
[
[
"def accuracy(y_, y):\n return np.mean((y_ == y.flatten()))*100.",
"_____no_output_____"
]
],
[
[
"## Back propagation",
"_____no_output_____"
]
],
[
[
"def backprop(model, X, y):\n M = X.shape[0]\n\n Thetas=[0]\n Thetas.extend(model[0::2])\n activations = [0]\n activations.extend(model[1::2])\n\n layers = len(Thetas)\n\n K = Thetas[-1].shape[0]\n J = 0\n\n Deltas = [0]\n\n for i in range(1, layers):\n Deltas.append(np.zeros(Thetas[i].shape))\n\n deltas = [0]*(layers+1)\n\n for i in range(M):\n As = [0]\n Zs = [0, 0]\n Hs = [0, X[i]]\n\n # Forward propagation, saving intermediate results\n As.append(np.concatenate(([1], Hs[1]))) # Input layer\n\n for l in range(2, layers+1):\n Zs.append(np.dot(Thetas[l-1], As[l-1]))\n Hs.append(activations[l-1].f(Zs[l]))\n As.append(np.concatenate(([1], Hs[l])))\n\n y0 = one_hot(K, y[i])\n\n # Cross entropy\n J -= np.dot(y0.T, np.log(Hs[-1]))+np.dot((1-y0).T, np.log(1-Hs[-1]))\n\n deltas[layers] = Hs[layers]-y0\n\n # Calculate the weight deltas\n for l in range(layers-1, 1, -1):\n deltas[l] = np.dot(Thetas[l].T, deltas[l+1])[1:]*activations[l].df(Zs[l])\n\n Deltas[2] += np.outer(deltas[3], As[2])\n Deltas[1] += np.outer(deltas[2], As[1])\n\n J /= M\n\n grads = []\n\n grads.append(Deltas[1]/M)\n grads.append(Deltas[2]/M)\n\n return [J, grads]",
"_____no_output_____"
]
],
[
[
"## Model Definition",
"_____no_output_____"
]
],
[
[
"model = []\n\nmodel.append(Thetas[0])\nmodel.append(Sigmoid)\nmodel.append(Thetas[1])\nmodel.append(Sigmoid)",
"_____no_output_____"
]
],
[
[
"## Training procedure\nThe same basic idea as before",
"_____no_output_____"
]
],
[
[
"step = 0\ntol = 1e-3\nJ_old = 1/tol\ndiff = 1\n\nacc_train = []\nacc_test = []\nJ_val = []\nsteps = []\n\nwhile diff > tol:\n J_train, grads = backprop(model, X_train, y_train)\n\n diff = abs(J_old-J_train)\n J_old = J_train\n J_val.append(J_train)\n\n step += 1\n\n if step % 10 == 0:\n pred_train = predict(model, X_train)\n pred_test = predict(model, X_test)\n\n J_test, grads = backprop(model, X_test, y_test)\n\n acc_train.append(accuracy(pred_train, y_train))\n acc_test.append(accuracy(pred_test, y_test))\n steps.append(step)\n \n print(step, J_train, J_test, acc_train[-1], acc_test[-1])\n\n for i in range(len(Thetas)):\n Thetas[i] -= .5*grads[i]",
"10 3.1151794124172834 3.1284540761600312 42.54 37.7\n20 2.8930812858260424 2.9245023338635803 57.42 52.7\n30 2.5765997922138273 2.6310790222300082 62.8 58.699999999999996\n40 2.2588077102548567 2.3318459886065654 67.75999999999999 63.5\n50 2.0051323700810464 2.0891283681509045 72.24000000000001 68.30000000000001\n60 1.8097577603263248 1.9000148128895225 75.68 71.8\n70 1.6538674046564426 1.7479445203727637 78.36 74.4\n80 1.5254851412895871 1.6220707937282561 80.62 76.0\n90 1.4177850084709327 1.516218295726227 82.42 77.8\n100 1.3263270795325142 1.426330741765939 83.84 79.3\n110 1.2478535139169256 1.349344780549471 84.86 81.8\n120 1.1798644429405565 1.2828370182092799 85.94000000000001 82.5\n130 1.12043074815847 1.2248925006341562 86.68 84.0\n140 1.068066397216133 1.1740112750037648 87.16000000000001 84.39999999999999\n150 1.0216224348437368 1.129020954006512 87.58 85.0\n160 0.9801994355021234 1.0889983585542797 88.03999999999999 85.7\n170 0.9430794853485821 1.0532054608870782 88.42 86.2\n180 0.9096765952685723 1.0210410647120078 88.8 86.8\n190 0.8795023850156974 0.9920065050292086 89.05999999999999 87.8\n"
]
],
[
[
"## Accuracy during training",
"_____no_output_____"
]
],
[
[
"plt.plot(np.arange(1, len(J_val)+1), J_val)\nplt.xlabel(\"Iterations\")\nplt.ylabel(\"Cost function\")\nplt.gcf().set_size_inches(11, 8)",
"_____no_output_____"
],
[
"plt.plot(steps, acc_train, label='Training dataset')\nplt.plot(steps, acc_test, label='Testing dataset')\nplt.xlabel(\"iterations\")\nplt.ylabel(\"Accuracy (%)\")\nplt.legend()\nplt.gcf().set_size_inches(11, 8)",
"_____no_output_____"
]
],
[
[
"<div style=\"width: 100%; overflow: hidden;\">\n <img src=\"data/D4Sci_logo_full.png\" alt=\"Data For Science, Inc\" align=\"center\" border=\"0\" width=300px> \n</div>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecc522877638eac9fdcf19b144454f1e0e5ac5fe | 4,849 | ipynb | Jupyter Notebook | docs/notebooks/atomic/windows/lateral_movement/SDWIN-200806015757.ipynb | onesorzer0es/Security-Datasets | 6a0eec7d9a2ec6026c6ba239ad647c4f59d2a6ef | [
"MIT"
] | 294 | 2020-08-27T01:41:47.000Z | 2021-06-28T00:17:15.000Z | docs/notebooks/atomic/windows/lateral_movement/SDWIN-200806015757.ipynb | onesorzer0es/Security-Datasets | 6a0eec7d9a2ec6026c6ba239ad647c4f59d2a6ef | [
"MIT"
] | 18 | 2020-09-01T14:51:13.000Z | 2021-06-22T14:12:04.000Z | docs/notebooks/atomic/windows/lateral_movement/SDWIN-200806015757.ipynb | onesorzer0es/Security-Datasets | 6a0eec7d9a2ec6026c6ba239ad647c4f59d2a6ef | [
"MIT"
] | 48 | 2020-08-31T07:30:05.000Z | 2021-06-28T00:17:37.000Z | 25.521053 | 314 | 0.558672 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecc5280ab2956c9b5db7cc3f302ce6059e7240c2 | 36,924 | ipynb | Jupyter Notebook | Lessons/Filled-Out/4 - Think Python - Chapter 10.ipynb | Klaynie/LearnToCodeAsGroup | 162ac60ff0d1665c84174f055aa52e8413912077 | [
"BSD-3-Clause"
] | null | null | null | Lessons/Filled-Out/4 - Think Python - Chapter 10.ipynb | Klaynie/LearnToCodeAsGroup | 162ac60ff0d1665c84174f055aa52e8413912077 | [
"BSD-3-Clause"
] | null | null | null | Lessons/Filled-Out/4 - Think Python - Chapter 10.ipynb | Klaynie/LearnToCodeAsGroup | 162ac60ff0d1665c84174f055aa52e8413912077 | [
"BSD-3-Clause"
] | null | null | null | 21.861456 | 783 | 0.504956 | [
[
[
"# Chapter 10 - Lists\n\nThis chapter will introduce one of the most useful built-in type, lists. We will learn more about objects and what can happen when you have more than one name for the same object.\n\n<i>Note: Lists are similar to the `array` in other programming languages. Arrays usually can only hold data of the same type, so there are Integer-Arrays, Float-Arrays, String-Arrays and so on. In Python the `list` can contain a mix of different data-types</i>",
"_____no_output_____"
],
[
"***",
"_____no_output_____"
],
[
"## A list is a sequence\n\nLike a string, a <b>list</b> is a sequence of values. In a string, the values are characters; in a list, they can be any type. The values in a list are called <b>elements</b> or sometimes <b>items</b>.<br>\nThere are several ways to create a new list; the simplest is to enclose the elements in square brackets `[` and `]`:",
"_____no_output_____"
]
],
[
[
"[10, 20, 30, 40]",
"_____no_output_____"
],
[
"['crunchy frog', 'ram bladder', 'lark vomit']",
"_____no_output_____"
]
],
[
[
"A list within another list is <b>nested<b>.",
"_____no_output_____"
]
],
[
[
"['spam', 2.0, 5, [10, 20]]",
"_____no_output_____"
]
],
[
[
"A list that contains no elements is called an empty list; you can create one with empty brackets, `[]`.",
"_____no_output_____"
]
],
[
[
"[]",
"_____no_output_____"
]
],
[
[
"As you might expect, you can assign list values to variables:",
"_____no_output_____"
]
],
[
[
"cheeses = ['Cheddar', 'Edam', 'Gouda']\nnumbers = [42, 123]\nempty = []\n\nprint(cheeses, numbers, empty)",
"['Cheddar', 'Edam', 'Gouda'] [42, 123] []\n"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## Lists are mutable\n\nThe syntax for accessing the elements of a list is the same as for accessing the characters of a string — the bracket operator. The expression inside the brackets specifies the index.<br>\nRemember that the indices start at `0`:",
"_____no_output_____"
]
],
[
[
"cheeses[0]",
"_____no_output_____"
]
],
[
[
"Unlike strings, lists are mutable. When the bracket operator appears on the left side of an assignment, it identifies the element of the list that will be assigned.",
"_____no_output_____"
]
],
[
[
"numbers = [42, 123]\nnumbers[1] = 5\nnumbers",
"_____no_output_____"
]
],
[
[
"<img src=\"https://i.imgur.com/IvfhVLy.png\" title=\"State Diagram for lists\" />",
"_____no_output_____"
],
[
"The `in` operator also works on lists.",
"_____no_output_____"
]
],
[
[
"cheeses = ['Cheddar', 'Edam', 'Gouda']",
"_____no_output_____"
],
[
"'Edam' in cheeses",
"_____no_output_____"
],
[
"'Brie' in cheeses",
"_____no_output_____"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## Traversing a list\n\nThe most common way to traverse the elements of a list is with a for loop. The syntax is the same as for strings:",
"_____no_output_____"
]
],
[
[
"for cheese in cheeses:\n print(cheese)",
"Cheddar\nEdam\nGouda\n"
]
],
[
[
"This works well if you only need to read the elements of the list. But if you want to write or update the elements, you need the indices. A common way to do that is to combine the built-in functions `range` and `len`:",
"_____no_output_____"
]
],
[
[
"numbers = [42, 123]\n\nfor i in range(len(numbers)):\n numbers[i] = numbers[i] * 2\n\nnumbers",
"_____no_output_____"
]
],
[
[
"Although a list can contain another list, the nested list still counts as a single element. The length of this list is four:",
"_____no_output_____"
]
],
[
[
"some_list = ['spam', 1, ['Brie', 'Roquefort', 'Pol le Veq'], [1, 2, 3]]\nprint(len(some_list))",
"4\n"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## List operations\n\nThe `+` operator concatenates lists:",
"_____no_output_____"
]
],
[
[
"a = [1, 2, 3]\nb = [4, 5, 6]\nc = a + b\n\nc",
"_____no_output_____"
]
],
[
[
"The `*` operator repeats a list a given number of times:",
"_____no_output_____"
]
],
[
[
"[0] * 4",
"_____no_output_____"
],
[
"[1, 2, 3] * 3",
"_____no_output_____"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## List slices\n\nThe slice operator also works on lists:",
"_____no_output_____"
]
],
[
[
"t = ['a', 'b', 'c', 'd', 'e', 'f']\n\nt[1:3]",
"_____no_output_____"
],
[
"t[:4]",
"_____no_output_____"
],
[
"t[3:]",
"_____no_output_____"
]
],
[
[
"If you omit the first index, the slice starts at the beginning. If you omit the second, the slice goes to the end. So if you omit both, the slice is a copy of the whole list.",
"_____no_output_____"
]
],
[
[
"t[:]",
"_____no_output_____"
]
],
[
[
"Since lists are mutable, it is often useful to make a copy before performing operations that\nmodify lists.<br>\n<br>\nA slice operator on the left side of an assignment can update multiple elements:",
"_____no_output_____"
]
],
[
[
"t = ['a', 'b', 'c', 'd', 'e', 'f']\nt[1:3] = ['x', 'y']\nt",
"_____no_output_____"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## List methods\n\nPython provides methods that operate on lists. For example, `append` adds a new element to the end of a list:",
"_____no_output_____"
]
],
[
[
"t = ['a', 'b', 'c']\nt.append('d')\nt",
"_____no_output_____"
]
],
[
[
"`extend` takes a list as an argument and appends all of the elements:",
"_____no_output_____"
]
],
[
[
"t1 = ['a', 'b', 'c']\nt2 = ['d', 'e']\nt1.extend(t2)\nt1",
"_____no_output_____"
]
],
[
[
"This example leaves t2 unmodified.<br>\n<br>\n`sort` arranges the elements of the list from low to high:",
"_____no_output_____"
]
],
[
[
"t = ['d', 'c', 'e', 'b', 'a']\nt.sort()\nt",
"_____no_output_____"
]
],
[
[
"Most list methods are <b>void</b>; they modify the list and return `None`. If you accidentally write `t = t.sort()`, you will be disappointed with the result.",
"_____no_output_____"
]
],
[
[
"t = ['d', 'c', 'e', 'b', 'a']\nt = t.sort()\nprint(t)\ntype(t)",
"None\n"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## Map, filter and reduce\nTo add up all the numbers in a list, you can use a loop like this:",
"_____no_output_____"
]
],
[
[
"def add_all(t):\n total = 0\n for x in t:\n total += x\n return total",
"_____no_output_____"
]
],
[
[
"`total` is initialized to `0`. Each time through the loop, `x` gets one element from the list. The `+=` operator provides a short way to update a variable.",
"_____no_output_____"
],
[
"Adding up the elements of a list is such a common operation that Python provides it as a built-in function, `sum`:",
"_____no_output_____"
]
],
[
[
"t = [1, 2, 3]\nsum(t)",
"_____no_output_____"
]
],
[
[
"An operation like this that combines a sequence of elements into a single value is sometimes called <b>reduce</b>.",
"_____no_output_____"
],
[
"Sometimes you want to traverse one list while building another. For example, the following function takes a list of strings and returns a new list that contains capitalized strings:",
"_____no_output_____"
]
],
[
[
"def capitalize_all(t):\n res = []\n for s in t:\n res.append(s.capitalize())\n return res",
"_____no_output_____"
]
],
[
[
"An operation like `capitalize_all` is sometimes called a <b>map</b> because it \"maps\" a function (in this case the method `capitalize`) onto each of the elements in a sequence.",
"_____no_output_____"
],
[
"Another common operation is to select some of the elements from a list and return a sublist. For example, the following function takes a list of strings and returns a list that contains only the uppercase strings:",
"_____no_output_____"
]
],
[
[
"def only_upper(t):\n res = []\n for s in t:\n if s.isupper():\n res.append(s)\n return res",
"_____no_output_____"
]
],
[
[
"`isupper` is a string method that returns `True` if the string contains only upper case letters.<br>\n<br>\nAn operation like `only_upper` is called a <b>filter</b> because it selects some of the elements and filters out the others.<br>\n<br>\nMost common list operations can be expressed as a combination of map, filter and reduce.",
"_____no_output_____"
],
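[
"The patterns above are so common that Python also provides built-in `map`, `filter` and `sum` functions. The next cell is only a small illustration of those built-ins; the example lists `words` and `numbers` are made up for this sketch and are not used elsewhere in the chapter.",
"_____no_output_____"
],
[
"words = ['spam', 'EGGS', 'Brie', 'NI']\nnumbers = [1, 2, 3]\n\n# map applies a function to every element of a sequence\ncapitalized = list(map(str.capitalize, words))\n\n# filter keeps only the elements for which the function returns True\nuppercase_only = list(filter(str.isupper, words))\n\n# sum reduces a sequence of numbers to a single value\ntotal = sum(numbers)\n\nprint(capitalized)\nprint(uppercase_only)\nprint(total)",
"_____no_output_____"
],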
[
"***",
"_____no_output_____"
],
[
"## Deleting elements\nThere are several ways to delete elements from a list. If you know the index of the element you want, you can use `pop`:",
"_____no_output_____"
]
],
[
[
"t = ['a', 'b', 'c']\nx = t.pop(1)",
"_____no_output_____"
],
[
"t",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
]
],
[
[
"`pop` modifies the list and returns the element that was removed. If you do not provide an index, it deletes and returns the last element.<br>\n<br>\nIf you do not need the removed value, you can use the `del` operator:",
"_____no_output_____"
]
],
[
[
"t = ['a', 'b', 'c']\ndel t[1]\nt",
"_____no_output_____"
]
],
[
[
"If you know the element you want to remove (but not the index), you can use `remove`:",
"_____no_output_____"
]
],
[
[
"t = ['a', 'b', 'c']\nt.remove('b')\nt",
"_____no_output_____"
]
],
[
[
"The return value from `remove` is `None`.",
"_____no_output_____"
],
[
"To remove more than one element, you can use `del` with a slice index:",
"_____no_output_____"
]
],
[
[
"t = ['a', 'b', 'c', 'd', 'e', 'f']\ndel t[1:5]\nt",
"_____no_output_____"
]
],
[
[
"As usual, the slice selects all the elements up to but not including the second index.",
"_____no_output_____"
],
[
"***",
"_____no_output_____"
],
[
"## Lists and strings\nA string is a sequence of characters and a list is a sequence of values, but a list of characters\nis not the same as a string. To convert from a string to a list of characters, you can use `list`:",
"_____no_output_____"
]
],
[
[
"s = 'spam'\nt = list(s)\nt",
"_____no_output_____"
]
],
[
[
"Because `list` is the name of a built-in function, you should avoid using it as a variable name. I also avoid `l` because it looks too much like `1`. So that is why I use `t`.<br>\n<br>\nThe `list` function breaks a string into individual letters. If you want to break a string into words, you can use the `split` method:",
"_____no_output_____"
]
],
[
[
"s = 'pining for the fjords'\nt = s.split()\nt",
"_____no_output_____"
]
],
[
[
"An optional argument called a <b>delimiter</b> specifies which characters to use as word boundaries. The following example uses a hyphen as a delimiter:",
"_____no_output_____"
]
],
[
[
"s = 'spam-spam-spam'\ndelimiter = '-'\nt = s.split(delimiter)\nt",
"_____no_output_____"
]
],
[
[
"`join` is the inverse of `split`. It takes a list of strings and concatenates the elements. `join` is a string method, so you have to invoke it on the delimiter and pass the list as a parameter:",
"_____no_output_____"
]
],
[
[
"t = ['pining', 'for', 'the', 'fjords']\ndelimiter = ' '\ns = delimiter.join(t)\ns",
"_____no_output_____"
]
],
[
[
"In this case the delimiter is a space character, so `join` puts a space between words. To concatenate strings without spaces, you can use the empty string, `''`, as a delimiter.",
"_____no_output_____"
]
],
[
[
"t = ['pining', 'for', 'the', 'fjords']\nempty_string = ''\ns = empty_string.join(t)\ns",
"_____no_output_____"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## Objects and values\nIf we run these assignment statements:",
"_____no_output_____"
]
],
[
[
"a = 'banana'\nb = 'banana'",
"_____no_output_____"
]
],
[
[
"We know that `a` and `b` both refer to a string, but we do not know whether they refer to the <i>same</i> string. There are two possible states, as shown below:<br>\n<img src=\"https://i.imgur.com/6Yxszvm.png\" title=\"Possible Object references\" /><br>\nIn one case, `a` and `b` refer to two different objects that have the same value. In the second case, they refer to the same object.<br>\nTo check whether two variables refer to the same object, you can use the `is` operator.",
"_____no_output_____"
]
],
[
[
"a = 'banana'\nb = 'banana'\na is b",
"_____no_output_____"
]
],
[
[
"In this example, Python only created one string object, and both `a` and `b` refer to it. But when you create two lists, you get two objects:",
"_____no_output_____"
]
],
[
[
"a = [1, 2, 3]\nb = [1, 2, 3]\na is b",
"_____no_output_____"
]
],
[
[
"So the state diagram looks like this:<br>\n<img src=\"https://i.imgur.com/v34Mq9K.png\" title=\"Different objects\" /><br>\nIn this case we would say that the two lists are <b>equivalent</b>, because they have the same elements, but not identical, because they are not the same object. If two objects are identical, they are also equivalent, but if they are equivalent, they are not necessarily identical.<br>\nUntil now, we have been using \"object\" and \"value\" interchangeably, but it is more precise to say that an object has a value. If you evaluate `[1, 2, 3]`, you get a list object whose value is a sequence of integers. If another list has the same elements, we say it has the same value, but it is not the same object.",
"_____no_output_____"
],
[
"***",
"_____no_output_____"
],
[
"## Aliasing\nIf a refers to an object and you assign `b = a`, then both variables refer to the same object:",
"_____no_output_____"
]
],
[
[
"a = [1, 2, 3]\nb = a\nb is a",
"_____no_output_____"
]
],
[
[
"The state diagram looks like this:<br>\n<img src=\"https://i.imgur.com/Jh79tT4.png\" title=\"Same object\" /><br>\nThe association of a variable with an object is called a <b>reference</b>. In this example, there are two references to the same object.<br>\nAn object with more than one reference has more than one name, so we say that the object is <b>aliased</b>.<br>\nIf the aliased object is mutable, changes made with one alias affect the other:",
"_____no_output_____"
]
],
[
[
"b[0] = 42\na\n[42, 2, 3]",
"_____no_output_____"
]
],
[
[
"Although this behavior can be useful, it is error-prone. In general, it is safer to avoid aliasing when you are working with mutable objects.<br>\n<br>\nFor immutable objects like strings, aliasing is not as much of a problem. In this example:",
"_____no_output_____"
]
],
[
[
"a = 'banana'\nb = 'banana'",
"_____no_output_____"
]
],
[
[
"It almost never makes a difference whether a and b refer to the same string or not.",
"_____no_output_____"
],
[
"***",
"_____no_output_____"
],
[
"## List arguments\nWhen you pass a list to a function, the function gets a reference to the list. If the function modifies the list, the caller sees the change. For example,`delete_head` removes the firstelement from a list:",
"_____no_output_____"
]
],
[
[
"def delete_head(t):\n del t[0]",
"_____no_output_____"
],
[
"letters = ['a', 'b', 'c']\ndelete_head(letters)\nletters",
"_____no_output_____"
]
],
[
[
"The parameter `t` and the variable letters are aliases for the same object. The stack diagram looks like this:<br>\n<img src=\"https://i.imgur.com/tRzIBnA.png\" title=\"source: imgur.com\" /><br>\nSince the list is shared by two frames, I drew it between them.<br>\nIt is important to distinguish between operations that modify lists and operations that create\nnew lists. For example, the `append` method modifies a list, but the `+` operator creates a\nnew list.<br>\nHere is an example using append:",
"_____no_output_____"
]
],
[
[
"t1 = [1, 2]\nt2 = t1.append(3)",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"print(t2)",
"None\n"
]
],
[
[
"The return value from `append` is `None`.<br>\n<br>\nHere is an example using the `+` operator:",
"_____no_output_____"
]
],
[
[
"t3 = t1 + [4]",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"t3",
"_____no_output_____"
]
],
[
[
"The result of the operator is a new list, and the original list is unchanged.<br>\n<br>\nThis difference is important when you write functions that are supposed to modify lists. For example, this function <i>does not</i> delete the head of a list:",
"_____no_output_____"
]
],
[
[
"def bad_delete_head(t):\n t = t[1:] # WRONG!",
"_____no_output_____"
]
],
[
[
"The slice operator creates a new list and the assignment makes `t` refer to it, but that does not affect the caller.",
"_____no_output_____"
]
],
[
[
"t4 = [1, 2, 3]\nbad_delete_head(t4)\nt4",
"_____no_output_____"
]
],
[
[
"At the beginning of `bad_delete_head`, `t` and `t4` refer to the same list. At the end, `t` refers to a new list, but `t4` still refers to the original, unmodified list.<br>\n<br>\nAn alternative is to write a function that creates and returns a new list. For example, `tail` returns all but the first element of a list:",
"_____no_output_____"
]
],
[
[
"def tail(t):\n return t[1:]",
"_____no_output_____"
]
],
[
[
"This function leaves the original list unmodified. Here is how it is used:",
"_____no_output_____"
]
],
[
[
"letters = ['a', 'b', 'c']\nrest = tail(letters)\nrest",
"_____no_output_____"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## Debugging\nCareless use of lists (and other mutable objects) can lead to long hours of debugging. Here are some common pitfalls and ways to avoid them:\n1. Most list methods modify the argument and return `None`. This is the opposite of the string methods, which return a new string and leave the original alone. <br> If you are used to writing string code like this:<br><br>`word = word.strip()`<br><br>It is tempting to write list code like this:<br><br>`t = t.sort() # WRONG!`<br><br>Because sort returns `None`, the next operation you perform with `t` is likely to fail.<br>Before using list methods and operators, you should read the documentation carefully and then test them in interactive mode.\n<br><br>\n2. Pick an idiom and stick with it.<br> Part of the problem with lists is that there are too many ways to do things. For example, to remove an element from a list, you can use `pop`, `remove`, `del`, or even a slice assignment.<br>To add an element, you can use the `append` method or the `+` operator. Assuming that `t` is a list and `x` is a list element, these are correct:<br><br>`t.append(x)`<br>`t = t + [x]`<br>`t += [x]`<br><br>And these are wrong:<br><br>`t.append([x]) # WRONG!`<br>`t = t.append(x) # WRONG!`<br>`t + [x] # WRONG!`<br>`t = t + x # WRONG!`<br><br>Try out each of these examples in interactive mode to make sure you understand what they do. Notice that only the last one causes a runtime error; the other three are legal, but they do the wrong thing.\n<br><br>\n3. Make copies to avoid aliasing.<br>If you want to use a method like `sort` that modifies the argument, but you need to keep the original list as well, you can make a copy.<br><br>`>>> t = [3, 1, 2]`<br>`>>> t2 = t[:]`<br>`>>> t2.sort()`<br>`>>> t`<br>`[3, 1, 2]`<br>`>>> t2`<br>`[1, 2, 3]`<br><br>In this example you could also use the built-in function `sorted`, which returns a new, sorted list and leaves the original alone.<br><br>`>>> t2 = sorted(t)`<br>`>>> t`<br>`[3, 1, 2]`<br>`>>> t2`<br>`[1, 2, 3]`",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecc52a5a1396624d1ee168182d0e8f121301607f | 19,325 | ipynb | Jupyter Notebook | Visual Analysis Global Military Spending/Cleaning For Tableau.ipynb | Sanca94/Python | cd05e9cb8238a5eedc8aaff8c5f4ab5d80b9b05b | [
"MIT"
] | 1 | 2021-12-04T18:36:33.000Z | 2021-12-04T18:36:33.000Z | Visual Analysis Global Military Spending/Cleaning For Tableau.ipynb | Sanca94/Python | cd05e9cb8238a5eedc8aaff8c5f4ab5d80b9b05b | [
"MIT"
] | null | null | null | Visual Analysis Global Military Spending/Cleaning For Tableau.ipynb | Sanca94/Python | cd05e9cb8238a5eedc8aaff8c5f4ab5d80b9b05b | [
"MIT"
] | null | null | null | 33.550347 | 92 | 0.346546 | [
[
[
"import numpy as np\nimport pandas as pd \nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"df = pd.read_csv('Military Expenditure.csv')",
"_____no_output_____"
],
[
"df = df.drop(['Indicator Name'], axis=1)",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"Countires = df[df['Type']=='Country'] # isolates the data points to countires\nCountires = Countires.drop(['Code', 'Type'], axis=1) # drops code/type\nCountires = Countires.set_index('Name')\nCountires.index = Countires.index.rename('Year')\nCountires = Countires.dropna(axis=0, how='all') # if a column is all null drop\nCountires.head()",
"_____no_output_____"
],
[
"Countires = Countires.T # the countries are now columns\nCountires = Countires.fillna(Countires.mean())\nCountires.head()",
"_____no_output_____"
],
[
"#Countires.to_csv(\"Military_Tab.csv\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc5337ee68d0f14da9bc5c4ced8e5692f01d64d | 394,082 | ipynb | Jupyter Notebook | VectorSearch.ipynb | jobergum/notebooks | 7b5532ed7efbf671898a952a03f67bda2beaa2d3 | [
"Apache-2.0"
] | 4 | 2020-01-29T16:21:45.000Z | 2022-01-19T10:34:29.000Z | VectorSearch.ipynb | jobergum/notebooks | 7b5532ed7efbf671898a952a03f67bda2beaa2d3 | [
"Apache-2.0"
] | null | null | null | VectorSearch.ipynb | jobergum/notebooks | 7b5532ed7efbf671898a952a03f67bda2beaa2d3 | [
"Apache-2.0"
] | 2 | 2021-02-15T22:05:20.000Z | 2021-06-15T22:14:27.000Z | 231.676661 | 21,325 | 0.916538 | [
[
[
"# Real World Vector Search using Vespa.ai \n\nIn this notebook we give an introduction to how to use \n[Vespa's approximate nearest neighbor search](https://docs.vespa.ai/en/approximate-nn-hnsw.html) support \nand how to use it for a real world vector use case. Given an image of a product we want to find\nsimilar products, but limited by filters (e.g inventory status, price or other real world constraints).\n\n\nWe'll be using the \n[Amazon Products dataset](http://jmcauley.ucsd.edu/data/amazon/links.html). \n\n- The dataset product information like title, description, price and we show how to map the data into the Vespa document schema model \n- The dataset also contains binary feature vectors for images, features produced by a CNN model. We'll use these image vector and show you how to represent vector data in Vespa. \n\n\nWe also look at [real world challenges with using nearest neighbor search](https://blog.vespa.ai/using-approximate-nearest-neighbor-search-in-real-world-applications/) \n\n- Combining the nearest neighbor search with filters, for example on inventory status \n- Real time indexing of vectors and update documents with vectors \n- True partial updates to update inventory status at scale \n\nSince we use the Amazon Products dataset we also recommend:\n\n- [Vespa.ai Use case Shopping Search](https://docs.vespa.ai/en/use-case-shopping.html)\n- [E-commerce search and recommendation with Vespa.ai ](https://blog.vespa.ai/e-commerce-search-and-recommendation-with-vespaai/)\n- [Approximate nearest neighbor search in Vespa](https://blog.vespa.ai/approximate-nearest-neighbor-search-in-vespa-part-1/)\n",
"_____no_output_____"
],
[
"## Dependencies and requirements\n\n- This notebook assumes some familiarity with python and docker installed. The user which runs the notebook must be part of the docker user group when run on a Linux system, e.g Centos 7. [Docker Linux instructions](https://docs.docker.com/engine/install/linux-postinstall/) \n- This notebook was run on a c5.xlarge with 4 v-cpu and 8 GB memory on AWS EC2, using the Centos 7 image. \n\n<pre>\nsudo yum -y install docker python3 tmux wget\nsudo python3 -m pip install -U pip\npip3 install --user torch pyvespa pyvespa[ml] jupyterlab\n\n# Follow instructions on https://docs.docker.com/engine/install/linux-postinstall/\nsudo groupadd docker; sudo usermod -aG docker $USER; newgrp docker \n\nsudo service docker stop\nsudo service docker start\ntmux new -s my_session\njupyter notebook\n</pre>\n\nFirst let us install [pyvespa](https://pyvespa.readthedocs.io/en/latest/) which is a simple python api built on top of Vespa's native HTTP apis.\n\nThe python api is not meant as a production ready api but an api to explore features in Vespa and also for training ML models which can be deployed to Vespa for serving. \n",
"_____no_output_____"
]
],
[
[
"!pip3 install -U pyvespa pyvespa[ml]",
"_____no_output_____"
]
],
[
[
"Let us download the demo data from the Amazon Products dataset, we use the *Amazon_Fashion* subset. \n\nThis notebook was inspired by [this notebook](https://github.com/alexklibisz/elastiknn/blob/master/examples/tutorial-notebooks/multimodal-search-amazon-products.ipynb) so we'll reuse some of the utils from it to process the dataset. ",
"_____no_output_____"
]
],
[
[
"!wget -nc https://raw.githubusercontent.com/alexklibisz/elastiknn/master/examples/tutorial-notebooks/amazonutils.py\n!wget -nc http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Amazon_Fashion.json.gz\n!wget -nc http://snap.stanford.edu/data/amazon/productGraph/image_features/categoryFiles/image_features_Amazon_Fashion.b",
"_____no_output_____"
],
[
"from amazonutils import *\nfrom pprint import pprint",
"_____no_output_____"
]
],
[
[
"Let us have a look at a selected slice of the dataset:",
"_____no_output_____"
]
],
[
[
"for p in islice(iter_products('meta_Amazon_Fashion.json.gz'), 221,223):\n pprint(p)\n display(Image(p['imUrl'], width=128, height=128))",
"{'asin': 'B0001KHRKU',\n 'categories': [['Amazon Fashion']],\n 'imUrl': 'http://ecx.images-amazon.com/images/I/31J78QMEKDL.jpg',\n 'salesRank': {'Watches': 220635},\n 'title': 'Midas Remote Control Watch'}\n"
]
],
[
[
"## Build basic search functionality ",
"_____no_output_____"
],
[
"A Vespa instance is described by a [Vespa application package](https://docs.vespa.ai/en/cloudconfig/application-packages.html). Let us create an application called product and define our [document schema](https://docs.vespa.ai/en/schemas.html). We use pyvespa to define our application. This is not a requirement for creating a Vespa application but we use the api since after all we are in a notebook :). \n",
"_____no_output_____"
]
],
[
[
"from vespa.package import ApplicationPackage\napp_package = ApplicationPackage(name = \"product\")",
"_____no_output_____"
],
[
"from vespa.package import Field\napp_package.schema.add_fields( \n Field(name = \"asin\", type = \"string\", indexing = [\"attribute\", \"summary\"]),\n Field(name = \"title\", type = \"string\", indexing = [\"index\", \"summary\"], index = \"enable-bm25\"),\n Field(name = \"description\", type = \"string\", indexing = [\"index\", \"summary\"], index = \"enable-bm25\"),\n Field(name = \"price\", type = \"float\", indexing = [\"attribute\", \"summary\"]),\n Field(name = \"salesRank\", type = \"weightedset<string>\", indexing = [\"summary\",\"attribute\"]),\n Field(name = \"imUrl\", type = \"string\", indexing = [\"summary\"])\n)",
"_____no_output_____"
]
],
[
[
"We define a fieldset which is a way to combine matching over multiple fields. We chose to only match queries over the *title* and *description* field. ",
"_____no_output_____"
]
],
[
[
"from vespa.package import FieldSet\napp_package.schema.add_field_set(\n FieldSet(name = \"default\", fields = [\"title\", \"description\"])\n)",
"_____no_output_____"
]
],
[
[
"Then define a simple [ranking](https://docs.vespa.ai/en/ranking.html) function which uses a linear combination of the [bm25](https://docs.vespa.ai/en/reference/bm25.html) text ranking feature over our two free text string fields. ",
"_____no_output_____"
]
],
[
[
"from vespa.package import RankProfile\napp_package.schema.add_rank_profile(\n RankProfile(\n name = \"bm25\", \n first_phase = \"0.9*bm25(title) + 0.2*bm25(description)\")\n)",
"_____no_output_____"
]
],
[
[
"So let us deploy this application. We use docker in this example. See also [Vespa quick start](https://docs.vespa.ai/en/vespa-quick-start.html)",
"_____no_output_____"
]
],
[
[
"from vespa.package import VespaDocker\nvespa_docker = VespaDocker(port=8080)\n\napp = vespa_docker.deploy(\n application_package = app_package,\n disk_folder=\"/home/centos/product_search\" # include the desired absolute path here\n)",
"Waiting for configuration server.\nWaiting for configuration server.\nWaiting for configuration server.\nWaiting for configuration server.\nWaiting for configuration server.\nWaiting for application status.\nWaiting for application status.\nFinished deployment.\n"
]
],
[
[
"Pyvespa does expose a feed api, but in this notebook we use the raw [Vespa http /document/v1 feed api](https://docs.vespa.ai/en/document-v1-api-guide.html).\n\nThe HTTP document api is synchronous and the operation is visible in search when acked with a response code 200. In this case the feed throughput is limited by the client as we are posting one document at a time. For high throughput use cases use the asynchronous feed api, or use more client threads with the synchronous api. \n",
"_____no_output_____"
]
],
[
[
"import requests \nsession = requests.Session()\n\ndef index_document(product):\n asin = product['asin']\n doc = {\n \"fields\": {\n \"asin\": asin,\n \"title\": product.get(\"title\",None),\n \"description\": product.get('description',None),\n \"price\": product.get(\"price\",None),\n \"imUrl\": product.get(\"imUrl\",None),\n \"salesRank\": product.get(\"salesRank\",None) \n }\n }\n resource = \"http://localhost:8080/document/v1/demo/product/docid/{}\".format(asin)\n request_response = session.post(resource,json=doc)\n request_response.raise_for_status()",
"_____no_output_____"
]
],
[
[
"With our routine defined we can iterate over the data and index the documents:",
"_____no_output_____"
]
],
[
[
"from tqdm import tqdm\nfor product in tqdm(iter_products(\"meta_Amazon_Fashion.json.gz\")):\n index_document(product)",
"24145it [01:46, 226.40it/s]\n"
]
],
[
[
"So we have our index ready, no need to perform any additional index maintainance operation. All the data is searchable. Let us define a simple routine to display search results. Parsing the [Vespa JSON search response](https://docs.vespa.ai/en/reference/default-result-format.html)format: ",
"_____no_output_____"
]
],
[
[
"def display_hits(res, ranking):\n time = 1000*res['timing']['searchtime'] #convert to ms\n totalCount = res['root']['fields']['totalCount']\n print(\"Found {} hits in {:.2f} ms.\".format(totalCount,time))\n print(\"Showing top {}, ranked by {}\".format(len(res['root']['children']),ranking))\n print(\"\")\n for hit in res['root']['children']:\n fields = hit['fields']\n print(\"{}\".format(fields.get('title', None)))\n display(Image(fields.get(\"imUrl\"), width=128, height=128)) \n print(\"documentid: {}\".format(fields.get('documentid')))\n if 'inventory' in fields:\n print(\"Inventory: {}\".format(fields.get('inventory')))\n print(\"asin: {}\".format(fields.get('asin')))\n if 'price' in fields:\n print(\"price: {}\".format(fields.get('price',None)))\n if 'priceRank' in fields:\n print(\"priceRank: {}\".format(fields.get('priceRank',None))) \n print(\"relevance score: {:.2f}\".format(hit.get('relevance')))\n print(\"\")\n",
"_____no_output_____"
]
],
[
[
"## Query our product data\n\nWe use the [Vespa HTTP Search API](https://docs.vespa.ai/en/query-api.html#http) to search our product index. \n\nIn this example we assume there is a user query 'mens wrist watch' which we use as input to the \n[YQL](https://docs.vespa.ai/en/query-language.html) query language. Vespa allows combining the structured application logic expressed by YQL with a user query language called [Vespa simple query language](https://docs.vespa.ai/en/reference/simple-query-language-reference.html). \n\nIn this case we use *type=any* so matching any of our 3 terms is enough to retrieve the document. \nIn the YQL statement we select the fields we want to return. Only fields which are marked as *summary* in the schema can be returned with the hit result.\n\nWe don't mention which fields we want to search so Vespa uses the fieldset defined earlier called default which will search both the title and the description fields.",
"_____no_output_____"
]
],
[
[
"query = {\n 'yql': 'select documentid, asin,title,imUrl,price from sources * where userQuery();',\n 'query': 'mens wrist watch',\n 'ranking': 'bm25',\n 'type': 'any',\n 'presentation.timing': True,\n 'hits': 2\n}\ndisplay_hits(app.query(body=query).json, \"bm25\")",
"Found 3285 hits in 4.00 ms.\nShowing top 2, ranked by bm25\n\nGeekbuying 814 Analog Alloy Quartz Men's Wrist Watch - Black (White)\n"
]
],
[
[
"So there we have our basic search functionality. Note that ranking by bm25 for e-commerce data not give the best results as bm25 treats text as a bag of words. ",
"_____no_output_____"
],
[
"## Similarity search using image vector data\nNow we have basic search functionality up and running, but the Amazon Product dataset also inclues image features which we can also index in Vespa and use approximate nearest neighbor search to search efficiently. Let us load the image feature data. We reduce the vector dimensionality to something more practical and use 256 dimensions. ",
"_____no_output_____"
]
],
[
[
"vectors = []\nreduced = iter_vectors_reduced(\"image_features_Amazon_Fashion.b\", 256, 1000)\nfor asin,v in tqdm(reduced(\"image_features_Amazon_Fashion.b\")):\n vectors.append((asin,v))",
"22929it [00:04, 4739.67it/s]\n"
]
],
[
[
"We need to re-configure our application to add our image vector field. \nWe also define a *HNSW* index for it and using *angular* as our [distance metric](https://docs.vespa.ai/documentation/reference/schema-reference.html#distance-metric). \n\nWe also need to define the input query vector in the application package. Without definining our query input tensor we won't be able to perform our nearest neighbor search so make sure you remember to include that.\n\nMost changes like adding or remove a field is a [live change](https://docs.vespa.ai/en/reference/schema-reference.html#modifying-schemas) in Vespa, no need to re-index the data. \n",
"_____no_output_____"
]
],
[
[
"from vespa.package import HNSW\napp_package.schema.add_fields(\n Field(name = \"image_vector\", \n type = \"tensor<float>(x[256])\", \n indexing = [\"attribute\",\"index\"],\n ann=HNSW(\n distance_metric=\"angular\",\n max_links_per_node=16,\n neighbors_to_explore_at_insert=200)\n )\n)\nfrom vespa.package import QueryTypeField\napp_package.query_profile_type.add_fields(\n QueryTypeField(name=\"ranking.features.query(query_image_vector)\", type=\"tensor<float>(x[256])\")\n)",
"_____no_output_____"
]
],
[
[
"We also need to define a ranking profile on how we want to score our documents. We use the *closeness* [ranking feature](https://docs.vespa.ai/en/reference/rank-features.html). Note that it's also possible to retrieve results using approximate nearest neighbor search operator and use the first phase ranking function as a re-ranking stage (e.g by sales popularity etc). \n",
"_____no_output_____"
]
],
[
[
"app_package.schema.add_rank_profile(\n RankProfile(\n name = \"vector_similarity\", \n first_phase = \"closeness(field,image_vector)\")\n)",
"_____no_output_____"
]
],
[
[
"Now, we need to re-deploy our application package to make the changes effective.",
"_____no_output_____"
]
],
[
[
"app = vespa_docker.deploy(\n application_package = app_package,\n disk_folder=\"/home/centos/product_search\" # include the desired absolute path here\n)",
"Finished deployment.\n"
]
],
[
[
"Now we are ready to feed and index the image vectors. \n\nWe update the documents in the index by running partial update operations, adding the vectors using real time updates of the existing documents. partially updating a tensor field, with or without tensor does not trigger re-indexing.",
"_____no_output_____"
]
],
[
[
"for asin,vector in tqdm(vectors):\n update_doc = {\n \"fields\": {\n \"image_vector\": {\n \"assign\": {\n \"values\": vector\n }\n }\n }\n }\n url = \"http://localhost:8080/document/v1/demo/product/docid/{}\".format(asin)\n response = session.put(url, json=update_doc)",
"100%|██████████| 22929/22929 [01:40<00:00, 228.94it/s]\n"
]
],
[
[
"We now want to get similar products using the image feature data and we do so by first fetching the \nvector of the product we want to find similar products for and use this vector as input to the nearest neighbor search operator of Vespa. First we define a simple get vector utility to fetch the vector of a given product *asin*. ",
"_____no_output_____"
]
],
[
[
"def get_vector(asin):\n resource = \"http://localhost:8080/document/v1/demo/product/docid/{}\".format(asin)\n response = session.get(resource)\n response.raise_for_status()\n document = response.json()\n \n cells = document['fields']['image_vector']['cells']\n vector = {}\n for i,cell in enumerate(cells):\n v = cell['value']\n adress = cell['address']['x']\n vector[int(adress)] = v\n values = []\n for i in range(0,256):\n values.append(vector[i])\n return values",
"_____no_output_____"
]
],
[
[
"Let us repeat the query from above to find an image to find similar products for",
"_____no_output_____"
]
],
[
[
"query = {\n 'yql': 'select documentid, asin,title,imUrl,price from sources * where userQuery();',\n 'query': 'mens wrist watch',\n 'ranking': 'bm25',\n 'type': 'any',\n 'presentation.timing': True,\n 'hits': 1\n}\ndisplay_hits(app.query(body=query).json, \"bm25\")",
"Found 3285 hits in 4.00 ms.\nShowing top 1, ranked by bm25\n\nGeekbuying 814 Analog Alloy Quartz Men's Wrist Watch - Black (White)\n"
]
],
[
[
"Let us search for similar images using **exact** nearest neighbor search. We ask for 3 most similar to the product image with asin id **B00GLP1GTW**",
"_____no_output_____"
]
],
[
[
"query = {\n 'yql': 'select documentid, asin,title,imUrl,description,price from sources * where \\\n ([{\"targetHits\":3,\"approximate\":false}]nearestNeighbor(image_vector,query_image_vector));',\n 'ranking': 'vector_similarity',\n 'hits': 3, \n 'presentation.timing': True,\n 'ranking.features.query(query_image_vector)': get_vector('B00GLP1GTW')\n}\ndisplay_hits(app.query(body=query).json, \"vector_similarity\")",
"Found 46 hits in 10.00 ms.\nShowing top 3, ranked by vector_similarity\n\nGeekbuying 814 Analog Alloy Quartz Men's Wrist Watch - Black (White)\n"
]
],
[
[
"Let us repeat the same query but this time using the much faster approximate version. When there is a HNSW index on the tensor the default behavior is to use approximate:true so we remove the approximation flag. ",
"_____no_output_____"
]
],
[
[
"query = {\n 'yql': 'select documentid, asin,title,imUrl,description,price from sources * where \\\n ([{\"targetHits\":3}]nearestNeighbor(image_vector,query_image_vector));',\n 'ranking': 'vector_similarity',\n 'hits': 3, \n 'presentation.timing': True,\n 'ranking.features.query(query_image_vector)': get_vector('B00GLP1GTW')\n}\ndisplay_hits(app.query(body=query).json, \"vector_similarity\")",
"Found 3 hits in 6.00 ms.\nShowing top 3, ranked by vector_similarity\n\nGeekbuying 814 Analog Alloy Quartz Men's Wrist Watch - Black (White)\n"
]
],
[
[
"## Combining nearest neighbor search with filters",
"_____no_output_____"
],
[
"If we look at the results for the above exact and approximate nearest neighbor searches we got the same results using the approximate version (perfect recall). But naturally the first listed product was the same product that we used as input and the closeness score was 1.0 simple because the angular distance is 0. Since the user is already presented with the product we want to remove it from the the result and we can do that by combining the search for nearest neighbors with a filter, expressed by the YQL query language using **and**.",
"_____no_output_____"
]
],
[
[
"query = {\n 'yql': 'select documentid, asin,title,imUrl,description,price from sources * where \\\n ([{\"targetHits\":3}]nearestNeighbor(image_vector,query_image_vector)) and \\\n !(asin contains \"B00GLP1GTW\");',\n 'ranking': 'vector_similarity',\n 'hits': 3, \n 'presentation.timing': True,\n 'ranking.features.query(query_image_vector)': get_vector('B00GLP1GTW')\n}\ndisplay_hits(app.query(body=query).json, \"vector_similarity\")",
"Found 3 hits in 5.00 ms.\nShowing top 3, ranked by vector_similarity\n\nAvalon EZC Unisex Low-Vision Silver-Tone Flex Bracelet One-Button Talking Watch, # 2609-1B\n"
]
],
[
[
"That is better. The original product is removed from the list of similar products.\n\nIf we want to add a price filter we can do that to. In the below example we filter also by price, to limit the search for nearest neighbors by a price filter.\n\nWe still ask for the 3 nearest neighbors. We could do so automatically or giving the user a choice of price ranges using [Vespa's grouping and aggregation support](https://docs.vespa.ai/en/grouping.html). ",
"_____no_output_____"
]
],
[
[
"query = {\n 'yql': 'select documentid, asin,title,imUrl,description,price from sources * where \\\n ([{\"targetHits\":3}]nearestNeighbor(image_vector,query_image_vector)) and \\\n !(asin contains \"B00GLP1GTW\") and \\\n price > 100;',\n 'ranking': 'vector_similarity',\n 'hits': 3, \n 'presentation.timing': True,\n 'ranking.features.query(query_image_vector)': get_vector('B00GLP1GTW')\n}\ndisplay_hits(app.query(body=query).json, \"vector_similarity\")",
"Found 19 hits in 7.00 ms.\nShowing top 3, ranked by vector_similarity\n\nHamilton Men's H64455133 Khaki King II Black Dial Watch\n"
]
],
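[
[
"As mentioned above, the price ranges could come from Vespa's grouping and aggregation support. The next cell is an illustrative sketch only, not part of the original walkthrough: it buckets the text matches into fixed-width price bands and counts the hits per band. The `fixedwidth` bucket expression is an assumption based on the Vespa grouping documentation.",
"_____no_output_____"
]
],
[
[
"# Hypothetical sketch: count matched products per 100-unit price band using Vespa grouping.\n# The grouping expression is an assumption based on https://docs.vespa.ai/en/grouping.html\ngrouping_query = {\n    'yql': 'select * from sources * where userQuery() | all(group(fixedwidth(price,100)) each(output(count())));',\n    'query': 'mens wrist watch',\n    'type': 'any',\n    'hits': 0,\n    'presentation.timing': True\n}\ngrouping_result = app.query(body=grouping_query).json\n# the grouping tree is returned under root/children; print it raw here\nprint(grouping_result['root'].get('children'))",
"_____no_output_____"
]
],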
[
[
"In the result above the search for nearest neighbors have been filtered by price. This search also removes products which have no price value. Ranking is still done by the the closeness ranking feature. We could also use a better ranking profile taken into account more signals, like the salesRank of the product by changing our ranking profile to use a linear combination of features. \n\n<pre>\nRankProfile(\n name = \"vector_similarity_\", \n first_phase = \"12.0 + 23.24*closeness(field,image_vector) + 12.4*(1/attribute(popularity))\")\n)\n\n</pre>",
"_____no_output_____"
],
[
"# Keeping the index fresh by true partial updates\n\nIn retail and e-commerce search one very important aspect is to be able to update the search index to keep it fresh so that we can use the latest information at search time. Examples of updates which Vespa can perform at scale:\n\n - inventory status, which could be used as a *hard* filter so that our results only includes products which are in stock, or as a feature to be used when ranking products. \n - Product attributes which can used as ranking signals, for example category popularity (salesRank), click through rate and conversion rate. \n\nVespa, with its true partial update of **attribute** fields can support very high volumes of updates per node as updates of attribute fields are performed in-place without having to re-index the entire document. \n\nTo demonstrate this, we will add a new field to our product index which we call *inventory* and which keeps track of the inventory or in stock status of our product index. We want to ensure that the products we display have a positive inventory status. In this case we use it as a hard filter but this can also be a soft filter, used as a ranking signal. \n\nLet us change our application:",
"_____no_output_____"
]
],
[
[
"app_package.schema.add_fields( \n Field(name = \"inventory\", type = \"int\", indexing = [\"attribute\", \"summary\"])\n)",
"_____no_output_____"
],
[
"app = vespa_docker.deploy(\n application_package = app_package,\n disk_folder=\"/home/centos/product_search\" # include the desired absolute path here\n)",
"Finished deployment.\n"
]
],
[
[
"We iterate over our products and assign a random inventory count. We use partial update to do this. Vespa can handle up to 50K updates of integer fields per node and the partial update is performed in place so the document is not re-indexed in any way.",
"_____no_output_____"
]
],
[
[
"import random\nfor product in tqdm(iter_products(\"meta_Amazon_Fashion.json.gz\")):\n asin = product['asin']\n update_doc = {\n \"fields\": {\n \"inventory\": {\n \"assign\": random.randint(0,10)\n }\n }\n }\n url = \"http://localhost:8080/document/v1/demo/product/docid/{}\".format(asin)\n response = session.put(url, json=update_doc)",
"24145it [01:30, 268.25it/s]\n"
]
],
[
[
"Let us repeat our query for expensive similar products using image similarity, now we also display the inventory status",
"_____no_output_____"
]
],
[
[
"query = {\n 'yql': 'select documentid, inventory, asin,title,imUrl,description,price from sources * where \\\n ([{\"targetHits\":3}]nearestNeighbor(image_vector,query_image_vector)) and \\\n !(asin contains \"B00GLP1GTW\") and \\\n price > 100;',\n 'ranking': 'vector_similarity',\n 'hits': 3, \n 'presentation.timing': True,\n 'ranking.features.query(query_image_vector)': get_vector('B00GLP1GTW')\n}\ndisplay_hits(app.query(body=query).json, \"vector_similarity\")",
"Found 19 hits in 9.00 ms.\nShowing top 3, ranked by vector_similarity\n\nHamilton Men's H64455133 Khaki King II Black Dial Watch\n"
]
],
[
[
"So as we can see the second hit, B00CM1RPW6 has inventory status 1. Let us update the inventory count for document **B00CM1RPW6** in real time. In this case we assign it the value 0 (out of stock). We could also use \"increment\", \"decrement\". Immidately after we have performed the update we perform our search. We now expect that the displayed inventory is 0. ",
"_____no_output_____"
]
],
[
[
"update_doc = {\n \"fields\": {\n \"inventory\": {\n \"assign\": 0\n }\n }\n}\nresource = \"http://localhost:8080/document/v1/demo/product/docid/{}\".format('B00CM1RPW6')\nresponse = session.put(resource, json=update_doc)\nprint(\"Got response {}\".format(response.json()))\ndisplay_hits(app.query(body=query).json, \"vector_similarity\")",
"Got response {'pathId': '/document/v1/demo/product/docid/B00CM1RPW6', 'id': 'id:demo:product::B00CM1RPW6'}\nFound 19 hits in 4.00 ms.\nShowing top 3, ranked by vector_similarity\n\nHamilton Men's H64455133 Khaki King II Black Dial Watch\n"
]
],
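[
[
"The \"increment\" and \"decrement\" operations mentioned above work the same way as \"assign\". The next cell is an illustrative aside only (not part of the original walkthrough): it restocks a different product, **B00GLP1GTW**, by incrementing its inventory field in place, so the B00CM1RPW6 example above keeps its value of 0.",
"_____no_output_____"
]
],
[
[
"# Hypothetical sketch: restock a product by adding 5 units with an in-place 'increment' partial update.\n# We use B00GLP1GTW here so the B00CM1RPW6 walkthrough above is not affected.\nrestock_doc = {\n    \"fields\": {\n        \"inventory\": {\n            \"increment\": 5\n        }\n    }\n}\nresource = \"http://localhost:8080/document/v1/demo/product/docid/{}\".format('B00GLP1GTW')\nresponse = session.put(resource, json=restock_doc)\nprint(\"Got response {}\".format(response.json()))",
"_____no_output_____"
]
],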
[
[
"As we can see, product **B00CM1RPW6** now displays an inventory status of 0. We can also add inventory as a hard filter and re-do our query but this time with a inventory > 0 filter:\n",
"_____no_output_____"
]
],
[
[
"query = {\n 'yql': 'select documentid, inventory, asin,title,imUrl,description,price from sources * where \\\n ([{\"targetHits\":3}]nearestNeighbor(image_vector,query_image_vector)) and \\\n !(asin contains \"B00GLP1GTW\") and \\\n price > 100 and inventory > 0;',\n 'ranking': 'vector_similarity',\n 'hits': 3, \n 'presentation.timing': True,\n 'ranking.features.query(query_image_vector)': get_vector('B00GLP1GTW')\n}\ndisplay_hits(app.query(body=query).json, \"vector_similarity\")",
"Found 20 hits in 5.00 ms.\nShowing top 3, ranked by vector_similarity\n\nHamilton Men's H64455133 Khaki King II Black Dial Watch\n"
]
],
[
[
"That's better. Now all related items have inventory status > 0 and price > 100$. Using it as a hard filter, or as a ranking signal is up to you. ",
"_____no_output_____"
],
[
"# Summary\n\nIn this notebook we have demonstrated Vespa's nearest neighbor search and approximate nearest neighbor search and how Vespa allows combining nearest neighbor search with filters. To learn more see [https://vespa.ai](https://vespa.ai)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecc53655bcdb3d9c0ad9ef6906ba36ee42353c91 | 183,826 | ipynb | Jupyter Notebook | notebooks/4.1 Exploratory data analysis and working with texts.ipynb | BL-Labs/ADA-DHOxSS | e8f28ad912b756e42db6bfc5d3943ba4c0e1c004 | [
"MIT"
] | 14 | 2020-04-01T14:40:32.000Z | 2022-02-02T09:50:42.000Z | notebooks/4.1 Exploratory data analysis and working with texts.ipynb | BL-Labs/ADA-DHOxSS | e8f28ad912b756e42db6bfc5d3943ba4c0e1c004 | [
"MIT"
] | 4 | 2019-07-18T15:59:18.000Z | 2019-07-19T06:30:44.000Z | notebooks/4.1 Exploratory data analysis and working with texts.ipynb | BL-Labs/ADA-DHOxSS | e8f28ad912b756e42db6bfc5d3943ba4c0e1c004 | [
"MIT"
] | 4 | 2020-04-01T14:40:44.000Z | 2020-07-16T09:43:13.000Z | 73.004766 | 20,460 | 0.771871 | [
[
[
"# Exploratory data analysis and working with texts\n\nIn this notebook, we learn about:\n1. descriptive statistics to explore data;\n2. working with texts",
"_____no_output_____"
],
[
"# Part 1: descriptive statistics\n\n*\"The goal of exploratory data analysis is to develop an understanding of your data. EDA is fundamentally a creative process. And like most creative processes, the key to asking quality questions is to generate a large quantity of questions.\"* \n\nKey questions:\n* Which kind of variation occurs within variables?\n* Which kind of co-variation occurs between variables?\n\nhttps://r4ds.had.co.nz/exploratory-data-analysis.html",
"_____no_output_____"
]
],
[
[
"# imports\n\nimport os, codecs\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Import the dataset\nLet us import the Venetian apprenticeship contracts dataset in memory.",
"_____no_output_____"
]
],
[
[
"root_folder = \"../data/apprenticeship_venice/\"\ndf_contracts = pd.read_csv(codecs.open(os.path.join(root_folder,\"professions_data.csv\"), encoding=\"utf8\"), sep=\";\")\ndf_professions = pd.read_csv(codecs.open(os.path.join(root_folder,\"professions_classification.csv\"), encoding=\"utf8\"), sep=\",\")",
"_____no_output_____"
]
],
[
[
"Let's take another look to the dataset.",
"_____no_output_____"
]
],
[
[
"df_contracts.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 9653 entries, 0 to 9652\nData columns (total 47 columns):\npage_title 9653 non-null object\nregister 9653 non-null object\nannual_salary 7870 non-null float64\na_profession 9653 non-null object\nprofession_code_strict 9618 non-null object\nprofession_code_gen 9614 non-null object\nprofession_cat 9597 non-null object\ncorporation 9350 non-null object\nkeep_profession_a 9653 non-null int64\ncomplete_profession_a 9653 non-null int64\nenrolmentY 9628 non-null float64\nenrolmentM 9631 non-null float64\nstartY 9533 non-null float64\nstartM 9539 non-null float64\nlength 9645 non-null float64\nhas_fled 9653 non-null int64\nm_profession 9535 non-null object\nm_profession_code_strict 9508 non-null object\nm_profession_code_gen 9506 non-null object\nm_profession_cat 9489 non-null object\nm_corporation 9276 non-null object\nkeep_profession_m 9653 non-null int64\ncomplete_profession_m 9653 non-null int64\nm_gender 9554 non-null float64\nm_name 9623 non-null object\nm_surname 6960 non-null object\nm_patronimic 2620 non-null object\nm_atelier 1434 non-null object\nm_coords 9639 non-null object\na_name 9653 non-null object\na_age 9303 non-null float64\na_gender 9522 non-null float64\na_geo_origins 7149 non-null object\na_geo_origins_std 4636 non-null object\na_coords 9610 non-null object\na_quondam 7848 non-null float64\naccommodation_master 9653 non-null int64\npersonal_care_master 9653 non-null int64\nclothes_master 9653 non-null int64\ngeneric_expenses_master 9653 non-null int64\nsalary_in_kind_master 9653 non-null int64\npledge_goods_master 9653 non-null int64\npledge_money_master 9653 non-null int64\nsalary_master 9653 non-null int64\nfemale_guarantor 9653 non-null int64\nperiod_cat 7891 non-null float64\nincremental_salary 9653 non-null int64\ndtypes: float64(11), int64(15), object(21)\nmemory usage: 3.5+ MB\n"
],
[
"df_contracts.head(5)",
"_____no_output_____"
],
[
"df_contracts.columns",
"_____no_output_____"
]
],
[
[
"Every row represents an apprenticeship contract. Contracts were registered both at the guild's and at a public office. This is a sample of contracts from a much larger set of records.\n\nSome of the variables we will work with are:\n* `annual_salary`: the annual salary paid to the apprencice, if any (in Venetian ducats).\n* `a_profession` to `corporation`: increasingly generic classifications for the apprentice's stated profession.\n* `startY` and `enrolmentY`: contract start and registration year respectively.\n* `length`: of the contract, in years.\n* `m_gender` and `a_gender`: of master and apprentice respectively.\n* `a_age`: age of the apprentice at entry, in years.\n* `female_guarantor`: if at least one of the contract's guarantors was female, boolean.",
"_____no_output_____"
]
],
[
[
"df_professions.head(3)",
"_____no_output_____"
]
],
[
[
"The professions data frame contains a classification system for each profession as found in the records (transcription, first column). The last column is the guild (or corporation) which governed the given profession. This work was performed manually by historians. We don't use it here as the classifications we need are already part of the main dataframe.",
"_____no_output_____"
],
[
"### Questions\n\n* Plot the distribution (histogram) of the apprentices' age, contract length, annual salary and start year.\n* Calculate the proportion of female apprentices and masters, and of contracts with a female guarantor.\n* How likely it is for a female apprentice to have a female master? And for a male apprentice?",
"_____no_output_____"
]
],
[
[
"df_contracts.annual_salary.hist(bins=100)",
"_____no_output_____"
],
[
"df_contracts[df_contracts.annual_salary < 20].annual_salary.hist(bins=25)",
"_____no_output_____"
],
[
"df_contracts.a_gender.sum()/df_contracts.shape[0]",
"_____no_output_____"
],
[
"df_contracts.m_gender.sum()/df_contracts.shape[0]",
"_____no_output_____"
],
[
"df_contracts[(df_contracts.a_gender == 0) & (df_contracts.startY < 1600)].m_gender.sum()\n /df_contracts[(df_contracts.a_gender == 0) & (df_contracts.startY < 1600)].shape[0]",
"_____no_output_____"
],
[
"df_contracts.startY.hist(bins=10)",
"_____no_output_____"
]
],
[
[
"## Looking at empirical distributions",
"_____no_output_____"
]
],
[
[
"df_contracts[df_contracts.annual_salary < 50].annual_salary.hist(bins=40)",
"_____no_output_____"
],
[
"df_contracts[df_contracts.a_age < 30].a_age.hist(bins=25)",
"_____no_output_____"
]
],
[
[
"### Two very important distributions",
"_____no_output_____"
],
[
"#### Normal\n\nAlso known as Gaussian, is a bell-shaped distribution with mass around the mean and exponentially decaying on the sides. It is fully characterized by the mean (center of mass) and standard deviation (spread).\n\nhttps://en.wikipedia.org/wiki/Normal_distribution",
"_____no_output_____"
]
],
[
[
"s1 = np.random.normal(5, 1, 10000)\nsns.distplot(s1)",
"_____no_output_____"
],
[
"# for boxplots see https://en.wikipedia.org/wiki/Interquartile_range (or ask!)\nsns.boxplot(s1)",
"_____no_output_____"
]
],
[
[
"#### Heavy-tailed\nDistributions with a small but non-negligible amount of observations with high values. Several probability distributions follow this pattern: https://en.wikipedia.org/wiki/Heavy-tailed_distribution#Common_heavy-tailed_distributions.\n\nWe pick the lognormal here: https://en.wikipedia.org/wiki/Log-normal_distribution",
"_____no_output_____"
]
],
[
[
"s2 = np.random.lognormal(5, 1, 10000)\nsns.distplot(s2)",
"_____no_output_____"
],
[
"sns.boxplot(s2)",
"_____no_output_____"
],
[
"# Why \"lognormal\"?\n\nsns.distplot(np.log(s2))",
"_____no_output_____"
]
],
[
[
"#### Box plots\n\n<img src=\"figures/eda-boxplot.png\" width=\"800px\" heigth=\"800px\">",
"_____no_output_____"
],
[
"### Outliers, missing values\n\nAn *outlier* is an observation far from the center of mass of the distribution. It might be an error or a genuine observation: this distinction requires domain knowledge. Outliers infuence the outcomes of several statistics and machine learning methods: it is important to decide how to deal with them.\n\nA *missing value* is an observation without a value. There can be many reasons for a missing value: the value might not exist (hence its absence is informative and it should be left empty) or might not be known (hence the value is existing but missing in the dataset and it should be marked as NA).\n\n*One way to think about the difference is with this Zen-like koan: An explicit missing value is the presence of an absence; an implicit missing value is the absence of a presence.*",
"_____no_output_____"
],
[
"## Summary statistics\nA statistic is a measure over a distribution, and it is said to be *robust* if not sensitive to outliers.\n\n* Not robust: min, max, mean, standard deviation.\n* Robust: mode, median, other quartiles.\n\nA closer look at the mean:\n\n$\\bar{x} = \\frac{1}{n} \\sum_{i}x_i$\n\nAnd variance (the standard deviation is the square root of the variance):\n\n$Var(x) = \\frac{1}{n} \\sum_{i}(x_i - \\bar{x})^2$",
"_____no_output_____"
],
[
"<img src=\"figures/2560px-Comparison_mean_median_mode.svg.png\" width=\"400px\" heigth=\"400px\">",
"_____no_output_____"
]
],
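[
[
"To make the idea of an explicit missing value concrete, here is a small illustrative sketch (not part of the original notebook) using a toy pandas Series with one missing observation.",
"_____no_output_____"
]
],
[
[
"# a toy series with one explicit missing value (NaN)\ns_missing = pd.Series([8, 12, np.nan, 10, 3])\nprint(s_missing.isna())             # True marks the missing observation\nprint(np.mean(s_missing.values))    # nan: the plain numpy mean propagates the missing value\nprint(np.nanmean(s_missing.values)) # np.nanmean ignores it\nprint(s_missing.mean())             # pandas skips NA values by default",
"_____no_output_____"
]
],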
[
[
"# Not robust: min, max, mean, mode, standard deviation\n\nprint(np.mean(s1)) # should be 5\nprint(np.mean(s2))",
"4.992350413024771\n245.32716462110596\n"
],
[
"# Robust: median, other quartiles\n\nprint(np.quantile(s1, 0.5)) # should coincide with mean and mode\nprint(np.quantile(s2, 0.5))",
"5.003974843024547\n148.8969352474893\n"
]
],
[
[
"#### Questions\n\n* Calculate the min, max, mode and sd. *hint: explore the numpy documentation!*\n* Calculate the 90% quantile values.\n* Consider our normally distributed data in s1. Add an outlier (e.g., value 100). What happens to the mean and mode? Write down your answer and then check.",
"_____no_output_____"
]
],
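[
[
"One possible sketch of answers to the questions above (not part of the original notebook). `stats.mode` is only meaningful for discrete data, so we round `s1` before computing it; note how little the robust statistics move when a single outlier is added.",
"_____no_output_____"
]
],
[
[
"# possible answers to the questions above\nfrom scipy import stats\n\nprint('min:', np.min(s1), 'max:', np.max(s1))\nprint('sd:', np.std(s1, ddof=1))\nprint('mode (of rounded values):', stats.mode(np.round(s1)))\nprint('90% quantile:', np.quantile(s1, 0.9))\n\n# add a single outlier: with 10000 points the mean moves only slightly, the median barely at all\ns1_outlier = np.append(s1, 100)\nprint('mean before/after:', np.mean(s1), np.mean(s1_outlier))\nprint('median before/after:', np.median(s1), np.median(s1_outlier))",
"_____no_output_____"
]
],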
[
[
"# Let's explore our dataset\ndf_contracts[[\"annual_salary\",\"a_age\",\"length\"]].describe()",
"_____no_output_____"
]
],
[
[
"## Relating two variables\n\n#### Covariance\n\nMeasure of joint linear variability of two variables:\n\n<img src=\"figures/covariance.png\" width=\"400px\" heigth=\"400px\">\n\nIts normalized version is called the (Pearson's) correlation coefficient:\n\n<img src=\"figures/pearson.png\" width=\"400px\" heigth=\"400px\">\n\nCorrelation is helpful to spot possible relations, but is of tricky interpretation and is not exhaustive:\n\n<img src=\"figures/800px-Correlation_examples2.svg.png\" width=\"700px\" heigth=\"&00px\">\n\nSee: https://en.wikipedia.org/wiki/Covariance and https://en.wikipedia.org/wiki/Pearson_correlation_coefficient.\n\n*Note: correlation is not causation!*",
"_____no_output_____"
]
],
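[
[
"As a complement to the formulas above, a small sketch (not part of the original notebook) computing the covariance matrix and the Pearson correlation directly with numpy on two columns; rows with missing values are dropped first because `np.cov` and `np.corrcoef` do not handle NaN.",
"_____no_output_____"
]
],
[
[
"# covariance and its normalized version (Pearson correlation) on two columns\npair = df_contracts[['annual_salary', 'a_age']].dropna()\nprint(np.cov(pair.annual_salary, pair.a_age))       # units depend on the two variables\nprint(np.corrcoef(pair.annual_salary, pair.a_age))  # unitless, between -1 and 1",
"_____no_output_____"
]
],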
[
[
"df_contracts[[\"annual_salary\",\"a_age\",\"length\"]].corr()",
"_____no_output_____"
],
[
"sns.scatterplot(df_contracts.length,df_contracts.annual_salary)",
"_____no_output_____"
]
],
[
[
"#### Questions\n\n* Try to explore the correlation of other variables in the dataset.\n* Can you think of a possible motivation for the trend we see: older apprentices with a shorter contract getting on average a higher annual salary?",
"_____no_output_____"
],
[
"## Sampling and uncertainty (mention)\n\nOften, we work with samples and we want the sample to be representative of the population it is taken from, in order to draw conclusions that generalise from the sample to the full population.\n\nSampling is *tricky*. Samples have *variance* (variation between samples from the same population) and *bias* (systematic variation from the population).",
"_____no_output_____"
],
[
"# Part 2: working with texts\n\nLet's get some basics (or a refresher) of working with texts in Python. Texts are sequences of discrete symbols (words or, more generically, tokens).\n\nKey challenge: representing text for further processing. Two mainstream approaches:\n* *Bag of words*: a text is a collection of tokens occurring with a certain frequence and assumed independently from each other within the text. The mapping from texts to features is determinsitic and straighforward, each text is represented as a vector of the size of the vocabulary.\n* *Embeddings*: a method is used (typically, neural networks), to learn a mapping from each token to a (usually small) vector representing it. A text can be represented in turn as an aggregation of these embeddings.",
"_____no_output_____"
],
[
"## Import the dataset\nLet us import the Elon Musk's tweets dataset in memory.\n\n<img src=\"figures/elon_loop.jpeg\" width=\"400px\" heigth=\"400px\">",
"_____no_output_____"
]
],
[
[
"root_folder = \"../data/musk_tweets\"\ndf_elon = pd.read_csv(codecs.open(os.path.join(root_folder,\"elonmusk_tweets.csv\"), encoding=\"utf8\"), sep=\",\")\ndf_elon['text'] = df_elon['text'].str[1:]",
"_____no_output_____"
],
[
"df_elon.head(5)",
"_____no_output_____"
],
[
"df_elon.shape",
"_____no_output_____"
]
],
[
[
"## Natural Language Processing in Python",
"_____no_output_____"
]
],
[
[
"# import some of the most popular libraries for NLP in Python\nimport spacy\nimport nltk\nimport string\nimport sklearn",
"_____no_output_____"
],
[
"# nltk.download('punkt')",
"_____no_output_____"
]
],
[
[
"A typical NLP pipeline might look like the following:\n \n<img src=\"figures/spacy_pipeline.png\" width=\"600px\" heigth=\"600px\">\n\n### Tokenization: splitting a text into constituent tokens.",
"_____no_output_____"
]
],
[
[
"from nltk.tokenize import TweetTokenizer, word_tokenize\ntknzr = TweetTokenizer(preserve_case=True, reduce_len=False, strip_handles=False)",
"_____no_output_____"
],
[
"example_tweet = df_elon.text[1]\nprint(example_tweet)",
"\"@ForIn2020 @waltmossberg @mims @defcon_5 Exactly. Tesla is absurdly overvalued if based on the past, but that's irr\\xe2\\x80\\xa6 https://t.co/qQcTqkzgMl\"\n"
],
[
"tkz1 = tknzr.tokenize(example_tweet)\nprint(tkz1)\ntkz2 = word_tokenize(example_tweet)\nprint(tkz2)",
"['\"', '@ForIn2020', '@waltmossberg', '@mims', '@defcon_5', 'Exactly', '.', 'Tesla', 'is', 'absurdly', 'overvalued', 'if', 'based', 'on', 'the', 'past', ',', 'but', \"that's\", 'irr', '\\\\', 'xe2', '\\\\', 'x80', '\\\\', 'xa6', 'https://t.co/qQcTqkzgMl', '\"']\n['``', '@', 'ForIn2020', '@', 'waltmossberg', '@', 'mims', '@', 'defcon_5', 'Exactly', '.', 'Tesla', 'is', 'absurdly', 'overvalued', 'if', 'based', 'on', 'the', 'past', ',', 'but', 'that', \"'s\", 'irr\\\\xe2\\\\x80\\\\xa6', 'https', ':', '//t.co/qQcTqkzgMl', \"''\"]\n"
]
],
[
[
"Question: can you spot what the Twitter tokenizer is doing instead of a standard one?",
"_____no_output_____"
]
],
[
[
"string.punctuation",
"_____no_output_____"
],
[
"# some more pre-processing\n\ndef filter(tweet):\n \n # remove punctuation and short words and urls\n tweet = [t for t in tweet if t not in string.punctuation and len(t) > 3 and not t.startswith(\"http\")]\n return tweet\n\ndef tokenize_and_string(tweet):\n \n tkz = tknzr.tokenize(tweet)\n \n tkz = filter(tkz)\n \n return \" \".join(tkz)",
"_____no_output_____"
],
[
"print(tkz1)\nprint(filter(tkz1))",
"['\"', '@ForIn2020', '@waltmossberg', '@mims', '@defcon_5', 'Exactly', '.', 'Tesla', 'is', 'absurdly', 'overvalued', 'if', 'based', 'on', 'the', 'past', ',', 'but', \"that's\", 'irr', '\\\\', 'xe2', '\\\\', 'x80', '\\\\', 'xa6', 'https://t.co/qQcTqkzgMl', '\"']\n['@ForIn2020', '@waltmossberg', '@mims', '@defcon_5', 'Exactly', 'Tesla', 'absurdly', 'overvalued', 'based', 'past', \"that's\"]\n"
],
[
"df_elon[\"clean_text\"] = df_elon[\"text\"].apply(tokenize_and_string)",
"_____no_output_____"
],
[
"df_elon.head(5)",
"_____no_output_____"
],
[
"# save cleaned up version\n\ndf_elon.to_csv(os.path.join(root_folder,\"df_elon.csv\"), index=False)",
"_____no_output_____"
]
],
[
[
"### Building a dictionary",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import CountVectorizer\ncount_vect = CountVectorizer(lowercase=False, tokenizer=tknzr.tokenize)\nX_count = count_vect.fit_transform(df_elon.clean_text)\nX_count.shape",
"_____no_output_____"
],
[
"word_list = count_vect.get_feature_names() \ncount_list = X_count.toarray().sum(axis=0)\ndictionary = dict(zip(word_list,count_list))\ncount_vect.vocabulary_.get(\"robots\")",
"_____no_output_____"
],
[
"X_count[:,count_vect.vocabulary_.get(\"robots\")].toarray().sum()",
"_____no_output_____"
],
[
"dictionary[\"robots\"]",
"_____no_output_____"
]
],
[
[
"#### Questions\n\n* Find the tokens most used by Elon.\n* Find the twitter users most referred to by Elon (hint: use the @ handler to spot them).",
"_____no_output_____"
]
],
[
[
"dictionary_list = sorted(dictionary.items(), key=lambda x:x[1], reverse=True)\n[d for d in dictionary_list][:10]",
"_____no_output_____"
],
[
"dictionary_list_users = sorted(dictionary.items(), key=lambda x:x[1], reverse=True)\n[d for d in dictionary_list if d[0].startswith('@')][:10]",
"_____no_output_____"
]
],
[
[
"### Representing tweets as vectors\n\nTexts are of variable length and need to be represented numerically in some way. Most typically, we represent them as *equally-sized vectors*.\n\nActually, this is what we have already done! Let's take a closer look at `X_count` above..",
"_____no_output_____"
]
],
[
[
"# This is the first Tweet of the data frame\n\ndf_elon.loc[0]",
"_____no_output_____"
],
[
"# let's get the vector representation for this Tweet\n\nvector_representation = X_count[0,:]",
"_____no_output_____"
],
[
"# there are 3 positions not to zero, as we would expect: the vector contains 1 in the columns related to the 3 words that make up the Tweet. \n# It would contain a number higher than 1 if a given word were occurring multiple times.\n\nnp.sum(vector_representation)",
"_____no_output_____"
],
[
"# Let's check that indeed the vector contains 1s for the right words\n# Remember, the vector has shape (1 x size of the vocabulary)\n\nprint(vector_representation[0,count_vect.vocabulary_.get(\"robots\")])\nprint(vector_representation[0,count_vect.vocabulary_.get(\"spared\")])\nprint(vector_representation[0,count_vect.vocabulary_.get(\"humanity\")])",
"1\n1\n1\n"
]
],
[
[
"### Term Frequency - Inverse Document Frequency\nWe can use boolean counts (1/0) and raw counts (as we did before) to represent a Tweet over the space of the vocabulary, but there exist improvements on this basic idea. For example, the TF-IDF weighting scheme:\n\n$tfidf(t, d, D) = tf(t, d) \\cdot idf(t, D)$\n\n$tf(t, d) = f_{t,d}$\n\n$idf(t, D) = log \\Big( \\frac{|D|}{|{d \\in D: t \\in d}|} \\Big)$",
"_____no_output_____"
]
],
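[
[
"Before switching to scikit-learn's `TfidfVectorizer`, here is a small sketch (not part of the original notebook) of the textbook idf formula applied to the token 'robots', using the raw counts in `X_count`. Scikit-learn's default idf adds smoothing and a +1 term, so its numbers will differ.",
"_____no_output_____"
]
],
[
[
"# textbook idf for the token 'robots', computed from the raw count matrix X_count\nn_docs = X_count.shape[0]\nrobots_idx = count_vect.vocabulary_.get('robots')\ndoc_freq = (X_count[:, robots_idx] > 0).sum()  # number of tweets containing 'robots'\nprint('document frequency:', doc_freq)\nprint('idf:', np.log(n_docs / doc_freq))",
"_____no_output_____"
]
],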
[
[
"from sklearn.feature_extraction.text import TfidfVectorizer\ncount_vect = TfidfVectorizer(lowercase=False, tokenizer=tknzr.tokenize)\nX_count_tfidf = count_vect.fit_transform(df_elon.clean_text)\nX_count_tfidf.shape",
"_____no_output_____"
],
[
"X_count_tfidf[0,:].sum()",
"_____no_output_____"
],
[
"X_count[0,:].sum()",
"_____no_output_____"
]
],
[
[
"#### Sparse vectors (mention)\nHow is Python representing these vectors in memory? Most of their cells are set to zero. \n\nWe call any vector or matrix whose cells are mostly to zero *sparse*.\nThere are efficient ways to store them in memory.",
"_____no_output_____"
]
],
[
[
"X_count_tfidf[0,:]",
"_____no_output_____"
]
],
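[
[
"To make the \"sparse\" remark concrete, here is a small sketch (not part of the original notebook) that inspects the compressed sparse row (CSR) representation returned by scikit-learn and compares its memory use with the equivalent dense array.",
"_____no_output_____"
]
],
[
[
"# inspect the CSR representation of the tf-idf matrix\nprint(type(X_count_tfidf))\nprint('shape:', X_count_tfidf.shape)\nprint('stored (non-zero) values:', X_count_tfidf.nnz)\n# only the non-zero values and their indices are stored\nsparse_bytes = X_count_tfidf.data.nbytes + X_count_tfidf.indices.nbytes + X_count_tfidf.indptr.nbytes\ndense_bytes = X_count_tfidf.shape[0] * X_count_tfidf.shape[1] * 8  # float64 cells\nprint('sparse storage: {} bytes, a dense array would need: {} bytes'.format(sparse_bytes, dense_bytes))",
"_____no_output_____"
]
],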
[
[
"### Spacy pipelines\n\nUseful to construct sequences of pre-processing steps: https://spacy.io/usage/processing-pipelines.",
"_____no_output_____"
]
],
[
[
"# Install the required model\n\n#!python -m spacy download en_core_web_sm",
"_____no_output_____"
],
[
"# without this line spacy is not able to find the downloaded model\n\n#!python -m spacy link --force en_core_web_sm en_core_web_sm",
"_____no_output_____"
],
[
"# Load a pre-trained pipeline (Web Small): https://spacy.io/usage/models\n\n#!python -m spacy download en_core_web_sm\nnlp = spacy.load('en_core_web_sm')",
"_____no_output_____"
]
],
[
[
"*.. the model’s meta.json tells spaCy to use the language \"en\" and the pipeline [\"tagger\", \"parser\", \"ner\"]. spaCy will then initialize spacy.lang.en.English, and create each pipeline component and add it to the processing pipeline. It’ll then load in the model’s data from its data directory and return the modified Language class for you to use as the nlp object.*\n\nLet's create a simple pipeline that does **lemmatization**, **part of speech tagging** and **named entity recognition** using spaCy models.\n\n*If you don't know what these NLP tasks are, please ask!*",
"_____no_output_____"
]
],
[
[
"tweet_pos = list()\ntweet_ner = list()\ntweet_lemmas = list()\n\nfor tweet in df_elon.text.values:\n spacy_tweet = nlp(tweet)\n \n local_tweet_pos = list()\n local_tweet_ner = list()\n local_tweet_lemmas = list()\n \n for sentence in list(spacy_tweet.sents):\n # --- lemmatization, remove punctuation and stop wors\n local_tweet_lemmas.extend([token.lemma_ for token in sentence if not token.is_punct | token.is_stop])\n local_tweet_pos.extend([token.pos_ for token in sentence if not token.is_punct | token.is_stop])\n for ent in spacy_tweet.ents:\n local_tweet_ner.append(ent)\n\n tweet_pos.append(local_tweet_pos)\n tweet_ner.append(local_tweet_ner)\n tweet_lemmas.append(local_tweet_lemmas)",
"_____no_output_____"
],
[
"tweet_lemmas[0]",
"_____no_output_____"
],
[
"tweet_pos[0]",
"_____no_output_____"
],
[
"tweet_ner[0]",
"_____no_output_____"
],
[
"# but it actually works!\n\ntweet_ner[3]",
"_____no_output_____"
]
],
[
[
"*Note: we are really just scratching the surface of spaCy, but it is worth knowing it's there.*",
"_____no_output_____"
],
[
"### Searching tweets\n\nOnce we have represented Tweets as vectors, we can easily find similar ones using basic operations such as filtering.",
"_____no_output_____"
]
],
[
[
"target = 0\nprint(df_elon.clean_text[target])",
"robots spared humanity\n"
],
[
"condition = X_count_tfidf[target,:] > 0",
"_____no_output_____"
],
[
"print(condition)",
" (0, 5198)\tTrue\n (0, 6617)\tTrue\n (0, 6949)\tTrue\n"
],
[
"X_filtered = X_count_tfidf[:,np.ravel(condition.toarray())]",
"_____no_output_____"
],
[
"X_filtered",
"_____no_output_____"
],
[
"print(X_filtered)",
" (0, 0)\t0.49528340735923404\n (0, 2)\t0.6406029997190413\n (0, 1)\t0.5867896924329815\n (217, 0)\t0.2972381925908634\n (271, 0)\t0.3284547085372313\n (464, 0)\t0.22738802397468952\n (473, 0)\t0.5667220639589731\n (734, 1)\t0.3846355279044392\n (940, 0)\t0.27312597149485407\n (1004, 0)\t0.28161575586607157\n (1550, 1)\t0.33303254164524276\n (1862, 0)\t0.3196675199194523\n (2493, 0)\t0.2685018991334563\n (2559, 0)\t0.311452470142279\n (2565, 0)\t0.2645117238497897\n (2661, 0)\t0.2729016388865858\n"
],
[
"from scipy import sparse\n\nsparse.find(X_filtered)",
"_____no_output_____"
],
[
"tweet_indices = list(sparse.find(X_filtered)[0])",
"_____no_output_____"
],
[
"print(\"TARGET: \" + df_elon.clean_text[target])\n\nfor n, tweet_index in enumerate(list(set(tweet_indices))):\n if tweet_index != target:\n print(str(n) +\")\"+ df_elon.clean_text[tweet_index])",
"TARGET: robots spared humanity\n1)@JustBe74 important make humanity proud this case particular duty owed American taxpayer\n2)@pud Faith restored humanity French toast money\n3)humanity have exciting inspiring future cannot confined Earth forever @love_to_dream #APSpaceChat\n4)@ShireeshAgrawal like humanity\n5)Creating neural lace thing that really matters humanity achieve symbiosis with machines\n6)@tzepr Certainly agree that first foremost triumph humanity cheering good spirit\n7)@ReesAndersen @FLIxrisk believe that critical ensure good future humanity\n8)@NASA #Mars hard x99s worth risks extend humanity x99s frontier beyond Earth Learn about neighbor planet\n9)Astronomer Royal Martin Rees soon will robots take over world @Telegraph\n10)@thelogicbox @IanrossWins Mars critical long-term survival humanity life Earth know\n11)humanity wishes become multi-planet species then must figure move millions people Mars\n12)Sure feels weird find myself defending robots\n13)Neil Armstrong hero humanity spirit will carry stars\n"
]
],
[
[
"#### Questions\n\n* Can you rank the matched tweets using their tf-idf weights, so to put higher weighted tweets first?\n* Which limitations do you think a bag of words representation has?\n* Can you spot any limitations of this approach based on similarity measures over bag of words representations?",
"_____no_output_____"
]
]
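,
[
[
"A possible sketch for the first question above (not part of the original notebook): sum the tf-idf weights each matched tweet has on the shared query terms and sort the matches by that score.",
"_____no_output_____"
]
],
[
[
"# rank the matched tweets by the summed tf-idf weight of the shared terms\nscores = np.ravel(X_filtered.sum(axis=1))  # one score per tweet\nranked = sorted(set(tweet_indices), key=lambda i: scores[i], reverse=True)\nfor i in ranked:\n    if i != target:\n        print(round(float(scores[i]), 3), df_elon.clean_text[i])",
"_____no_output_____"
]
]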
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecc553285ca11e9dcd23cbc63602b118cdc15223 | 92,159 | ipynb | Jupyter Notebook | climate_analysis.ipynb | nataliekramer/sqlalchemy-flask_challenge | 8520c16e256ef3f541a40677e42d8040e23769ec | [
"MIT"
] | null | null | null | climate_analysis.ipynb | nataliekramer/sqlalchemy-flask_challenge | 8520c16e256ef3f541a40677e42d8040e23769ec | [
"MIT"
] | null | null | null | climate_analysis.ipynb | nataliekramer/sqlalchemy-flask_challenge | 8520c16e256ef3f541a40677e42d8040e23769ec | [
"MIT"
] | null | null | null | 123.869624 | 48,600 | 0.845994 | [
[
[
"# Climate Analysis Script",
"_____no_output_____"
]
],
[
[
"# import dependencies\nimport pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, inspect, func, Column, Integer\n\nimport datetime as dt",
"_____no_output_____"
],
[
"# connect to database\nengine = create_engine(\"sqlite:///hawaii.sqlite\", echo = False)\nsession = Session(engine)\nconn = engine.connect()",
"_____no_output_____"
],
[
"# declare a base \nBase = automap_base()\n\n# use the Base class to reflect the database tables\nBase.prepare(engine, reflect = True)\nBase.classes.keys()",
"_____no_output_____"
],
[
"# Assign tables to variables\nstations = Base.classes.station_info\n\nmeasurements = Base.classes.climate_measurements",
"_____no_output_____"
]
],
[
[
"## Precipitation Analysis",
"_____no_output_____"
]
],
[
[
"# query the measurements table\nresults = session.query(measurements.date, measurements.prcp).\\\n filter(measurements.date.between('2016-08-23', '2017-08-23')).all()",
"_____no_output_____"
],
[
"precip_df = pd.DataFrame(results)\nprecip_df.set_index(\"date\", inplace=True)\nprecip_df.head()",
"_____no_output_____"
],
[
"precip_df.plot(x_compat = True, figsize =(14, 5), rot = 90, color = \"lightblue\")\nplt.tight_layout()\nplt.title(\"Daily Precipitation in Inches: 23 Aug 2016 - 23 Aug 2017\")\nplt.ylabel(\"inches\")\nplt.savefig(\"Images/precipitation.png\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Precipitation Descriptives",
"_____no_output_____"
]
],
[
[
"precip_df.describe()",
"_____no_output_____"
]
],
[
[
"## Station Analysis",
"_____no_output_____"
],
[
"### Total Stations",
"_____no_output_____"
]
],
[
[
"print(\"Total Station Count: \"+ str(session.query(stations.station).count()))",
"Total Station Count: 9\n"
]
],
[
[
"### Station by Observation Counts",
"_____no_output_____"
]
],
[
[
"#query to find collect the stations and number of observations\nts = session.query(measurements.station, func.count(measurements.tobs)).\\\n group_by(measurements.station).order_by(func.sum(measurements.tobs).desc()).all()\nlabels = ['Station', 'Observations']\ntop_stations = pd.DataFrame.from_records(ts, columns=labels)\ntop_stations",
"_____no_output_____"
]
],
[
[
"### Top Station",
"_____no_output_____"
]
],
[
[
"# query to find name of top station and print\ntop_station = session.query(stations.name).filter(stations.station == top_stations['Station'][0]).all()\nprint(\"Station with highest observation count: \"+ str(top_stations['Station'][0])+\" \"+ str(top_station[0]))",
"Station with highest observation count: USC00519397 ('WAIKIKI 717.2, HI US',)\n"
]
],
[
[
"### Temperature Observations from Top Station",
"_____no_output_____"
]
],
[
[
"# query to find station with most tobs with the year timeframe\ntop_station_in_year = session.query(measurements.station, func.count(measurements.tobs)).\\\n filter(measurements.date.between('2016-08-23', '2017-08-23')).\\\n group_by(measurements.station).order_by(func.count(measurements.tobs).\\\n desc()).all()\ntop_station_in_year",
"_____no_output_____"
],
[
"# query to find tobs/day for the year timeframe\ntop_station_tobs = session.query(measurements.date, measurements.tobs).\\\n filter(measurements.station == top_station_in_year[0][0]).\\\n filter(measurements.date.between('2016-08-23','2017-08-23')).all()\n \n#stations_df.set_index(\"date\", inplace=True)\ntop_station_tobs_df = pd.DataFrame(top_station_tobs)\ntop_station_tobs_df.head()",
"_____no_output_____"
],
[
"# histogram plot\ntop_station_tobs_df.plot(kind = 'hist', bins = 12, figsize = (10,7), color = \"lightblue\")\nplt.title(\"Station: \" + str(top_station_in_year[0][0]))\nplt.ylabel(\"Frequency\")\nplt.savefig(\"images/station_histogram.png\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Temperature Analysis",
"_____no_output_____"
]
],
[
[
"# function to return temperature average, min, and max for given dates\ndef calc_temps(start_date, end_date):\n \n data = pd.DataFrame(session.query(measurements.tobs).\\\n filter(measurements.date >= (pd.to_datetime(start_date)- dt.timedelta(days = 365)).strftime('%Y-%m-%d'),\\\n measurements.date < (pd.to_datetime(end_date)- dt.timedelta(days = 365)).strftime('%Y-%m-%d')).all())\n \n avg = round(data[\"tobs\"].mean())\n low = data[\"tobs\"].min()\n high = data[\"tobs\"].max()\n \n print(\"Historical average temperature for your vacation is: \"+ str(avg) +\" degrees\")\n print(\"Historical low temperature for your vacation is: \"+ str(low) +\" degrees\")\n print(\"Historical high temperature for your vacation is: \"+ str(high) +\" degrees\")\n \n return avg, low, high",
"_____no_output_____"
],
[
"start_date = str(input(\"What is the first day of your vacation: \"))\nend_date = str(input(\"What is the last day of your vacation: \"))\nprint()\nprint()\naverage, low, high = calc_temps(start_date, end_date)",
"What is the first day of your vacation: 8-6-2018\nWhat is the last day of your vacation: 8-15-2018\n\n\nHistorical average temperature for your vacation is: 79 degrees\nHistorical low temperature for your vacation is: 71 degrees\nHistorical high temperature for your vacation is: 84 degrees\n"
],
[
"fig, ax = plt.subplots(figsize = (3,7))\nax.bar(1, average, color = 'lightblue', edgecolor = 'black', yerr=(high - low), capsize = 7, label = 'test')\nax.set_xticklabels([])\nplt.yticks(np.arange(0, average + 20, 10))\nplt.title (\"Average Temperature of Your Vacation (based on historical data)\")\nplt.ylabel (\"temperature (F)\")\nplt.savefig(\"images/vacation_temp.png\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecc559d31218251667f095dca0db0aca9935f267 | 7,301 | ipynb | Jupyter Notebook | demo/.ipynb_checkpoints/simple trading system-checkpoint.ipynb | kite8/quant_learning | d823974cd2b5a6b8e2a20fe42d7334051fa46ea0 | [
"MIT"
] | 1 | 2019-02-22T08:12:41.000Z | 2019-02-22T08:12:41.000Z | demo/.ipynb_checkpoints/simple trading system-checkpoint.ipynb | kite8/quant_learning | d823974cd2b5a6b8e2a20fe42d7334051fa46ea0 | [
"MIT"
] | null | null | null | demo/.ipynb_checkpoints/simple trading system-checkpoint.ipynb | kite8/quant_learning | d823974cd2b5a6b8e2a20fe42d7334051fa46ea0 | [
"MIT"
] | 5 | 2019-02-22T08:14:09.000Z | 2020-06-28T05:54:39.000Z | 24.5 | 99 | 0.474319 | [
[
[
"## 数据获取",
"_____no_output_____"
],
[
"daily_crawler.py",
"_____no_output_____"
]
],
[
[
"class DailyCrawler\n def __init__():\n \"\"\"\n 一些初始化,比如创建daily和daily_hfq的connection\n \"\"\"\n pass\n def crawl_index():\n \"\"\"\n 爬取index的数据并保存到daily中\n 注意'index':True\n \"\"\"\n pass\n def crawl():\n \"\"\"\n 爬取股票数据,分为复权和不复权数据两种,\n 注意'index':False,\n \"\"\"\n pass\n def save():\n \"\"\"\n 保存数据到到数据库中\n \"\"\"\n \n \"\"\"\n create update requests list (None)\n \n 对df_daily进行for循环(按行循环)\n 将每行数据转换成dict\n 将dict封装成一个update request,\n 并append到上述的list中去\n 最后是批量写入到数据库中去\n \"\"\"\n pass\n def daily_obj_2_doc():\n \"\"\"\n 将dataframe一行的数据转成dict\n \"\"\"",
"_____no_output_____"
]
],
[
[
"```\n-daily_fixing.py\n\n--fill_is_trading_between\n--fill_is_trading\n--fill_single_date_is_trading\n--fill_daily_k_at_suspension_days\n--fill_daily_k_at_suspension_days_at_date_one_collection\n--fill_au_factor_pre_close\n\n```",
"_____no_output_____"
]
],
[
[
"def fill_is_trading_between():\n \"\"\"\n 填充指定时间区间段的is_trading字段\n 按照trading date进行for循环,每一行都是\n 一个交易日\n for date in all_dates:\n fill_single_date_is_trading : daily\n fill_single_date_is_trading : daily_hfq\n 即在不复权和后复权都填一遍\n \"\"\"\n pass\ndef fill_is_trading(date=None):\n \"\"\"\n 和上面的函数一样,只是判断了一下date是否为None,\n if date is None:\n all_dates = get_trading_dates()\n \"\"\"\n pass\ndef fill_single_date_is_trading():\n \"\"\"\n 填充某一个交易日的is_trading字段\n \n 根据{'date':date,'index':False}\n 找出需要更新的数据;\n \n create update_requests = []\n 对daily_cursor进行for循环\n for daily in daily_cursor:\n is_trading = True\n \n if daily['volume'] == 0:\n is_trading = False\n \n update_requests.append\n 批量写入\n \"\"\"\n pass\ndef fill_daily_k_at_suspension_days():\n \"\"\"\n \n \"\"\"\n \"\"\"\n set before = datetime.datetime.now() - timedelta(days=1)\n while True:\n last_trading_date = before.strftime('%Y-%m-%d')\n cursor : {'date':last_trading_date}\n show 'code', 'timeToMarket' into basics\n if find data, break the while loop\n \n before -= timedelta(days=1)\n retrieve the trading date\n all_dates = get_trading_dates()\n \n fill_daily_k_at_suspension_days_at_date_one_collection(basics, all_dates, 'daily')\n fill_daily_k_at_suspension_days_at_date_one_collection(basics, all_dates, 'daily_hfq')\n \"\"\"\n pass\ndef fill_daily_k_at_suspension_days_at_date_one_colleciton():\n \"\"\"\n create a dict : code_last_trading_daily_dict = {}\n for date in all_dates:\n create a update_requests = []\n # a trick\n last_daily_code_set = set(code_last_trading_daily_dict.keys())\n for basic in basics:\n code = basic['code']\n \n if date < basic['timeToMarket']:\n print('XX还没上市')\n else:\n daily = DB_CONN[collection].find_one({'code':code, \n 'date':date, 'index':False})\n if daily is not None:\n code_last_trading_daily_dict[code] = daily\n last_daily_code_set.add(code)\n \"\"\"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc55e6bee646c06973a63a4d8db435d638abb5a | 3,765 | ipynb | Jupyter Notebook | 00_01_python_refresher/Statistics.ipynb | ibrahimasall/udacity-nd880-aitdn | 87bfaf7617ab3acb4535ebfdab18d6be6e8ba53b | [
"MIT"
] | null | null | null | 00_01_python_refresher/Statistics.ipynb | ibrahimasall/udacity-nd880-aitdn | 87bfaf7617ab3acb4535ebfdab18d6be6e8ba53b | [
"MIT"
] | null | null | null | 00_01_python_refresher/Statistics.ipynb | ibrahimasall/udacity-nd880-aitdn | 87bfaf7617ab3acb4535ebfdab18d6be6e8ba53b | [
"MIT"
] | null | null | null | 24.290323 | 112 | 0.495086 | [
[
[
"# Descriptive Statistics",
"_____no_output_____"
],
[
"## Measures of Center\nThe are three measurues of center: **Mean, Median, Mode**.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom scipy import stats\n\nx = np.array([8, 12, 32, 10, 3, 4, 4, 4, 4, 5, 12, 20])\n\nprint('data: {}'.format(x))\nprint('mean: {:.2f}'.format(np.mean(x)))\nprint('median: {:.2f}'.format(np.median(x)))\nprint('mode: {}'.format(stats.mode(x)))",
"data: [ 8 12 32 10 3 4 4 4 4 5 12 20]\nmean: 9.83\nmedian: 6.50\nmode: ModeResult(mode=array([4]), count=array([4]))\n"
]
],
[
[
"# Measures of Spread\n\nCommon measures of spread include : **Range, Interquartile Range (IQR), Standard Deviation, Variance.**\n\nThe five number summary consist of 5 values:\n\n1. Minimum: The smallest number in the dataset.\n2. 1st quartile: The value such that 25% of the data fall below.\n3. 2nd quartile: The value such that 50% of the data fall below.\n4. 3rd quartile: The value such that 75% of the data fall below.\n5. Maximum: The largest value in the dataset.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nx = np.array([1, 5, 10, 3, 8, 12, 4, 1, 2, 8])\n\n# we can either do it using numpy\nprint('data: {}'.format(x))\nprint('std: {:.2f}'.format(np.std(x, ddof=1)))\nprint('min: {:.2f}'.format(np.min(x)))\nprint('1st quartile: {:.2f}'.format(np.quantile(x, 0.25)))\nprint('2nd quartile (median): {:.2f}'.format(np.quantile(x, 0.5)))\nprint('3rd quartile: {:.2f}'.format(np.quantile(x, 0.75)))\nprint('max: {:.2f}'.format(np.max(x)))\n\n# or using pandas\npd.Series(x).describe()",
"data: [ 1 5 10 3 8 12 4 1 2 8]\nstd: 3.89\nmin: 1.00\n1st quartile: 2.25\n2nd quartile (median): 4.50\n3rd quartile: 8.00\nmax: 12.00\n"
]
],
[
[
"TODO: histogram, boxplot, qqplot, skewness, kurstosis",
"_____no_output_____"
]
]
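,
[
[
"A possible sketch of the items listed in the TODO above (not part of the original notebook): histogram, boxplot, a Q-Q plot against the normal distribution, and sample skewness/kurtosis via scipy.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nx = np.array([1, 5, 10, 3, 8, 12, 4, 1, 2, 8])\n\nplt.hist(x, bins=5)   # histogram\nplt.show()\nplt.boxplot(x)        # boxplot\nplt.show()\nstats.probplot(x, dist='norm', plot=plt)  # Q-Q plot against a normal distribution\nplt.show()\n\nprint('skewness: {:.2f}'.format(stats.skew(x)))\nprint('kurtosis: {:.2f}'.format(stats.kurtosis(x)))",
"_____no_output_____"
]
]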
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecc56278d8915f45774f9e89a7c1eabbcd82763a | 140,080 | ipynb | Jupyter Notebook | notebooks/test_gaussian_kernel.ipynb | Dai-z/pytorch-superpoint | 90e71045238fdcce13f9f0d02bdd0e1126145a10 | [
"MIT"
] | 390 | 2019-12-16T07:36:02.000Z | 2022-03-29T07:27:32.000Z | notebooks/test_gaussian_kernel.ipynb | Dai-z/pytorch-superpoint | 90e71045238fdcce13f9f0d02bdd0e1126145a10 | [
"MIT"
] | 69 | 2019-12-14T20:38:44.000Z | 2022-03-25T12:53:21.000Z | notebooks/test_gaussian_kernel.ipynb | Dai-z/pytorch-superpoint | 90e71045238fdcce13f9f0d02bdd0e1126145a10 | [
"MIT"
] | 98 | 2020-01-07T04:29:17.000Z | 2022-03-30T16:09:31.000Z | 325.767442 | 80,456 | 0.926406 | [
[
[
"## test bilinear sampling",
"_____no_output_____"
]
],
[
[
"from scipy import interpolate\nx = np.arange(-5.01, 5.01, 0.25)\ny = np.arange(-5.01, 5.01, 0.25)\nxx, yy = np.meshgrid(x, y)\nz = np.sin(xx**2+yy**2)\nf = interpolate.interp2d(x, y, z, kind='cubic')",
"_____no_output_____"
],
[
"xnew = np.arange(-5.01, 5.01, 1e-2)\nynew = np.arange(-5.01, 5.01, 1e-2)\nznew = f(xnew, ynew)\nplt.plot(x, z[0, :], 'ro-', xnew, znew[0, :], 'b-')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## test gaussian kernel",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\n# add your module path\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path)\n# change your base path\nos.chdir('../')\nprint(os.getcwd())",
"/home/yoyee/Documents/deepSfm\n"
],
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"from scipy.ndimage import gaussian_filter\na = np.arange(50, step=2).reshape((5,5))\na = np.zeros((5,5))\na[2,2] = 1\nprint(\"a: \", a)\nplt.imshow(a)\nplt.show()\n\na_filtered = gaussian_filter(a, sigma=3)\nprint(\"a_filtered: \", a_filtered)\nplt.imshow(a_filtered)\nplt.show()",
"a: [[0. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0.]\n [0. 0. 1. 0. 0.]\n [0. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0.]]\n"
],
[
"from scipy import misc\nimport matplotlib.pyplot as plt\nfig = plt.figure()\nplt.gray() # show the filtered result in grayscale\nax1 = fig.add_subplot(121) # left side\nax2 = fig.add_subplot(122) # right side\nascent = misc.ascent()\nresult = gaussian_filter(ascent, sigma=5)\nax1.imshow(ascent)\nax2.imshow(result)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## extropolate points",
"_____no_output_____"
]
],
[
[
"import torch\ndef extrapolate_points(pnts):\n pnts_int = pnts.long().type(torch.FloatTensor)\n pnts_x, pnts_y = pnts_int[:,0], pnts_int[:,1]\n\n stack_1 = lambda x, y: torch.stack((x, y), dim=1)\n pnts_ext = torch.cat((pnts_int, stack_1(pnts_x, pnts_y+1),\n stack_1(pnts_x+1, pnts_y), pnts_int+1), dim=0)\n\n pnts_res = pnts - pnts_int # (x, y)\n x_res, y_res = pnts_res[:,0], pnts_res[:,1] # residuals\n res_ext = torch.cat(((1-x_res)*(1-y_res), (1-x_res)*y_res, \n x_res*(1-y_res), x_res*y_res), dim=0)\n return pnts_ext, res_ext",
"_____no_output_____"
],
[
"import torch\n# pnts = torch.tensor([[1.5, 1.5],[2.2, 2.4]])\npnts = torch.tensor([[2.2, 2.4]])\npnts_ext, res_ext = extrapolate_points(pnts)\nprint(\"pnts_ext: \", pnts_ext)\nprint(\"res_ext: \", res_ext)\n# torch.cat()",
"pnts_ext: tensor([[2., 2.],\n [2., 3.],\n [3., 2.],\n [3., 3.]])\nres_ext: tensor([0.4800, 0.3200, 0.1200, 0.0800])\n"
]
],
[
[
"## gaussian augmentation",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport imgaug.augmenters as iaa\nfrom imgaug import parameters as iap\n\nimages = np.random.randint(0, 255, (16, 128, 128, 3), dtype=np.uint8)\nimages = np.zeros((2,5,5,1), dtype=np.uint8)\nimages[:,2,2] = 255\nseq = iaa.Sequential([iaa.GaussianBlur((3.0))])\n\n# images_aug = seq(images)\n# images_aug = iaa.GaussianBlur(3.0)(images=images)\n\n# Show an image with 8*8 augmented versions of image 0 and 8*8 augmented\n# versions of image 1. Identical augmentations will be applied to\n# image 0 and 1.\n# seq.show_grid([images[0], images[1]], cols=8, rows=8)\n\nblurer = iaa.GaussianBlur(sigma=0.2)\nimages_aug = blurer.augment_images(images)\nprint(\"images:\", images.shape)\nplt.imshow(images_aug[0].squeeze())\nprint(\"images_aug[0].squeeze() \", images_aug[0].squeeze())",
"images: (2, 5, 5, 1)\nimages_aug[0].squeeze() [[ 0 0 0 0 0]\n [ 0 0 0 0 0]\n [ 0 0 255 0 0]\n [ 0 0 0 0 0]\n [ 0 0 0 0 0]]\n"
],
[
"from imgaug import augmenters as iaa\nimport numpy as np\n\n\nimages = np.random.randint(0, 255, (16, 128, 128, 3), dtype=np.uint8)\n\n# always horizontally flip each input image\nimages_aug = iaa.Fliplr(1.0)(images=images)\n\n# vertically flip each input image with 90% probability\nimages_aug = iaa.Flipud(0.9)(images=images)\n\n# blur image 2 by a sigma of 3.0\nimages_aug = iaa.GaussianBlur(3.0)(images=images)\n\n# move each input image by 8 to 16px to the left\nimages_aug = iaa.Affine(translate_px={\"x\": (-8, -16)})(images=images)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecc5719530c0343604e9c0013abe6e3ca037e4ed | 5,574 | ipynb | Jupyter Notebook | .ipynb_checkpoints/interactive_sinus-checkpoint.ipynb | zolabar/Interactive-Calculus | 5b4b01124eba7a981e4e9df7afcb6ab33cd7341f | [
"MIT"
] | 1 | 2022-03-11T01:26:50.000Z | 2022-03-11T01:26:50.000Z | .ipynb_checkpoints/interactive_sinus-checkpoint.ipynb | zolabar/Interactive-Calculus | 5b4b01124eba7a981e4e9df7afcb6ab33cd7341f | [
"MIT"
] | null | null | null | .ipynb_checkpoints/interactive_sinus-checkpoint.ipynb | zolabar/Interactive-Calculus | 5b4b01124eba7a981e4e9df7afcb6ab33cd7341f | [
"MIT"
] | null | null | null | 29.648936 | 149 | 0.426982 | [
[
[
"from ipywidgets import interactive, interact\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport ipywidgets as widgets\nimport sympy as sym\nimport seaborn as sns\nimport plotly.graph_objects as go\nfrom plotly.offline import init_notebook_mode, iplot\nfrom numba import jit\n\ninit_notebook_mode(connected=True)\njit(nopython=True, parallel=True)\nsns.set()\n",
"_____no_output_____"
],
[
"\n\nclass plot():\n \n def __init__(self, preWidgetN):\n \n self.N = preWidgetN\n \n x,y,n ,k = sym.symbols('x, y,n,k', real=True)\n X=np.linspace(0, 10, 100)\n \n f = sym.Sum((-1)**k*(x**(2*k+1))/(sym.factorial(2*k+1)),(k,0, n))\n #f = sym.Sum((-1)**k*(x**(2*k))/(sym.factorial(2*k)),(k,0, n))\n #print(sym.latex(f))\n f = f.subs(n, self.N.value)\n f = sym.lambdify(x, f)\n self.trace1 = go.Scatter(x=X, y=np.sin(X),\n mode='lines+markers',\n name='sin'\n )\n self.trace2 = go.Scatter(x=X, y=f(X),\n mode='lines',\n name=r'$\\sum_{k=0}^{%s} \\frac{\\left(-1\\right)^{k} x^{2 k + 1}}{\\left(2 k + 1\\right)!}$' %(self.N.value)\n )\n \n layout = go.Layout(template='plotly_dark')\n\n self.fig = go.FigureWidget(data=[self.trace1, self.trace2], \n layout = layout,\n layout_yaxis_range=[-3 , 3]\n )\n\n\n def sineSeries(self, change):\n\n x,y,n ,k = sym.symbols('x, y,n,k', real=True)\n X=np.linspace(0, 10, 100)\n f = sym.Sum((-1)**k*(x**(2*k+1))/(sym.factorial(2*k+1)),(k,0, n))\n #f = sym.Sum((-1)**k*(x**(2*k))/(sym.factorial(2*k)),(k,0, n))\n f = f.subs(n, self.N.value)\n f = sym.lambdify(x, f)\n\n\n\n\n with self.fig.batch_update():\n self.fig.data[1].x = X\n self.fig.data[1].y = f(X)\n self.fig.data[1].name = r'$\\sum_{k=0}^{%s} \\frac{\\left(-1\\right)^{k} x^{2 k + 1}}{\\left(2 k + 1\\right)!}$' %(self.N.value)\n\n return \n \n def show(self):\n self.N.observe(self.sineSeries, names='value')\n display(self.N, self.fig)\n return",
"_____no_output_____"
],
[
"N = widgets.IntSlider(min=0, max=20, step=1, value=0, description='partial sum order')\n\np = plot(N)\n\np.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ecc5783021fbe8cd25db6d5bf7fb7f84c2070042 | 105,592 | ipynb | Jupyter Notebook | naive_bayes/Mini_Project_Naive_Bayes.ipynb | cathyxinxyz/mini_projects | feafe2f9989e7d47d4f827b2132463c30585b06f | [
"MIT"
] | null | null | null | naive_bayes/Mini_Project_Naive_Bayes.ipynb | cathyxinxyz/mini_projects | feafe2f9989e7d47d4f827b2132463c30585b06f | [
"MIT"
] | null | null | null | naive_bayes/Mini_Project_Naive_Bayes.ipynb | cathyxinxyz/mini_projects | feafe2f9989e7d47d4f827b2132463c30585b06f | [
"MIT"
] | null | null | null | 69.149967 | 31,408 | 0.745738 | [
[
[
"# Basic Text Classification with Naive Bayes\n***\nIn the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on [Lab 10 of Harvard's CS109](https://github.com/cs109/2015lab10) class. Please free to go to the original lab for additional exercises and solutions.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport scipy as sp\nimport matplotlib as mpl\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nfrom six.moves import range\n\n# Setup Pandas\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)\n\n# Setup Seaborn\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")",
"_____no_output_____"
]
],
[
[
"# Table of Contents\n\n* [Rotten Tomatoes Dataset](#Rotten-Tomatoes-Dataset)\n * [Explore](#Explore)\n* [The Vector Space Model and a Search Engine](#The-Vector-Space-Model-and-a-Search-Engine)\n * [In Code](#In-Code)\n* [Naive Bayes](#Naive-Bayes)\n * [Multinomial Naive Bayes and Other Likelihood Functions](#Multinomial-Naive-Bayes-and-Other-Likelihood-Functions)\n * [Picking Hyperparameters for Naive Bayes and Text Maintenance](#Picking-Hyperparameters-for-Naive-Bayes-and-Text-Maintenance)\n* [Interpretation](#Interpretation)\n",
"_____no_output_____"
],
[
"## Rotten Tomatoes Dataset",
"_____no_output_____"
]
],
[
[
"critics = pd.read_csv('./critics.csv')\n#let's drop rows with missing quotes\ncritics = critics[~critics.quote.isnull()]\ncritics.head()",
"_____no_output_____"
]
],
[
[
"### Explore",
"_____no_output_____"
]
],
[
[
"n_reviews = len(critics)\nn_movies = critics.rtid.unique().size\nn_critics = critics.critic.unique().size\n\n\nprint(\"Number of reviews: {:d}\".format(n_reviews))\nprint(\"Number of critics: {:d}\".format(n_critics))\nprint(\"Number of movies: {:d}\".format(n_movies))",
"Number of reviews: 15561\nNumber of critics: 623\nNumber of movies: 1921\n"
],
[
"df = critics.copy()\ndf['fresh'] = df.fresh == 'fresh'\ngrp = df.groupby('critic')\ncounts = grp.critic.count() # number of reviews by each critic\nmeans = grp.fresh.mean() # average freshness for each critic\n\nmeans[counts > 100].hist(bins=10, edgecolor='w', lw=1)\nplt.xlabel(\"Average Rating per critic\")\nplt.ylabel(\"Number of Critics\")\nplt.yticks([0, 2, 4, 6, 8, 10]);",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set I</h3>\n<br/>\n<b>Exercise:</b> Look at the histogram above. Tell a story about the average ratings per critic. What shape does the distribution look like? What is interesting about the distribution? What might explain these interesting things?\n</div>",
"_____no_output_____"
],
[
"The rating per critic shows a bimodal distribution. The distribution suggests the most likely pattern of a ranomdly chosen critic: a critic tend less to rate a movie very high, very low or at the middle. Intead, the largest group of critics are those who on average gives a rating right above middle (0.6). ",
"_____no_output_____"
],
[
"## The Vector Space Model and a Search Engine",
"_____no_output_____"
],
[
"All the diagrams here are snipped from [*Introduction to Information Retrieval* by Manning et. al.]( http://nlp.stanford.edu/IR-book/) which is a great resource on text processing. For additional information on text mining and natural language processing, see [*Foundations of Statistical Natural Language Processing* by Manning and Schutze](http://nlp.stanford.edu/fsnlp/).\n\nAlso check out Python packages [`nltk`](http://www.nltk.org/), [`spaCy`](https://spacy.io/), [`pattern`](http://www.clips.ua.ac.be/pattern), and their associated resources. Also see [`word2vec`](https://en.wikipedia.org/wiki/Word2vec).\n\nLet us define the vector derived from document $d$ by $\\bar V(d)$. What does this mean? Each document is treated as a vector containing information about the words contained in it. Each vector has the same length and each entry \"slot\" in the vector contains some kind of data about the words that appear in the document such as presence/absence (1/0), count (an integer) or some other statistic. Each vector has the same length because each document shared the same vocabulary across the full collection of documents -- this collection is called a *corpus*.\n\nTo define the vocabulary, we take a union of all words we have seen in all documents. We then just associate an array index with them. So \"hello\" may be at index 5 and \"world\" at index 99.\n\nSuppose we have the following corpus:\n\n`A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree. The grapes seemed ready to burst with juice, and the Fox's mouth watered as he gazed longingly at them.`\n\nSuppose we treat each sentence as a document $d$. The vocabulary (often called the *lexicon*) is the following:\n\n$V = \\left\\{\\right.$ `a, along, and, as, at, beautiful, branches, bunch, burst, day, fox, fox's, from, gazed, grapes, hanging, he, juice, longingly, mouth, of, one, ready, ripe, seemed, spied, the, them, to, trained, tree, vine, watered, with`$\\left.\\right\\}$\n\nThen the document\n\n`A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree`\n\nmay be represented as the following sparse vector of word counts:\n\n$$\\bar V(d) = \\left( 4,1,0,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,2,1,0,1,0,0,1,0,0,0,1,1,0,0 \\right)$$\n\nor more succinctly as\n\n`[(0, 4), (1, 1), (5, 1), (6, 1), (7, 1), (9, 1), (10, 1), (12, 1), (14, 1), (15, 1), (20, 2), (21, 1), (23, 1),`\n`(26, 1), (30, 1), (31, 1)]`\n\nalong with a dictionary\n\n``\n{\n 0: a, 1: along, 5: beautiful, 6: branches, 7: bunch, 9: day, 10: fox, 12: from, 14: grapes, \n 15: hanging, 19: mouth, 20: of, 21: one, 23: ripe, 24: seemed, 25: spied, 26: the, \n 30: tree, 31: vine, \n}\n``\n\nThen, a set of documents becomes, in the usual `sklearn` style, a sparse matrix with rows being sparse arrays representing documents and columns representing the features/words in the vocabulary.\n\nNotice that this representation loses the relative ordering of the terms in the document. That is \"cat ate rat\" and \"rat ate cat\" are the same. Thus, this representation is also known as the Bag-Of-Words representation.\n\nHere is another example, from the book quoted above, although the matrix is transposed here so that documents are columns:\n\n\n\nSuch a matrix is also catted a Term-Document Matrix. Here, the terms being indexed could be stemmed before indexing; for instance, `jealous` and `jealousy` after stemming are the same feature. 
One could also make use of other \"Natural Language Processing\" transformations in constructing the vocabulary. We could use Lemmatization, which reduces words to lemmas: work, working, worked would all reduce to work. We could remove \"stopwords\" from our vocabulary, such as common words like \"the\". We could look for particular parts of speech, such as adjectives. This is often done in Sentiment Analysis. And so on. It all depends on our application.\n\nFrom the book:\n>The standard way of quantifying the similarity between two documents $d_1$ and $d_2$ is to compute the cosine similarity of their vector representations $\\bar V(d_1)$ and $\\bar V(d_2)$:\n\n$$S_{12} = \\frac{\\bar V(d_1) \\cdot \\bar V(d_2)}{|\\bar V(d_1)| \\times |\\bar V(d_2)|}$$\n\n\n\n\n>There is a far more compelling reason to represent documents as vectors: we can also view a query as a vector. Consider the query q = jealous gossip. This query turns into the unit vector $\\bar V(q)$ = (0, 0.707, 0.707) on the three coordinates below. \n\n\n\n>The key idea now: to assign to each document d a score equal to the dot product:\n\n$$\\bar V(q) \\cdot \\bar V(d)$$\n\nThen we can use this simple Vector Model as a Search engine.",
"_____no_output_____"
],
[
"### In Code",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import CountVectorizer\n\ntext = ['Hop on pop', 'Hop off pop', 'Hop Hop hop']\nprint(\"Original text is\\n{}\".format('\\n'.join(text)))\n\nvectorizer = CountVectorizer(min_df=0)\n\n# call `fit` to build the vocabulary\nvectorizer.fit(text)\n\n# call `transform` to convert text to a bag of words\nx = vectorizer.transform(text)\n\n# CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to \n# convert back to a \"normal\" numpy array\nx = x.toarray()\n\nprint(\"\")\nprint(\"Transformed text vector is \\n{}\".format(x))\n\n# `get_feature_names` tracks which word is associated with each column of the transformed x\nprint(\"\")\nprint(\"Words for each feature:\")\nprint(vectorizer.get_feature_names())\n\n# Notice that the bag of words treatment doesn't preserve information about the *order* of words, \n# just their frequency",
"Original text is\nHop on pop\nHop off pop\nHop Hop hop\n\nTransformed text vector is \n[[1 0 1 1]\n [1 1 0 1]\n [3 0 0 0]]\n\nWords for each feature:\n['hop', 'off', 'on', 'pop']\n"
],
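[
"To connect this back to the search-engine idea above, here is a small sketch (an editorial addition, not part of the original lab; the toy documents and the query are invented). It vectorizes a few documents with `CountVectorizer` and ranks them against the query `jealous gossip` by cosine similarity.\n\n```python\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer\n\ndocs = ['jealous gossip', 'the fox spied the ripe grapes', 'gossip about the jealous fox']\nbow = CountVectorizer()\nD = bow.fit_transform(docs).toarray().astype(float)   # term counts, one row per document\nq = bow.transform(['jealous gossip']).toarray().astype(float)[0]\n\n# score(d) = cosine(V(q), V(d)) = V(q).V(d) / (|V(q)| * |V(d)|)\nscores = (D @ q) / (np.linalg.norm(D, axis=1) * np.linalg.norm(q))\nprint(scores)  # the highest-scoring document is the best match for the query\n```",
"_____no_output_____"
],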
[
"def make_xy(critics, vectorizer=None):\n #Your code here \n if vectorizer is None:\n vectorizer = CountVectorizer()\n X = vectorizer.fit_transform(critics.quote)\n X = X.toarray() \n y = (critics.fresh == 'fresh').values.astype(np.int)\n return X, y\nX, y = make_xy(critics)",
"_____no_output_____"
],
[
"X.shape",
"_____no_output_____"
]
],
[
[
"## Naive Bayes",
"_____no_output_____"
],
[
"From Bayes' Theorem, we have that\n\n$$P(c \\vert f) = \\frac{P(c \\cap f)}{P(f)}$$\n\nwhere $c$ represents a *class* or category, and $f$ represents a feature vector, such as $\\bar V(d)$ as above. **We are computing the probability that a document (or whatever we are classifying) belongs to category *c* given the features in the document.** $P(f)$ is really just a normalization constant, so the literature usually writes Bayes' Theorem in context of Naive Bayes as\n\n$$P(c \\vert f) \\propto P(f \\vert c) P(c) $$\n\n$P(c)$ is called the *prior* and is simply the probability of seeing class $c$. But what is $P(f \\vert c)$? This is the probability that we see feature set $f$ given that this document is actually in class $c$. This is called the *likelihood* and comes from the data. One of the major assumptions of the Naive Bayes model is that the features are *conditionally independent* given the class. While the presence of a particular discriminative word may uniquely identify the document as being part of class $c$ and thus violate general feature independence, conditional independence means that the presence of that term is independent of all the other words that appear *within that class*. This is a very important distinction. Recall that if two events are independent, then:\n\n$$P(A \\cap B) = P(A) \\cdot P(B)$$\n\nThus, conditional independence implies\n\n$$P(f \\vert c) = \\prod_i P(f_i | c) $$\n\nwhere $f_i$ is an individual feature (a word in this example).\n\nTo make a classification, we then choose the class $c$ such that $P(c \\vert f)$ is maximal.\n\nThere is a small caveat when computing these probabilities. For [floating point underflow](http://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html) we change the product into a sum by going into log space. This is called the LogSumExp trick. So:\n\n$$\\log P(f \\vert c) = \\sum_i \\log P(f_i \\vert c) $$\n\nThere is another caveat. What if we see a term that didn't exist in the training data? This means that $P(f_i \\vert c) = 0$ for that term, and thus $P(f \\vert c) = \\prod_i P(f_i | c) = 0$, which doesn't help us at all. Instead of using zeros, we add a small negligible value called $\\alpha$ to each count. This is called Laplace Smoothing.\n\n$$P(f_i \\vert c) = \\frac{N_{ic}+\\alpha}{N_c + \\alpha N_i}$$\n\nwhere $N_{ic}$ is the number of times feature $i$ was seen in class $c$, $N_c$ is the number of times class $c$ was seen and $N_i$ is the number of times feature $i$ was seen globally. $\\alpha$ is sometimes called a regularization parameter.",
"_____no_output_____"
],
[
"### Multinomial Naive Bayes and Other Likelihood Functions\n\nSince we are modeling word counts, we are using variation of Naive Bayes called Multinomial Naive Bayes. This is because the likelihood function actually takes the form of the multinomial distribution.\n\n$$P(f \\vert c) = \\frac{\\left( \\sum_i f_i \\right)!}{\\prod_i f_i!} \\prod_{f_i} P(f_i \\vert c)^{f_i} \\propto \\prod_{i} P(f_i \\vert c)$$\n\nwhere the nasty term out front is absorbed as a normalization constant such that probabilities sum to 1.\n\nThere are many other variations of Naive Bayes, all which depend on what type of value $f_i$ takes. If $f_i$ is continuous, we may be able to use *Gaussian Naive Bayes*. First compute the mean and variance for each class $c$. Then the likelihood, $P(f \\vert c)$ is given as follows\n\n$$P(f_i = v \\vert c) = \\frac{1}{\\sqrt{2\\pi \\sigma^2_c}} e^{- \\frac{\\left( v - \\mu_c \\right)^2}{2 \\sigma^2_c}}$$",
"_____no_output_____"
],
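[
"To make the decision rule above concrete, here is a small sketch (an editorial addition, not part of the original lab; the toy counts, priors and vocabulary are invented, and the smoothing denominator used here is the common $N_c + \\alpha |V|$ variant). It scores each class in log space with Laplace smoothing and picks the class with the highest log-posterior.\n\n```python\nimport numpy as np\n\n# toy word counts per class: rows = classes (rotten, fresh), columns = vocabulary terms\ncounts = np.array([[3., 0., 2.],\n                   [1., 4., 1.]])\npriors = np.array([0.4, 0.6])\nalpha = 1.0  # Laplace smoothing\n\n# P(f_i | c) with smoothing, one row per class\nprobs = (counts + alpha) / (counts.sum(axis=1, keepdims=True) + alpha * counts.shape[1])\n\nx = np.array([2., 1., 0.])  # word counts of a new document\nlog_posterior = np.log(priors) + x @ np.log(probs).T  # log P(c) + sum_i f_i * log P(f_i | c)\nprint(log_posterior.argmax())  # index of the predicted class\n```",
"_____no_output_____"
],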
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set II</h3>\n\n<p><b>Exercise:</b> Implement a simple Naive Bayes classifier:</p>\n\n<ol>\n<li> split the data set into a training and test set\n<li> Use `scikit-learn`'s `MultinomialNB()` classifier with default parameters.\n<li> train the classifier over the training set and test on the test set\n<li> print the accuracy scores for both the training and the test sets\n</ol>\n\nWhat do you notice? Is this a good classifier? If not, why not?\n</div>",
"_____no_output_____"
]
],
[
[
"#your turn\nfrom sklearn.naive_bayes import MultinomialNB \nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.25, random_state=42)\nclf = MultinomialNB()\nclf.fit(X_train,y_train)",
"_____no_output_____"
],
[
"train_accuracy=clf.score(X_train,y_train)",
"_____no_output_____"
],
[
"test_accuracy=clf.score(X_test, y_test)",
"_____no_output_____"
],
[
"print ('the train accuracy is {} and test accuracy is {}'.format(train_accuracy, test_accuracy))",
"the train accuracy is 0.9209083119108826 and test accuracy is 0.7782061166795168\n"
]
],
[
[
"The accuracy of training data is high, but the accuracy of test data is relatively low. The classfier is not accurate enough. ",
"_____no_output_____"
],
[
"### Picking Hyperparameters for Naive Bayes and Text Maintenance",
"_____no_output_____"
],
[
"We need to know what value to use for $\\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition to stopwords appear so frequently that they may also serve as noise.",
"_____no_output_____"
],
[
"First, let's find an appropriate value for `min_df` for the `CountVectorizer`. `min_df` can be either an integer or a float/decimal. If it is an integer, `min_df` represents the minimum number of documents a word must appear in for it to be included in the vocabulary. If it is a float, it represents the minimum *percentage* of documents a word must appear in to be included in the vocabulary. From the documentation:",
"_____no_output_____"
],
[
">min_df: When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.",
"_____no_output_____"
],
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set III</h3>\n\n<p><b>Exercise:</b> Construct the cumulative distribution of document frequencies (df). The $x$-axis is a document count $x_i$ and the $y$-axis is the percentage of words that appear less than $x_i$ times. For example, at $x=5$, plot a point representing the percentage or number of words that appear in 5 or fewer documents.</p>\n\n<p><b>Exercise:</b> Look for the point at which the curve begins climbing steeply. This may be a good value for `min_df`. If we were interested in also picking `max_df`, we would likely pick the value where the curve starts to plateau. What value did you choose?</p>\n</div>",
"_____no_output_____"
]
],
[
[
"# Your turn.\nfreq_of_doc=(pd.DataFrame(X)>0).sum(axis=0)",
"_____no_output_____"
],
[
"freq_of_doc.sort_values()",
"_____no_output_____"
],
[
"freq_of_doc.value_counts().sort_index().cumsum(axis=0).plot()\nplt.xscale('log')\nplt.xlabel('maximum number of documents a word appears in')\nplt.ylabel('cumulative count')",
"_____no_output_____"
]
],
[
[
"There is no particular point where the curve starts to steeply climb, so min_df could be just at default 5. There is a clear point where the curve starts to approach plateau, which is around 100, so max_df=100. ",
"_____no_output_____"
],
[
"The parameter $\\alpha$ is chosen to be a small value that simply avoids having zeros in the probability computations. This value can sometimes be chosen arbitrarily with domain expertise, but we will use K-fold cross validation. In K-fold cross-validation, we divide the data into $K$ non-overlapping parts. We train on $K-1$ of the folds and test on the remaining fold. We then iterate, so that each fold serves as the test fold exactly once. The function `cv_score` performs the K-fold cross-validation algorithm for us, but we need to pass a function that measures the performance of the algorithm on each fold. ",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import KFold\ndef cv_score(clf, X, y, scorefunc):\n result = 0.\n nfold = 5\n for train, test in KFold(nfold).split(X): # split data into train/test groups, 5 times\n clf.fit(X[train], y[train]) # fit the classifier, passed is as clf.\n result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data\n return result / nfold # average",
"_____no_output_____"
]
],
[
[
"We use the log-likelihood as the score here in `scorefunc`. The higher the log-likelihood, the better. Indeed, what we do in `cv_score` above is to implement the cross-validation part of `GridSearchCV`.\n\nThe custom scoring function `scorefunc` allows us to use different metrics depending on the decision risk we care about (precision, accuracy, profit etc.) directly on the validation set. You will often find people using `roc_auc`, precision, recall, or `F1-score` as the scoring function.",
"_____no_output_____"
]
],
[
[
"def log_likelihood(clf, x, y):\n prob = clf.predict_log_proba(x)\n rotten = y == 0\n fresh = ~rotten\n return prob[rotten, 0].sum() + prob[fresh, 1].sum()",
"_____no_output_____"
]
],
[
[
"We'll cross-validate over the regularization parameter $\\alpha$.",
"_____no_output_____"
],
[
"Let's set up the train and test masks first, and then we can run the cross-validation procedure.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n_, itest = train_test_split(range(critics.shape[0]), train_size=0.7)\nmask = np.zeros(critics.shape[0], dtype=np.bool)\nmask[itest] = True",
"C:\\anaconda\\lib\\site-packages\\sklearn\\model_selection\\_split.py:2026: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.\n FutureWarning)\n"
]
],
[
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set IV</h3>\n\n<p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p>\n\n<p><b>Exercise:</b> Without writing any code, what do you think would happen if you choose a value of $\\alpha$ that is too high?</p>\n\n<p><b>Exercise:</b> Using the skeleton code below, find the best values of the parameter `alpha`, and use the value of `min_df` you chose in the previous exercise set. Use the `cv_score` function above with the `log_likelihood` function for scoring.</p>\n</div>",
"_____no_output_____"
],
[
"answer to first two questions: \n \n i. What does using the function log_likelihood as the score mean? What are we trying to optimize for?\n \n A: Using function log_likelihood as the score means that we try to find the probability distribution of topics given a certain feature vector. The topic with highest probability density is the topic that the document most likely belongs to. \n \n ii. Without writing any code, what do you think would happen if you choose a value of α that is too high?\n \n A: A high value of α would reduce the impact of features on the classification and cause underfitting. ",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import MultinomialNB\n\n#the grid of parameters to search over\nalphas = [.1, 1, 5, 10, 50]\nbest_min_df = 0 # YOUR TURN: put your value of min_df here.\n\n#Find the best value for alpha and min_df, and the best classifier\nbest_alpha = None\nmaxscore=-np.inf\nloglks=list()\n\nfor j,alpha in enumerate(alphas): \n vectorizer = CountVectorizer(min_df=best_min_df) \n Xthis, ythis = make_xy(critics, vectorizer)\n Xtrainthis = Xthis[mask]\n ytrainthis = ythis[mask]\n clf = MultinomialNB(alpha=alpha)\n loglks.append(cv_score(clf, Xtrainthis, ytrainthis, log_likelihood))\n\nbest_alpha=alphas[loglks.index(max(loglks))]\nprint ('best alpha is ', best_alpha)",
"best alpha is 1\n"
]
],
[
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set V: Working with the Best Parameters</h3>\n\n<p><b>Exercise:</b> Using the best value of `alpha` you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?</p>\n\n</div>",
"_____no_output_____"
]
],
[
[
"vectorizer = CountVectorizer(min_df=best_min_df)\nX, y = make_xy(critics, vectorizer)\nxtrain=X[mask]\nytrain=y[mask]\nxtest=X[~mask]\nytest=y[~mask]\n\nclf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)\n\n#your turn. Print the accuracy on the test and training dataset\ntraining_accuracy = clf.score(xtrain, ytrain)\ntest_accuracy = clf.score(xtest, ytest)\n\nprint(\"Accuracy on training data: {:2f}\".format(training_accuracy))\nprint(\"Accuracy on test data: {:2f}\".format(test_accuracy))",
"Accuracy on training data: 0.931677\nAccuracy on test data: 0.734392\n"
]
],
[
[
"The accuracy on training data is improved but the accuracy on test data decreased. This may be because of the change in test-train-split percentage. In previous exercise, 75% data are saved for training and 25% are saved for testing. The training/testing split becomes 70%/30% in this section, which means fewer data are saved for training while more for testing. In terms of bias-variance tradeoff, this may cause increase in bias but reduction in variance. So fitting on training data would be improved but fitting on testing data would perform less well. ",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import confusion_matrix\nprint(confusion_matrix(ytest, clf.predict(xtest)))",
"[[2000 2238]\n [ 618 6036]]\n"
]
],
[
[
"## Interpretation",
"_____no_output_____"
],
[
"### What are the strongly predictive features?\n\nWe use a neat trick to identify strongly predictive features (i.e. words). \n\n* first, create a data set such that each row has exactly one feature. This is represented by the identity matrix.\n* use the trained classifier to make predictions on this matrix\n* sort the rows by predicted probabilities, and pick the top and bottom $K$ rows",
"_____no_output_____"
]
],
[
[
"words = np.array(vectorizer.get_feature_names())\n\nx = np.eye(xtest.shape[1])\nprobs = clf.predict_log_proba(x)[:, 0]\nind = np.argsort(probs)\n\ngood_words = words[ind[:10]]\nbad_words = words[ind[-10:]]\n\ngood_prob = probs[ind[:10]]\nbad_prob = probs[ind[-10:]]\n\nprint(\"Good words\\t P(fresh | word)\")\nfor w, p in zip(good_words, good_prob):\n print(\"{:>20}\".format(w), \"{:.2f}\".format(1 - np.exp(p)))\n \nprint(\"Bad words\\t P(fresh | word)\")\nfor w, p in zip(bad_words, bad_prob):\n print(\"{:>20}\".format(w), \"{:.2f}\".format(1 - np.exp(p)))",
"Good words\t P(fresh | word)\n spectacular 0.96\n delight 0.95\n smart 0.95\n rare 0.95\n surprising 0.95\n beautifully 0.94\n masterpiece 0.94\n superb 0.94\n intimate 0.94\n absorbing 0.93\nBad words\t P(fresh | word)\n uninspired 0.11\n supposed 0.11\n begins 0.11\n unfunny 0.11\n dull 0.10\n pointless 0.10\n muddled 0.10\n problem 0.09\n unfortunately 0.08\n lame 0.06\n"
]
],
[
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VI</h3>\n\n<p><b>Exercise:</b> Why does this method work? What does the probability for each row in the identity matrix represent</p>\n\n</div>",
"_____no_output_____"
],
[
"The above exercise is an example of *feature selection*. There are many other feature selection methods. A list of feature selection methods available in `sklearn` is [here](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.feature_selection). The most common feature selection technique for text mining is the chi-squared $\\left( \\chi^2 \\right)$ [method](http://nlp.stanford.edu/IR-book/html/htmledition/feature-selectionchi2-feature-selection-1.html).",
"_____no_output_____"
],
[
"### Prediction Errors\n\nWe can see mis-predictions as well.",
"_____no_output_____"
]
],
[
[
"x, y = make_xy(critics, vectorizer)\n\nprob = clf.predict_proba(x)[:, 0]\npredict = clf.predict(x)\n\nbad_rotten = np.argsort(prob[y == 0])[:5]\nbad_fresh = np.argsort(prob[y == 1])[-5:]\n\nprint(\"Mis-predicted Rotten quotes\")\nprint('---------------------------')\nfor row in bad_rotten:\n print(critics[y == 0].quote.iloc[row])\n print(\"\")\n\nprint(\"Mis-predicted Fresh quotes\")\nprint('--------------------------')\nfor row in bad_fresh:\n print(critics[y == 1].quote.iloc[row])\n print(\"\")",
"Mis-predicted Rotten quotes\n---------------------------\nIt survives today only as an unusually pure example of a typical 50s art-film strategy: the attempt to make the most modern and most popular of art forms acceptable to the intelligentsia by forcing it into an arcane, antique mold.\n\nWhat if this lesser-known chapter of German resistance had been more deeply captured? What if the moral conflicts running through this movie about love of country and revolt said more about Germany, war and, yes, genocide?\n\nHerzog offers some evidence of Kinski's great human warmth, somewhat more of his rage of unimaginable proportions, and a good demonstration of Kinski's uncanny capacity to corkscrew his way into the frame.\n\nWhat emerges in the end is a strange ambiguity of attitude to the American political system and a hollow humour about cultural values. The cinema of cynicism, really.\n\nIf it's to be experienced at all, Return to Paradise is best seen as a lively piece of pulp, not a profound exploration of the vagaries of the human soul.\n\nMis-predicted Fresh quotes\n--------------------------\nDeftly structured by director Penny Marshall and writers Lowell Ganz and Babaloo Mandel to resemble a 40s musical (albeit, somewhat anachronistically, one in 'Scope); the rest is mainly streamlined and spirited teamwork.\n\nA gooey, swooning swatch of romantic hyperventilation, its queasy charms. And let it be said that surrendering to those charms could be as guilt-inducing as polishing off a pint of Haagen-Dazs chocolate ice cream before lunch.\n\nWhat it lacks in irony and suspense, Gilbert Adler's Tales From the Crypt Presents Bordello of Blood makes up for in whimsy and cheeky self-assurance.\n\nThis one is neither crude clowning nor crude prejudice, but a literate and knowingly directed satire which lands many a shrewd crack about phony Five Year Plans, collective farms, Communist jargon and pseudo-scientific gab.\n\nThe film has a fairly uninteresting narrative motor in its thriller subplot, but hits on an edgy black comic tone for Stretch and Spoon's increasingly pained dealings with the unsympathetic representatives of authority.\n\n"
]
],
[
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VII: Predicting the Freshness for a New Review</h3>\n<br/>\n<div>\n<b>Exercise:</b>\n<ul>\n<li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'*\n<li> Is the result what you'd expect? Why (not)?\n</ul>\n</div>\n</div>",
"_____no_output_____"
]
],
[
[
"#your turn\ntext=['This movie is not remarkable, touching, or superb in any way'] \nvectorizer = CountVectorizer()\nvectorizer.fit(critics.quote)\nx = vectorizer.transform(text)\nx = x.toarray()\n\npredict = clf.predict(x)\ntags=['rotten', 'fresh']\nprint ('this sentence is predicted as ', tags[int(predict)])",
"this sentence is predicted as fresh\n"
]
],
[
[
"This is expected because it contains word \"superb\" that is expected to occur frequently among fresh comments.",
"_____no_output_____"
],
[
"### Aside: TF-IDF Weighting for Term Importance\n\nTF-IDF stands for \n\n`Term-Frequency X Inverse Document Frequency`.\n\nIn the standard `CountVectorizer` model above, we used just the term frequency in a document of words in our vocabulary. In TF-IDF, we weight this term frequency by the inverse of its popularity in all documents. For example, if the word \"movie\" showed up in all the documents, it would not have much predictive value. It could actually be considered a stopword. By weighing its counts by 1 divided by its overall frequency, we downweight it. We can then use this TF-IDF weighted features as inputs to any classifier. **TF-IDF is essentially a measure of term importance, and of how discriminative a word is in a corpus.** There are a variety of nuances involved in computing TF-IDF, mainly involving where to add the smoothing term to avoid division by 0, or log of 0 errors. The formula for TF-IDF in `scikit-learn` differs from that of most textbooks: \n\n$$\\mbox{TF-IDF}(t, d) = \\mbox{TF}(t, d)\\times \\mbox{IDF}(t) = n_{td} \\log{\\left( \\frac{\\vert D \\vert}{\\vert d : t \\in d \\vert} + 1 \\right)}$$\n\nwhere $n_{td}$ is the number of times term $t$ occurs in document $d$, $\\vert D \\vert$ is the number of documents, and $\\vert d : t \\in d \\vert$ is the number of documents that contain $t$",
"_____no_output_____"
]
],
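[
[
"As a quick check of the formula above, here is a tiny worked example (an editorial addition, not part of the original lab; the miniature term-document counts are invented, and this ignores the smoothing and normalization that `TfidfVectorizer` applies by default, so its numbers will differ).\n\n```python\nimport numpy as np\n\n# toy term-document counts: rows = documents, columns = terms\ntf = np.array([[2., 0., 1.],\n               [0., 1., 1.],\n               [1., 1., 0.]])\nn_docs = tf.shape[0]\ndf = (tf > 0).sum(axis=0)        # |d : t in d| for each term\nidf = np.log(n_docs / df + 1)    # IDF(t) as stated above\ntfidf = tf * idf                 # TF-IDF(t, d) = TF(t, d) * IDF(t)\nprint(tfidf)\n```",
"_____no_output_____"
]
],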
[
[
"# http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction\n# http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref\nfrom sklearn.feature_extraction.text import TfidfVectorizer\ntfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')\nXtfidf=tfidfvectorizer.fit_transform(critics.quote)",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VIII: Enrichment</h3>\n\n<p>\nThere are several additional things we could try. Try some of these as exercises:\n<ol>\n<li> Build a Naive Bayes model where the features are n-grams instead of words. N-grams are phrases containing n words next to each other: a bigram contains 2 words, a trigram contains 3 words, and 6-gram contains 6 words. This is useful because \"not good\" and \"so good\" mean very different things. On the other hand, as n increases, the model does not scale well since the feature set becomes more sparse.\n<li> Try a model besides Naive Bayes, one that would allow for interactions between words -- for example, a Random Forest classifier.\n<li> Try adding supplemental features -- information about genre, director, cast, etc.\n<li> Use word2vec or [Latent Dirichlet Allocation](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) to group words into topics and use those topics for prediction.\n<li> Use TF-IDF weighting instead of word counts.\n</ol>\n</p>\n\n<b>Exercise:</b> Try a few of these ideas to improve the model (or any other ideas of your own). Implement here and report on the result.\n</div>",
"_____no_output_____"
],
[
"### Your turn",
"_____no_output_____"
]
],
[
[
"# check the columns to pick information other than quotes\ncritics.head()",
"_____no_output_____"
],
[
"import scipy\n\ndf_otherinfo=pd.get_dummies(critics[['critic', 'imdb', 'publication']])",
"_____no_output_____"
],
[
"other_features=df_otherinfo.columns\nX_otherinfo=scipy.sparse.csr_matrix(df_otherinfo.values)",
"_____no_output_____"
],
[
"# construct a tfidf vectorizer\ntfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english', ngram_range=(1, 5))\nXtfidf=tfidfvectorizer.fit_transform(critics.quote)",
"_____no_output_____"
],
[
"# concatenate both matrix: one for tfidf words and ngrams and one for other features\nfrom scipy.sparse import hstack\n\nX=hstack((Xtfidf, X_otherinfo))\ny=(critics.fresh=='fresh').astype(int)",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.25, random_state=42)",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import GridSearchCV\n# initiate a random forest classifier\nclf = RandomForestClassifier()\n\n# hyperparameters to tune for random forest\nparam_grid = {\n 'n_estimators':[100, 500, 1000],\n 'max_features': [10, 'log2', 'sqrt'],\n 'max_depth':[8, 10, 40]\n}\n\ngrid = GridSearchCV(clf, cv=5, param_grid=param_grid) \ngrid.fit(X_train, y_train)",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix, accuracy_score\n\ny_trained_pred=grid.predict(X_train)\ny_test_pred=grid.predict(X_test)\naccuracy_train=accuracy_score(y_train, y_trained_pred)\naccuracy_test=accuracy_score(y_test, y_test_pred)",
"_____no_output_____"
],
[
"print ('accuracy of training data is', accuracy_train)\nprint ('confusion matrix:\\n')\nprint (confusion_matrix(y_train, y_trained_pred))\nprint ('accuracy of testing data is', accuracy_test)\nprint ('confusion matrix:\\n')\nprint (confusion_matrix(y_test, y_test_pred))",
"accuracy of training data is 0.611311053985\nconfusion matrix:\n\n[[ 46 4536]\n [ 0 7088]]\naccuracy of testing data is 0.616037008481\nconfusion matrix:\n\n[[ 3 1494]\n [ 0 2394]]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc5848902c46fd06a2ec6e71d7fcd71e0f59661 | 79,032 | ipynb | Jupyter Notebook | Supervised Learning/Data Normalization Forcast.ipynb | Pandula1234/Data-Analysis-Intern | 6c6a25e9b4587be1103e9c4b0c50fcb8c04d3062 | [
"Apache-2.0"
] | null | null | null | Supervised Learning/Data Normalization Forcast.ipynb | Pandula1234/Data-Analysis-Intern | 6c6a25e9b4587be1103e9c4b0c50fcb8c04d3062 | [
"Apache-2.0"
] | null | null | null | Supervised Learning/Data Normalization Forcast.ipynb | Pandula1234/Data-Analysis-Intern | 6c6a25e9b4587be1103e9c4b0c50fcb8c04d3062 | [
"Apache-2.0"
] | null | null | null | 47.898182 | 13,292 | 0.607235 | [
[
[
"# Importing Libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport math\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import normalize",
"_____no_output_____"
],
[
"Forcast=pd.read_csv('/Users/Asus/Documents/InternCsv/Forcast_Official.csv')",
"_____no_output_____"
],
[
"Forcast.head(4)",
"_____no_output_____"
],
[
"#Renaming the Unnamed Column into Plants and creating and Index\nForcast = Forcast.rename(columns={'Unnamed: 0': 'Plants'})",
"_____no_output_____"
],
[
"Forcast.drop('Plants',inplace=True,axis=1)",
"_____no_output_____"
],
[
"#Forcast.head(2)",
"_____no_output_____"
],
[
"Forcast.drop('Date',inplace=True,axis=1)",
"_____no_output_____"
],
[
"Plant_Forcast=Forcast.groupby([\"Plant\"])",
"_____no_output_____"
],
[
"Plant_Sum=Plant_Forcast.sum()",
"_____no_output_____"
],
[
"Plant_Sum[\"Total_Dispatch\"] = Plant_Sum.sum(axis=1)",
"_____no_output_____"
],
[
"Plant_Sum.head(5)",
"_____no_output_____"
],
[
"Plant_Sum.shape",
"_____no_output_____"
],
[
"Plant_Sum.reset_index(level=0,inplace=True)",
"_____no_output_____"
],
[
"Plant_Sum.head(3)",
"_____no_output_____"
],
[
"Plant_Sum.shape",
"_____no_output_____"
]
],
[
[
"## Define X & Y",
"_____no_output_____"
]
],
[
[
"x=Plant_Sum.iloc[:,1:48].values\ny=Plant_Sum.iloc[:,49].values",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
]
],
[
[
"## Split the dataset in training Set & test Set",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nx_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)",
"_____no_output_____"
],
[
"from sklearn.linear_model import LinearRegression\nml=LinearRegression()\nml.fit(x_train,y_train)",
"_____no_output_____"
],
[
"reg= linear_model.LinearRegression()\nreg.fit(Plant_Sum.iloc[:,1:48],Plant_Sum['Total_Dispatch'])",
"_____no_output_____"
],
[
"reg.coef_",
"_____no_output_____"
],
[
"reg.intercept_",
"_____no_output_____"
]
],
[
[
"## Evaluate Model",
"_____no_output_____"
]
],
[
[
"y_pred=ml.predict(x_test)\nprint(y_pred)",
"[ 3.20537872e+04 1.07711283e+05 6.92569119e+05 -5.82076609e-11\n 3.10851249e+03 8.07145374e+04 6.97191018e+05 -5.82076609e-11\n 6.59475187e+02 1.59279582e+05 2.89371833e+03 1.07751957e+04]\n"
],
[
"from sklearn.metrics import r2_score\nr2_score (y_test,y_pred) #High Accurcay Rate can Be Gain From the Model",
"_____no_output_____"
]
],
[
[
"## Plotting Predicting Model with Actual Model",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nplt.figure(figsize=(8,5))\nplt.scatter(y_test,y_pred)\nmyline = np.linspace(0, 10, 100)\n#plt.plot(myline, my_model(myline), color =\"r\")\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('Actual Vs. Predicted') #With The Accuracy is Very High Prediction and Actual Values are Almost Similar",
"_____no_output_____"
]
],
[
[
"## Model Value Differances",
"_____no_output_____"
]
],
[
[
"pred_y_df=pd.DataFrame({'Actual Value':y_test,'Predicted value':y_pred,'Differance':y_test-y_pred})\npred_y_df[0:20]",
"_____no_output_____"
]
],
[
[
"## Data Normalization in Plant_Sum",
"_____no_output_____"
]
],
[
[
"Plant_Sum = normalize(Plant_Sum.iloc[:,1:49] , axis=0) #Without using normalization first dataset importing mechanism can be used",
"_____no_output_____"
],
[
"Plant_Sum",
"_____no_output_____"
],
[
"from numpy import genfromtxt\nx1=Plant_Sum[1:,0:47]\ny1=Plant_Sum[1:,47]\nprint(x1[0:10])\nprint(y1[0:10])",
"[[3.54599849e-03 1.77120693e-03 1.18516873e-03 1.78255753e-03\n 1.19111467e-03 1.19434565e-03 1.19293263e-03 1.19163354e-03\n 1.18041800e-03 3.57079806e-03 5.68084424e-03 2.34971631e-02\n 2.76950084e-02 2.94150238e-02 2.73161014e-02 2.98801636e-02\n 3.03769989e-02 3.04999714e-02 3.05094914e-02 3.04325658e-02\n 3.04203222e-02 3.02994814e-02 3.01441029e-02 3.00852042e-02\n 3.01618750e-02 2.97668789e-02 3.04800946e-02 3.05012717e-02\n 3.04414533e-02 3.03297885e-02 3.03414581e-02 3.02666662e-02\n 3.02185715e-02 3.02356261e-02 3.02827483e-02 3.02403799e-02\n 2.97521541e-02 2.98137412e-02 3.11649169e-02 3.10227730e-02\n 3.04612171e-02 3.09093448e-02 2.97032803e-02 2.95786224e-02\n 2.72147585e-02 2.18781789e-02 1.59059563e-02]\n [9.92657952e-06 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 6.99320331e-04\n 6.93404916e-04 6.94007540e-04 6.94336521e-04 6.90055533e-04\n 6.88558530e-04 6.83200234e-04 6.81015432e-04 6.79298344e-04\n 6.79025049e-04 6.76327710e-04 6.72859440e-04 6.71544737e-04\n 6.73256138e-04 6.76519976e-04 6.80359254e-04 6.80831958e-04\n 6.79496725e-04 6.77004207e-04 6.77264689e-04 6.75595227e-04\n 6.74521685e-04 6.74902367e-04 6.75954203e-04 6.75008480e-04\n 6.70493742e-04 6.48498401e-04 6.33774633e-04 6.41317519e-04\n 6.51249471e-04 6.60830275e-04 6.69392324e-04 6.78821357e-04\n 0.00000000e+00 0.00000000e+00 0.00000000e+00]\n [5.90999748e-04 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 5.59456265e-04\n 5.54723933e-04 5.55206032e-04 5.55469217e-04 5.52044426e-04\n 5.50846824e-04 5.46560187e-04 5.44812346e-04 5.43438675e-04\n 5.43220040e-04 5.41062168e-04 5.38287552e-04 5.37235790e-04\n 5.38604910e-04 5.41215981e-04 5.44287403e-04 5.44665566e-04\n 5.43597380e-04 5.41603365e-04 5.41811752e-04 5.40476182e-04\n 5.39617348e-04 5.39921894e-04 5.40763363e-04 5.40006784e-04\n 5.36394994e-04 5.18798721e-04 5.07019706e-04 5.13054015e-04\n 5.20999577e-04 5.28664220e-04 5.35513859e-04 5.43057085e-04\n 0.00000000e+00 7.70331993e-06 0.00000000e+00]\n [3.10274868e-03 3.09961212e-03 2.96751435e-03 3.11947569e-03\n 2.44922953e-03 2.45587324e-03 2.45296773e-03 2.45029646e-03\n 4.60363022e-03 8.01752317e-03 1.22848257e-02 4.37354935e-02\n 4.81777736e-02 5.07591840e-02 4.93358802e-02 5.09516511e-02\n 5.07674204e-02 5.08164334e-02 5.06539279e-02 5.05262108e-02\n 5.00175216e-02 5.02584464e-02 5.00881883e-02 4.99903208e-02\n 4.97676256e-02 5.00088911e-02 5.02042468e-02 5.01506199e-02\n 5.05822730e-02 5.04435834e-02 5.00769850e-02 4.97238087e-02\n 5.00832351e-02 4.98482888e-02 5.01895996e-02 4.97683752e-02\n 4.94355036e-02 4.80667015e-02 4.76345014e-02 4.76178258e-02\n 4.83552732e-02 4.84454674e-02 4.87585369e-02 4.91806073e-02\n 4.85545045e-02 4.36507568e-02 2.86477579e-02]\n [1.09775160e-01 9.93726568e-02 9.34481841e-02 8.82173612e-02\n 8.87595571e-02 8.53806353e-02 7.95769944e-02 7.88907204e-02\n 8.24251218e-02 9.87640436e-02 1.10676075e-01 1.10422680e-01\n 1.09488636e-01 1.09583791e-01 1.09635737e-01 1.08959769e-01\n 1.07690554e-01 1.06852517e-01 1.06510814e-01 1.06242261e-01\n 1.06199518e-01 1.05777654e-01 1.05235216e-01 1.05029597e-01\n 1.05297260e-01 1.05807724e-01 1.06408187e-01 1.06482118e-01\n 1.06273288e-01 1.05883458e-01 1.05924197e-01 1.05663093e-01\n 1.05495192e-01 1.05554730e-01 1.05719237e-01 1.05571326e-01\n 1.05870962e-01 1.04083993e-01 1.01720829e-01 1.02931462e-01\n 
1.04525540e-01 1.06063259e-01 1.07437468e-01 1.07917526e-01\n 1.07523931e-01 1.07685924e-01 1.09832492e-01]\n [5.32431673e-03 4.28226174e-03 3.25839920e-03 2.67301930e-03\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 2.65859645e-03 2.60522511e-03 2.55893629e-03 4.03256076e-03\n 1.49955747e-02 1.23877570e-02 9.77209220e-03 6.46609637e-03\n 5.06028549e-03 1.01546100e-02 1.37045503e-02 1.52079955e-02\n 1.48671856e-02 1.77316217e-02 2.33776265e-02 2.75147996e-02\n 2.77142523e-02 1.15954847e-02 7.87597479e-03 6.68393158e-03\n 1.57051399e-02 2.78738203e-02 2.49183965e-02 3.55746152e-02\n 3.63269724e-02 3.42059417e-02 1.94499062e-02 1.13402100e-02\n 1.09456092e-02 4.61808681e-02 5.62812789e-02 5.55429070e-02\n 4.77522813e-02 3.45774815e-02 2.05707608e-02 1.44747114e-02\n 1.37460331e-02 8.72862292e-03 4.39935649e-03]\n [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.61580770e-03\n 2.59368109e-03 2.59593520e-03 9.04026151e-04 2.76022213e-03\n 2.75423412e-03 2.73280094e-03 2.72406173e-03 2.71719338e-03\n 2.71610020e-03 2.70531084e-03 2.69143776e-03 2.68617895e-03\n 2.69302455e-03 2.70607990e-03 2.72143701e-03 2.72332783e-03\n 2.71798690e-03 2.70801683e-03 2.70905876e-03 2.70238091e-03\n 2.69808674e-03 2.69960947e-03 2.70381681e-03 2.70003392e-03\n 2.46949550e-03 2.38848446e-03 2.33425535e-03 2.36203655e-03\n 2.39861693e-03 2.43390398e-03 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00]\n [1.41447663e-02 9.48584629e-03 8.05944366e-03 3.54067918e-03\n 3.18243505e-03 3.19106763e-03 3.05987220e-03 3.05654002e-03\n 4.06949107e-03 6.15664809e-03 1.00884693e-02 1.89887848e-02\n 2.82071573e-02 2.74872096e-02 2.15966432e-02 2.86483455e-02\n 3.27599623e-02 3.44612347e-02 3.59800202e-02 3.79055269e-02\n 3.42863513e-02 3.79310280e-02 4.21825003e-02 4.26692139e-02\n 4.23700959e-02 4.17583985e-02 3.97059021e-02 3.70696661e-02\n 4.18960014e-02 4.13185145e-02 4.21824153e-02 4.65529025e-02\n 4.53698800e-02 4.39326346e-02 4.19739170e-02 3.87589194e-02\n 3.03356177e-02 4.15592794e-02 4.63308270e-02 4.22196638e-02\n 3.89660142e-02 3.66912133e-02 3.26167434e-02 2.87362730e-02\n 2.31489129e-02 1.72685534e-02 1.69883645e-02]\n [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00]\n [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 
0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00]]\n[0.01310602 0. 0. 0.02508662 0.104485 0.\n 0. 0.01817078 0. 0. ]\n"
]
],
[
[
"### Analyzing Training Curve in Plant_Sum",
"_____no_output_____"
]
],
[
[
"def gradient(x1,y1,alpha,epoch):\n m=x1.shape[0] #number of samples\n ones = np.ones((m,1))\n x1=np.concatenate((ones,x1),axis=1)\n n=x1.shape[1]\n Theta = np.ones(n) #n = 5th parameter\n h = np.dot(x1, Theta) #Compute Hypothesis\n\n #Gradient descent Algorithm\n cost = np.ones(epoch)\n for i in range (0,epoch):\n Theta[0] = Theta[0] - (alpha/ x1.shape[0]) * sum(h-y1)\n for j in range(1,n):\n Theta[j] = Theta[j] - (alpha/ x1.shape[0]) * sum((h-y1) * x1[:, j])\n h = np.dot(x1,Theta)\n cost[i] = 1/(2*m) * sum(np.square(h-y1)) #compute Cost\n return cost, Theta ",
"_____no_output_____"
],
[
"#Calcualting Theta & Cost\ncost, Theta = gradient(x1, y1, 0.005, 2000)\nprint(Theta)",
"[-4.27037795e-02 -2.78030322e-03 -7.01235062e-03 -9.14424421e-03\n -1.07766942e-02 -1.07528665e-02 -1.10711805e-02 -1.24159744e-02\n -1.28940119e-02 -1.23870492e-02 -1.00236121e-02 -4.88920098e-06\n 1.06622033e-02 1.83829834e-02 1.66703424e-02 1.19365529e-02\n 1.46054517e-02 2.09330776e-02 2.67902574e-02 2.96023916e-02\n 3.11021513e-02 3.10149870e-02 3.56007260e-02 4.07416058e-02\n 4.26516840e-02 4.04776422e-02 3.45633220e-02 2.86759426e-02\n 2.75963285e-02 3.02790451e-02 3.41207374e-02 3.30892621e-02\n 3.60989580e-02 3.78669990e-02 3.68915210e-02 3.25895204e-02\n 3.03460690e-02 4.14642363e-02 7.86035263e-02 1.02159757e-01\n 9.08501502e-02 7.07413664e-02 5.58070163e-02 4.01070879e-02\n 2.71471409e-02 1.51860597e-02 8.29495855e-03 1.49291187e-03]\n"
]
],
[
[
"## Training Data Cost in Each Plants in the Forcast Table",
"_____no_output_____"
]
],
[
[
"plt.plot(cost)\nplt.xlabel('Number of Iterations (epoch)')\nplt.ylabel(\"Cost or Lost\")\nplt.show\nprint(\"Lowest Cost =\"+ str(np.min(cost)))\n#Computation Cost Of Data Training\n#print(\"Cost after 10 iterations =\" + str(cost[-1]))",
"Lowest Cost =0.0005811410646404037\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc587bac74e588b692743fc0aafe96795b691ae | 56,922 | ipynb | Jupyter Notebook | week01_intro/crossentropy_method.ipynb | harshraj22/Practical_RL | 28acee010f7fab066f782d2e5cbc128f6d0c5ba7 | [
"Unlicense"
] | null | null | null | week01_intro/crossentropy_method.ipynb | harshraj22/Practical_RL | 28acee010f7fab066f782d2e5cbc128f6d0c5ba7 | [
"Unlicense"
] | null | null | null | week01_intro/crossentropy_method.ipynb | harshraj22/Practical_RL | 28acee010f7fab066f782d2e5cbc128f6d0c5ba7 | [
"Unlicense"
] | null | null | null | 87.303681 | 27,014 | 0.779189 | [
[
[
"# Crossentropy method\n\nThis notebook will teach you to solve reinforcement learning problems with crossentropy method. We'll follow-up by scaling everything up and using neural network policy.",
"_____no_output_____"
]
],
[
[
"import sys, os\nif 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):\n !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash\n !touch .setup_complete\n\n# This code creates a virtual display to draw game images on.\n# It will have no effect if your machine has a monitor.\nif type(os.environ.get(\"DISPLAY\")) is not str or len(os.environ.get(\"DISPLAY\")) == 0:\n !bash ../xvfb start\n os.environ['DISPLAY'] = ':1'",
"Selecting previously unselected package xvfb.\n(Reading database ... 155013 files and directories currently installed.)\nPreparing to unpack .../xvfb_2%3a1.19.6-1ubuntu4.9_amd64.deb ...\nUnpacking xvfb (2:1.19.6-1ubuntu4.9) ...\nSetting up xvfb (2:1.19.6-1ubuntu4.9) ...\nProcessing triggers for man-db (2.8.3-2ubuntu0.1) ...\nStarting virtual X frame buffer: Xvfb.\n"
],
[
"import gym\nimport numpy as np\n\nenv = gym.make(\"Taxi-v3\")\nenv.reset()\nenv.render()",
"+---------+\n|\u001b[34;1mR\u001b[0m: | : :G|\n| : | : : |\n| : : : : |\n| | : |\u001b[43m \u001b[0m: |\n|Y| : |\u001b[35mB\u001b[0m: |\n+---------+\n\n"
],
[
"n_states = env.observation_space.n\nn_actions = env.action_space.n\n\nprint(\"n_states=%i, n_actions=%i\" % (n_states, n_actions))",
"n_states=500, n_actions=6\n"
]
],
[
[
"# Create stochastic policy\n\nThis time our policy should be a probability distribution.\n\n```policy[s,a] = P(take action a | in state s)```\n\nSince we still use integer state and action representations, you can use a 2-dimensional array to represent the policy.\n\nPlease initialize the policy __uniformly__, that is, probabililities of all actions should be equal.",
"_____no_output_____"
]
],
[
[
"def initialize_policy(n_states, n_actions):\n # <YOUR CODE: create an array to store action probabilities>\n # policy.shape: (n_states, n_actions)\n policy = [[1./n_actions for _ in range(n_actions)] for _ in range(n_states)]\n\n policy = np.array(policy)\n return policy\n\npolicy = initialize_policy(n_states, n_actions)",
"_____no_output_____"
],
[
"policy.shape",
"_____no_output_____"
],
[
"assert type(policy) in (np.ndarray, np.matrix)\nassert np.allclose(policy, 1./n_actions)\nassert np.allclose(np.sum(policy, axis=1), 1)",
"_____no_output_____"
]
],
[
[
"# Play the game\n\nJust like before, but we also record all states and actions we took.",
"_____no_output_____"
]
],
[
[
"def generate_session(env, policy, t_max=10**4):\n \"\"\"\n Play game until end or for t_max ticks.\n :param policy: an array of shape [n_states,n_actions] with action probabilities\n :returns: list of states, list of actions and sum of rewards\n \"\"\"\n states, actions = [], []\n total_reward = 0.\n\n s = env.reset()\n\n for t in range(t_max):\n # Hint: you can use np.random.choice for sampling action\n # https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html\n a = np.random.choice(n_actions, p=policy[s]) # <YOUR CODE: sample action from policy>\n\n new_s, r, done, info = env.step(a)\n\n # Record information we just got from the environment.\n states.append(s)\n actions.append(a)\n total_reward += r\n\n s = new_s\n if done:\n break\n\n return states, actions, total_reward",
"_____no_output_____"
],
[
"s, a, r = generate_session(env, policy)\nassert type(s) == type(a) == list\nassert len(s) == len(a)\nassert type(r) in [float, np.float]",
"_____no_output_____"
],
[
"# let's see the initial reward distribution\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nsample_rewards = [generate_session(env, policy, t_max=1000)[-1] for _ in range(200)]\n\nplt.hist(sample_rewards, bins=20)\nplt.vlines([np.percentile(sample_rewards, 50)], [0], [100], label=\"50'th percentile\", color='green')\nplt.vlines([np.percentile(sample_rewards, 90)], [0], [100], label=\"90'th percentile\", color='red')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Crossentropy method steps",
"_____no_output_____"
]
],
[
[
"def select_elites(states_batch, actions_batch, rewards_batch, percentile):\n \"\"\"\n Select states and actions from games that have rewards >= percentile\n :param states_batch: list of lists of states, states_batch[session_i][t]\n :param actions_batch: list of lists of actions, actions_batch[session_i][t]\n :param rewards_batch: list of rewards, rewards_batch[session_i]\n\n :returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions\n\n Please return elite states and actions in their original order \n [i.e. sorted by session number and timestep within session]\n\n If you are confused, see examples below. Please don't assume that states are integers\n (they will become different later).\n \"\"\"\n\n reward_threshold = np.percentile(rewards_batch, percentile) # <YOUR CODE: compute minimum reward for elite sessions. Hint: use np.percentile()>\n\n elite_states = [] #<YOUR CODE>\n elite_actions = [] #<YOUR CODE>\n for index, reward in enumerate(rewards_batch):\n if reward < reward_threshold:\n continue\n elite_states.extend(states_batch[index])\n elite_actions.extend(actions_batch[index])\n\n return elite_states, elite_actions",
"_____no_output_____"
],
[
"states_batch = [\n [1, 2, 3], # game1\n [4, 2, 0, 2], # game2\n [3, 1], # game3\n]\n\nactions_batch = [\n [0, 2, 4], # game1\n [3, 2, 0, 1], # game2\n [3, 3], # game3\n]\nrewards_batch = [\n 3, # game1\n 4, # game2\n 5, # game3\n]\n\ntest_result_0 = select_elites(states_batch, actions_batch, rewards_batch, percentile=0)\ntest_result_30 = select_elites(states_batch, actions_batch, rewards_batch, percentile=30)\ntest_result_90 = select_elites(states_batch, actions_batch, rewards_batch, percentile=90)\ntest_result_100 = select_elites(states_batch, actions_batch, rewards_batch, percentile=100)\n\n# print(test_result_0)\n\nassert np.all(test_result_0[0] == [1, 2, 3, 4, 2, 0, 2, 3, 1]) \\\n and np.all(test_result_0[1] == [0, 2, 4, 3, 2, 0, 1, 3, 3]), \\\n \"For percentile 0 you should return all states and actions in chronological order\"\nassert np.all(test_result_30[0] == [4, 2, 0, 2, 3, 1]) and \\\n np.all(test_result_30[1] == [3, 2, 0, 1, 3, 3]), \\\n \"For percentile 30 you should only select states/actions from two first\"\nassert np.all(test_result_90[0] == [3, 1]) and \\\n np.all(test_result_90[1] == [3, 3]), \\\n \"For percentile 90 you should only select states/actions from one game\"\nassert np.all(test_result_100[0] == [3, 1]) and\\\n np.all(test_result_100[1] == [3, 3]), \\\n \"Please make sure you use >=, not >. Also double-check how you compute percentile.\"\n\nprint(\"Ok!\")",
"Ok!\n"
],
[
"def get_new_policy(elite_states, elite_actions):\n \"\"\"\n Given a list of elite states/actions from select_elites,\n return a new policy where each action probability is proportional to\n\n policy[s_i,a_i] ~ #[occurrences of s_i and a_i in elite states/actions]\n\n Don't forget to normalize the policy to get valid probabilities and handle the 0/0 case.\n For states that you never visited, use a uniform distribution (1/n_actions for all states).\n\n :param elite_states: 1D list of states from elite sessions\n :param elite_actions: 1D list of actions from elite sessions\n\n \"\"\"\n\n from collections import defaultdict\n # counter = defaultdict(lambda: 0)\n\n # for state in chain(elite_states, elite_actions):\n # counter[state] += 1\n\n new_policy = np.zeros([n_states, n_actions])\n for state, action in zip(elite_states, elite_actions):\n new_policy[state][action] += 1\n\n for state, policy in enumerate(new_policy):\n if policy.max() > 0.0:\n new_policy[state] = policy / policy.sum()\n else:\n new_policy[state] = np.full(n_actions, 1./n_actions)\n # <YOUR CODE: set probabilities for actions given elite states & actions>\n # Don't forget to set 1/n_actions for all actions in unvisited states.\n\n return new_policy",
"_____no_output_____"
],
[
"elite_states = [1, 2, 3, 4, 2, 0, 2, 3, 1]\nelite_actions = [0, 2, 4, 3, 2, 0, 1, 3, 3]\n\nnew_policy = get_new_policy(elite_states, elite_actions)\n\nassert np.isfinite(new_policy).all(), \\\n \"Your new policy contains NaNs or +-inf. Make sure you don't divide by zero.\"\nassert np.all(new_policy >= 0), \\\n \"Your new policy can't have negative action probabilities\"\nassert np.allclose(new_policy.sum(axis=-1), 1), \\\n \"Your new policy should be a valid probability distribution over actions\"\n\nreference_answer = np.array([\n [1., 0., 0., 0., 0.],\n [0.5, 0., 0., 0.5, 0.],\n [0., 0.33333333, 0.66666667, 0., 0.],\n [0., 0., 0., 0.5, 0.5]])\nassert np.allclose(new_policy[:4, :5], reference_answer)\n\nprint(\"Ok!\")",
"Ok!\n"
]
],
[
[
"# Training loop\nGenerate sessions, select N best and fit to those.",
"_____no_output_____"
]
],
[
[
"from IPython.display import clear_output\n\ndef show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):\n \"\"\"\n A convenience function that displays training progress. \n No cool math here, just charts.\n \"\"\"\n\n mean_reward = np.mean(rewards_batch)\n threshold = np.percentile(rewards_batch, percentile)\n log.append([mean_reward, threshold])\n \n plt.figure(figsize=[8, 4])\n plt.subplot(1, 2, 1)\n plt.plot(list(zip(*log))[0], label='Mean rewards')\n plt.plot(list(zip(*log))[1], label='Reward thresholds')\n plt.legend()\n plt.grid()\n\n plt.subplot(1, 2, 2)\n plt.hist(rewards_batch, range=reward_range)\n plt.vlines([np.percentile(rewards_batch, percentile)],\n [0], [100], label=\"percentile\", color='red')\n plt.legend()\n plt.grid()\n clear_output(True)\n print(\"mean reward = %.3f, threshold=%.3f\" % (mean_reward, threshold))\n plt.show()",
"_____no_output_____"
],
[
"# reset policy just in case\npolicy = initialize_policy(n_states, n_actions)",
"_____no_output_____"
],
[
"n_sessions = 250 # sample this many sessions\npercentile = 50 # take this percent of session with highest rewards\nlearning_rate = 0.5 # how quickly the policy is updated, on a scale from 0 to 1\n\nlog = []\n\nfor i in range(100):\n %time sessions = [ generate_session(env, policy, t_max=10**4) for _ in range(n_sessions) ]\n # [ <YOUR CODE: generate a list of n_sessions new sessions> ]\n\n states_batch, actions_batch, rewards_batch = zip(*sessions)\n\n elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile=50)\n # <YOUR CODE: select elite states & actions>\n\n new_policy = get_new_policy(elite_states, elite_actions)\n # <YOUR CODE: compute new policy>\n\n policy = learning_rate * new_policy + (1 - learning_rate) * policy\n\n # display results on chart\n show_progress(rewards_batch, log, percentile)",
"mean reward = -97.784, threshold=5.000\n"
]
],
[
[
"### Reflecting on results\n\nYou may have noticed that the taxi problem quickly converges from less than -1000 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.\n\nIn case CEM failed to learn how to win from one distinct starting point, it will simply discard it because no sessions from that starting point will make it into the \"elites\".\n\nTo mitigate that problem, you can either reduce the threshold for elite sessions (duct tape way) or change the way you evaluate strategy (theoretically correct way). For each starting state, you can sample an action randomly, and then evaluate this action by running _several_ games starting from it and averaging the total reward. Choosing elite sessions with this kind of sampling (where each session's reward is counted as the average of the rewards of all sessions with the same starting state and action) should improve the performance of your policy.",
"_____no_output_____"
],
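[
"Below is a minimal sketch (not part of the original assignment) of the \"theoretically correct\" evaluation described above: each session's reward is replaced by the average reward of all sessions sharing the same starting state and first action before elites are selected. The helper name `smooth_rewards` is made up for illustration.\n\n```python\nimport numpy as np\nfrom collections import defaultdict\n\ndef smooth_rewards(states_batch, actions_batch, rewards_batch):\n    # Group sessions by (initial state, first action) and average their rewards,\n    # so that one lucky or unlucky rollout does not decide elite membership.\n    groups = defaultdict(list)\n    for s, a, r in zip(states_batch, actions_batch, rewards_batch):\n        groups[(s[0], a[0])].append(r)\n    return [float(np.mean(groups[(s[0], a[0])]))\n            for s, a in zip(states_batch, actions_batch)]\n```\n\nThese smoothed rewards can then be passed to `select_elites` in place of `rewards_batch`.",
"_____no_output_____"
],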
[
"\n### You're not done yet!\n\nGo to [`./deep_crossentropy_method.ipynb`](./deep_crossentropy_method.ipynb) for a more serious task",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ecc5898a97086c54e4778589fec16f411163d88f | 562,071 | ipynb | Jupyter Notebook | SNN/conductance_synapse_model.ipynb | bamboo-nova/Julia_practice | 52124bbb6e6ff9fefafb0f2fb002389ad270bfd0 | [
"MIT"
] | null | null | null | SNN/conductance_synapse_model.ipynb | bamboo-nova/Julia_practice | 52124bbb6e6ff9fefafb0f2fb002389ad270bfd0 | [
"MIT"
] | null | null | null | SNN/conductance_synapse_model.ipynb | bamboo-nova/Julia_practice | 52124bbb6e6ff9fefafb0f2fb002389ad270bfd0 | [
"MIT"
] | null | null | null | 172.361546 | 12,484 | 0.686746 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecc59ef427c72c9813f6cd6c11dd809af225a817 | 24,301 | ipynb | Jupyter Notebook | notebook/procs-dgraph-fulltext.ipynb | samlet/stack | 47db17fd4fdab264032f224dca31a4bb1d19b754 | [
"Apache-2.0"
] | 3 | 2020-01-11T13:55:38.000Z | 2020-08-25T22:34:15.000Z | notebook/procs-dgraph-fulltext.ipynb | samlet/stack | 47db17fd4fdab264032f224dca31a4bb1d19b754 | [
"Apache-2.0"
] | null | null | null | notebook/procs-dgraph-fulltext.ipynb | samlet/stack | 47db17fd4fdab264032f224dca31a4bb1d19b754 | [
"Apache-2.0"
] | 1 | 2021-01-01T05:21:44.000Z | 2021-01-01T05:21:44.000Z | 31.437257 | 135 | 0.381836 | [
[
[
"def lines(filename):\n with open(filename) as f:\n lines = f.readlines() \n return [line.split('\\t') for line in lines]\n \ndataf=\"/pi/ai/seq2seq/jpn-eng/jpn.txt\"\npairs=lines(dataf)",
"_____no_output_____"
],
[
"import sagas.graph.dgraph_helper as helper\nimport pydgraph\nclient=helper.reset('sents: string @index(term) .')",
"_____no_output_____"
],
[
"def filter_term(s):\n return s.replace('\"',\"'\").strip()\n\ndef add_term(index, pair, rs, lang='ja'):\n rs.append('<_:s%d> <sents> \"%s\" .'%(index, filter_term(pair[0])))\n rs.append('<_:s%d> <ja> \"%s\"@%s .'%(index, filter_term(pair[1]), lang))\n\n# client=helper.reset('sents: string @index(fulltext) .')\nclient=helper.reset('''\nsents: string @index(fulltext) .\nja: string @index(fulltext) @lang .\n''')\nrs=[]\nfor i in range(2000,2100):\n add_term(i, pairs[i], rs)\ncnt='\\n'.join(rs)\n# print(cnt)\n_=helper.set_nquads(client, cnt)",
"_____no_output_____"
],
[
"helper.run_q(client, '''{\n data(func: alloftext(sents, \"diet\")) {\n sents\n ja\n }\n}''')",
"{\n \"data\": [\n {\n \"sents\": \"I'm on a diet.\"\n },\n {\n \"sents\": \"I'm on a diet.\"\n }\n ]\n}\n"
],
[
"helper.run_q(client, '''{\n data(func: alloftext(sents, \"on a diet\")) {\n sents\n ja\n }\n}''')",
"{\n \"data\": [\n {\n \"sents\": \"I'm on a diet.\"\n },\n {\n \"sents\": \"I'm on a diet.\"\n }\n ]\n}\n"
],
[
"def query_with_vars(client, query, vars):\n response = client.txn().query(query, variables=vars)\n helper.print_rs(response)\n\nvars = {'$a': 'on a diet'}\nquery_with_vars(client, '''query data($a: string){\n data(func: alloftext(sents, $a)) {\n sents\n ja\n }\n}''', vars)",
"{\n \"data\": [\n {\n \"sents\": \"I'm on a diet.\"\n },\n {\n \"sents\": \"I'm on a diet.\"\n }\n ]\n}\n"
],
[
"print([i for i in range(0,5)])",
"[0, 1, 2, 3, 4]\n"
],
[
"client=helper.reset('''\nsents: string @index(fulltext) .\nja: string @index(fulltext) @lang .\n''')\n\nrs=[]\ncount=len(pairs)\nfor i in range(0,count):\n add_term(i, pairs[i], rs)\ncnt='\\n'.join(rs)\n# print(cnt)\n_=helper.set_nquads(client, cnt)",
"_____no_output_____"
],
[
"helper.run_q(client, '''{\n data(func: alloftext(ja@ja, \"風邪\")) {\n sents\n ja@ja\n }\n}''')",
"{\n \"data\": [\n {\n \"sents\": \"Don't catch a cold.\",\n \"ja@ja\": \"風邪引かないようにね。\"\n },\n {\n \"sents\": \"I'm afraid she may have the mumps.\",\n \"ja@ja\": \"おたふく風邪ではないでしょうか。\"\n },\n {\n \"sents\": \"I caught a cold and was in bed yesterday.\",\n \"ja@ja\": \"私は昨日は風邪をひいて寝ていた。\"\n },\n {\n \"sents\": \"Be careful not to catch a cold.\",\n \"ja@ja\": \"風邪をひかないように気をつけてください。\"\n },\n {\n \"sents\": \"He catches colds very easily.\",\n \"ja@ja\": \"彼は非常に風邪を引きやすい。\"\n },\n {\n \"sents\": \"A lot of colds are going around.\",\n \"ja@ja\": \"風邪がはやっている。\"\n },\n {\n \"sents\": \"Since I had a cold, I didn't go to school.\",\n \"ja@ja\": \"風邪を引いていたので、私は学校を休んだ。\"\n },\n {\n \"sents\": \"Tom was in bed with a cold.\",\n \"ja@ja\": \"トムは風邪で寝込んでいた。\"\n },\n {\n \"sents\": \"I have a cold now.\",\n \"ja@ja\": \"私は今風邪をひいている。\"\n },\n {\n \"sents\": \"It seems I have a slight cold.\",\n \"ja@ja\": \"私は風邪気味のようです。\"\n },\n {\n \"sents\": \"I was absent from school because I had a cold.\",\n \"ja@ja\": \"風邪をひいていたので、学校を休んだ。\"\n },\n {\n \"sents\": \"'I caught a bad cold.' 'That's too bad.'\",\n \"ja@ja\": \"「悪い風邪を引きました」「それはいけませんね」\"\n },\n {\n \"sents\": \"Don't catch a cold.\",\n \"ja@ja\": \"風邪引かないでね。\"\n },\n {\n \"sents\": \"I caught a cold last month.\",\n \"ja@ja\": \"私は先月風邪をひいた。\"\n },\n {\n \"sents\": \"It'll take me a long time to get over my cold.\",\n \"ja@ja\": \"風邪を治すまで長くかかりそうだ。\"\n },\n {\n \"sents\": \"I caught a bad cold last week.\",\n \"ja@ja\": \"私は先週ひどい風邪を引いた。\"\n },\n {\n \"sents\": \"Colds are contagious.\",\n \"ja@ja\": \"風邪は伝染する。\"\n },\n {\n \"sents\": \"He came down with a cold.\",\n \"ja@ja\": \"彼は風邪でダウンした。\"\n },\n {\n \"sents\": \"Tom doesn't like being around children because he's always afraid of catching a cold from one of them.\",\n \"ja@ja\": \"トムは子どもの近くにいるのを好まない。というのは、そのうちの一人から風邪をうつされることをいつも恐れているからだ。\"\n },\n {\n \"sents\": \"She has a cold and is absent from school.\",\n \"ja@ja\": \"彼女は風邪をひいて、学校を休んでいる。\"\n },\n {\n \"sents\": \"I caught a cold and was in bed yesterday.\",\n \"ja@ja\": \"昨日は風邪を引いて寝ていた。\"\n },\n {\n \"sents\": \"He catches colds easily.\",\n \"ja@ja\": \"彼は風邪をひきやすい。\"\n },\n {\n \"sents\": \"I seem to have caught cold. I'm a little feverish.\",\n \"ja@ja\": \"風邪を引いたらしい。少し熱がある。\"\n },\n {\n \"sents\": \"Because I had a bad cold, I went to bed earlier than usual.\",\n \"ja@ja\": \"私はひどい風邪をひいたので、いつもより早く床についた。\"\n },\n {\n \"sents\": \"When you have a cold, you should drink plenty of liquids.\",\n \"ja@ja\": \"風邪を引いたときはたくさん水分を摂らないといけない。\"\n },\n {\n \"sents\": \"I have a bad cold.\",\n \"ja@ja\": \"私はひどい風邪を引いている。\"\n },\n {\n \"sents\": \"He has a cold.\",\n \"ja@ja\": \"彼は風邪をひいている。\"\n },\n {\n \"sents\": \"Children catch colds easily.\",\n \"ja@ja\": \"子どもは風邪を引きやすい。\"\n },\n {\n \"sents\": \"I have caught a cold.\",\n \"ja@ja\": \"風邪をひきました。\"\n },\n {\n \"sents\": \"You'd better be careful not to catch cold.\",\n \"ja@ja\": \"風邪をひかないようにしなさいよ。\"\n },\n {\n \"sents\": \"I caught a cold. 
That is why I could not attend the meeting yesterday.\",\n \"ja@ja\": \"私は風邪をひいた。そういうわけで昨日会合に出席できなかった。\"\n },\n {\n \"sents\": \"I must have caught a cold.\",\n \"ja@ja\": \"風邪を引いたに違いない。\"\n },\n {\n \"sents\": \"I seem to have caught a bad cold.\",\n \"ja@ja\": \"どうやらひどい風邪にかかったようだ。\"\n },\n {\n \"sents\": \"I can't get rid of this cold.\",\n \"ja@ja\": \"この風邪がなかなか治らない。\"\n },\n {\n \"sents\": \"My daughter caught a cold.\",\n \"ja@ja\": \"うちの娘が風邪をひいた。\"\n },\n {\n \"sents\": \"I seem to have caught a cold.\",\n \"ja@ja\": \"どうやら風邪を引いたらしい。\"\n },\n {\n \"sents\": \"If you take a nap here, you'll catch a cold.\",\n \"ja@ja\": \"こんなところでうたた寝してると、風邪ひくよ。\"\n },\n {\n \"sents\": \"Colds are prevalent this winter.\",\n \"ja@ja\": \"今年の冬は風邪が大流行している。\"\n },\n {\n \"sents\": \"She is down with a cold.\",\n \"ja@ja\": \"彼女は風邪で寝ている。\"\n },\n {\n \"sents\": \"My throat hurts, and I have a fever. Can I have some cold medicine?\",\n \"ja@ja\": \"喉が痛くて、熱があります。風邪薬はありますか。\"\n },\n {\n \"sents\": \"He caught a cold.\",\n \"ja@ja\": \"彼は風邪にかかった。\"\n },\n {\n \"sents\": \"I've had a scratchy throat since this morning. I wonder if I've caught a cold.\",\n \"ja@ja\": \"朝から喉がいがらっぽいんだ。風邪でも引いたかな。\"\n },\n {\n \"sents\": \"I caught a cold.\",\n \"ja@ja\": \"私は風邪をひいた。\"\n },\n {\n \"sents\": \"He caught a chill because he went out in the rain.\",\n \"ja@ja\": \"雨の中を出かけたので、彼は風邪をひいた。\"\n },\n {\n \"sents\": \"He is suffering from a cold.\",\n \"ja@ja\": \"彼は風邪で参っている。\"\n },\n {\n \"sents\": \"I can't shake off my cold.\",\n \"ja@ja\": \"風邪が直らない。\"\n },\n {\n \"sents\": \"She came down with a cold.\",\n \"ja@ja\": \"彼女は風邪をひいた。\"\n },\n {\n \"sents\": \"If you go out so lightly dressed, you'll catch a cold.\",\n \"ja@ja\": \"そんな薄着で外出たら風邪引くよ。\"\n },\n {\n \"sents\": \"She came down with a cold.\",\n \"ja@ja\": \"彼女は風邪でダウンした。\"\n },\n {\n \"sents\": \"I can't get rid of my cold.\",\n \"ja@ja\": \"風邪がなかなか治らない。\"\n },\n {\n \"sents\": \"I'm glad you've gotten over your cold.\",\n \"ja@ja\": \"風邪治ってよかったね。\"\n },\n {\n \"sents\": \"This medicine is good for a cold.\",\n \"ja@ja\": \"この薬は風邪に効きます。\"\n },\n {\n \"sents\": \"It's just a cold.\",\n \"ja@ja\": \"風邪ですね。\"\n },\n {\n \"sents\": \"Tom is suffering from a cold.\",\n \"ja@ja\": \"トムは風邪をひいている。\"\n },\n {\n \"sents\": \"I caught a cold two days ago.\",\n \"ja@ja\": \"二日前に風邪をひきました。\"\n },\n {\n \"sents\": \"Several students were absent from school because of colds.\",\n \"ja@ja\": \"数人の生徒が風邪で学校を休んだ。\"\n },\n {\n \"sents\": \"I have a bad cold.\",\n \"ja@ja\": \"ひどい風邪を引いています。\"\n },\n {\n \"sents\": \"I'm glad you're over your cold.\",\n \"ja@ja\": \"風邪治ってよかったね。\"\n },\n {\n \"sents\": \"I can't get rid of my cold.\",\n \"ja@ja\": \"風邪が直らない。\"\n },\n {\n \"sents\": \"Take care not to catch a cold.\",\n \"ja@ja\": \"風邪をひかないようにしなさいよ。\"\n },\n {\n \"sents\": \"I've had a slight sore throat since this morning. 
I wonder if I've caught a cold.\",\n \"ja@ja\": \"朝から喉がいがらっぽいんだ。風邪でも引いたかな。\"\n },\n {\n \"sents\": \"Take a sweater with you so you don't catch a cold.\",\n \"ja@ja\": \"風邪をひかないようにセーターを持って行きなさい。\"\n },\n {\n \"sents\": \"I often catch colds in the winter.\",\n \"ja@ja\": \"私は冬によく風邪をひきます。\"\n },\n {\n \"sents\": \"Tom catches colds easily.\",\n \"ja@ja\": \"トムはすぐ風邪を引く。\"\n },\n {\n \"sents\": \"He caught a terrible cold.\",\n \"ja@ja\": \"彼は酷い風邪をひいた。\"\n },\n {\n \"sents\": \"It took more than a month to get over my cold, but I'm OK now.\",\n \"ja@ja\": \"風邪を治すのにひと月以上かかったが、もう大丈夫だ。\"\n },\n {\n \"sents\": \"She was absent from school with a cold.\",\n \"ja@ja\": \"彼女は風邪で学校を休んだ。\"\n },\n {\n \"sents\": \"I've caught a cold.\",\n \"ja@ja\": \"風邪をひいてるんだ。\"\n },\n {\n \"sents\": \"I am suffering from a bad cold.\",\n \"ja@ja\": \"私はたちの悪い風邪にかかっている。\"\n },\n {\n \"sents\": \"He must have gotten over his cold.\",\n \"ja@ja\": \"彼は風邪がよくなったに違いない。\"\n },\n {\n \"sents\": \"You shouldn't have come to school if you have a cold.\",\n \"ja@ja\": \"風邪引いてんなら学校来んなよ。\"\n },\n {\n \"sents\": \"I didn't want the baby to catch a cold, so I closed the window.\",\n \"ja@ja\": \"赤ちゃんが風邪をひくといけないので、私は窓を閉めた。\"\n },\n {\n \"sents\": \"My mother is sick with a bad cold.\",\n \"ja@ja\": \"母はひどい風邪にかかっている。\"\n },\n {\n \"sents\": \"A slight cold prevented me from going to Ibusuki with my family.\",\n \"ja@ja\": \"風邪気味だったので、家族と指宿には行けなかった。\"\n },\n {\n \"sents\": \"Tom caught a cold from Mary.\",\n \"ja@ja\": \"トムはメアリーに風邪をうつされた。\"\n },\n {\n \"sents\": \"I've got a cold.\",\n \"ja@ja\": \"風邪をひきました。\"\n },\n {\n \"sents\": \"Be careful not to catch a cold.\",\n \"ja@ja\": \"風邪を引かないように注意しなければいけません。\"\n },\n {\n \"sents\": \"I've caught a cold.\",\n \"ja@ja\": \"風邪をひきました。\"\n },\n {\n \"sents\": \"She had a touch of a cold last night.\",\n \"ja@ja\": \"昨晩彼女は風邪気味だった。\"\n },\n {\n \"sents\": \"I must've caught a cold.\",\n \"ja@ja\": \"風邪を引いたに違いない。\"\n },\n {\n \"sents\": \"A slight cold prevented me from going to Ibusuki with my family.\",\n \"ja@ja\": \"わたしは風邪気味であったために、家族と指宿へ行かれなかった。\"\n },\n {\n \"sents\": \"Take care not to catch a cold.\",\n \"ja@ja\": \"風邪引かないようにね。\"\n },\n {\n \"sents\": \"Be careful not to catch a cold.\",\n \"ja@ja\": \"風邪引かないようにね。\"\n },\n {\n \"sents\": \"She caught colds often.\",\n \"ja@ja\": \"彼女は風邪を引きやすかった。\"\n },\n {\n \"sents\": \"You can get rid of the cold if you take this medicine.\",\n \"ja@ja\": \"この薬を飲めば風邪も治せるよ。\"\n },\n {\n \"sents\": \"She was susceptible to colds.\",\n \"ja@ja\": \"彼女は風邪を引きやすかった。\"\n },\n {\n \"sents\": \"He came down with a cold.\",\n \"ja@ja\": \"彼は風邪にかかった。\"\n },\n {\n \"sents\": \"I have recovered from my bad cold.\",\n \"ja@ja\": \"ひどい風邪が治った。\"\n },\n {\n \"sents\": \"I have a cold.\",\n \"ja@ja\": \"風邪をひきました。\"\n },\n {\n \"sents\": \"I often catch colds.\",\n \"ja@ja\": \"私はよく風邪を引く。\"\n },\n {\n \"sents\": \"Don't come to school if you have a cold.\",\n \"ja@ja\": \"風邪引いてんなら学校来んなよ。\"\n },\n {\n \"sents\": \"I caught a cold.\",\n \"ja@ja\": \"風邪をひきました。\"\n },\n {\n \"sents\": \"He gave me a bad cold.\",\n \"ja@ja\": \"彼にひどい風邪をうつされた。\"\n },\n {\n \"sents\": \"My father is suffering from a cold.\",\n \"ja@ja\": \"父は風邪を患っている。\"\n },\n {\n \"sents\": \"I caught a cold.\",\n \"ja@ja\": \"風邪ひいた。\"\n },\n {\n \"sents\": \"If you go out wearing that little you will catch a cold.\",\n \"ja@ja\": \"そんな薄着で外出たら風邪引くよ。\"\n },\n {\n \"sents\": \"I gave my cold to him.\",\n \"ja@ja\": \"私は彼に風邪をうつした。\"\n },\n {\n \"sents\": \"Is it true that you recover from colds when you give 
them to someone else?\",\n \"ja@ja\": \"風邪って人にうつすと治るってほんと?\"\n },\n {\n \"sents\": \"I have a bad cold.\",\n \"ja@ja\": \"風邪がひどいのです。\"\n },\n {\n \"sents\": \"How's your cold?\",\n \"ja@ja\": \"風邪の具合はどうですか。\"\n },\n {\n \"sents\": \"I rarely catch a cold.\",\n \"ja@ja\": \"私はめったに風邪をひかない。\"\n },\n {\n \"sents\": \"Tom is in bed with a cold.\",\n \"ja@ja\": \"トムは風邪で寝込んでいる。\"\n },\n {\n \"sents\": \"I can't get over my cold.\",\n \"ja@ja\": \"風邪がなかなか治らない。\"\n },\n {\n \"sents\": \"I can't shake off my cold.\",\n \"ja@ja\": \"風邪がなかなか治らない。\"\n },\n {\n \"sents\": \"I can't get over my cold.\",\n \"ja@ja\": \"風邪が直らない。\"\n },\n {\n \"sents\": \"Be careful not to catch a cold.\",\n \"ja@ja\": \"風邪をひかないように注意しなさい。\"\n },\n {\n \"sents\": \"You'd better be careful not to catch cold.\",\n \"ja@ja\": \"風邪を引かないように注意しなければいけません。\"\n },\n {\n \"sents\": \"If you have a cold, lack of sleep is very bad for you.\",\n \"ja@ja\": \"風邪を引いているなら、睡眠不足はよくないよ。\"\n },\n {\n \"sents\": \"I am afraid I have a touch of a cold.\",\n \"ja@ja\": \"私は風邪気味のようです。\"\n },\n {\n \"sents\": \"Since he had a bad cold, he was absent from school today.\",\n \"ja@ja\": \"ひどい風邪をひいたので、彼は今日学校を休んだ。\"\n },\n {\n \"sents\": \"Do you have anything for a cold?\",\n \"ja@ja\": \"風邪に効く薬はありますか。\"\n },\n {\n \"sents\": \"If you catch a cold, you cannot easily get rid of it.\",\n \"ja@ja\": \"風邪を引いたら、簡単には治りません。\"\n },\n {\n \"sents\": \"Since I had a cold, I was absent from school.\",\n \"ja@ja\": \"風邪を引いていたので、私は学校を休んだ。\"\n },\n {\n \"sents\": \"Be careful not to catch a cold.\",\n \"ja@ja\": \"風邪をひかないようにしなさいよ。\"\n },\n {\n \"sents\": \"She said she had a slight cold.\",\n \"ja@ja\": \"彼女は風邪気味だと言った。\"\n }\n ]\n}\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc5a47a75f4b7e8e64b33dffbf4d55b23c10eb6 | 204,037 | ipynb | Jupyter Notebook | TEMA-3/.ipynb_checkpoints/Clase20_ValuacionOpcionesAsiaticas-checkpoint.ipynb | kitziafigueroa/SPF-2019-II | 7c27ffe068a94a6140ba98ad8dffe2b3a2369df4 | [
"MIT"
] | null | null | null | TEMA-3/.ipynb_checkpoints/Clase20_ValuacionOpcionesAsiaticas-checkpoint.ipynb | kitziafigueroa/SPF-2019-II | 7c27ffe068a94a6140ba98ad8dffe2b3a2369df4 | [
"MIT"
] | null | null | null | TEMA-3/.ipynb_checkpoints/Clase20_ValuacionOpcionesAsiaticas-checkpoint.ipynb | kitziafigueroa/SPF-2019-II | 7c27ffe068a94a6140ba98ad8dffe2b3a2369df4 | [
"MIT"
] | null | null | null | 248.82561 | 81,244 | 0.905086 | [
[
[
"# Valuación de opciones asiáticas ",
"_____no_output_____"
],
[
"- Las opciones que tratamos la clase pasada dependen sólo del valor del precio del subyacente $S_t$, en el instante que se ejerce.\n\n- Cambios bruscos en el precio, cambian que la opción esté *in the money* a estar *out the money*.\n\n- **Posibilidad de evitar esto** $\\longrightarrow$ suscribir un contrato sobre el valor promedio del precio del subyacente. \n\n- <font color ='red'> Puede proveer protección contra fluctuaciones extremas del precio en mercados volátiles. </font>\n\n- **Nombre**: Banco Trust de Tokio ofreció este tipo de opciones\n\n### ¿Dónde se negocian?\n\n- Mercados OTC (Over the Counter / Independientes). Una explicación de esto podría ser el último punto de la lámina anterior.\n\n- Las condiciones para el cálculo matemático del promedio y otras condiciones son especificadas en el contrato. Lo que las hace un poco más “personalizables”. \n\nExisten diversos tipos de opciones asiáticas y se clasiflcan de acuerdo con lo siguiente.\n\n1. La media que se utiliza puede ser **aritmética** o geométrica.\n - Media aritmética: $$ \\bar x = \\frac{1}{n}\\sum_{i=1}^{n} x_i$$\n - Media geométrica: $$ {\\bar {x}}={\\sqrt[{n}]{\\prod _{i=1}^{n}{x_{i}}}}={\\sqrt[{n}]{x_{1}\\cdot x_{2}\\cdots x_{n}}}$$\n * **Ventajas**:\n - Considera todos los valores de la distribución.\n - Es menos sensible que la media aritmética a los valores extremos.\n * **Desventajas**\n - Es de significado estadístico menos intuitivo que la media aritmética.\n - Su cálculo es más difícil.\n - Si un valor $x_i = 0$ entonces la media geométrica se anula o no queda determinada.\n\nLa media aritmética de un conjunto de números positivos siempre es igual o superior a la media geométrica:\n$$\n\\sqrt[n]{x_1 \\cdot x_2 \\dots x_n} \\le \\frac{x_1+ \\dots + x_n}{n}\n$$ \n\n2. Media se calcula para $S_t \\longrightarrow$ \"Precio de ejercicio fijo\". Media se calcula para precio de ejercicio $\\longrightarrow$ \"Precio de ejercicio flotante\". \n\n3. Si la opción sólo se puede ejercer al final del tiempo del contrato se dice que es asiática de tipo europeo o **euroasiática**, y si puede ejercer en cualquier instante, durante la vigencia del contrato se denomina **asiática de tipo americano.**\n\nLos tipos de opciones euroasiáticas son:\n\n- Call con precio de ejercicio fijo, función de pago: $\\max\\{A-K,0\\}$.\n- Put con precio de ejercicio fijo, función de pago: $\\max\\{K-A,0\\}$.\n- Call con precio de ejercicio flotante, función de pago: $\\max\\{S-K,0\\}$.\n- Put con precio de ejercicio flotante, función de pago: $\\max\\{K-S,0\\}$.\n\nDonde $A$ es el promedio del precio del subyacente.\n\n$$\\text{Promedio aritmético} \\quad A={1\\over T} \\int_0^TS_tdt$$\n$$\\text{Promedio geométrico} \\quad A=\\exp\\Big({1\\over T} \\int_0^T Ln(S_t) dt\\Big)$$\n\nDe aquí en adelante denominaremos **Asiática ** $\\longrightarrow$ Euroasiática y se analizará el call asiático con **K Fijo**.\n\nSe supondrá un solo activo con riesgo, cuyos proceso de precios $\\{S_t | t\\in [0,T]\\}$ satisface un movimiento browniano geométrico, en un mercado que satisface las suposiciones del modelo de Black y Scholes.\n\n__Suposiciones del modelo__:\n- El precio del activo sigue un movimiento browniano geométrico. \n$$\\frac{dS_t}{S_t}=\\mu dt + \\sigma dW_t,\\quad 0\\leq t \\leq T, S_0 >0$$\n- El comercio puede tener lugar continuamente sin ningún costo de transacción o impuestos.\n- Se permite la venta en corto y los activos son perfectamente divisibles. 
Por lo tanto, se pueden vender activos que no son propios y se puede comprar y vender cualquier número (no necesariamente un número entero) de los activos subyacentes.\n- La tasa de interés libre de riesgo continuamente compuesta es constante.\n- Los inversores pueden pedir prestado o prestar a la misma tasa de interés sin riesgo.\n- No hay oportunidades de arbitraje sin riesgo. De ello se deduce que todas las carteras libres de riesgo deben obtener el mismo rendimiento.\n\nRecordemos que bajo esta medida de probabilidad, $P^*$, denominada de riesgo neutro, bajo la cual el precio del activo, $S_t$, satisface:\n\n$$dS_t = rS_tdt+\\sigma S_tdW_t,\\quad 0\\leq t \\leq T, S_0 >0$$\n\nPara un call asiático de promedio aritmético y con precio de ejercicios fijo, está dado por\n$$\\max \\{A(T)-K,0\\} = (A(T)-K)_+$$\n\ncon $A(x)={1\\over x} \\int_0^x S_u du$",
"_____no_output_____"
],
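[
"As a quick numeric check of the arithmetic-geometric mean inequality above: for the prices $90, 100, 110$ the arithmetic mean is $\\frac{90+100+110}{3}=100$, while the geometric mean is $\\sqrt[3]{90\\cdot 100\\cdot 110}\\approx 99.67$. Since the payoff of a fixed-strike Asian call is increasing in the average, a geometric-average Asian call can never pay more than the arithmetic-average one on the same path.",
"_____no_output_____"
],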
[
"Se puede ver que el valor en el tiempo t de la opción call asiática está dado por:\n\n$$ V_t(K) = e^{-r(T-t)}E^*[(A(T)-K)_+]$$\n\nPara el caso de interés, *Valución de la opción*, donde $t_0=0$ y $t=0$, se tiene:\n\n$$\\textbf{Valor call asiático}\\longrightarrow V_0(K)=e^{-rT}E\\Bigg[ \\Big({1\\over T} \\int_0^T S_u du -K\\Big)_+\\Bigg]$$ ",
"_____no_output_____"
],
[
"## Usando Monte Carlo\n\nPara usar este método es necesario que se calcule el promedio $S_u$ en el intervalo $[0,T]$. Para esto se debe aproximar el valor de la integral por los siguiente dos métodos.\n\nPara los dos esquemas se dividirá el intervalo $[0,T]$ en N subintervalos de igual longitud, $h={T\\over N}$, esto determina los tiempos $t_0,t_1,\\cdots,t_{N-1},t_N $, en donde $t_i=ih$ para $i=0,1,\\cdots,N$\n\n### 1. Sumas de Riemann\n\n$$\\int_0^T S_u du \\approx h \\sum_{i=0}^{n-1} S_{t_i}$$\n\nDe este modo, si con el método de Monte Carlo se generan $M$ trayectorias, entonces\nla aproximación de el valor del call asiático estaría dada por:\n\n$$\\hat V_0^{(1)}= {e^{-rT} \\over M} \\sum_{j=1}^{M} \\Bigg({1\\over N} \\sum_{i=0}^{N-1} S_{t_i}-K \\Bigg)_+$$\n",
"_____no_output_____"
],
[
"### 2. Mejorando la aproximación de las sumas de Riemann (esquema del trapecio)\n\n",
"_____no_output_____"
],
[
"Desarrollando la exponencial en serie de taylor y suponiendo que $h$ es pequeña, sólo se conservan los términos de orden uno, se tiene la siguiente aproximación:\n$$\\int_0^T S_u du \\approx {h \\over 2}\\sum_{i=0}^{N-1}S_{t_i}(2+rh+(W_{t_{i+1}}-W_{t_i})\\sigma)$$\n\nReemplazando esta aproximación en el precio del call, se tiene la siguiente estimación:\n$$\\hat V_0^{(2)}= {e^{-rT} \\over M} \\sum_{j=1}^{M} \\Bigg({h\\over 2T} \\sum_{i=0}^{N-1} S_{t_i}(2+rh+(W_{t_{i+1}}-W_{t_i})\\sigma)-K \\Bigg)_+$$\n\n> **Referencia**:\nhttp://mat.izt.uam.mx/mat/documentos/notas%20de%20clase/cfenaoe3.pdf",
"_____no_output_____"
],
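[
"A minimal, self-contained sketch of the estimator $\\hat V_0^{(2)}$ is shown below (it simulates its own paths instead of reusing `BSprices`, and the function name `trapezoid_call` is only illustrative):\n\n```python\nimport numpy as np\n\ndef trapezoid_call(K, r, S0, NbTraj, NbStep, sigma, T=1):\n    # Monte Carlo price of the arithmetic-average Asian call using the\n    # trapezoid correction (h/2) * sum_i S_ti * (2 + r*h + sigma*DeltaW_i)\n    h = T / NbStep\n    dW = np.sqrt(h) * np.random.randn(NbTraj, NbStep)          # Brownian increments\n    logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * h + sigma * dW, axis=1)\n    S = np.hstack([S0 * np.ones((NbTraj, 1)), np.exp(logS)])   # path including S_t0 = S0\n    integral = 0.5 * h * np.sum(S[:, :-1] * (2 + r * h + sigma * dW), axis=1)\n    payoff = np.maximum(integral / T - K, 0)\n    return np.exp(-r * T) * payoff.mean()\n```\n\nWith the example parameters ($S_0=K=100$, $r=0.10$, $\\sigma=0.20$, $T=1$) and enough trajectories and steps, this should approach the reference value of roughly 7.04.",
"_____no_output_____"
],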
[
"## Ejemplo\n\nComo caso de prueba se seleccionó el de un call asiático con precio inicial, $S_0 = 100$, precio de ejercicio $K = 100$, tasa libre de riesgo $r = 0.10$, volatilidad $\\sigma = 0.20$ y $T = 1$ año. Cuyo precio es $\\approx 7.04$.",
"_____no_output_____"
]
],
[
[
"#importar los paquetes que se van a usar\nimport pandas as pd\nimport pandas_datareader.data as web\nimport numpy as np\nimport datetime\nimport matplotlib.pyplot as plt\nimport scipy.stats as st\nimport seaborn as sns\n%matplotlib inline\n#algunas opciones para Pandas\npd.set_option('display.notebook_repr_html', True)\npd.set_option('display.max_columns', 9)\npd.set_option('display.max_rows', 10)\npd.set_option('display.width', 78)\npd.set_option('precision', 3)",
"_____no_output_____"
],
[
"def BSprices(mu,sigma,S0,NbTraj,NbStep):\n \"\"\"\n Expresión de la solución de la ecuación de Black-Scholes\n St = S0*exp((r-sigma^2/2)*t+ sigma*DeltaW)\n \n Parámetros\n ---------\n mu : Tasa libre de riesgo\n sigma : Desviación estándar de los rendimientos\n S0 : Precio inicial del activo subyacente\n NbTraj: Cantidad de trayectorias a simular\n NbStep: Número de días a simular\n \"\"\"\n # Datos para la fórmula de St\n nu = mu-(sigma**2)/2\n DeltaT = 1/NbStep\n SqDeltaT = np.sqrt(DeltaT)\n DeltaW = SqDeltaT*np.random.randn(NbTraj,NbStep-1)\n \n # Se obtiene --> Ln St = Ln S0+ nu*DeltaT + sigma*DeltaW\n increments = nu*DeltaT + sigma*DeltaW\n concat = np.concatenate((np.log(S0)*np.ones([NbTraj,1]),increments),axis=1)\n \n # Se utiliza cumsum por que se quiere simular los precios iniciando desde S0\n LogSt = np.cumsum(concat,axis=1)\n # Se obtienen los precios simulados para los NbStep fijados\n St = np.exp(LogSt)\n # Vector con la cantidad de días simulados\n t = np.arange(0,NbStep)\n\n return St.T,t\n\ndef calc_daily_ret(closes):\n return np.log(closes/closes.shift(1)).iloc[1:]",
"_____no_output_____"
],
[
"np.random.seed(5555)\nNbTraj = 2\nNbStep = 100\nS0 = 100\nr = 0.10\nsigma = 0.2\nK = 100\n\n# Resolvemos la ecuación de black scholes para obtener los precios\nSt,t = BSprices(r,sigma,S0,NbTraj,NbStep)\n# t = t*NbStep\n\nprices = pd.DataFrame(St,index=t)\nprices",
"_____no_output_____"
],
[
"# Graficamos los precios simulados\nplt.plot(t,St,label='precios')\n\n# Obtenemos los precios promedios en todo el tiempo y los graficamos \nAverage_t = prices.expanding(1,axis=0).mean()\n# Average_t = prices.rolling(window=20).mean()\nplt.plot(t,Average_t,label='Promedio de precios')\nplt.legend()\nplt.show()#",
"_____no_output_____"
],
[
"# Ilustración función rolling y expanding\ndata = pd.DataFrame([\n ['a', 1],\n ['a', 2],\n ['a', 4],\n ['b', 5],\n], columns = ['category', 'value'])\nprint('expanding\\n',data.value.expanding(2).sum())\nprint('rolling\\n',data.value.rolling(window=2).sum())",
"expanding\n 0 NaN\n1 3.0\n2 7.0\n3 12.0\nName: value, dtype: float64\nrolling\n 0 NaN\n1 3.0\n2 6.0\n3 9.0\nName: value, dtype: float64\n"
],
[
"# Diferencia entre el cálculo de la media usando expanding y una ventana móvil\nf_expand= prices.expanding(10).mean()\nf_rolling = prices.rolling(window=10).mean()\nplt.plot(t,f_rolling,label='Usando rolling')\nplt.plot(t,f_expand,label='Usando expanding')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"# Explicación de función expanding\npan = pd.DataFrame(np.matrix([[1,2,3],[4,5,6],[7,8,9]]))\npan.expanding(1,axis=0).mean()",
"_____no_output_____"
]
],
[
[
"### Método sumas de Riemann",
"_____no_output_____"
]
],
[
[
"#### Sumas de Riemann\nstrike = pd.DataFrame(K*np.ones([NbStep,NbTraj]), index=t)\nT = 1 # Tiempo de cierre des contrato\ncall = pd.DataFrame({'Prima':np.exp(-r*T) \\\n *np.fmax(Average_t-strike,np.zeros([NbStep,NbTraj])).mean(axis=1)}, index=t)\n# .mean(axis=1) realiza el promedio entre las columnas de np.fmax() \n# es decir entre las trayectorias simuladas \ncall.plot()\nprint('La prima estimada usando %i trayectoris es: %2.2f'%(NbTraj,call.iloc[-1].Prima))\n# intervalos de confianza\nconfianza = 0.95\nsigma_est = call.sem().Prima\nmean_est = call.iloc[-1].Prima\ni1 = st.t.interval(confianza,NbTraj-1, loc=mean_est, scale=sigma_est)\ni2 = st.norm.interval(confianza, loc=mean_est, scale=sigma_est)\nprint('El intervalor de confianza usando t-dist es:', i1)\nprint('El intervalor de confianza usando norm-dist es:', i2)\n# mean_est",
"La prima estimada usando 2 trayectoris es: 9.94\nEl intervalor de confianza usando t-dist es: (6.57405517707332, 13.314037058629602)\nEl intervalor de confianza usando norm-dist es: (9.424216553543172, 10.463875682159749)\n"
],
[
"call.iloc[-1].Prima",
"_____no_output_____"
]
],
[
[
"Ahora hagamos pruebas variando la cantidad de trayectorias `NbTraj` y la cantidad de números de puntos `NbStep` para ver como aumenta la precisión del método. Primero creemos una función que realice la aproximación de Riemann",
"_____no_output_____"
]
],
[
[
"def Riemann_approach(K:'Strike price',r:'Tasa libre de riesgo',S0:'Precio inicial',\n NbTraj:'Número trayectorias',NbStep:'Cantidad de pasos a simular',\n sigma:'Volatilidad',T:'Tiempo de cierre del contrato en años'):\n # Resolvemos la ecuación de black scholes para obtener los precios\n St,t = BSprices(r,sigma,S0,NbTraj,NbStep)\n # Almacenamos los precios en un dataframe\n prices = pd.DataFrame(St,index=t)\n # Obtenemos los precios promedios\n Average_t = prices.expanding().mean()\n # Definimos el dataframe de strikes\n strike = pd.DataFrame(K*np.ones([NbStep,NbTraj]), index=t)\n # Calculamos el call de la opción según la formula obtenida para Sumas de Riemann\n call = pd.DataFrame({'Prima':np.exp(-r*T) \\\n *np.fmax(Average_t-strike,np.zeros([NbStep,NbTraj])).mean(axis=1)}, index=t)\n # intervalos de confianza\n confianza = 0.95\n sigma_est = call.sem().Prima\n mean_est = call.iloc[-1].Prima\n i1 = st.norm.interval(confianza, loc=mean_est, scale=sigma_est)\n# return np.array([call.iloc[-1].Prima,i1[0],i1[1]])\n return call.iloc[-1].Prima",
"_____no_output_____"
],
[
"NbTraj = [1000,5000,10000]\nNbStep = [10,50,100]\n\nS0 = 100 # Precio inicial\nr = 0.10 # Tasa libre de riesgo \nsigma = 0.2 # volatilidad\nK = 100 # Strike price\nT = 1 # Tiempo de cierre - años\n\n# Call = np.zeros([len(NbTraj),len(NbStep)])\n# intervalos = []#np.zeros([len(NbTraj),len(NbStep)])\nT = 1\nM = list(map(lambda N_tra:list(map(lambda N_ste:Riemann_approach(K,r,S0,N_tra,N_ste,sigma,T),\n NbStep)),\n NbTraj))\nM = np.asmatrix(M)",
"_____no_output_____"
],
[
"# Visualización de datos \nfilas = ['Nbtray = %i' %i for i in NbTraj]\ncol = ['NbStep = %i' %i for i in NbStep]\ndf = pd.DataFrame(index=filas,columns=col)\ndf.loc[:,:] = M\ndf",
"_____no_output_____"
]
],
[
[
"# Tarea\n\nImplementar el método de esquemas del trapecio, para valuar la opción call y put asiática con precio inicial, $S_0 = 100$, precio de ejercicio $K = 100$, tasa libre de riesgo $r = 0.10$, volatilidad $\\sigma = 0.20$ y $T = 1$ año. Cuyo precio es $\\approx 7.04$. Realizar la simulación en base a la siguiente tabla:\n\n\nObserve que en esta tabla se encuentran los intervalos de confianza de la aproximación obtenida y además el tiempo de simulación que tarda en encontrar la respuesta cada método. \n- Se debe entonces realizar una simulación para la misma cantidad de trayectorias y número de pasos y construir una Dataframe de pandas para reportar todos los resultados obtenidos.**(70 puntos)**\n- Compare los resultados obtenidos con los resultados arrojados por la función `Riemann_approach`. Concluya. **(30 puntos)**",
"_____no_output_____"
],
[
"Se habilitará un enlace en canvas donde se adjuntará los resultados de dicha tarea\n\n>**Nota:** Para generar índices de manera como se especifica en la tabla referirse a:\n> - https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html\n> - https://jakevdp.github.io/PythonDataScienceHandbook/03.05-hierarchical-indexing.html\n> - https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.html\n",
"_____no_output_____"
],
[
"<script>\n $(document).ready(function(){\n $('div.prompt').hide();\n $('div.back-to-top').hide();\n $('nav#menubar').hide();\n $('.breadcrumb').hide();\n $('.hidden-print').hide();\n });\n</script>\n\n<footer id=\"attribution\" style=\"float:right; color:#808080; background:#fff;\">\nCreated with Jupyter by Oscar David Jaramillo Z.\n</footer>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
ecc5acb0c1ab5b69f872ca0d08dcc9eab5509ca7 | 677,949 | ipynb | Jupyter Notebook | riiid-EDA.ipynb | sahilabs/Riiid-Answer-Correctness | 154ba5facf3569c4cf5c82ed86c68265fbe75a3d | [
"Apache-2.0"
] | 4 | 2021-01-26T07:51:53.000Z | 2021-01-26T13:05:56.000Z | riiid-EDA.ipynb | sahilabs/Riiid-Answer-Correctness | 154ba5facf3569c4cf5c82ed86c68265fbe75a3d | [
"Apache-2.0"
] | null | null | null | riiid-EDA.ipynb | sahilabs/Riiid-Answer-Correctness | 154ba5facf3569c4cf5c82ed86c68265fbe75a3d | [
"Apache-2.0"
] | null | null | null | 180.545672 | 61,040 | 0.886882 | [
[
[
"# About the Riiid AIEd Challenge 2020\n\nRiiid Labs, an AI solutions provider delivering creative disruption to the education market, empowers global education players to rethink traditional ways of learning leveraging AI. With a strong belief in equal opportunity in education, Riiid launched an AI tutor based on deep-learning algorithms in 2017 that attracted more than one million South Korean students. This year, the company released EdNet, the world’s largest open database for AI education containing more than 100 million student interactions.\n\nIn this competition, your challenge is to create algorithms for \"Knowledge Tracing,\" the modeling of student knowledge over time. The goal is to accurately predict how students will perform on future interactions. You will pair your machine learning skills using Riiid’s EdNet data. \n\nSubmissions are evaluated on area under the ROC curve between the predicted probability and the observed target.",
"_____no_output_____"
],
[
"# Table of Contents\n\n[**1. EDA**](#1.-EDA)\n\n[1.1 Exploring Train](#1.1-Exploring-Train)\n\n[1.2 Exploring Questions](#1.2-Exploring-Questions)\n\n[1.3 Exploring Lectures](#1.3-Exploring-Lectures)\n \n[**2. Baseline model**](#2.-Baseline-model)",
"_____no_output_____"
],
[
"# 1. EDA\n\nAltogether, we are given 7 files.\n\n>Tailoring education to a student's ability level is one of the many valuable things an AI tutor can do. Your challenge in this competition is a version of that overall task; you will predict whether students are able to answer their next questions correctly. You'll be provided with the same sorts of information a complete education app would have: that student's historic performance, the performance of other students on the same question, metadata about the question itself, and more.\n\n>This is a time-series code competition, you will receive test set data and make predictions with Kaggle's time-series API. Please be sure to review the Time-series API Details section closely.\n\nSo we should realize that example_test.csv really is just an example. The submission happens via the API.",
"_____no_output_____"
]
],
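[
[
"Since submissions go through the time-series API rather than a CSV upload, the basic prediction loop looks roughly like the sketch below (following the competition's starter pattern; the constant 0.5 is just a placeholder for a real model's output):\n\n```python\nimport riiideducation\n\nenv = riiideducation.make_env()  # can only be called once per session\nfor test_df, sample_prediction_df in env.iter_test():\n    # only question rows (content_type_id == 0) are scored\n    test_df['answered_correctly'] = 0.5  # placeholder prediction\n    env.predict(test_df.loc[test_df['content_type_id'] == 0,\n                            ['row_id', 'answered_correctly']])\n```",
"_____no_output_____"
]
],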
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport matplotlib.style as style\nstyle.use('fivethirtyeight')\nimport seaborn as sns\nimport os\nfrom matplotlib.ticker import FuncFormatter\n\nimport os\nfor dirname, _, filenames in os.walk('/kaggle/input/riiid-test-answer-prediction'):\n for filename in filenames:\n print(os.path.join(dirname, filename))",
"/kaggle/input/riiid-test-answer-prediction/example_sample_submission.csv\n/kaggle/input/riiid-test-answer-prediction/example_test.csv\n/kaggle/input/riiid-test-answer-prediction/questions.csv\n/kaggle/input/riiid-test-answer-prediction/train.csv\n/kaggle/input/riiid-test-answer-prediction/lectures.csv\n/kaggle/input/riiid-test-answer-prediction/riiideducation/competition.cpython-37m-x86_64-linux-gnu.so\n/kaggle/input/riiid-test-answer-prediction/riiideducation/__init__.py\n"
],
[
"%%time\n\ntrain = pd.read_pickle(\"../input/riiid-train-data-multiple-formats/riiid_train.pkl.gzip\")\n\nprint(\"Train size:\", train.shape)",
"Train size: (101230332, 10)\nCPU times: user 5.83 s, sys: 8.01 s, total: 13.8 s\nWall time: 43.7 s\n"
]
],
[
[
"Let's start by checking how much memory this dataframe is using.",
"_____no_output_____"
]
],
[
[
"train.memory_usage(deep=True)",
"_____no_output_____"
],
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 101230332 entries, 0 to 101230331\nData columns (total 10 columns):\n # Column Dtype \n--- ------ ----- \n 0 row_id int64 \n 1 timestamp int64 \n 2 user_id int32 \n 3 content_id int16 \n 4 content_type_id bool \n 5 task_container_id int16 \n 6 user_answer int8 \n 7 answered_correctly int8 \n 8 prior_question_elapsed_time float32\n 9 prior_question_had_explanation object \ndtypes: bool(1), float32(1), int16(2), int32(1), int64(2), int8(2), object(1)\nmemory usage: 3.7+ GB\n"
]
],
[
[
"Hmm.....we can see that 'prior_question_had_explanation' is object and taking a lot of memory, while it is supposed to be boolean. Let's fix this before continuing.",
"_____no_output_____"
]
],
[
[
"train['prior_question_had_explanation'] = train['prior_question_had_explanation'].astype('boolean')\n\ntrain.memory_usage(deep=True)",
"_____no_output_____"
]
],
[
[
"The other files don't take very long to load, and I am importing the CSVs directly.",
"_____no_output_____"
]
],
[
[
"%%time\n\nquestions = pd.read_csv('/kaggle/input/riiid-test-answer-prediction/questions.csv')\nlectures = pd.read_csv('/kaggle/input/riiid-test-answer-prediction/lectures.csv')\nexample_test = pd.read_csv('/kaggle/input/riiid-test-answer-prediction/example_test.csv')\nexample_sample_submission = pd.read_csv('/kaggle/input/riiid-test-answer-prediction/example_sample_submission.csv')",
"CPU times: user 18 ms, sys: 7.03 ms, total: 25 ms\nWall time: 76 ms\n"
]
],
[
[
"# 1.1 Exploring Train\n\nThe columns in the train file are described as:\n* row_id: (int64) ID code for the row.\n* timestamp: (int64) the time in milliseconds between this user interaction and the first event completion from that user.\n* user_id: (int32) ID code for the user.\n* content_id: (int16) ID code for the user interaction\n* content_type_id: (int8) 0 if the event was a question being posed to the user, 1 if the event was the user watching a lecture.\n* task_container_id: (int16) Id code for the batch of questions or lectures. For example, a user might see three questions in a row before seeing the explanations for any of them. Those three would all share a task_container_id.\n* user_answer: (int8) the user's answer to the question, if any. Read -1 as null, for lectures.\n* answered_correctly: (int8) if the user responded correctly. Read -1 as null, for lectures.\n* prior_question_elapsed_time: (float32) The average time in milliseconds it took a user to answer each question in the previous question bundle, ignoring any lectures in between. Is null for a user's first question bundle or lecture. Note that the time is the average time a user took to solve each question in the previous bundle.\n* prior_question_had_explanation: (bool) Whether or not the user saw an explanation and the correct response(s) after answering the previous question bundle, ignoring any lectures in between. The value is shared across a single question bundle, and is null for a user's first question bundle or lecture. Typically the first several questions a user sees were part of an onboarding diagnostic test where they did not get any feedback\n\nThe train dataset is ordered by ascending user_id and ascending timestamp.",
"_____no_output_____"
]
],
[
[
"train.head(10)",
"_____no_output_____"
],
[
"print(f'We have {train.user_id.nunique()} unique users in our train set')",
"We have 393656 unique users in our train set\n"
]
],
[
[
"Content_type_id = False means that a question was asked. True means that the user was watching a lecture.",
"_____no_output_____"
]
],
[
[
"train.content_type_id.value_counts()",
"_____no_output_____"
]
],
[
[
"Content_id is a code for the user interaction. Basically, these are the questions if content_type is question (question_id: foreign key for the train/test content_id column, when the content type is question).",
"_____no_output_____"
]
],
[
[
"print(f'We have {train.content_id.nunique()} content ids in our train set, of which {train[train.content_type_id == False].content_id.nunique()} are questions.')",
"We have 13782 content ids in our train set, of which 13523 are questions.\n"
],
[
"cids = train.content_id.value_counts()[:30]\n\nfig = plt.figure(figsize=(12,6))\nax = cids.plot.bar()\nplt.title(\"Thirty most used content id's\")\nplt.xticks(rotation=90)\nax.get_yaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ','))) #add thousands separator\nplt.show()",
"_____no_output_____"
]
],
[
[
"task_container_id: (int16) Id code for the batch of questions or lectures. For example, a user might see three questions in a row before seeing the explanations for any of them. Those three would all share a task_container_id.",
"_____no_output_____"
]
],
[
[
"print(f'We have {train.task_container_id.nunique()} unique Batches of questions or lectures.')",
"We have 10000 unique Batches of questions or lectures.\n"
]
],
[
[
"User answer. Seems that the questions are multiple choice (answers 0-3). As mentioned in the data description, -1 is actually no-answer (as the interaction was a lecture instead of a question).",
"_____no_output_____"
]
],
[
[
"train.user_answer.value_counts()",
"_____no_output_____"
]
],
[
[
"timestamp: (int64) the time in milliseconds between this user interaction and the first event completion from that user. As you can see, most interactions are from users that were not active very long on the platform yet.",
"_____no_output_____"
]
],
[
[
"#1 year = 31536000000 ms\nts = train['timestamp']/(31536000000/12)\nfig = plt.figure(figsize=(12,6))\nts.plot.hist(bins=100)\nplt.title(\"Histogram of timestamp\")\nplt.xticks(rotation=0)\nplt.xlabel(\"Months between this user interaction and the first event completion from that user\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Do we have the full history of all user_id's? Yes, if we filter train on timestamp==0, we get a time 0 for all users.",
"_____no_output_____"
]
],
[
[
"print(f'Of the {train.user_id.nunique()} users in train we have {train[train.timestamp == 0].user_id.nunique()} users with a timestamp zero row.')",
"Of the 393656 users in train we have 393656 users with a timestamp zero row.\n"
]
],
[
[
"# The target: answered_correctly\nAnswered_correctly is our target, and we have to predict to probability for an answer to be correct. Without looking at the lecture interactions (-1), we see about 1/3 of the questions was answered incorrectly.",
"_____no_output_____"
]
],
[
[
"correct = train[train.answered_correctly != -1].answered_correctly.value_counts(ascending=True)\n\nfig = plt.figure(figsize=(12,4))\ncorrect.plot.barh()\nfor i, v in zip(correct.index, correct.values):\n plt.text(v, i, '{:,}'.format(v), color='white', fontweight='bold', fontsize=14, ha='right', va='center')\nplt.title(\"Questions answered correctly\")\nplt.xticks(rotation=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"I also want to find out if there is a relationship between timestamp and answered_correctly. To find out I have made 5 bins of timestamp. As you can see, the only noticable thing is that users who have registered relatively recently perform a little worse than users who are active longer.",
"_____no_output_____"
]
],
[
[
"bin_labels_5 = ['Bin_1', 'Bin_2', 'Bin_3', 'Bin_4', 'Bin_5']\ntrain['ts_bin'] = pd.qcut(train['timestamp'], q=5, labels=bin_labels_5)\n\n#make function that can also be used for other fields\ndef correct(field):\n correct = train[train.answered_correctly != -1].groupby([field, 'answered_correctly'], as_index=False).size()\n correct = correct.pivot(index= field, columns='answered_correctly', values='size')\n correct['Percent_correct'] = round(correct.iloc[:,1]/(correct.iloc[:,0] + correct.iloc[:,1]),2)\n correct = correct.sort_values(by = \"Percent_correct\", ascending = False)\n correct = correct.iloc[:,2]\n return(correct)\n\nbins_correct = correct(\"ts_bin\")\nbins_correct = bins_correct.sort_index()\n\nfig = plt.figure(figsize=(12,6))\nplt.bar(bins_correct.index, bins_correct.values)\nfor i, v in zip(bins_correct.index, bins_correct.values):\n plt.text(i, v, v, color='white', fontweight='bold', fontsize=14, va='top', ha='center')\nplt.title(\"Percent answered_correctly for 5 bins of timestamp\")\nplt.xticks(rotation=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Let's also check out what the distribution of answered_correctly looks like if we groupby the (10,000 unique) task_container_id's.",
"_____no_output_____"
]
],
[
[
"task_id_correct = correct(\"task_container_id\")\n\nfig = plt.figure(figsize=(12,6))\ntask_id_correct.plot.hist(bins=40)\nplt.title(\"Histogram of percent_correct grouped by task_container_id\")\nplt.xticks(rotation=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Below I am plotting the number of answers per user_id against the percentage of questions answered correctly (sample of 200). As some users have answered huge amounts of questions, I have taken out the outliers (user_ids with 1000+ questions answered). As you can see, the trend is upward but there is also a lot of variation among users that have answered few questions.",
"_____no_output_____"
]
],
[
[
"user_percent = train[train.answered_correctly != -1].groupby('user_id')['answered_correctly'].agg(Mean='mean', Answers='count')\nprint(f'the highest number of questions answered by a user is {user_percent.Answers.max()}')\n",
"the highest number of questions answered by a user is 17609\n"
],
[
"user_percent = user_percent.query('Answers <= 1000').sample(n=200, random_state=1)\n\nfig = plt.figure(figsize=(12,6))\nx = user_percent.Answers\ny = user_percent.Mean\nplt.scatter(x, y, marker='o')\nplt.title(\"Percent answered correctly versus number of questions answered User\")\nplt.xticks(rotation=0)\nplt.xlabel(\"Number of questions answered\")\nplt.ylabel(\"Percent answered correctly\")\nz = np.polyfit(x, y, 1)\np = np.poly1d(z)\nplt.plot(x,p(x),\"r--\")\n\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"Below, I am doing the same thing by content_id (is question_id for content_type is question). I am again taking a sample of 200, and have taken out the content_ids with more than 25,000 questions asked. As you can see there is a slight downward trend.",
"_____no_output_____"
]
],
[
[
"content_percent = train[train.answered_correctly != -1].groupby('content_id')['answered_correctly'].agg(Mean='mean', Answers='count')\nprint(f'The highest number of questions asked by content_id is {content_percent.Answers.max()}.')\nprint(f'Of {len(content_percent)} content_ids, {len(content_percent[content_percent.Answers > 25000])} content_ids had more than 25,000 questions asked.')",
"The highest number of questions asked by content_id is 213605.\nOf 13523 content_ids, 529 content_ids had more than 25,000 questions asked.\n"
],
[
"content_percent = content_percent.query('Answers <= 25000').sample(n=200, random_state=1)\n\nfig = plt.figure(figsize=(12,6))\nx = content_percent.Answers\ny = content_percent.Mean\nplt.scatter(x, y, marker='o')\nplt.title(\"Percent answered correctly versus number of questions answered Content_id\")\nplt.xticks(rotation=0)\nplt.xlabel(\"Number of questions answered\")\nplt.ylabel(\"Percent answered correctly\")\nz = np.polyfit(x, y, 1)\np = np.poly1d(z)\nplt.plot(x,p(x),\"r--\")\n\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"Does it help if the 'prior_question_had_explanation'? Yes, as you can see the percent answered correctly is about 17% higher when there was an explanation. Although it is probably better to treat not having an explanation as a disadvantage as there was an explanation before the vast majority of questions.\n\nIn addition, it is also interesting to see that the percent answered correctly for the missing values is closer to True than to False.",
"_____no_output_____"
]
],
[
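[
"# Sketch (optional): since the missing values behave more like True, one option for later\n# feature engineering is to impute them as True. fillna returns a new Series, so train is not modified.\npq_imputed = train['prior_question_had_explanation'].fillna(True)\nprint(pq_imputed.value_counts())",
"_____no_output_____"
],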
[
"pq = train[train.answered_correctly != -1].groupby(['prior_question_had_explanation'], dropna=False).agg({'answered_correctly': ['mean', 'count']})\n#pq.index = pq.index.astype(str)\nprint(pq.iloc[:,1])\npq = pq.iloc[:,0]\n\nfig = plt.figure(figsize=(12,4))\npq.plot.barh()\n# for i, v in zip(pq.index, pq.values):\n# plt.text(v, i, round(v,2), color='white', fontweight='bold', fontsize=14, ha='right', va='center')\nplt.title(\"Answered_correctly versus Prior Question had explanation\")\nplt.xlabel(\"Percent answered correctly\")\nplt.ylabel(\"Prior question had explanation\")\nplt.xticks(rotation=0)\nplt.show()",
"prior_question_had_explanation\nFalse 9193234\nTrue 89685560\nNaN 392506\nName: (answered_correctly, count), dtype: int64\n"
]
],
[
[
"prior_question_elapsed_time: (float32) The average time in milliseconds it took a user to answer each question in the previous question bundle, ignoring any lectures in between. Is null for a user's first question bundle or lecture. Note that the time is the average time a user took to solve each question in the previous bundle.\n\nAt first glance, this does not seem very interesting regarding our target. For both wrong and correct answers, the mean is about 25 seconds.",
"_____no_output_____"
]
],
[
[
"pq = train[train.answered_correctly != -1]\npq = pq[['prior_question_elapsed_time', 'answered_correctly']]\npq = pq.groupby(['answered_correctly']).agg({'answered_correctly': ['count'], 'prior_question_elapsed_time': ['mean']})\n\npq",
"_____no_output_____"
]
],
[
[
"However, as the feature works with regards to the CV (see Baseline model), I also wanted to find out if there is a trend. Below, I have taken a sample of 200 rows. As you can see, there is s slightly downward trend.",
"_____no_output_____"
]
],
[
[
"#please be aware that there is an issues with train.prior_question_elapsed_time.mean()\n#see https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/195032\nmean_pq = train.prior_question_elapsed_time.astype(\"float64\").mean()\n\ncondition = ((train.answered_correctly != -1) & (train.prior_question_elapsed_time.notna()))\npq = train[condition][['prior_question_elapsed_time', 'answered_correctly']].sample(n=200, random_state=1)\npq = pq.set_index('prior_question_elapsed_time').iloc[:,0]\n\nfig = plt.figure(figsize=(12,6))\nx = pq.index\ny = pq.values\nplt.scatter(x, y, marker='o')\nplt.title(\"Answered_correctly versus prior_question_elapsed_time\")\nplt.xticks(rotation=0)\nplt.xlabel(\"Prior_question_elapsed_time\")\nplt.ylabel(\"Answered_correctly\")\nplt.vlines(mean_pq, ymin=-0.1, ymax=1.1)\nplt.text(x= 27000, y=0.4, s='mean')\nplt.text(x=80000, y=0.6, s='trend')\nz = np.polyfit(x, y, 1)\np = np.poly1d(z)\nplt.plot(x,p(x),\"r--\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"# 1.2 Exploring Questions\n\nMetadata for the questions posed to users.\n\n* question_id: foreign key for the train/test content_id column, when the content type is question (0).\n* bundle_id: code for which questions are served together.\n* correct_answer: the answer to the question. Can be compared with the train user_answer column to check if the user was right.\n* part: the relevant section of the TOEIC test.\n* tags: one or more detailed tag codes for the question. The meaning of the tags will not be provided, but these codes are sufficient for clustering the questions together.\n",
"_____no_output_____"
]
],
[
[
"questions.head()",
"_____no_output_____"
],
[
"questions.shape",
"_____no_output_____"
]
],
[
[
"The tags seem valuable to me. First, let's check if there are any question_id's without tags. As you can see, there is exactly one question_id without at least one tag. Not a big deal, but we need to keep in mind that we have to impute something here if we make features based on tags.",
"_____no_output_____"
]
],
[
[
"questions[questions.tags.isna()]",
"_____no_output_____"
]
],
[
[
"Also.....when looking at train, we see that this question was just asked once ;-). ",
"_____no_output_____"
]
],
[
[
"train.query('content_id == \"10033\" and answered_correctly != -1')",
"_____no_output_____"
],
[
"questions['tags'] = questions['tags'].astype(str)\n\ntags = [x.split() for x in questions[questions.tags != \"nan\"].tags.values]\ntags = [item for elem in tags for item in elem]\ntags = set(tags)\ntags = list(tags)\nprint(f'There are {len(tags)} different tags')",
"There are 188 different tags\n"
]
],
[
[
"Let's find out how many answers were Right and Wrong per question_id (so per content_id in train).",
"_____no_output_____"
]
],
[
[
"tags_list = [x.split() for x in questions.tags.values]\nquestions['tags'] = tags_list\nquestions.head()\n\ncorrect = train[train.answered_correctly != -1].groupby([\"content_id\", 'answered_correctly'], as_index=False).size()\ncorrect = correct.pivot(index= \"content_id\", columns='answered_correctly', values='size')\ncorrect.columns = ['Wrong', 'Right']\ncorrect = correct.fillna(0)\ncorrect[['Wrong', 'Right']] = correct[['Wrong', 'Right']].astype(int)\nquestions = questions.merge(correct, left_on = \"question_id\", right_on = \"content_id\", how = \"left\")\nquestions.head()",
"_____no_output_____"
]
],
[
[
"As you can see, I have also changed the tags column into lists of tags.",
"_____no_output_____"
]
],
[
[
"questions.tags.values",
"_____no_output_____"
]
],
[
[
"Now, I can add up all Wrong and Right answers for all questions that are labeled with a particular tag and calculate the percent correct for each tag. Please note that there is \"double counting\" of questions; for instance if a question has 5 tags, its answers are aggregated in the totals of each of the 5 tags. ",
"_____no_output_____"
]
],
[
[
"%%time\n\ntags_df = pd.DataFrame()\nfor x in range(len(tags)):\n df = questions[questions.tags.apply(lambda l: tags[x] in l)]\n df1 = df.agg({'Wrong': ['sum'], 'Right': ['sum']})\n df1['Total_questions'] = df1.Wrong + df1.Right\n df1['Question_ids_with_tag'] = len(df)\n df1['tag'] = tags[x]\n df1 = df1.set_index('tag')\n tags_df = tags_df.append(df1)\n\ntags_df[['Wrong', 'Right', 'Total_questions']] = tags_df[['Wrong', 'Right', 'Total_questions']].astype(int)\ntags_df['Percent_correct'] = tags_df.Right/tags_df.Total_questions\ntags_df = tags_df.sort_values(by = \"Percent_correct\")\n\ntags_df.head()",
"CPU times: user 2.53 s, sys: 1.07 ms, total: 2.53 s\nWall time: 2.53 s\n"
]
],
[
[
"As you can see, the differences are significant!",
"_____no_output_____"
]
],
[
[
"select_rows = list(range(0,10)) + list(range(178, len(tags_df)))\ntags_select = tags_df.iloc[select_rows,4]\n\nfig = plt.figure(figsize=(12,6))\nx = tags_select.index\ny = tags_select.values\nclrs = ['red' if y < 0.6 else 'green' for y in tags_select.values]\ntags_select.plot.bar(x, y, color=clrs)\nplt.title(\"Ten hardest and ten easiest tags\")\nplt.xlabel(\"Tag\")\nplt.ylabel(\"Percent answers correct of questions with the tag\")\nplt.xticks(rotation=90)\nplt.show()",
"_____no_output_____"
]
],
[
[
"However, we should also realize that the tag with the worst percent_correct only has about 250,000 answers. This a low number compared to the tags with most answers.",
"_____no_output_____"
]
],
[
[
"tags_select = tags_df.sort_values(by = \"Total_questions\", ascending = False).iloc[:30,:]\ntags_select = tags_select[\"Total_questions\"]\n\nfig = plt.figure(figsize=(12,6))\nax = tags_select.plot.bar()\nplt.title(\"Thirty tags with most questions answered\")\nplt.xticks(rotation=90)\nplt.ticklabel_format(style='plain', axis='y')\nax.get_yaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ','))) #add thousands separator\nplt.show()",
"_____no_output_____"
]
],
[
[
"What are the so-called \"Parts\"? When following the link provided in the data description we find out that this relates to a test.\n\n> The TOEIC L&R uses an optically-scanned answer sheet. There are 200 questions to answer in two hours in Listening (approximately 45 minutes, 100 questions) and Reading (75 minutes, 100 questions). \n\nThe listening section consists of Part 1-4 (Listening Section (approx. 45 minutes, 100 questions)).\n\nThe reading section consists of Part 5-7 (Reading Section (75 minutes, 100 questions)).",
"_____no_output_____"
],
[
"Below, I am displaying the count and percent correct by part. As you can see, Part 5 has a lot more question_id's and is also the most difficult.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nax1 = questions.groupby(\"part\").count()['question_id'].plot.bar()\nplt.title(\"Counts of part\")\nplt.xlabel(\"Part\")\nplt.xticks(rotation=0)\n\npart = questions.groupby('part').agg({'Wrong': ['sum'], 'Right': ['sum']})\npart['Percent_correct'] = part.Right/(part.Right + part.Wrong)\npart = part.iloc[:,2]\n\nax2 = fig.add_subplot(212)\nplt.bar(part.index, part.values)\nfor i, v in zip(part.index, part.values):\n plt.text(i, v, round(v,2), color='white', fontweight='bold', fontsize=14, va='top', ha='center')\n\nplt.title(\"Percent_correct by part\")\nplt.xlabel(\"Part\")\nplt.xticks(rotation=0)\nplt.tight_layout(pad=2)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# 1.3 Exploring Lectures\n\nMetadata for the lectures watched by users as they progress in their education.\n* lecture_id: foreign key for the train/test content_id column, when the content type is lecture (1).\n* part: top level category code for the lecture.\n* tag: one tag codes for the lecture. The meaning of the tags will not be provided, but these codes are sufficient for clustering the lectures together.\n* type_of: brief description of the core purpose of the lecture\n",
"_____no_output_____"
]
],
[
[
"lectures.head()",
"_____no_output_____"
],
[
"print(f'There are {lectures.shape[0]} lecture_ids.')",
"There are 418 lecture_ids.\n"
]
],
[
[
"Let's have a look at the type_of.",
"_____no_output_____"
]
],
[
[
"lect_type_of = lectures.type_of.value_counts()\n\nfig = plt.figure(figsize=(12,6))\nplt.bar(lect_type_of.index, lect_type_of.values)\nfor i, v in zip(lect_type_of.index, lect_type_of.values):\n plt.text(i, v, v, color='black', fontweight='bold', fontsize=14, va='bottom', ha='center')\nplt.title(\"Types of lectures\")\nplt.xlabel(\"type_of\")\nplt.ylabel(\"Count lecture_id\")\nplt.xticks(rotation=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Since there are not that many lectures, I want to check if it helps if a user watches lectures at all. As you can see, it helps indeed!",
"_____no_output_____"
]
],
[
[
"user_lect = train.groupby([\"user_id\", \"answered_correctly\"]).size().unstack()\nuser_lect.columns = ['Lecture', 'Wrong', 'Right']\nuser_lect['Lecture'] = user_lect['Lecture'].fillna(0)\nuser_lect = user_lect.astype('Int64')\nuser_lect['Watches_lecture'] = np.where(user_lect.Lecture > 0, True, False)\n\nwatches_l = user_lect.groupby(\"Watches_lecture\").agg({'Wrong': ['sum'], 'Right': ['sum']})\nprint(user_lect.Watches_lecture.value_counts())\n\nwatches_l['Percent_correct'] = watches_l.Right/(watches_l.Right + watches_l.Wrong)\n\nwatches_l = watches_l.iloc[:,2]\n\nfig = plt.figure(figsize=(12,4))\nwatches_l.plot.barh()\nfor i, v in zip(watches_l.index, watches_l.values):\n plt.text(v, i, round(v,2), color='white', fontweight='bold', fontsize=14, ha='right', va='center')\n\nplt.title(\"User watches lectures: Percent_correct\")\nplt.xlabel(\"Percent correct\")\nplt.ylabel(\"User watched at least one lecture\")\nplt.xticks(rotation=0)\nplt.show()",
"False 244050\nTrue 149606\nName: Watches_lecture, dtype: int64\n"
]
],
[
[
"Batches (task_container_id) may also contain lectures, and I want to find out if there are any batches with high numbers of lectures.",
"_____no_output_____"
]
],
[
[
"batch_lect = train.groupby([\"task_container_id\", \"answered_correctly\"]).size().unstack()\nbatch_lect.columns = ['Lecture', 'Wrong', 'Right']\nbatch_lect['Lecture'] = batch_lect['Lecture'].fillna(0)\nbatch_lect = batch_lect.astype('Int64')\nbatch_lect['Percent_correct'] = batch_lect.Right/(batch_lect.Wrong + batch_lect.Right)\nbatch_lect['Percent_lecture'] = batch_lect.Lecture/(batch_lect.Lecture + batch_lect.Wrong + batch_lect.Right)\nbatch_lect = batch_lect.sort_values(by = \"Percent_lecture\", ascending = False)\n\nprint(f'The highest number of lectures watched within a single task_container_id is {batch_lect.Lecture.max()}.')",
"The highest number of lectures watched within a single task_container_id is 5143.\n"
]
],
[
[
"As you can see below (table sorted on descending Percent_lecture), the percent of lectures of the task_container_id's is never high. We can also see the highest percentages of lectures are around 2.8%, which means one lecture on about 36 questions.",
"_____no_output_____"
]
],
[
[
"batch_lect.head()",
"_____no_output_____"
]
],
[
[
"Is there a correlation between the percent_lecture and the percent_correct? No, I don't really see it. If anything, the percent_correct actually seems to go down slightly.",
"_____no_output_____"
]
],
[
[
"batch = batch_lect.iloc[:, 3:]\n\nfig = plt.figure(figsize=(12,6))\nx = batch.Percent_lecture\ny = batch.Percent_correct\nplt.scatter(x, y, marker='o')\nplt.title(\"Percent lectures in a task_container versus percent answered correctly\")\nplt.xticks(rotation=0)\nplt.xlabel(\"Percent lectures\")\nplt.ylabel(\"Percent answered correctly\")\n\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"The last thing that I want to check is if having a lecture in a batch helps. As you can see, it does not. Batches without lectures have about 8% more correct answers than batches with lectures.",
"_____no_output_____"
]
],
[
[
"batch_lect['Has_lecture'] = np.where(batch_lect.Lecture == 0, False, True)\nprint(f'We have {batch_lect[batch_lect.Has_lecture == True].shape[0]} task_container_ids with lectures and {batch_lect[batch_lect.Has_lecture == False].shape[0]} task_container_ids without lectures.')",
"We have 9369 task_container_ids with lectures and 631 task_container_ids without lectures.\n"
],
[
"batch_lect = batch_lect[['Wrong', 'Right', 'Has_lecture']]\nbatch_lect = batch_lect.groupby(\"Has_lecture\").sum()\nbatch_lect['Percent_correct'] = batch_lect.Right/(batch_lect.Wrong + batch_lect.Right)\nbatch_lect = batch_lect[['Percent_correct']]\nbatch_lect",
"_____no_output_____"
]
],
[
[
"# Example test\nThis file is a very small file, and only good to check what's in there.\n\nImportant: In the `Updates, corrections, and clarifications` topic is said that:\n* the hidden test set contains new users but not new questions\n* The train/test data is complete, in the sense that there are no missing interactions in the union of train and test data. It remains possible that some questions weren't logged due to other issues that all datasets of mobile users are susceptible to,such as if a user lost their connection mid-question.\n* The test data follows chronologically after the train data. The test iterations give interactions of users chronologically.\n",
"_____no_output_____"
]
],
[
[
"example_test.shape",
"_____no_output_____"
],
[
"example_test.head()",
"_____no_output_____"
],
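[
"# Extra check (sketch): Kaggle states the hidden test set contains no new questions.\n# Assuming example_test has a content_type_id column like train (0 = question), verify that\n# every question content_id in example_test also appears in train.\nq_test = set(example_test[example_test.content_type_id == 0].content_id.unique())\nq_train = set(train[train.answered_correctly != -1].content_id.unique())\nprint(f'Question content_ids in example_test but not in train: {q_test - q_train}')",
"_____no_output_____"
],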
[
"batches_test = set(list(example_test.task_container_id.unique()))\nbatches_train = set(list(train.task_container_id.unique()))\nprint(f'All batches in example_test are also in train is {batches_test.issubset(batches_train)}.')",
"All batches in example_test are also in train is True.\n"
]
],
[
[
"Kaggle says that there are new users in the test set, but let's check this anyway with example_test. As we can see, there is a new user in example_test indeed.",
"_____no_output_____"
]
],
[
[
"user_test = set(list(example_test.user_id.unique()))\nuser_train = set(list(train.user_id.unique()))\n\nprint(f'User_ids in example_test but not in train: {user_test - user_train}.')",
"User_ids in example_test but not in train: {275030867}.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc5b2f4b62ba4422371e975cee76086ed84546d | 1,580 | ipynb | Jupyter Notebook | docs/pogil/notebooks/typecasting.ipynb | pythonforscientists/pythonforscientists.github.io | 501578c5fa71e81fda1b61b5f6097a763f3db0ac | [
"MIT"
] | null | null | null | docs/pogil/notebooks/typecasting.ipynb | pythonforscientists/pythonforscientists.github.io | 501578c5fa71e81fda1b61b5f6097a763f3db0ac | [
"MIT"
] | null | null | null | docs/pogil/notebooks/typecasting.ipynb | pythonforscientists/pythonforscientists.github.io | 501578c5fa71e81fda1b61b5f6097a763f3db0ac | [
"MIT"
] | null | null | null | 19.75 | 80 | 0.540506 | [
[
[
"# POGIL 4.6 - Typecasting\n\n## Python for Scientists",
"_____no_output_____"
],
[
"### Content Learning Objectives",
"_____no_output_____"
],
[
" After completing this activity, students should be able to:\n \n- Describe what typecasting is and what scenarios to use typecasting in\n- Typecast between the basic datatypes\n- Understand what can and cannot be typecast",
"_____no_output_____"
],
[
"### Process Skill Goals",
"_____no_output_____"
],
[
"*During this activity, you should make progress toward:*\n\n- Leveraging prior knowledge and experience of other students",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecc5b894e9ab400be4cd671456082bc63de43c6e | 1,386 | ipynb | Jupyter Notebook | mnist/mnist.ipynb | cimomo/tf-lib | 03eddfac368a7959a07c9a3bd7160ebb363d1b56 | [
"Apache-2.0"
] | null | null | null | mnist/mnist.ipynb | cimomo/tf-lib | 03eddfac368a7959a07c9a3bd7160ebb363d1b56 | [
"Apache-2.0"
] | null | null | null | mnist/mnist.ipynb | cimomo/tf-lib | 03eddfac368a7959a07c9a3bd7160ebb363d1b56 | [
"Apache-2.0"
] | null | null | null | 27.176471 | 70 | 0.526696 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecc5be593c7e746dcb14996a4616f0942a7092f3 | 87,304 | ipynb | Jupyter Notebook | All S&P 500 XGBoost.ipynb | aakula7/Deep-Learning-Trading-Algorithm | 6fdf1f94cedcacdf08ec7bc56b3d712423ffead8 | [
"MIT"
] | null | null | null | All S&P 500 XGBoost.ipynb | aakula7/Deep-Learning-Trading-Algorithm | 6fdf1f94cedcacdf08ec7bc56b3d712423ffead8 | [
"MIT"
] | null | null | null | All S&P 500 XGBoost.ipynb | aakula7/Deep-Learning-Trading-Algorithm | 6fdf1f94cedcacdf08ec7bc56b3d712423ffead8 | [
"MIT"
] | null | null | null | 32.685885 | 199 | 0.350293 | [
[
[
"import datetime as dt\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom matplotlib import style\nimport matplotlib.dates as mdates\nfrom pylab import rcParams\nimport pandas as pd\nimport numpy as np\nimport pandas_datareader.data as web\nimport mplfinance as mpf\nimport bs4 as bs\nimport pickle\nimport requests\nimport os\nfrom collections import Counter\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import mean_absolute_error\nfrom sklearn.metrics import r2_score\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\nfrom xgboost import XGBRegressor\nimport xgboost as xgb\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import GridSearchCV\n\ntest_size = 0.2\nvalid_size = 0.2\nN = 21\n\nn_estimators = 100\nmax_depth = 3\nlearning_rate = 0.1\nmin_child_weight = 1\nsubsample = 1\ncolsample_bytree = 1\ncolsample_bylevel = 1\ngamma = 0\nmodel_seed = 100\n\nstyle.use('seaborn-darkgrid')",
"c:\\python38\\lib\\site-packages\\pandas_datareader\\compat\\__init__.py:7: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n from pandas.util.testing import assert_frame_equal\n"
],
[
"resp = requests.get('http://en.wikipedia.org/wiki/List_of_S%26P_500_companies')\nsoup = bs.BeautifulSoup(resp.text, 'lxml')\ntable = soup.find('table', {'class': 'wikitable sortable'})\ntickers = []\nfor row in table.findAll('tr')[1:]:\n ticker = row.findAll('td')[0].text\n ticker = ticker[:-1]\n tickers.append(ticker)\nfor n, i in enumerate(tickers):\n if i == 'BRK.B':\n tickers[n] = 'BRKB'\n elif i == 'BF.B':\n tickers[n] = 'BFB'\n\ntickers",
"_____no_output_____"
],
[
"df = pd.read_csv('sp500_joined_closes.csv', index_col = 0)\ndf.head()",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"df.isnull().sum()",
"_____no_output_____"
],
[
"df = df.fillna(0)\ndf.isnull().sum()",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"df = df.astype({'Date':'datetime64'})\ndf.dtypes",
"_____no_output_____"
],
[
"start = dt.datetime(2010, 1, 1)\nend = dt.datetime.now()\n\nGSPC = web.DataReader('^GSPC', 'yahoo', start, end)\nGSPC = GSPC.reset_index()\nGSPC['GSPC_HL_PCT_DIFF'] = (GSPC['High'] - GSPC['Low']) / GSPC['Low']\nGSPC['GSPC_PCT_CHNG'] = (GSPC['Close'] - GSPC['Open']) / GSPC['Open']\nGSPC = GSPC.rename(columns = {'Adj Close':'GSPC'})\nGSPC = GSPC[['Date', 'GSPC', 'GSPC_HL_PCT_DIFF', 'GSPC_PCT_CHNG']]\nGSPC",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"GSPC.dtypes",
"_____no_output_____"
],
[
"df = pd.merge(df, GSPC, how = 'left', on = 'Date')\ndf.head()",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"#df.set_index('Date', inplace = True)\n#df.plot(subplots = True, sharex = True, sharey = True)\n#plt.show()",
"_____no_output_____"
],
[
"new_columns = ['Date', 'GSPC']\nfor tick in tickers:\n new_columns.append(tick)\ndf_Stocks = df[[c for c in new_columns]]\ndf_Stocks.head()",
"_____no_output_____"
],
[
"df_Stocks.tail()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc5bf02fd6cd12f2abc257f9962442d55e56c32 | 22,447 | ipynb | Jupyter Notebook | Fourier transform Gaussian.ipynb | scienceopen/python-examples | 0dd8129bda8f0ec2c46dae734d8e43628346388c | [
"MIT"
] | 5 | 2017-12-03T06:01:03.000Z | 2022-03-19T09:15:10.000Z | Fourier transform Gaussian.ipynb | scienceopen/python-examples | 0dd8129bda8f0ec2c46dae734d8e43628346388c | [
"MIT"
] | null | null | null | Fourier transform Gaussian.ipynb | scienceopen/python-examples | 0dd8129bda8f0ec2c46dae734d8e43628346388c | [
"MIT"
] | 4 | 2019-05-17T20:15:15.000Z | 2021-01-17T20:36:08.000Z | 277.123457 | 20,780 | 0.924533 | [
[
[
"from numpy import linspace, sqrt, pi,exp, angle\nfrom numpy.fft import fft,ifft,fftshift\nfrom matplotlib.pyplot import subplots,show\n\nx = linspace(-15,15,200)\nw = linspace(-pi, pi,200)\nsigma = 1\n\ng = 1/(sigma*sqrt(2*pi)) * exp(-(x-0)**2/(2*sigma**2))\nG = fft(g)\n\nGana = exp(-w**2 * sigma**2/2)\n\nfg,ax = subplots(4,1)\nax[0].plot(x,g)\nax[0].set_title('g(x)')\nax[0].set_xlabel('x')\n\nax[1].plot(w, abs(fftshift(G)),label='G($\\omega$)')\nax[2].plot(1/2*pi*w, abs(Gana), label='$G_{analytic}$')\nax[2].set_xlim((-pi,pi))\n\nax[3].plot(G.imag)\n\nshow()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ecc5c2076267d6329196873479e8a1f03f81b57d | 64,591 | ipynb | Jupyter Notebook | 1DataCleaning.ipynb | ywang-afk/mlds_lessons | b994363092c2c1e389b654c07d31d4411016b36b | [
"MIT"
] | null | null | null | 1DataCleaning.ipynb | ywang-afk/mlds_lessons | b994363092c2c1e389b654c07d31d4411016b36b | [
"MIT"
] | null | null | null | 1DataCleaning.ipynb | ywang-afk/mlds_lessons | b994363092c2c1e389b654c07d31d4411016b36b | [
"MIT"
] | null | null | null | 51.140934 | 10,476 | 0.656934 | [
[
[
"### MLDS Lesson 11/13 \nData Cleaning with Pandas",
"_____no_output_____"
]
],
[
[
"import zipfile\nimport pandas as pd",
"_____no_output_____"
],
[
"trains_zip = zipfile.ZipFile(r'C:\\Users\\Yuchen\\Downloads\\train_split_00.csv.zip')\ntrains_df = pd.read_csv(trains_zip.open('train_split_00.csv')) ",
"_____no_output_____"
]
],
[
[
"Take a quick look at the data using the .head() method. Note, these methods are specific to Pandas Dataframe objects.",
"_____no_output_____"
]
],
[
[
"trains_df.head()",
"_____no_output_____"
]
],
[
[
"The .memory_usage() method shows the memory in bytes that each column is using.",
"_____no_output_____"
]
],
[
[
"trains_df.memory_usage()",
"_____no_output_____"
]
],
[
[
"Based on the # of columns, we can see that the dataframe is using aproximately 62 Mb of data in-memory, i.e., in RAM.",
"_____no_output_____"
]
],
[
[
"len(trains_df.columns)",
"_____no_output_____"
],
[
"(8 * 8159400) / 1024",
"_____no_output_____"
],
[
"63745.3125 / 1024",
"_____no_output_____"
]
],
[
[
"In Excel, you will have slowdowns when opening the file and filtering columns as Excel needs to render each cell in-memory. The filtering process is also not computationally efficient. For a dataset of >100K rows, you may experience long filtering times depending on the # of columns. Our dataset has slightly over a million rows.",
"_____no_output_____"
]
],
[
[
"len(trains_df)",
"_____no_output_____"
]
],
[
[
"dtypes is an attribute (vs. a method), which shows us the data types of each column. This is an important practice to review data types as they can cause downstream issues. E.g., numbers are read in as strings and later on, you don't understand why your simple addition function is not working.",
"_____no_output_____"
]
],
[
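[
"# Tiny illustration (optional): the same digits behave very differently as strings vs. numbers.\nprint('1' + '2')  # string concatenation -> '12'\nprint(1 + 2)      # numeric addition -> 3",
"_____no_output_____"
],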
[
"trains_df.dtypes",
"_____no_output_____"
]
],
[
[
"It looks like the [pickup_datetime] columns was read in as an 'object'. We can convert it into a datetime object using the following:",
"_____no_output_____"
]
],
[
[
"trains_df['pickup_datetime'] = pd.to_datetime(trains_df['pickup_datetime'], infer_datetime_format=True)",
"_____no_output_____"
],
[
"trains_df.dtypes",
"_____no_output_____"
],
[
"trains_df.count()",
"_____no_output_____"
]
],
[
[
"Notice that there appear to be 10 fewer dropoff_longitude and dropoff_latitude points than the rest - this is most likely missing data. We can quickly take care of this using the .dropna() method:",
"_____no_output_____"
]
],
[
[
"trains_df = trains_df.dropna()",
"_____no_output_____"
]
],
[
[
"Looking at the counts again, we can see that the rows with NaNs have been removed - the entire dataset is now uniform.",
"_____no_output_____"
]
],
[
[
"trains_df.count()",
"_____no_output_____"
]
],
[
[
"**Here's a great cheat sheet for Pandas:**: \n\nhttps://pandas.pydata.org/Pandas_Cheat_Sheet.pdf",
"_____no_output_____"
],
[
"We can also perform basic operations on columns:",
"_____no_output_____"
]
],
[
[
"min(trains_df['fare_amount'])",
"_____no_output_____"
]
],
[
[
"Note that this is the same:",
"_____no_output_____"
]
],
[
[
"trains_df['fare_amount'].min()",
"_____no_output_____"
]
],
[
[
"Summary statistics are simple to generate:",
"_____no_output_____"
]
],
[
[
"trains_df.describe()",
"_____no_output_____"
]
],
[
[
"You can also create basic visualizations for your data using Pandas:",
"_____no_output_____"
]
],
[
[
"trains_df['fare_amount'].hist()",
"_____no_output_____"
],
[
"trains_df.plot.scatter(x='pickup_longitude', y='pickup_latitude')",
"_____no_output_____"
],
[
"trains_df.plot.scatter(x='dropoff_longitude', y='dropoff_latitude')",
"_____no_output_____"
]
],
[
[
"Filtering is also a straightforward operation:",
"_____no_output_____"
]
],
[
[
"trains_df[(trains_df['pickup_datetime'] > '2012')]",
"_____no_output_____"
]
],
[
[
"We can see that this operation only takes ~66 ms to run. The %timeit is a 'magic' Jupyter notebook function that repeats the function a number of times to determine the average runtime.",
"_____no_output_____"
]
],
[
[
"%timeit trains_df[(trains_df['pickup_datetime'] > '2012')]",
"66.1 ms ± 2.13 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n"
]
],
[
[
"We can split our data for later processing by creating new dataframes based on the original:",
"_____no_output_____"
]
],
[
[
"y = trains_df['fare_amount']",
"_____no_output_____"
],
[
"# When you only pull 1 column, it creates a Series object. We use this method below to recreate y into a Dataframe object.\ny = pd.DataFrame(y)",
"_____no_output_____"
],
[
"y.head()",
"_____no_output_____"
],
[
"# Why do we have double brackets?\nX = trains_df[['pickup_longitude', 'pickup_latitude']]",
"_____no_output_____"
],
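[
"# Quick illustration (optional): single brackets return a Series, double brackets return a DataFrame.\nprint(type(trains_df['pickup_longitude']))\nprint(type(trains_df[['pickup_longitude']]))",
"_____no_output_____"
],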
[
"X",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecc5d325151e30d95c59e00276357f4976204526 | 1,559 | ipynb | Jupyter Notebook | Bintang bintang/Gui.ipynb | ikhsanmn/Aplication-gui-Stars | 23f79c9ce5096807872e33eb6b8252bc4ebd7cb7 | [
"MIT"
] | null | null | null | Bintang bintang/Gui.ipynb | ikhsanmn/Aplication-gui-Stars | 23f79c9ce5096807872e33eb6b8252bc4ebd7cb7 | [
"MIT"
] | null | null | null | Bintang bintang/Gui.ipynb | ikhsanmn/Aplication-gui-Stars | 23f79c9ce5096807872e33eb6b8252bc4ebd7cb7 | [
"MIT"
] | null | null | null | 22.271429 | 54 | 0.483643 | [
[
[
"from tkinter import Tk, BOTH, X\nfrom tkinter.ttk import Frame, Entry\n\n\nclass membuatLabel(Frame):\n def __init__(self, parent):\n Frame.__init__(self, parent)\n\n self.window = parent\n self.initUI()\n self.teksEdit()\n\n def initUI(self):\n self.pack(fill=BOTH)\n root.geometry(\"300x100+300+300\")\n root.title(\"Text field/kolom teks\")\n\n def teksEdit(self):\n teksField = Frame(self)\n teksField.pack()\n\n masukKeWindow = Entry(teksField)\n masukKeWindow.pack()\n\n\nif __name__ == '__main__':\n root = Tk()\n app = membuatLabel(root)\n root.mainloop()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ecc5d51953d1dc71e73eea2d8c128274124703f5 | 35,884 | ipynb | Jupyter Notebook | Analysis/Omar Ankit/milestone2.ipynb | data301-2020-winter2/course-project-group_1017 | 879c7a613abb320d675b13eb25c6f5414f3270bd | [
"MIT"
] | 2 | 2021-02-08T19:43:03.000Z | 2021-02-09T00:44:23.000Z | Analysis/Omar Ankit/milestone2.ipynb | data301-2020-winter2/course-project-group_1017 | 879c7a613abb320d675b13eb25c6f5414f3270bd | [
"MIT"
] | 1 | 2021-03-20T00:34:39.000Z | 2021-03-29T23:43:04.000Z | Analysis/Omar Ankit/milestone2.ipynb | data301-2020-winter2/course-project-group_1017 | 879c7a613abb320d675b13eb25c6f5414f3270bd | [
"MIT"
] | null | null | null | 30.358714 | 103 | 0.315628 | [
[
[
"import os\nimport pandas as pd\ndata=pd.read_csv('medical_expenses.csv')\ndata\n",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"# **Task 3**",
"_____no_output_____"
],
[
"## **method chains**",
"_____no_output_____"
]
],
[
[
"#renaming some of the columns to make the data set clearer\ndata.rename(columns={'bmi': 'BMI', 'region':'area'})",
"_____no_output_____"
],
[
"#we round the columns to 2 decimals\ndata.round({'bmi': 2, 'charges': 2})",
"_____no_output_____"
],
[
"#Abbreviate region names\ndata.replace({'southwest': 'SW', 'southeast': 'SE', 'northeast': 'NE', 'northwest': 'NW'})",
"_____no_output_____"
],
[
"#sort the data from lowest to highest charges\ndata.sort_values('charges', ascending = True)",
"_____no_output_____"
],
[
"#drop all entries with NA charges\ndata.dropna(subset=['charges'], inplace=True)\ndata",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## **Functions**",
"_____no_output_____"
]
],
[
[
"df =(pd.read_csv('medical_expenses.csv')\n .rename(columns={'region':'area'})\n .replace({'southwest': 'SW', 'southeast': 'SE', 'northeast': 'NE', 'northwest': 'NW'})\n .sort_values('charges', ascending = True)\n .dropna(subset=['charges'], inplace=True)\n )\ndf",
"_____no_output_____"
],
[
"def load_and_process(address):\n df1 = (\n pd.read_csv(address)\n .dropna(subset=['charges'])\n .rename(columns={'region':'area'})\n .replace({'southwest': 'SW', 'southeast': 'SE', 'northeast': 'NE', 'northwest': 'NW'})\n .round({\"bmi\":2,\"charges\":2})\n .sort_values('bmi',ascending=True)\n .reset_index(drop=True)\n )\n return df1\n",
"_____no_output_____"
],
[
"load_and_process('medical_expenses.csv')",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecc5db652f51ab3b04e7d1d6c05bda0c1c4e41d2 | 97,900 | ipynb | Jupyter Notebook | decisionTree.ipynb | maviator/ml_in_depth | bdfc978a7f9eeccdfb2e61d099a172804be887bd | [
"MIT"
] | null | null | null | decisionTree.ipynb | maviator/ml_in_depth | bdfc978a7f9eeccdfb2e61d099a172804be887bd | [
"MIT"
] | null | null | null | decisionTree.ipynb | maviator/ml_in_depth | bdfc978a7f9eeccdfb2e61d099a172804be887bd | [
"MIT"
] | null | null | null | 86.79078 | 18,608 | 0.796027 | [
[
[
"In this post we will explore the most important parameters of Decision trees and how they impact our model in term of overfitting and underfitting.\n\nWe will use the Titanic Data from kaggle. For the sake of this post, we will perform as little feature engineering as possible as it is not the purpose of this post.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"Load train data",
"_____no_output_____"
]
],
[
[
"# get titanic & test csv files as a DataFrame\ntrain = pd.read_csv(\"input/train.csv\")\nprint train.shape",
"(891, 12)\n"
],
[
"train.head()",
"_____no_output_____"
]
],
[
[
"Check for missing values",
"_____no_output_____"
]
],
[
[
"#Checking for missing data\nNAs = pd.concat([train.isnull().sum()], axis=1, keys=['Train'])\nNAs[NAs.sum(axis=1) > 0]",
"_____no_output_____"
]
],
[
[
"We will remove 'Cabin', 'Name' and 'Ticket' columns as they require some processing to extract useful features",
"_____no_output_____"
]
],
[
[
"# At this point we will drop the Cabin feature since it is missing a lot of the data\ntrain.pop('Cabin')\n\n# At this point names don't affect our model so we drop it\ntrain.pop('Name')\n\n# At this point we drop Ticket feature\ntrain.pop('Ticket')\n\ntrain.shape",
"_____no_output_____"
]
],
[
[
"Fill the missing age values by the mean value",
"_____no_output_____"
]
],
[
[
"# Filling missing Age values with mean\ntrain['Age'] = train['Age'].fillna(train['Age'].mean())",
"_____no_output_____"
]
],
[
[
"Fill the missing 'Embarked' values by the most frequent value",
"_____no_output_____"
]
],
[
[
"# Filling missing Embarked values with most common value\ntrain['Embarked'] = train['Embarked'].fillna(train['Embarked'].mode()[0])",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
]
],
[
[
"'Pclass' is a categorical feature so we convert its values to strings",
"_____no_output_____"
]
],
[
[
"train['Pclass'] = train['Pclass'].apply(str)",
"_____no_output_____"
]
],
[
[
"Let's perform a basic one hot encoding of categorical features",
"_____no_output_____"
]
],
[
[
"# Getting Dummies from all other categorical vars\nfor col in train.dtypes[train.dtypes == 'object'].index:\n for_dummy = train.pop(col)\n train = pd.concat([train, pd.get_dummies(for_dummy, prefix=col)], axis=1)",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"labels = train.pop('Survived')",
"_____no_output_____"
]
],
[
[
"For testing, we choose to split our data to 75% train and 25% for test",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\nx_train, x_test, y_train, y_test = train_test_split(train, labels, test_size=0.25)",
"_____no_output_____"
]
],
[
[
"Let's first fit a decision tree with default parameters to get a baseline idea of the performance",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\n\ndt = DecisionTreeClassifier()",
"_____no_output_____"
],
[
"dt.fit(x_train, y_train)",
"_____no_output_____"
],
[
"y_pred = dt.predict(x_test)",
"_____no_output_____"
]
],
[
[
"We will AUC (Area Under Curve) as the evaluation metric. Our target value is binary so it's a binary classification problem. AUC is a good way for evaluation for this type of problems",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import roc_curve, auc\nfalse_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)\nroc_auc = auc(false_positive_rate, true_positive_rate)\nroc_auc",
"_____no_output_____"
]
],
[
[
"## max_depth",
"_____no_output_____"
],
[
"The first parameter to tune is max_depth. This indicates how deep the built tree can be. The deeper the tree, the more splits it has and it captures more information about how the data. We fit a decision tree with depths ranging from 1 to 32 and plot the training and test errors.",
"_____no_output_____"
]
],
[
[
"max_depths = np.linspace(1, 32, 32, endpoint=True)",
"_____no_output_____"
],
[
"train_results = []\ntest_results = []\nfor max_depth in max_depths:\n dt = DecisionTreeClassifier(max_depth=max_depth)\n dt.fit(x_train, y_train)\n \n train_pred = dt.predict(x_train)\n \n false_positive_rate, true_positive_rate, thresholds = roc_curve(y_train, train_pred)\n roc_auc = auc(false_positive_rate, true_positive_rate)\n train_results.append(roc_auc)\n \n y_pred = dt.predict(x_test)\n \n \n false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)\n roc_auc = auc(false_positive_rate, true_positive_rate)\n test_results.append(roc_auc)",
"_____no_output_____"
],
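[
"# Optional summary (sketch): report the depth with the best test AUC from the sweep above.\nbest_idx = int(np.argmax(test_results))\nprint('Best test AUC %.3f at max_depth %d' % (test_results[best_idx], max_depths[best_idx]))",
"_____no_output_____"
],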
[
"from matplotlib.legend_handler import HandlerLine2D\n\nline1, = plt.plot(max_depths, train_results, 'b', label=\"Train AUC\")\nline2, = plt.plot(max_depths, test_results, 'r', label=\"Test AUC\")\n\nplt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})\n\nplt.ylabel('AUC score')\nplt.xlabel('Tree depth')\nplt.show()",
"_____no_output_____"
]
],
[
[
"We see that our model overfits for large depth values. The tree perfectly predicts all of the train data, however, it fails to generalize the findings for new data",
"_____no_output_____"
],
[
"## min_samples_split",
"_____no_output_____"
],
[
"min_samples_split represents the minimum number of samples required to split an internal node. This can vary between considering at least one sample at each node to considering all of the samples at each node. When we increase this parameter, the tree becomes more constrained as it has to consider more samples at each node. Here we will vary the parameter from 10% to 100% of the samples",
"_____no_output_____"
]
],
[
[
"min_samples_splits = np.linspace(0.1, 1.0, 10, endpoint=True)",
"_____no_output_____"
],
[
"train_results = []\ntest_results = []\nfor min_samples_split in min_samples_splits:\n dt = DecisionTreeClassifier(min_samples_split=min_samples_split)\n dt.fit(x_train, y_train)\n \n train_pred = dt.predict(x_train)\n \n false_positive_rate, true_positive_rate, thresholds = roc_curve(y_train, train_pred)\n roc_auc = auc(false_positive_rate, true_positive_rate)\n train_results.append(roc_auc)\n \n y_pred = dt.predict(x_test)\n \n \n false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)\n roc_auc = auc(false_positive_rate, true_positive_rate)\n test_results.append(roc_auc)",
"_____no_output_____"
],
[
"from matplotlib.legend_handler import HandlerLine2D\n\nline1, = plt.plot(min_samples_splits, train_results, 'b', label=\"Train AUC\")\nline2, = plt.plot(min_samples_splits, test_results, 'r', label=\"Test AUC\")\n\nplt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})\n\nplt.ylabel('AUC score')\nplt.xlabel('min samples split')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## min_samples_leaf ",
"_____no_output_____"
],
[
"min_samples_leaf is The minimum number of samples required to be at a leaf node. This similar to min_samples_splits, however, this describe the minimum number of samples of samples at the leafs.",
"_____no_output_____"
]
],
[
[
"min_samples_leafs = np.linspace(0.1, 0.5, 5, endpoint=True)",
"_____no_output_____"
],
[
"train_results = []\ntest_results = []\nfor min_samples_leaf in min_samples_leafs:\n dt = DecisionTreeClassifier(min_samples_leaf=min_samples_leaf)\n dt.fit(x_train, y_train)\n \n train_pred = dt.predict(x_train)\n \n false_positive_rate, true_positive_rate, thresholds = roc_curve(y_train, train_pred)\n roc_auc = auc(false_positive_rate, true_positive_rate)\n train_results.append(roc_auc)\n \n y_pred = dt.predict(x_test)\n \n \n false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)\n roc_auc = auc(false_positive_rate, true_positive_rate)\n test_results.append(roc_auc)",
"_____no_output_____"
],
[
"from matplotlib.legend_handler import HandlerLine2D\n\nline1, = plt.plot(min_samples_leafs, train_results, 'b', label=\"Train AUC\")\nline2, = plt.plot(min_samples_leafs, test_results, 'r', label=\"Test AUC\")\n\nplt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})\n\nplt.ylabel('AUC score')\nplt.xlabel('min samples leaf')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## max_features",
"_____no_output_____"
],
[
"max_features represents the number of features to consider when looking for the best split.",
"_____no_output_____"
]
],
[
[
"train.shape",
"_____no_output_____"
],
[
"max_features = list(range(1,train.shape[1]))",
"_____no_output_____"
],
[
"train_results = []\ntest_results = []\nfor max_feature in max_features:\n dt = DecisionTreeClassifier(max_features=max_feature)\n dt.fit(x_train, y_train)\n \n train_pred = dt.predict(x_train)\n \n false_positive_rate, true_positive_rate, thresholds = roc_curve(y_train, train_pred)\n roc_auc = auc(false_positive_rate, true_positive_rate)\n train_results.append(roc_auc)\n \n y_pred = dt.predict(x_test)\n \n \n false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)\n roc_auc = auc(false_positive_rate, true_positive_rate)\n test_results.append(roc_auc)",
"_____no_output_____"
],
[
"from matplotlib.legend_handler import HandlerLine2D\n\nline1, = plt.plot(max_features, train_results, 'b', label=\"Train AUC\")\nline2, = plt.plot(max_features, test_results, 'r', label=\"Test AUC\")\n\nplt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})\n\nplt.ylabel('AUC score')\nplt.xlabel('max features')\nplt.show()",
"_____no_output_____"
]
],
[
[
"According to sklearn documentation for decision tree, the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than max_features features.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecc5f1399191ece77817307ea4280e7d1bbc2594 | 68,912 | ipynb | Jupyter Notebook | Lernphase/Benzin/Tanken.ipynb | florianbaer/STAT | 7cb86406ed99b88055c92c1913b46e8995835cbb | [
"MIT"
] | null | null | null | Lernphase/Benzin/Tanken.ipynb | florianbaer/STAT | 7cb86406ed99b88055c92c1913b46e8995835cbb | [
"MIT"
] | null | null | null | Lernphase/Benzin/Tanken.ipynb | florianbaer/STAT | 7cb86406ed99b88055c92c1913b46e8995835cbb | [
"MIT"
] | null | null | null | 88.462131 | 10,732 | 0.781997 | [
[
[
"import matplotlib.pyplot as plt\nimport scipy.stats as st\nimport seaborn as sns\nimport pandas as pd\nfrom scipy.stats import norm, uniform, expon, t, probplot\nimport scipy.stats as st\nfrom scipy.integrate import quad\nfrom sympy.solvers import solve\nfrom sympy import Symbol\nimport numpy as np\nfrom pandas import Series, DataFrame\ndf = pd.read_csv(\"tanken.csv\", sep=\";\")",
"_____no_output_____"
],
[
"print(f'arithmetisches Mittel: {float(df[\"km.driven\"].mean())}')\nprint(f'empirische Varianz: {float(df[\"km.driven\"].var())}')\nprint(f'Standardabweichung: {float(df[\"km.driven\"].std())}')\nprint(f'Median: {float(df[\"km.driven\"].median())}')",
"arithmetisches Mittel: 560.4307692307692\nempirische Varianz: 7611.4723076923065\nStandardabweichung: 87.24375225591977\nMedian: 576.2\n"
],
[
"df.sort_values(by=\"km.driven\")\ndf.head()",
"_____no_output_____"
],
[
"df[\"km.driven\"]",
"_____no_output_____"
],
[
"print(f'0.25% Quantil ist: {df[\"km.driven\"].quantile(q=0.25)}')\nprint(f'0.50% Quantil ist: {df[\"km.driven\"].quantile(q=0.50)}')\nprint(f'0.75% Quantil ist: {df[\"km.driven\"].quantile(q=0.75)}')\n",
"0.25% Quantil ist: 518.8\n0.50% Quantil ist: 576.2\n0.75% Quantil ist: 605.2\n"
],
[
"q25, q75 = df[\"km.driven\"].quantile(q=[.25,.75])\nprint(f'Die Quantilsdifferenz ist: {q75 - q25}')",
"Die Quantilsdifferenz ist: 86.40000000000009\n"
],
[
"# mit linspace siehts auch no interessant aus\nq25, q50, q75 = df[\"km.driven\"].quantile(q=np.linspace(start=0.25, stop=0.75, num=3))\nprint(f'0.25% Quantil ist: {q25}')\nprint(f'0.50% Quantil ist: {q50}')\nprint(f'0.75% Quantil ist: {q75}')",
"0.25% Quantil ist: 518.8\n0.50% Quantil ist: 576.2\n0.75% Quantil ist: 605.2\n"
],
[
"df[\"km.driven\"].plot(kind=\"hist\", bins=7, edgecolor=\"black\")\nplt.title(\"Histogramm Tankfüllungen\")\nplt.xlabel(\"KM pro Füllung\")\nplt.ylabel(\"Häufigkeit\")\nplt.show()",
"_____no_output_____"
],
[
"df[\"km.driven\"].plot(kind=\"hist\", bins=7, normed=True, edgecolor=\"black\")\nplt.title(\"Histogramm Tankfüllungen\")\nplt.xlabel(\"KM pro Füllung\")\nplt.ylabel(\"Häufigkeit\")\nplt.show()",
"_____no_output_____"
],
[
"# Boxplot\ndf[\"km.driven\"].plot(kind=\"box\", title=\"Boxplot Tankfüllungen\", label=\"Tankung\")\nplt.ylabel(\"KM pro Füllung\")\nplt.show()",
"_____no_output_____"
],
[
"# empirische kumulative Verteilungsfunktion\ndf[\"km.driven\"].plot(kind=\"hist\", cumulative=True, histtype=\"step\",\nnormed=True, bins=8, edgecolor=\"black\")",
"C:\\Users\\flori\\Anaconda3\\lib\\site-packages\\pandas\\plotting\\_matplotlib\\hist.py:62: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n n, bins, patches = ax.hist(y, bins=bins, bottom=bottom, **kwds)\n"
],
[
"from scipy.stats import norm, uniform\nuniform.cdf(x=5, loc=4, scale=4)",
"_____no_output_____"
],
[
"data = uniform.rvs(loc=1, scale=9, size=50000)\nprint(((data <= 10) & (data >= 1)).astype(np.int).prod())",
"1\n"
],
[
"# empirische kumulative Verteilungsfunktion\ndf[\"km.driven\"].plot(kind=\"hist\", cumulative=True, histtype=\"step\",\nnormed=True, bins=8, edgecolor=\"black\")",
"_____no_output_____"
],
[
"from scipy.stats import norm, uniform\nuniform.cdf(x=5, loc=4, scale=4)",
"_____no_output_____"
],
[
"data = uniform.rvs(loc=1, scale=9, size=50000)\nprint(((data <= 10) & (data >= 1)).astype(np.int).prod())",
"1\n"
],
[
"df.head()",
"_____no_output_____"
],
[
"df[\"Date\"] = pd.to_datetime(df[\"Date\"], format=\"%d.%m.%y\")",
"_____no_output_____"
],
[
"df[\"Date\"]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc601af3a29afd184e92d9d4a2f333e101c5d63 | 6,195 | ipynb | Jupyter Notebook | content/_build/jupyter_execute/13_complex_comprehensions/exercise/questions.ipynb | aviadr1/learn-python3 | e90f05aa026772c6db7fd4f3ccc2518d983ac4fa | [
"MIT"
] | null | null | null | content/_build/jupyter_execute/13_complex_comprehensions/exercise/questions.ipynb | aviadr1/learn-python3 | e90f05aa026772c6db7fd4f3ccc2518d983ac4fa | [
"MIT"
] | null | null | null | content/_build/jupyter_execute/13_complex_comprehensions/exercise/questions.ipynb | aviadr1/learn-python3 | e90f05aa026772c6db7fd4f3ccc2518d983ac4fa | [
"MIT"
] | null | null | null | 22.445652 | 181 | 0.442938 | [
[
[
"\n<a href=\"https://colab.research.google.com/github/aviadr1/learn-advanced-python/blob/master/content/13_complex_comprehensions/exercise/questions.ipynb\" target=\"_blank\">\n<img src=\"https://colab.research.google.com/assets/colab-badge.svg\" \n title=\"Open this file in Google Colab\" alt=\"Colab\"/>\n</a>\n",
"_____no_output_____"
],
[
"# hotel comprehension\nwe have information about a hotel.\nThe hotel has stored its information in a Python list.\nThe list contains lists (representing rooms), and each sublist contains one or more dictionaries (representing people).\n\nhere's the data structure",
"_____no_output_____"
]
],
[
[
"### useful data about hotel\nrooms = [[{'age': 14, 'hobby': 'horses', 'name': 'Avram'}, \n {'age': 12, 'hobby': 'piano', 'name': 'Betty'}, \n {'age': 9, 'hobby': 'chess', 'name': 'Chen'}, \n {'age': 15, 'hobby': 'programming', 'name': 'Dov'}],\n [{'age': 17, 'hobby': 'driving', 'name': 'Efrat'}], \n [{'age': 45, 'hobby': 'writing', 'name': 'Fred'}, \n {'age': 43, 'hobby': 'chess', 'name': 'Greg'},\n {'age': 20, 'hobby': 'surfing', 'name': 'Hofit'}]\n ]",
"_____no_output_____"
]
],
[
[
"1. What are the names of the people staying at our hotel?",
"_____no_output_____"
],
[
"2. What are the names of people staying in our hotel who enjoy chess?",
"_____no_output_____"
],
[
"3. what unique hobbies are enjoyed in rooms with at least 2 people?",
"_____no_output_____"
],
[
"# group by length of word\ngiven a list of words\n```\nwords = ['everything', 'should', 'be', 'as', 'simple', 'as', 'possible',\n 'but', 'no', 'simpler', 'albert', 'einstein'\n ]\n```\n\ncreat a nested list of words, grouped by the length of the words\n```\ngrouped_by_len = [\n [],\n ['be', 'as', 'as', 'no'],\n ['but'],\n [],\n [],\n ['should', 'simple', 'albert'],\n ['simpler'],\n ['possible', 'einstein'],\n [],\n ['everything']\n ]\n```",
"_____no_output_____"
],
[
"# tranform a nested list\ngiven a nested list of numbers\n```\nnums = [\n [-3],\n [-2, -1],\n [0, 1, 2],\n [3, 4, 5, 6],\n [7, 8, 9, 10, 11]\n ]\n```\n\ntransform it into a list of the cumulative sums\n```\ncumsum = [\n [-3],\n [-2, -3],\n [0, 1, 3],\n [3, 7, 12, 18],\n [7, 15, 24, 34, 45]\n ]\n```\n\nhint: you may use this code to calculate a cumulative sum for a list:\n```\ndef cumsum(lst_):\n total = 0\n for val in lst:\n total += val\n yield total\n```",
"_____no_output_____"
],
[
"\n```{toctree}\n:hidden:\n:titlesonly:\n\n\nsolutions\n```\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecc60bee92d3e7eb28553c11c85793ca698942dc | 746,393 | ipynb | Jupyter Notebook | VAE_for_GFP.ipynb | anihamde/VAE_protein_function | 172773da234953e24a73a1e75265fa230bb57791 | [
"MIT"
] | null | null | null | VAE_for_GFP.ipynb | anihamde/VAE_protein_function | 172773da234953e24a73a1e75265fa230bb57791 | [
"MIT"
] | null | null | null | VAE_for_GFP.ipynb | anihamde/VAE_protein_function | 172773da234953e24a73a1e75265fa230bb57791 | [
"MIT"
] | null | null | null | 17.694166 | 19,000 | 0.359699 | [
[
[
"### Using a Variational Auto-encoder to predict protein fitness from evolutionary data\n\nJuly 20, 2017\n### Sam Sinai and Eric Kelsic\n\n\n## For the blog post associated with this notebook see [this post](https://samsinai.github.io/jekyll/update/2017/08/14/Using-a-Variational-Autoencoder-to-predict-protein-function.html). \n\n\nThis notebook it organized in 3 sections. In section 1 we show our workflow for pre-processing the biological data. We then train the model on the alignment data in section 2. In section 3 we compare the predictions of the model on the [PABP yeast](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3851721/) dataset. In section 4 we report the results from analyzing multiple other datasets. Finally we pose some questions with regards to improving the model for interested researcher.",
"_____no_output_____"
]
],
[
[
"# Generic imports\nfrom __future__ import print_function\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport math,random,re\nimport time",
"_____no_output_____"
],
[
"#Machine learning/Stats imports \nfrom scipy.stats import norm\nfrom scipy.stats import spearmanr,pearsonr\nfrom sklearn.preprocessing import normalize\nfrom sklearn.model_selection import train_test_split\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport torch.distributions as D",
"_____no_output_____"
],
[
"from models import *",
"_____no_output_____"
]
],
[
[
"## 1. Data pre-processing\n\nDefining the alphabet that is used for Amino-Acids throughout.",
"_____no_output_____"
],
[
"These are helper functions to clean and process data. ",
"_____no_output_____"
]
],
[
[
"%reload_ext autoreload\n%autoreload 1\nfrom helper_tools import *\nfrom helper_tools_for_plotting import *",
"_____no_output_____"
]
],
[
[
"Import the alignment data:",
"_____no_output_____"
]
],
[
[
"# # data=pdataframe_from_alignment_file(\"PABP_YEAST_hmmerbit_plmc_n5_m30_f50_t0.2_r115-210_id100_b48.a2m\",50000)\ndata_sar=pd.read_csv(\"sarkisyan.csv\")\n# print (\"number of data points: \",len(data))\n# data_set_size=len(data)\n# data.head()",
"_____no_output_____"
],
[
"from Bio import SeqIO\nseqs = []\nfor record in SeqIO.parse(\"aligned_gfp.fasta\", \"fasta\"):\n seqs.append(\"\".join(record.seq))\n \nseqs.append(seqs[0])\n\ndata = pd.DataFrame({\"sequence\": seqs})",
"_____no_output_____"
]
],
[
[
"Let's see how long the sequence is",
"_____no_output_____"
]
],
[
[
"print (\"length of sequence:\", len(data.iloc[0][\"sequence\"]))#, len(data.iloc[0][\"seq\"]))\nprint (\"sample sequence: \", data.iloc[0][\"sequence\"])",
"length of sequence: 238\nsample sequence: MSKGEELFTGVVPILVELDGDVNGHKFSVSGEGEGDATYGKLTLKFICTTGKLPVPWPTLVTTLT--VQCFSRYPDHMKQHDFFKSAMPEGYVQERTIFFKDDGNYKTRAEVKFEGDTLVNRIELKGIDFKEDGNILGHKLEYNYNSHNVYIMADKQKNGIKVNFKIRHNIEDGSVQLADHYQQNTPIGDGPVLLPDNHYLSTQSALSKDPNEKRDHMVLLEFVTAAGITHGMDELYK\n"
]
],
[
[
"We are only really interested in the columns that do align. This means that for every column that we include, at least 50% of sequences are not gaps. Note that this threshold is imposed by the alignment parameters loaded above. So let's make a column for that. Meanwhile, we keep track of the indices that did align.",
"_____no_output_____"
]
],
[
[
"indices=index_of_non_lower_case_dot(data.iloc[0][\"sequence\"])\ndata[\"seq\"]=list(map(prune_seq,data[\"sequence\"]))\ndata.head()",
"_____no_output_____"
]
],
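    [
      [
        "# Added sketch (assumption): prune_seq and index_of_non_lower_case_dot are imported from helper_tools.py and are not shown\n# in this notebook. Following the a2m alignment convention, prune_seq presumably keeps only the aligned columns by dropping\n# lowercase (insertion) characters and '.' gap characters, roughly like this hypothetical re-implementation:\ndef prune_seq_sketch(seq):\n    return \"\".join(ch for ch in seq if not ch.islower() and ch != '.')",
        "_____no_output_____"
      ]
    ],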
[
[
"Let's see how many columns remained. ",
"_____no_output_____"
]
],
[
[
"print (\"pruned sequence length:\", len(data.iloc[0][\"seq\"]))\nPRUNED_SEQ_LENGTH=len(data.iloc[0][\"seq\"])",
"pruned sequence length: 238\n"
],
[
"uniquechars = set()\nfor i in data['seq']:\n uniquechars = uniquechars.union(i)",
"_____no_output_____"
],
[
"#Invariants\n# ORDER_KEY=\"XILVAGMFYWEDQNHCRKSTPBZ-\"[::-1]\n# ORDER_LIST=list(ORDER_KEY)\nORDER_LIST = list(uniquechars)\nORDER_LIST = sorted(ORDER_LIST,reverse=True)",
"_____no_output_____"
]
],
[
[
"A few optional lines of code to run. Printing indices, and deleting the sequence column so that it doesn't stay in memory for no reason. ",
"_____no_output_____"
],
[
"Next we translate the sequence into a one hot encoding and shape the input sequences into a m*n matrix. Here m is the number of the data points and $n=$ alphbet size $\\times$ sequence length.",
"_____no_output_____"
]
],
[
[
"#Encode training data in one_hot vectors\ntraining_data_one_hot=[]\nlabels=[]\nfor i, row in data.iterrows():\n training_data_one_hot.append(translate_string_to_one_hot(row[\"seq\"],ORDER_LIST))\nprint (len(training_data_one_hot),len(training_data_one_hot[0]),len(training_data_one_hot[0][0]))\n#plt.imshow(training_data_one_hot[0],cmap=\"Greys\")\ntraining_data=np.array([np.array(list(sample.T.flatten())) for sample in training_data_one_hot])\n# training_data=np.array([np.array(list(sample.flatten())).T for sample in training_data_one_hot])\nprint(training_data.shape)",
"200 22 238\n(200, 5236)\n"
]
],
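    [
      [
        "# Added sketch (assumption): translate_string_to_one_hot is imported from helper_tools.py and is not shown here.\n# Consistent with the shapes printed above (alphabet size x sequence length), it presumably builds a binary matrix with a\n# single 1 per column marking the character observed at that position, roughly:\nimport numpy as np\ndef translate_string_to_one_hot_sketch(sequence, order_list):\n    out = np.zeros((len(order_list), len(sequence)))\n    for position, char in enumerate(sequence):\n        out[order_list.index(char), position] = 1.0\n    return out",
        "_____no_output_____"
      ]
    ],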
[
[
"That takes care of the training data. But we also need to test our model on something. Thankfully, some neat experiments have been done to actually measure the effects of change in a sequence on the performance of the protein. We load this data next (because we want to make use of this test data to evaluate our performance at the end of each epoch). ",
"_____no_output_____"
]
],
[
[
"training_data.shape",
"_____no_output_____"
]
],
[
[
"## Exploring the data",
"_____no_output_____"
]
],
[
[
"def seqDist(seq1, seq2):\n return np.sum(np.array(list(seq1)) != np.array(list(seq2)))",
"_____no_output_____"
],
[
"distanceMatrix = np.zeros([len(data), len(data)])",
"_____no_output_____"
],
[
"#takes so long\nfor i in range(len(data)):\n for j in range(len(data)):\n distanceMatrix[i][j] = seqDist(data['sequence'][i], data['sequence'][j])\n print (i)",
"0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n0\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n1\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n3\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n4\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\
n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n5\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n6\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n7\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n8\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n9\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n10\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n1
1\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n11\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n12\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n13\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n14\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n
15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n15\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n16\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n17\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n18\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n19\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\
n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n20\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n21\n"
],
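      [
        "# Optional speed-up sketch (added): the pairwise Hamming distances above can also be computed without the slow Python\n# double loop by letting numpy broadcast the character comparison; for the ~200 sequences here this is nearly instant.\nseq_array = np.array([list(s) for s in data['sequence']])\ndistanceMatrix_fast = (seq_array[:, None, :] != seq_array[None, :, :]).sum(axis=2)",
        "_____no_output_____"
      ],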
[
"file = open(\"sequences.fasta\", \"w\")\nfor i in range(len(data)):\n file.write((\">\"+str(i)+\"\\n\"))\n file.write(data['sequence'][i])\n file.write(\"\\n\")\n \nfile.close()",
"_____no_output_____"
],
[
"distance_from_0 = np.zeros([len(data)])\nfor i in range(len(data)):\n distance_from_0[i] = seqDist(data['sequence'][0], data['sequence'][i])",
"_____no_output_____"
],
[
"plt.hist(distance_from_0)\nplt.title(\"Distance of seqs from first\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(data_sar['quantitative_function'])\nplt.title(\"Distribution of function\")\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"pca = PCA(n_components=2)\npca.fit(training_data)",
"_____no_output_____"
],
[
"PCA_data_2dim = pca.transform(training_data)",
"_____no_output_____"
],
[
"PCA_data_2dim.transpose(1, 0)",
"_____no_output_____"
],
[
"plt.scatter(PCA_data_2dim.transpose(1, 0)[0], PCA_data_2dim.transpose(1, 0)[1], alpha=0.1, color='r')\nplt.title(\"PCA 2 dimensions of data\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Baseline One-hot model",
"_____no_output_____"
]
],
[
[
"from sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"pca = PCA(n_components=5)\npca.fit(training_data)",
"_____no_output_____"
],
[
"PCA_training_data = pca.transform(training_data)",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestRegressor\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"#Classification on training data\ndata_sar['function'] = data_sar['quantitative_function'] > 0.5\nX_train, X_test, y_train, y_test = train_test_split(training_data, data['function'], \n test_size = 0.3, random_state=10)\n\nnaiveClf = RandomForestClassifier()\nnaiveClf.fit(X_train, y_train)\nnaiveClf.score(X_test, y_test)",
"//anaconda/envs/ML_env/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n"
],
[
"X_train, X_test, y_train, y_test = train_test_split(training_data, data_sar['quantitative_function'], \n test_size = 0.3, random_state=10)\n\nnaiveReg = RandomForestRegressor()\nnaiveReg.fit(X_train, y_train)\nnaiveReg.score(X_test, y_test)",
"//anaconda/envs/ML_env/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n"
],
[
"X_train, X_test, y_train, y_test = train_test_split(PCA_training_data, data_sar['quantitative_function'], \n test_size = 0.3, random_state=10)\n\nnaiveReg = RandomForestRegressor()\nnaiveReg.fit(X_train, y_train)\nnaiveReg.score(X_test, y_test)",
"//anaconda/envs/ML_env/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n"
]
],
[
[
"This concludes the pre-processing we need to do on the data.\n\n## 2. Training the model\nWe now move on to define our neural network. This is essentially a vanilla VAE in keras (with some optimization on hyperparameters). For optimization purposes we define a callback function that reports the predictive power of the model in the end of each epoch. Note that while this passes the -test data- through the model, it is kosher because we never pass in the values we are actually interested in and the network is not in \"training phase\", i.e. no weights are updated during this pass. ",
"_____no_output_____"
]
],
[
[
"class rho_vs_mutants():\n def __init__(self,mutants,test_set_size,aa_size,sequence_size):\n self.mutants=mutants\n self.sample_size=test_set_size\n self.aa_size=aa_size\n self.sequence_size=sequence_size\n self.scores=[]\n self.count_batch=0\n def on_train_begin(self, logs={}):\n self.losses = []\n def on_batch_end(self, batch, logs={}):\n self.losses.append(logs.get('loss'))\n #This allows us to track the \"progress\" of the model on different epochs\n def on_epoch_end(self,model,batch,logs):\n x_decoded=model(test_data_plus[0:self.sample_size],batch_size=batch_size)\n digit = x_decoded[0].reshape(self.aa_size,self.sequence_size)\n digit_wt = normalize(digit,axis=0, norm='l1')\n wt_prob=compute_log_probability(digit,digit_wt)\n fitnesses=[]\n for sample in range(1,self.sample_size):\n digit = x_decoded[sample].reshape(self.aa_size,self.sequence_size)\n digit = normalize(digit,axis=0, norm='l1')\n fitness=compute_log_probability(test_data_plus[sample].reshape(self.aa_size,self.sequence_size),digit)-wt_prob\n fitnesses.append(fitness)\n print (\",\"+str(spearmanr(fitnesses,target_values_singles[:self.sample_size-1])))\n self.scores.append(spearmanr(fitnesses,target_values_singles[:self.sample_size-1])[0])",
"_____no_output_____"
]
],
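    [
      [
        "# Added sketch (assumption): compute_log_probability is imported from helper_tools.py and is not shown in this notebook.\n# Given a one-hot encoded sequence and a per-position probability matrix (both alphabet size x sequence length), it\n# presumably returns the summed log-likelihood of the observed residues, roughly:\nimport numpy as np\ndef compute_log_probability_sketch(one_hot_seq, prob_matrix):\n    prod_mat = np.sum(one_hot_seq * prob_matrix, axis=0)  # probability of the observed character at each position\n    return np.sum(np.log(prod_mat))",
        "_____no_output_____"
      ]
    ],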
[
[
"Now we are ready to specify the network architecture, this is adapted from [here](https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py).",
"_____no_output_____"
]
],
[
[
"# torch.sum(1 + model.z_log_var - (model.z_mean)**2 - torch.exp(model.z_log_var),-1)",
"_____no_output_____"
],
[
"PRUNED_SEQ_LENGTH",
"_____no_output_____"
],
[
"batch_size = 20\noriginal_dim=len(ORDER_LIST)*PRUNED_SEQ_LENGTH\noutput_dim=len(ORDER_LIST)*PRUNED_SEQ_LENGTH\nlatent_dim = 2\nintermediate_dim = 250\nnb_epoch = 20\nepsilon_std = 1.0\nnp.random.seed(42) \n\nloss1 = nn.CrossEntropyLoss()\n\ndef vae_loss(x_true, x_decoded_mean, z_mean, z_log_var):\n xent_loss = original_dim * loss1(x_decoded_mean, x_true)\n kl_loss = -0.5 * torch.sum(1 + z_log_var - (z_mean)**2 - torch.exp(z_log_var))\n# print (\"xent loss: \", xent_loss)\n# print (\"KL loss: \", kl_loss)\n return (xent_loss + kl_loss)\n\n# #Encoding Layers\n# x = Input(batch_shape=(batch_size, original_dim))\n# h = Dense(intermediate_dim,activation=\"elu\")(x)\n# h= Dropout(0.7)(h)\n# h = Dense(intermediate_dim, activation='elu')(h)\n# h=BatchNormalization(mode=0)(h)\n# h = Dense(intermediate_dim, activation='elu')(h)\n\n# #Latent layers\n# z_mean=Dense(latent_dim)(h)\n# z_log_var=Dense(latent_dim)(h)\n# z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])\n\n# #Decoding layers \n\n# decoder_1= Dense(intermediate_dim, activation='elu')\n# decoder_2=Dense(intermediate_dim, activation='elu')\n# decoder_2d=Dropout(0.7)\n# decoder_3=Dense(intermediate_dim, activation='elu')\n# decoder_out=Dense(output_dim, activation='sigmoid')\n# x_decoded_mean = decoder_out(decoder_3(decoder_2d(decoder_2(decoder_1(z)))))\n\n# vae = Model(x, x_decoded_mean)\n\n# #Potentially better results, but requires further hyperparameter tuning\n# #optimizer=keras.optimizers.SGD(lr=0.005, momentum=0.001, decay=0.0, nesterov=False,clipvalue=0.05)\n# vae.compile(optimizer=\"adam\", loss=vae_loss,metrics=[\"categorical_accuracy\",\"fmeasure\",\"top_k_categorical_accuracy\"])",
"_____no_output_____"
]
],
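    [
      [
        "# Added sketch (assumption): the VAE / VAE_conv / VAE_rec classes come from models.py and are not shown in this notebook.\n# The vae_loss above relies on model.z_mean and model.z_log_var, so the encoder presumably draws the latent code z with the\n# standard reparameterization trick, roughly:\nimport torch\ndef sample_latent_sketch(z_mean, z_log_var):\n    epsilon = torch.randn_like(z_mean)  # epsilon ~ N(0, I)\n    return z_mean + torch.exp(0.5 * z_log_var) * epsilon",
        "_____no_output_____"
      ]
    ],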
[
[
"And run it through our training data.",
"_____no_output_____"
]
],
[
[
"len(range(0, 300)[:20])",
"_____no_output_____"
],
[
"training_size = 200 #so batchingw orks\nx_train=training_data[:200] #this needs to be divisible by batch size and less than or equal to dataset size\nx_train = x_train.astype('float32')\nx_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))\n\n# early_stopping = EarlyStopping(monitor='val_loss', patience=3)\n# spearman_measure=rho_vs_mutants(test_data_plus,batch_size*int(len(test_data_plus)/batch_size),len(ORDER_LIST),PRUNED_SEQ_LENGTH)",
"_____no_output_____"
],
[
"vae_type = 'full'",
"_____no_output_____"
],
[
"if vae_type == 'rec':\n def vae_loss(x_true, x_decoded_mean, z_mean, z_log_var):\n xent_loss = x_true.shape[0]*loss1(x_decoded_mean, x_true)\n kl_loss = -0.5 * torch.sum(1 + z_log_var - (z_mean)**2 - torch.exp(z_log_var))\n return (xent_loss + kl_loss), xent_loss, kl_loss",
"_____no_output_____"
],
[
"def conv_size_func(Lin,dilation,kernel,padding=0,stride=1):\n return int(((Lin+2*padding-dilation*(kernel-1)-1)/stride)+1)",
"_____no_output_____"
],
[
"if vae_type == 'full':\n print (\"training on full\")\n univ_dropout = [0.2]*3\n dropout_enc = univ_dropout\n dropout_dec = univ_dropout\n\n layers_enc = nn.ModuleList([nn.Linear(original_dim,intermediate_dim),nn.Dropout(dropout_enc[0]),nn.ELU()])\n for i in range(2):\n layers_enc.append(nn.Linear(intermediate_dim,intermediate_dim))\n layers_enc.append(nn.Dropout(dropout_enc[i+1]))\n layers_enc.append(nn.ELU())\n\n layers_dec = nn.ModuleList([nn.Linear(latent_dim,intermediate_dim),nn.Dropout(dropout_dec[0]),nn.ELU()])\n for i in range(2):\n layers_dec.append(nn.Linear(intermediate_dim,intermediate_dim))\n layers_dec.append(nn.Dropout(dropout_dec[i+1]))\n layers_dec.append(nn.ELU())\n\n layers_dec.append(nn.Linear(intermediate_dim,output_dim))\n\n layers_ae = nn.ModuleList([nn.Linear(intermediate_dim,latent_dim),nn.Linear(intermediate_dim,latent_dim)])\nelif vae_type == 'conv':\n out_conv_enc = [50,100]\n kernels_enc = [3,5]\n dilations_enc = [1,3]\n maxpools_enc = [4,3]\n paddings_enc = [(5,5,0,0)]\n \n out_lin_enc = [100,500]\n dropout_enc = [0.2,0.2]\n \n out_lin_dec = [100,150]\n dropout_dec = [0.2,0.2]\n \n layers_enc_pre_view = nn.ModuleList([nn.Conv1d(len(ORDER_LIST),out_conv_enc[0],kernels_enc[0],stride=1,dilation=dilations_enc[0]),\n nn.ELU(),\n nn.MaxPool1d(maxpools_enc[0],padding=0),\n nn.ZeroPad2d(paddings_enc[0]),\n nn.Conv1d(out_conv_enc[0],out_conv_enc[1],kernels_enc[1],stride=1,dilation=dilations_enc[1]),\n nn.ELU(),\n# nn.MaxPool1d(4,padding=0),\n# nn.ZeroPad2d((5,5,0,0)),\n# nn.Conv1d(out_conv_enc[1],out_conv_enc[2],kernels_enc[2],stride=1,dilation=dilations_enc[2]),\n# nn.ELU(),\n nn.MaxPool1d(maxpools_enc[1],padding=0)])\n \n inp_len = PRUNED_SEQ_LENGTH\n paddings_enc.append((0,0,0,0))\n for i in range(len(out_conv_enc)):\n inp_len = conv_size_func(inp_len,dilations_enc[i],kernels_enc[i])\n inp_len = inp_len//maxpools_enc[i]\n inp_len += (paddings_enc[i][0]+paddings_enc[i][1])\n \n enc_view = inp_len*out_conv_enc[-1]\n print('post-convolutional size is ', enc_view)\n \n layers_enc_post_view = nn.ModuleList([nn.Linear(enc_view,out_lin_enc[0]),\n nn.Dropout(dropout_enc[0]),\n nn.ELU(),\n nn.Linear(out_lin_enc[0],out_lin_enc[1]),\n nn.Dropout(dropout_enc[1]),\n nn.ELU()])\n \n layers_dec = nn.ModuleList([nn.Linear(latent_dim,out_lin_dec[0]),\n nn.Dropout(dropout_dec[0]),\n nn.ELU(),\n nn.Linear(out_lin_dec[0],out_lin_dec[1]),\n nn.Dropout(dropout_dec[1]),\n nn.ELU(),\n nn.Linear(out_lin_dec[1],output_dim)])\n \n layers_ae = nn.ModuleList([nn.Linear(out_lin_enc[-1],latent_dim),nn.Linear(out_lin_enc[-1],latent_dim)])\nelif vae_type == 'rec':\n univ_dropout = [0.2]*2\n dropout_enc = univ_dropout\n dropout_dec = univ_dropout\n hid_size = [20,10]\n dec_lin = False\n \n num_layers = 2\n num_layers_dec = 2\n bid = True\n num_dirs = 2 if bid else 1\n \n \n layers_enc = nn.ModuleList([nn.RNN(len(ORDER_LIST),hid_size[0],num_layers=num_layers,batch_first=True,dropout=univ_dropout[0],bidirectional=bid)])\n\n\n if dec_lin:\n layers_post_rec_enc = nn.ModuleList([nn.Linear(164,intermediate_dim),\n nn.Dropout(dropout_enc[0]),\n nn.ELU(),\n nn.Linear(intermediate_dim,intermediate_dim),\n nn.Dropout(dropout_enc[1]),\n nn.ELU()]) # for now, not being used in rec model\n\n\n # layers_pre_rec_dec = nn.ModuleList([nn.Linear(latent_dim,100),\n # nn.Dropout(dropout_dec[0]),\n # nn.ELU()])\n # # 25 below bc bidirectional 2 layers means we have to divide 100 by 2*2\n # layers_dec = nn.ModuleList([nn.RNN(50,25,num_layers=2,batch_first=True,dropout=0.2,bidirectional=True)])\n # layers_post_rec_dec = 
nn.ModuleList([nn.Linear(25*2,len(ORDER_LIST))])\n\n # layers_ae = nn.ModuleList([nn.Linear(intermediate_dim,latent_dim),nn.Linear(intermediate_dim,latent_dim)])\n layers_dec = nn.ModuleList([nn.Linear(latent_dim,intermediate_dim),\n nn.Dropout(.2),\n nn.ELU(),\n nn.Linear(intermediate_dim,intermediate_dim*2),\n nn.Dropout(.2),\n nn.ELU(),\n nn.Linear(intermediate_dim*2,output_dim)])\n \n layers_dec_post_rec = 0\n \n layers_ae = nn.ModuleList([nn.Linear(intermediate_dim,latent_dim),nn.Linear(intermediate_dim,latent_dim)])\n \n else: # dec_lin = False\n layers_post_rec_enc = 0\n \n layers_dec = nn.ModuleList([nn.Linear(latent_dim,hid_size[1]),nn.RNN(len(ORDER_LIST),hid_size[1],num_layers=num_layers_dec,batch_first=True,dropout=univ_dropout[1],bidirectional=bid)])\n \n layers_dec_post_rec = nn.ModuleList([nn.Linear(hid_size[1]*num_dirs,len(ORDER_LIST))])\n \n layers_ae = nn.ModuleList([nn.Linear(hid_size[0],latent_dim),nn.Linear(hid_size[0],latent_dim)])\n \n ",
"training on full\n"
],
[
"losses_train = []\nlosses_test = []\naccuracies_train = []\naccuracies_test = []\nxents_train = []\nxents_test = []\nkls_train = []\nkls_test = []\n\nif vae_type == 'full':\n print (\"training full\")\n model = VAE(layers_enc,layers_ae,layers_dec)\n\n prams = list(model.parameters())\n\n optimizer = torch.optim.Adam(prams, lr = 0.001)\n\n x_train_data, x_val_data = train_test_split(x_train, test_size = 0.1)\n\n ins_train = x_train_data.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_train = torch.Tensor(ins_train)\n ins_train = torch.argmax(ins_train,1)\n\n ins_val = x_val_data.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_val = torch.Tensor(ins_val)\n ins_val = torch.argmax(ins_val,1)\n\n for epoch in range(nb_epoch):\n model.train()\n\n train = np.random.permutation(x_train_data)\n train = train.reshape(-1,batch_size,len(ORDER_LIST)*PRUNED_SEQ_LENGTH) # 1968)\n\n train = torch.Tensor(train)\n\n \n \n for batch in train:\n out = model(batch)\n\n batch = batch.reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n batch = torch.argmax(batch,1)\n out = out.reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n loss = vae_loss(batch,out,model.z_mean,model.z_log_var)\n \n optimizer.zero_grad()\n loss.backward() \n optimizer.step()\n \n model.eval()\n\n out_train = model(torch.Tensor(x_train_data))\n out_train = torch.Tensor(out_train)\n out_train = out_train.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_train = torch.argmax(out_train,dim=1)\n bool_train = (classpreds_train==ins_train)\n class_acc_train = bool_train.sum().item()/bool_train.shape[0]\n\n out_val = model(torch.Tensor(x_val_data))\n out_val = torch.Tensor(out_val)\n out_val = out_val.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_val = torch.argmax(out_val,dim=1)\n bool_val = (classpreds_val==ins_val)\n class_acc_val = bool_val.sum().item()/bool_val.shape[0]\n\n loss_train = vae_loss(ins_train,out_train,model.z_mean,model.z_log_var)\n loss_val = vae_loss(ins_val,out_val,model.z_mean,model.z_log_var)\n \n losses_train.append(loss_train)\n losses_test.append(loss_val)\n accuracies_train.append(class_acc_train)\n accuracies_test.append(class_acc_val)\n \n print('Epoch %s | Training Loss: %s, Training Accuracy: %s, Validation Loss: %s, Validation Accuracy: %s'\n %( epoch, loss_train.item(), class_acc_train, loss_val.item(), class_acc_val ) )\n\nelif vae_type == 'conv':\n print (\"conv\")\n model = VAE_conv(layers_enc_pre_view,enc_view,layers_enc_post_view,layers_ae,layers_dec)\n \n prams = list(model.parameters())\n\n optimizer = torch.optim.Adam(prams, lr = 0.001)\n\n x_train_data, x_val_data = train_test_split(x_train, test_size = 0.1)\n\n ins_train = x_train_data.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_train = torch.Tensor(ins_train)\n ins_train = torch.argmax(ins_train,1)\n\n ins_val = x_val_data.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_val = torch.Tensor(ins_val)\n ins_val = torch.argmax(ins_val,1)\n\n for epoch in range(nb_epoch):\n model.train()\n\n train = np.random.permutation(x_train_data)\n train = train.reshape(-1,batch_size,PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n train = torch.Tensor(train)\n train = train.transpose(-2,-1)\n\n for batch in train:\n out = model(batch)\n\n batch = batch.transpose(-2,-1).reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n batch = torch.argmax(batch,1)\n out = out.reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n \n loss = 
vae_loss(batch,out,model.z_mean,model.z_log_var)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n model.eval()\n\n out_train = model(torch.Tensor(x_train_data).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)).transpose(-2,-1))\n out_train = torch.Tensor(out_train)\n out_train = out_train.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_train = torch.argmax(out_train,dim=1)\n bool_train = (classpreds_train==ins_train)\n class_acc_train = bool_train.sum().item()/bool_train.shape[0]\n\n out_val = model(torch.Tensor(x_val_data).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)).transpose(-2,-1))\n out_val = torch.Tensor(out_val)\n out_val = out_val.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_val = torch.argmax(out_val,dim=1)\n bool_val = (classpreds_val==ins_val)\n class_acc_val = bool_val.sum().item()/bool_val.shape[0]\n\n loss_train = vae_loss(ins_train,out_train,model.z_mean,model.z_log_var)\n loss_val = vae_loss(ins_val,out_val,model.z_mean,model.z_log_var)\n\n losses_train.append(loss_train)\n losses_test.append(loss_val)\n accuracies_train.append(class_acc_train)\n accuracies_test.append(class_acc_val)\n \n print('Epoch %s | Training Loss: %s, Training Accuracy: %s, Validation Loss: %s, Validation Accuracy: %s'\n %( epoch, loss_train.item(), class_acc_train, loss_val.item(), class_acc_val ) )\n \nelif vae_type == 'rec':\n print (\"rec\")\n if lang_mod:\n print(\"language model training\")\n else:\n print(\"vae training\")\n \n alpha = 50000\n beta = 0.005\n print('KL annealing terms: alpha = {}, beta = {}'.format(alpha,beta))\n \n model = VAE_rec(layers_enc,layers_post_rec_enc,layers_ae,0,layers_dec,layers_dec_post_rec)\n \n if cuda:\n model = model.cuda()\n \n prams = list(model.parameters())\n\n optimizer = torch.optim.Adam(prams, lr = 0.01)\n\n x_train_data, x_val_data = train_test_split(x_train, test_size = 0.1)\n \n# print('FAKE TRAINING SET TO ASSESS REC VALIDITY')\n# x_train_data = np.array([[0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]]*3690000).reshape(45000,1968)\n \n# import pdb; pdb.set_trace()\n \n ins_train = x_train_data.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_train = torch.Tensor(ins_train)\n ins_train = torch.argmax(ins_train,1)\n\n ins_val = x_val_data.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_val = torch.Tensor(ins_val)\n ins_val = torch.argmax(ins_val,1)\n \n ins_train = create_tensor(ins_train,gpu=cuda)\n ins_val = create_tensor(ins_val,gpu=cuda)\n \n \n ## Printing model perf before\n model.eval()\n \n out_train = model(create_tensor(torch.Tensor(x_train_data),gpu=cuda).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)),False,lang_mod)\n out_train = out_train.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_train = torch.argmax(out_train,dim=1)\n bool_train = (classpreds_train==ins_train)\n class_acc_train = bool_train.sum().item()/bool_train.shape[0]\n\n out_val = model(create_tensor(torch.Tensor(x_val_data),gpu=cuda).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)),False,lang_mod)\n out_val = out_val.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_val = torch.argmax(out_val,dim=1)\n bool_val = (classpreds_val==ins_val)\n class_acc_val = bool_val.sum().item()/bool_val.shape[0]\n\n loss_train,xent_train,kl_train = vae_loss(ins_train,out_train,model.z_mean,model.z_log_var)\n kl_train = sigmoid(beta*(-alpha))*kl_train # annealing\n loss_train = xent_train + kl_train # annealing\n loss_val,xent_val,kl_val 
= vae_loss(ins_val,out_val,model.z_mean,model.z_log_var)\n kl_val = sigmoid(beta*(-alpha))*kl_val # annealing\n loss_val = xent_val + kl_val # annealing\n\n losses_train.append(loss_train.item())\n losses_test.append(loss_val.item())\n accuracies_train.append(class_acc_train)\n accuracies_test.append(class_acc_val)\n xents_train.append(xent_train.item())\n xents_test.append(xent_val.item())\n kls_train.append(kl_train.item())\n kls_test.append(kl_val.item())\n\n print('Pre-training | Training Loss: %s, Training Accuracy: %s, Validation Loss: %s, Validation Accuracy: %s'\n %( loss_train.item(), class_acc_train, loss_val.item(), class_acc_val ) )\n \n for epoch in range(nb_epoch):\n model.train()\n\n train = np.random.permutation(x_train_data)\n train = train.reshape(-1,batch_size,PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n train = create_tensor(torch.Tensor(train),gpu=cuda)\n\n xents = []\n kls = []\n \n num_dum = -1\n\n optimizer.zero_grad()\n \n for batch in train:\n num_dum += 1\n out = model(batch,True,lang_mod)\n \n# import pdb; pdb.set_trace()\n batch = torch.argmax(batch,-1)\n batch = batch.reshape(-1)\n \n out = out.reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n loss,xent,kl = vae_loss(batch,out,model.z_mean,model.z_log_var)\n mult = epoch*len(x_train_data)/batch_size + num_dum # annealing\n kl = sigmoid(beta*(mult-alpha))*kl # annealing\n loss = xent + kl # annealing\n if num_dum % 1000 == 0:\n print((batch==torch.argmax(out,-1)).sum().item()/(batch_size*PRUNED_SEQ_LENGTH*1.0))\n xents.append(xent)\n kls.append(kl)\n\n if lang_mod:\n xent.backward()\n else:\n loss.backward() \n \n# for layer, paramval in model.named_parameters():\n# print(layer,paramval.grad)\n \n optimizer.step()\n \n# import pdb; pdb.set_trace()\n print('xent mean is:',torch.stack(xents).mean().item())\n print('kl mean is:',torch.stack(kls).mean().item())\n\n model.eval()\n \n# import pdb; pdb.set_trace()\n out_train = model(create_tensor(torch.Tensor(x_train_data),gpu=cuda).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)),False,lang_mod)\n out_train = out_train.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_train = torch.argmax(out_train,dim=1)\n bool_train = (classpreds_train==ins_train)\n class_acc_train = bool_train.sum().item()/bool_train.shape[0]\n\n out_val = model(create_tensor(torch.Tensor(x_val_data),gpu=cuda).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)),False,lang_mod)\n out_val = out_val.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_val = torch.argmax(out_val,dim=1)\n bool_val = (classpreds_val==ins_val)\n class_acc_val = bool_val.sum().item()/bool_val.shape[0]\n\n loss_train,xent_train,kl_train = vae_loss(ins_train,out_train,model.z_mean,model.z_log_var)\n mult = epoch*len(x_train_data)/batch_size + num_dum # annealing\n kl_train = sigmoid(beta*(mult-alpha))*kl_train # annealing\n loss_train = xent_train + kl_train # annealing\n loss_val,xent_val,kl_val = vae_loss(ins_val,out_val,model.z_mean,model.z_log_var)\n kl_val = sigmoid(beta*(mult-alpha))*kl_val # annealing\n loss_val = xent_val + kl_val # annealing\n \n losses_train.append(loss_train.item())\n losses_test.append(loss_val.item())\n accuracies_train.append(class_acc_train)\n accuracies_test.append(class_acc_val)\n xents_train.append(xent_train.item())\n xents_test.append(xent_val.item())\n kls_train.append(kl_train.item())\n kls_test.append(kl_val.item())\n \n print(classpreds_train)\n print(classpreds_val)\n \n print('Epoch %s | Training Loss: %s, Training Accuracy: %s, Validation 
Loss: %s, Validation Accuracy: %s'\n %( epoch, loss_train.item(), class_acc_train, loss_val.item(), class_acc_val ) )",
"training full\nEpoch 0 | Training Loss: 2965.04833984375, Training Accuracy: 0.951750700280112, Validation Loss: 2809.42529296875, Validation Accuracy: 0.9512605042016806\nEpoch 1 | Training Loss: 1098.9971923828125, Training Accuracy: 0.9575163398692811, Validation Loss: 1447.534423828125, Validation Accuracy: 0.9586134453781513\nEpoch 2 | Training Loss: 887.9511108398438, Training Accuracy: 0.9602240896358544, Validation Loss: 761.9097290039062, Validation Accuracy: 0.9594537815126051\nEpoch 3 | Training Loss: 738.9840698242188, Training Accuracy: 0.9582166199813259, Validation Loss: 664.9811401367188, Validation Accuracy: 0.9602941176470589\nEpoch 4 | Training Loss: 758.4254150390625, Training Accuracy: 0.9598506069094305, Validation Loss: 669.6949462890625, Validation Accuracy: 0.9615546218487395\nEpoch 5 | Training Loss: 746.3814086914062, Training Accuracy: 0.9601073762838469, Validation Loss: 625.1912231445312, Validation Accuracy: 0.9588235294117647\nEpoch 6 | Training Loss: 748.3277587890625, Training Accuracy: 0.9602474323062559, Validation Loss: 688.0336303710938, Validation Accuracy: 0.9594537815126051\nEpoch 7 | Training Loss: 682.6058349609375, Training Accuracy: 0.959733893557423, Validation Loss: 610.4600219726562, Validation Accuracy: 0.9621848739495799\nEpoch 8 | Training Loss: 791.62841796875, Training Accuracy: 0.9603641456582633, Validation Loss: 679.6618041992188, Validation Accuracy: 0.9596638655462185\nEpoch 9 | Training Loss: 724.5804443359375, Training Accuracy: 0.9602707749766574, Validation Loss: 649.1769409179688, Validation Accuracy: 0.9594537815126051\nEpoch 10 | Training Loss: 709.8601684570312, Training Accuracy: 0.9602474323062559, Validation Loss: 633.2260131835938, Validation Accuracy: 0.9594537815126051\nEpoch 11 | Training Loss: 711.7298583984375, Training Accuracy: 0.9602474323062559, Validation Loss: 620.95068359375, Validation Accuracy: 0.9594537815126051\nEpoch 12 | Training Loss: 694.3724365234375, Training Accuracy: 0.959780578898226, Validation Loss: 600.9882202148438, Validation Accuracy: 0.9615546218487395\nEpoch 13 | Training Loss: 694.0108032226562, Training Accuracy: 0.9602474323062559, Validation Loss: 623.4722900390625, Validation Accuracy: 0.9592436974789916\nEpoch 14 | Training Loss: 680.5443115234375, Training Accuracy: 0.9600373482726424, Validation Loss: 595.790283203125, Validation Accuracy: 0.9602941176470589\nEpoch 15 | Training Loss: 684.0516357421875, Training Accuracy: 0.9601774042950514, Validation Loss: 600.5628051757812, Validation Accuracy: 0.9596638655462185\nEpoch 16 | Training Loss: 688.9990234375, Training Accuracy: 0.9601073762838469, Validation Loss: 614.5149536132812, Validation Accuracy: 0.9592436974789916\nEpoch 17 | Training Loss: 688.678466796875, Training Accuracy: 0.959920634920635, Validation Loss: 605.007080078125, Validation Accuracy: 0.9588235294117647\nEpoch 18 | Training Loss: 681.19873046875, Training Accuracy: 0.9600840336134454, Validation Loss: 601.7752685546875, Validation Accuracy: 0.9594537815126051\nEpoch 19 | Training Loss: 689.1139526367188, Training Accuracy: 0.9602474323062559, Validation Loss: 617.867431640625, Validation Accuracy: 0.9594537815126051\n"
]
],
[
[
"Let's explore the latent space",
"_____no_output_____"
]
],
[
[
"fit_xtrain = model(torch.Tensor(x_train)).detach()\nz_means = model.z_mean.detach()",
"_____no_output_____"
],
[
"transposed_zmeans = np.array(z_means).transpose()\n\nplt.scatter(transposed_zmeans[0], transposed_zmeans[1], s = 1, linewidths = 0)\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.cluster import KMeans\n\nz_means_np = np.array(z_means)\nkmeans = KMeans(n_clusters=12, random_state=1).fit(z_means_np)",
"_____no_output_____"
],
[
"sample_points=len(z_means_np)\n\nlatent_dim = 2\nfig = plt.figure(figsize=(12,12))\ncounter=0\ncmap=kmeans.labels_\nfor z1 in range(latent_dim):\n for z2 in range(z1+1,latent_dim):\n counter+=1\n fig.add_subplot(latent_dim,latent_dim,counter)\n plt.title(str(z1)+\"_\"+str(z2))\n plt.scatter(z_means_np[:, z1][::-1], z_means_np[:, z2][::-1],c=cmap[::-1], s = 15, alpha=0.01,marker=\"o\")\n# plt.scatter(z_means_np[:, z1][::-1], z_means_np[:, z2][::-1],c=\"y\" ,alpha=0.3,marker=\"o\")\n plt.scatter(z_means_np[0][z1], z_means_np[0][z2],c=\"r\" ,alpha=1,s=40,marker=\"s\")\n plt.xlabel(\"Latent dim\"+str(z1+1))\n plt.ylabel(\"Latent dim\"+str(z2+1));\nplt.savefig(\"Try2_originalDropout.png\")\n",
"_____no_output_____"
],
[
"plt.pcolor(x_train[0].reshape(-1, len(ORDER_LIST)).transpose(1, 0))\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Training a classifier over the latent space",
"_____no_output_____"
]
],
[
[
"fit_total = model(torch.Tensor(training_data)).detach()\nlatent_data = model.z_mean.detach()",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestRegressor\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(np.array(latent_data), data_sar['quantitative_function'], \n test_size = 0.3, random_state=10)",
"_____no_output_____"
],
[
"latentReg = RandomForestRegressor()\nlatentReg.fit(X_train, y_train)\n# latentReg.predict(X_test)\nlatentReg.score(X_test, y_test)",
"//anaconda/envs/ML_env/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n"
],
[
"plt.scatter(X_train[:,0], y_train)\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(y_train)\nplt.show()",
"_____no_output_____"
],
[
"np.sum(data['function'])/len(data)",
"_____no_output_____"
],
[
"data['function'] = data['quantitative_function'] > 0.5\n\nX_train, X_test, y_train, y_test = train_test_split(np.array(latent_data), data['function'], \n test_size = 0.3, random_state=10)\n\nlatentClf = RandomForestClassifier()\nlatentClf.fit(X_train, y_train)\n# latentReg.predict(X_test)\nlatentClf.score(X_test, y_test)",
"//anaconda/envs/ML_env/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n"
],
[
"# reshaped_fit_xtrain = fit_xtrain.reshape(len(x_train)*len(ORDER_LIST),PRUNED_SEQ_LENGTH)\nm = torch.nn.Softmax()\n\nreshaped_fit_xtrain = m(fit_xtrain.reshape(51700 * 238, 24)).reshape(51700, 238, 24).transpose(2, 1)\n\n# m = torch.nn.Sigmoid()\n\n# reshaped_fit_xtrain = torch.stack(list(map(m, fit_xtrain))).reshape(50000, 82, 24).transpose(2, 1)",
"//anaconda/envs/ML_env/lib/python3.6/site-packages/ipykernel_launcher.py:4: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.\n after removing the cwd from sys.path.\n"
],
[
"plt.pcolor(reshaped_fit_xtrain[0])",
"_____no_output_____"
],
[
"plt.pcolor(np.matmul(test_data_plus[0].reshape(digit_size, sequence_size).T, digit_wt))",
"_____no_output_____"
],
[
"# sample_size=batch_size*int(len(test_data_plus)/batch_size)\nsample_size = batch_size*int(len(x_train)/batch_size)\nsample_for_averging_size=100\nsequence_size=PRUNED_SEQ_LENGTH\ndigit_size = len(ORDER_LIST)\nx_decoded=reshaped_fit_xtrain\n\n\n\n\n\n\nwt_prob=compute_log_probability(x_train[0].reshape(digit_size, sequence_size),digit_wt)\n#print (\"wt_log_prob: \", wt_prob)\n\nwt_probs=[]\ndigit_avg=np.zeros((digit_size, sequence_size))\n\n\nsample_indices=random.sample(range(sample_size),sample_for_averging_size)\n\ncounter=0\nfor sample in sample_indices:\n digit = x_decoded[sample]\n# print (digit)\n# print (digit_avg)\n# digit_wt_i = normalize(digit,axis=0, norm='l1')\n digit_wt_i = digit\n \n# print (digit_wt_i)\n \n digit_avg+=np.array(digit_wt_i) * 1. / sample_for_averging_size\n \n wt_p=compute_log_probability(x_train[sample].reshape(digit_size, sequence_size),digit_wt_i)\n wt_probs.append(wt_p)\n counter+=1\n \naverage_wt_p=np.mean(wt_probs)\n\nfitnesses_vs_wt=[]\nfitnesses=[] #first plug in just the sequences\nfitnesses_vs_avg=[] \n\nfor sample in range(1,sample_size):\n digit = x_decoded[sample]\n# digit = normalize(digit,axis=0, norm='l1')\n \n fitness=compute_log_probability(x_train[sample].reshape(digit_size, sequence_size),digit)-wt_prob\n fitnesses.append(fitness)\n \n fitness=compute_log_probability(x_train[sample].reshape(digit_size, sequence_size),digit_wt)-wt_prob\n fitnesses_vs_wt.append(fitness)\n \n fitness=compute_log_probability(x_train[sample].reshape(digit_size, sequence_size),digit_avg)-average_wt_p\n fitnesses_vs_avg.append(fitness)\n \n \n# print (\"Spearman\",spearmanr(fitnesses_vs_wt,target_values_singles[:sample_size-1]))\n# print (\"Pearson\", pearsonr(fitnesses_vs_wt,target_values_singles[:sample_size-1]))",
"/Users/DavidKMYang/Spring2019_Harvard/Math243/VAE_protein_function/helper_tools.py:103: RuntimeWarning: divide by zero encountered in log\n log_prod_mat=np.log(prod_mat)\n//anaconda/envs/ML_env/lib/python3.6/site-packages/ipykernel_launcher.py:49: RuntimeWarning: invalid value encountered in subtract\n//anaconda/envs/ML_env/lib/python3.6/site-packages/ipykernel_launcher.py:52: RuntimeWarning: invalid value encountered in double_scalars\n//anaconda/envs/ML_env/lib/python3.6/site-packages/ipykernel_launcher.py:55: RuntimeWarning: invalid value encountered in double_scalars\n"
],
[
"plt.pcolor(test_data_plus[0].reshape(digit_size, sequence_size))",
"_____no_output_____"
],
[
"plt.pcolor(digit_wt)",
"_____no_output_____"
],
[
"spearmanr(fitnesses_vs_avg, fitnesses)",
"_____no_output_____"
],
[
"plt.scatter(fitnesses_vs_wt, target_values_singles[:sample_size-1], alpha = 0.5, s = 10)\nplt.ylabel(\"Fitness vs. wt\")\nplt.xlabel(\"p(x)\")\nplt.title(spearmanr(fitnesses_vs_wt, target_values_singles[:sample_size-1]))\nplt.tight_layout()\nplt.savefig(\"Correlation.png\")\nplt.show()\n",
"_____no_output_____"
],
[
"plt.scatter(fitnesses_vs_wt, target_values_singles[:sample_size-1], alpha = 0.5, s = 10)\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(fitnesses,bins=30)\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(target_values_singles[:sample_size-1], bins=50)\nplt.show()",
"_____no_output_____"
]
],
[
[
"We have kept track of some performance metrics, so we can follow whether the network was still improving. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecc61ddc9725b7c758cfc51693cc485f6ca5b228 | 63,199 | ipynb | Jupyter Notebook | 01_data_cleaning.ipynb | Noah-Baustin/sf_crime_data_analysis | 7c4d015fa166abcd89930fc8a1d3e912034e5a50 | [
"MIT"
] | null | null | null | 01_data_cleaning.ipynb | Noah-Baustin/sf_crime_data_analysis | 7c4d015fa166abcd89930fc8a1d3e912034e5a50 | [
"MIT"
] | null | null | null | 01_data_cleaning.ipynb | Noah-Baustin/sf_crime_data_analysis | 7c4d015fa166abcd89930fc8a1d3e912034e5a50 | [
"MIT"
] | null | null | null | 37.980168 | 297 | 0.433124 | [
[
[
"### Data Cleaning\n#### The purpose of this notebook is to create cleaned .csv files to export for use in my data analyses",
"_____no_output_____"
],
[
"More information about this project is available in my github repo here: https://github.com/Noah-Baustin/sf_crime_data_analysis",
"_____no_output_____"
]
],
[
[
"#import modules\nimport pandas as pd",
"/Users/nbaustin/.pyenv/versions/3.8.5/envs/sf_crime_data_analysis-3.8.5/lib/python3.8/site-packages/pandas/compat/__init__.py:124: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError.\n warnings.warn(msg)\n"
],
[
"# import historical csv into a variable\nhistorical_data = pd.read_csv('raw_data/SFPD_Incident_Reports_2003-May2018/Police_Department_Incident_Reports__Historical_2003_to_May_2018.csv', dtype=str)\n\n# import newer csv into a variable\nnewer_data = pd.read_csv('raw_data/SFPD_Incident_Reports_2018-10.14.21/Police_Department_Incident_Reports__2018_to_Present(1).csv', dtype=str)",
"_____no_output_____"
]
],
[
[
"Trim the extra columns that we don't need from the historical data:",
"_____no_output_____"
]
],
[
[
"historical_data = historical_data[\n ['PdId', 'IncidntNum', 'Incident Code', 'Category', 'Descript',\n 'DayOfWeek', 'Date', 'Time', 'PdDistrict', 'Resolution', 'X',\n 'Y', 'location']\n].reset_index(drop=True)",
"_____no_output_____"
]
],
[
[
"Change the column names in the historical data to match the API names in the newer data. The SFPD published a key that I used to translate the column names over, which can be found on pg two of this document: https://drive.google.com/file/d/13n7pncEOxFTWig9-sTKnB2sRiTB54Kb-/view?usp=sharing",
"_____no_output_____"
]
],
[
[
"historical_data.rename(columns={'PdId': 'row_id',\n 'IncidntNum': 'incident_number',\n 'Incident Code': 'incident_code',\n 'Category': 'incident_category',\n 'Descript': 'incident_description',\n 'DayOfWeek': 'day_of_week',\n 'Date': 'incident_date',\n 'Time': 'incident_time',\n 'PdDistrict': 'police_district',\n 'Resolution': 'resolution',\n 'X': 'longitude',\n 'Y': 'latitude',\n 'location': 'the_geom'\n }, \n inplace=True)",
"_____no_output_____"
],
[
"historical_data",
"_____no_output_____"
]
],
[
[
"Now let's trim down the columns from the newer dataset so that we're only working with columns that match up to the old data. \n\nNote: there's no 'the geom' column, but the column 'point' is equivelant. ",
"_____no_output_____"
]
],
[
[
"newer_data = newer_data[\n ['Row ID', 'Incident Number', 'Incident Code', 'Incident Category', \n 'Incident Description', 'Incident Day of Week', 'Incident Date', 'Incident Time', \n 'Police District', 'Resolution', 'Longitude', 'Latitude', 'Point']\n].copy()",
"_____no_output_____"
]
],
[
[
"Change the column names in the newer dataset to match the API names of the columns. Doing this because the original column names have spaces, which could cause issues down the road.",
"_____no_output_____"
]
],
[
[
"newer_data.rename(columns={'Row ID': 'row_id',\n 'Incident Number': 'incident_number',\n 'Incident Code': 'incident_code',\n 'Incident Category': 'incident_category',\n 'Incident Description': 'incident_description',\n 'Incident Day of Week': 'day_of_week', \n 'Incident Date': 'incident_date',\n 'Incident Time': 'incident_time',\n 'Police District': 'police_district',\n 'Resolution': 'resolution',\n 'Longitude': 'longitude', \n 'Latitude': 'latitude',\n 'Point': 'the_geom' \n }, \n inplace=True)",
"_____no_output_____"
],
[
"newer_data",
"_____no_output_____"
],
[
"historical_data.columns",
"_____no_output_____"
],
[
"newer_data.columns",
"_____no_output_____"
]
],
[
[
"Now that our datasets have matching columns, let's merge them together. ",
"_____no_output_____"
]
],
[
[
"frames = [historical_data, newer_data]\nall_data = pd.concat(frames)",
"_____no_output_____"
]
],
[
[
"The dataframe all_data now contains our combined dataset!",
"_____no_output_____"
]
],
[
[
"all_data.info()\nall_data.head()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 2642742 entries, 0 to 513216\nData columns (total 13 columns):\n # Column Dtype \n--- ------ ----- \n 0 row_id object\n 1 incident_number object\n 2 incident_code object\n 3 incident_category object\n 4 incident_description object\n 5 day_of_week object\n 6 incident_date object\n 7 incident_time object\n 8 police_district object\n 9 resolution object\n 10 longitude object\n 11 latitude object\n 12 the_geom object\ndtypes: object(13)\nmemory usage: 282.3+ MB\n"
]
],
[
[
"We need to convert our incident_date column into a datetime format",
"_____no_output_____"
]
],
[
[
"all_data['incident_date'] = pd.to_datetime(all_data['incident_date'])",
"_____no_output_____"
],
[
"all_data['incident_date'].min()",
"_____no_output_____"
],
[
"all_data['incident_date'].max()",
"_____no_output_____"
]
],
[
[
"We can see from the date max and min that we've got our full set of date ranges from 2003 to 2021 in this new combined dataframe.",
"_____no_output_____"
],
[
"Since our string search we'll need to pull out our marijuana cases is cap sensitive, let's put all our values in the incident_description and police district in lowercase:",
"_____no_output_____"
]
],
[
[
"all_data['incident_description'] = all_data['incident_description'].str.lower()",
"_____no_output_____"
],
[
"all_data['police_district'] = all_data['police_district'].str.lower()",
"_____no_output_____"
]
],
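[
 [
  "# Added editorial aside (not part of the original notebook): pandas can also do a case-insensitive\n# match directly, without lower-casing first, via the case argument of str.contains, e.g.:\nmask_ci = all_data['incident_description'].str.contains('marijuana', case=False)\nmask_ci.sum()\n# The notebook keeps the explicit lower-casing above so that the later label clean-up can compare\n# against simple lowercase strings.",
  "_____no_output_____"
 ]
],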
[
[
"Now let's create a dataframe with all our marijuana data:",
"_____no_output_____"
]
],
[
[
"all_data_marijuana = all_data[\n all_data['incident_description'].str.contains('marijuana')\n].reset_index(drop=True)",
"_____no_output_____"
],
[
"all_data_marijuana",
"_____no_output_____"
],
[
"all_data_marijuana['incident_date'].min()",
"_____no_output_____"
],
[
"all_data_marijuana['incident_date'].max()",
"_____no_output_____"
]
],
[
[
"The incident dates show that we're getting marijuana incidents from 2003 all the way up to 2021. Great!",
"_____no_output_____"
],
[
"Now let's take a look at our incident description values:",
"_____no_output_____"
]
],
[
[
"all_data_marijuana['incident_description'].unique()",
"_____no_output_____"
]
],
[
[
"We can see there are slight differences in labeling of the same type of crime, likely cause by the transfer to the new system in 2018. So let's clean up the incidents column in both our dataframes.",
"_____no_output_____"
]
],
[
[
"all_data_marijuana = all_data_marijuana.replace({'incident_description' : \n { 'marijuana, possession for sale' : 'possession of marijuana for sales',\n 'marijuana, transporting' : 'transportation of marijuana',\n 'marijuana, cultivating/planting' : 'planting/cultivating marijuana',\n 'marijuana, sales' : 'sale of marijuana',\n 'marijuana, furnishing' : 'furnishing marijuana',\n }\n })",
"_____no_output_____"
],
[
"all_data = all_data.replace({'incident_description' : \n { 'marijuana, possession for sale' : 'possession of marijuana for sales',\n 'marijuana, transporting' : 'transportation of marijuana',\n 'marijuana, cultivating/planting' : 'planting/cultivating marijuana',\n 'marijuana, sales' : 'sale of marijuana',\n 'marijuana, furnishing' : 'furnishing marijuana',\n }\n })",
"_____no_output_____"
],
[
"all_data_marijuana['incident_description'].unique()",
"_____no_output_____"
]
],
[
[
"Looks good!",
"_____no_output_____"
],
[
"Now let's export our two dataframes to .csv's that we can now use in other data analysis!",
"_____no_output_____"
]
],
[
[
"all_data.to_csv(\"all_data.csv\", index=False)\nall_data_marijuana.to_csv(\"all_data_marijuana.csv\", index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ecc61df36ebaa7c782edbd3029a6c207547579e6 | 153,986 | ipynb | Jupyter Notebook | logistic_regression.ipynb | vedantc6/MLScratch | b5e97e249818f8a25b332149824e173e51163342 | [
"MIT"
] | null | null | null | logistic_regression.ipynb | vedantc6/MLScratch | b5e97e249818f8a25b332149824e173e51163342 | [
"MIT"
] | 1 | 2018-12-28T06:16:52.000Z | 2019-02-08T17:36:05.000Z | logistic_regression.ipynb | vedantc6/MLScratch | b5e97e249818f8a25b332149824e173e51163342 | [
"MIT"
] | null | null | null | 523.761905 | 77,168 | 0.938865 | [
[
[
"## Logistic Regression\n\nLogistic regression is a type of regression technique suitable when we would like to predict a binary variable, given a linear combination of input features. For example, predicting whether the cancer is malignant or benign, depending on variables such as patient's age, blood type, weight etc. <br>\nSimilar to linear regression, it has a real-valued weight vector w and a real-valued bias b.\nUnlike linear regression which used an identity function as its activation function, logistic regression uses the sigmoid function as its activation function. <br>\nAdditionally, it does not have a closed-form solution. However, the cost function is convex, which allows gradient descent to be used for training the model. <br>\n### Training Steps\n1. Initialize weight vector and bias with zero (or very minimal) values\n2. Calculate $\\boldsymbol{a} = \\boldsymbol{X} \\cdot \\boldsymbol{w} + b $\n3. Apply the sigmoid activation function, which will return a value between 0 and 1 <br>\n$\\boldsymbol{\\hat{y}} = \\sigma(\\boldsymbol{a}) = \\frac{1}{1 + \\exp(-\\boldsymbol{a})}$\n4. Compute the cost. Since, the model has to return probability of the target value as 0 or 1, our cost function should be such that it gives a high probability for a positive target samples and small values for negative target samples. This leads to the cost function looking like <br>\n$J(\\boldsymbol{w},b) = - \\frac{1}{m} \\sum_{i=1}^m \\Big[ y^{(i)} \\log(\\hat{y}^{(i)}) + (1 - y^{(i)}) \\log(1 - \\hat{y}^{(i)}) \\Big]$\n5. Compute the gradient descent, for more information, give [this](https://stats.stackexchange.com/questions/278771/how-is-the-cost-function-from-logistic-regression-derivated) a look <br>\n$ \\frac{\\partial J}{\\partial w_j} = \\frac{1}{m}\\sum_{i=1}^m\\left[\\hat{y}^{(i)}-y^{(i)}\\right]\\,x_j^{(i)}$\n6. Update the weights and bias <br>\n$w = w - \\alpha \\, \\nabla_{\\boldsymbol{w}} J$ <br>\n$b = b - \\alpha \\, \\nabla_{\\boldsymbol{b}} J$\n\nwhere, $\\alpha$ is the learning rate",
"_____no_output_____"
],
[
"### Data",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.datasets import make_blobs\n \nnp.random.seed(1234)\n\nX, y = make_blobs(n_samples=2000, centers=2)\nfig = plt.figure(figsize=(8,6))\nplt.scatter(X[:,0], X[:,1], c=y)\nplt.title(\"Dataset\")\nplt.xlabel(\"First feature\")\nplt.ylabel(\"Second feature\")\n\nplt.show()",
"_____no_output_____"
],
[
"# Reshaping y variable such that it is (n_samples, 1)\n# or in other words, making it a column vector\ny = y[:, np.newaxis]\nX_train, X_test, y_train, y_test = train_test_split(X, y)\n\nprint(f'Shape X_train: {X_train.shape}')\nprint(f'Shape y_train: {y_train.shape}')\nprint(f'Shape X_test: {X_test.shape}')\nprint(f'Shape y_test: {y_test.shape}')",
"Shape X_train: (1500, 2)\nShape y_train: (1500, 1)\nShape X_test: (500, 2)\nShape y_test: (500, 1)\n"
]
],
[
[
"### Model",
"_____no_output_____"
]
],
[
[
"class LogisticRegression:\n def __init__(self):\n pass\n \n def sigmoid(self, a):\n return 1/(1 + np.exp(-a))\n \n def train(self, X, y, alpha=0.001, iterations=100):\n# Step 1: Initialize the parameters\n n_samples, n_features = X.shape\n self.w = np.zeros(shape=(n_features,1))\n self.b = 0\n J = []\n \n for i in range(iterations):\n# Step 2 and 3: Computing linear combination of input features\n# and then passing through sigmoid activation function\n y_hat = self.sigmoid(np.dot(X, self.w) + self.b)\n# Step 4: Compute the cost\n cost = (-1/n_samples)*np.sum(y*np.log(y_hat) + (1-y)*np.log(1-y_hat))\n J.append(cost)\n# Step 5: Compute gradient descent\n dJ_dw = (1/n_samples)*np.dot(X.T, y_hat-y)\n dJ_db = (1/n_samples)*np.sum(y_hat-y)\n# Step 6: Update the parameters\n self.w = self.w - alpha*dJ_dw\n self.b = self.b - alpha*dJ_db\n \n if i % 100 == 0:\n print(f'Cost after iteration {i}: {cost}')\n \n return self.w, self.b, J\n \n def predict(self, X):\n y_pred = self.sigmoid(np.dot(X, self.w) + self.b)\n y_pred_label = [1 if i > 0.5 else 0 for i in y_pred]\n return np.array(y_pred_label)[:, np.newaxis]",
"_____no_output_____"
]
],
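[
 [
  "# Added editorial illustration (not part of the original notebook): the training steps above state\n# dJ/dw = (1/m) * X^T (y_hat - y). This cell is a minimal numerical check of that formula on tiny\n# made-up arrays; every name below (X_chk, y_chk, w_chk, cost_at) is introduced here only for the check.\nimport numpy as np\n\nrng = np.random.RandomState(0)\nX_chk = rng.randn(5, 2)\ny_chk = rng.randint(0, 2, size=(5, 1)).astype(float)\nw_chk = 0.1 * rng.randn(2, 1)\n\ndef cost_at(w):\n    # cross-entropy cost at weights w, with zero bias, matching the formula in the introduction\n    a = 1.0 / (1.0 + np.exp(-(X_chk @ w)))\n    return -np.mean(y_chk * np.log(a) + (1 - y_chk) * np.log(1 - a))\n\n# analytic gradient from the formula above\nanalytic = (1.0 / len(X_chk)) * X_chk.T @ (1.0 / (1.0 + np.exp(-(X_chk @ w_chk))) - y_chk)\n\n# central-difference numerical gradient\neps = 1e-6\nnumeric = np.zeros_like(w_chk)\nfor j in range(w_chk.shape[0]):\n    w_plus, w_minus = w_chk.copy(), w_chk.copy()\n    w_plus[j] += eps\n    w_minus[j] -= eps\n    numeric[j] = (cost_at(w_plus) - cost_at(w_minus)) / (2 * eps)\n\nprint(np.allclose(analytic, numeric, atol=1e-6))",
  "_____no_output_____"
 ]
],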
[
[
"### Initializing and training the model",
"_____no_output_____"
]
],
[
[
"logistic_regressor = LogisticRegression()\nw_trained, b_trained, J = logistic_regressor.train(X_train, y_train, alpha=0.003, iterations=600)\n\nfig = plt.figure(figsize=(8,6))\nplt.plot(np.arange(600), J)\nplt.title(\"Gradient Descent Training\")\nplt.xlabel(\"Iterations\")\nplt.ylabel(\"Cost\")\nplt.show()",
"Cost after iteration 0: 0.6931471805599453\nCost after iteration 100: 0.34222431599248243\nCost after iteration 200: 0.2257715571571526\nCost after iteration 300: 0.1704064152167814\nCost after iteration 400: 0.13827298750895137\nCost after iteration 500: 0.11726158602641605\n"
]
],
[
[
"### Testing the model",
"_____no_output_____"
]
],
[
[
"y_pred_train = logistic_regressor.predict(X_train)\ny_pred_test = logistic_regressor.predict(X_test)\nprint(f\"train accuracy: {100 - np.mean(np.abs(y_pred_train - y_train)) * 100}%\")\nprint(f\"test accuracy: {100 - np.mean(np.abs(y_pred_test - y_test))*100}%\")",
"train accuracy: 99.93333333333334%\ntest accuracy: 99.8%\n"
]
],
[
[
"### Visualize decision boundary",
"_____no_output_____"
]
],
[
[
"def plot_hyperplane(X, y, w, b):\n slope = -w[0]/w[1]\n intercept = -b/w[1]\n x_hyperplane = np.linspace(-10,5,5)\n y_hyperplane = slope*x_hyperplane + intercept\n fig = plt.figure(figsize=(8,6))\n plt.scatter(X[:,0], X[:,1], c=y)\n plt.plot(x_hyperplane, y_hyperplane, '-')\n plt.title(\"Dataset with fitted hyperplane\")\n plt.xlabel(\"First feature\")\n plt.ylabel(\"Second feature\")\n plt.show()\n \nplot_hyperplane(X, y, w_trained, b_trained)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc623c0d3b07786fbbba465309a91f9d2f491dc | 209,022 | ipynb | Jupyter Notebook | assignment/Gallardo-Nicolas/GallardoAssignment3.ipynb | ml6973/Content | fc88831a9c6603a92afe7d610352e4848317b825 | [
"Apache-2.0"
] | 22 | 2016-09-07T17:05:46.000Z | 2021-04-03T22:18:10.000Z | assignment/Gallardo-Nicolas/GallardoAssignment3.ipynb | ml6973/Content | fc88831a9c6603a92afe7d610352e4848317b825 | [
"Apache-2.0"
] | null | null | null | assignment/Gallardo-Nicolas/GallardoAssignment3.ipynb | ml6973/Content | fc88831a9c6603a92afe7d610352e4848317b825 | [
"Apache-2.0"
] | 25 | 2016-09-01T14:25:49.000Z | 2017-11-20T22:48:33.000Z | 735.992958 | 185,298 | 0.941107 | [
[
[
"# Import the required packages\nimport numpy as np\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport scipy\nimport math\nimport random\nimport string\nimport tensorflow as tf\n\nrandom.seed(123)\n# Display plots inline \n%matplotlib inline\n# Define plot's default figure size\nmatplotlib.rcParams['figure.figsize'] = (10.0, 8.0)",
"_____no_output_____"
],
[
"# get the data\ntrain = pd.read_csv(\"/home/nicolas_gallardo/DL/notebooks/intro_to_ann.csv\")\nprint (train.head())\ntrain_X, train_Y = np.array(train.ix[:,0:2]), np.array(train.ix[:,2])\nprint(train_X.shape, train_Y.shape)\nplt.scatter(train_X[:,0], train_X[:,1], s=40, c=train_Y, cmap=plt.cm.BuGn)\nn_samples = train_X.shape[0]",
" Feature1 Feature2 Target\n0 2.067788 0.258133 1\n1 0.993994 -0.609145 1\n2 -0.690315 0.749921 0\n3 1.023582 0.529003 0\n4 0.700747 -0.496724 1\n\n[5 rows x 3 columns]\n(500, 2) (500,)\n"
],
[
"test_X = train_X[400:, :]\ntest_Y = train_Y[400:]\n#print(test_X.shape, test_Y.shape)\ntrain_X = train_X[:400, :]\ntrain_Y = train_Y[:400]",
"_____no_output_____"
],
[
"# grab number of features and training size from train_X\ntrain_size, num_features = train_X.shape\nprint(train_size, num_features)\n\n# training epochs\nepochs = 2000\n# number of labels in data\nlabels = 2\n# learning rate\nlearning_rate = 0.01\n# number of hidden nodes\nhidden = 4\n\n# convert labels to one-hot matrix\nlabels_onehot = (np.arange(labels) == train_Y[:, None]).astype(np.float32)\n\nprint(labels_onehot.shape)",
"400 2\n(400, 2)\n"
],
[
"# tf Graph Input\nx = tf.placeholder(tf.float32, shape=[None, num_features])\ny_ = tf.placeholder(tf.float32, shape=[None, labels])\n\n# Set model weights --> set weights to an initial random value\nWh = tf.Variable(tf.random_normal([num_features, hidden]))\nbh = tf.Variable(tf.zeros([hidden]))\nW = tf.Variable(tf.random_normal([hidden, labels]))\nb = tf.Variable(tf.zeros([labels]))\n\n# Construct a linear model\nhidden_layer = tf.nn.softmax(tf.add(tf.matmul(x,Wh), bh))\ny = tf.nn.softmax(tf.add(tf.matmul(hidden_layer,W), b))\n\n# Mean squared error\ncost = -tf.reduce_sum(y_*tf.log(y))\n\n# Gradient descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n\ncorrect_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n# Initializing the variables\ninit = tf.initialize_all_variables()",
"_____no_output_____"
],
[
"# Graph\nerrors = []\nwith tf.Session() as sess:\n sess.run(init)\n print('Initialized Session.')\n for step in range(epochs):\n # run optimizer at each step in training\n optimizer.run(feed_dict={x: train_X, y_: labels_onehot})\n # fill errors array with updated error values\n accuracy_value = accuracy.eval(feed_dict={x: train_X, y_: labels_onehot})\n errors.append(1 - accuracy_value)\n print('Optimization Finished!')\n print('Weight matrix then bias matrix from training:')\n print(sess.run(W))\n print(sess.run(b))\n \n # output final error\n print(\"Final error found: \", errors[-1])",
"Initialized Session.\nOptimization Finished!\nWeight matrix then bias matrix from training:\n[[ 0.96543127 -0.53527778]\n [ 3.56306815 -3.07520294]\n [-2.28100419 2.41346622]\n [ 0.34308219 1.64620125]]\n[ 1.01480269 -1.01477766]\nFinal error found: 0.144999980927\n"
],
[
"# plot errors\nplt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])\nplt.show()",
"/usr/local/lib/python3.4/dist-packages/numpy/core/_methods.py:59: RuntimeWarning: Mean of empty slice.\n warnings.warn(\"Mean of empty slice.\", RuntimeWarning)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc6305fd8fdcadfdf2afa51c7673b625f616188 | 342,908 | ipynb | Jupyter Notebook | Task 3/code/.ipynb_checkpoints/2d_position-Constant_Vel-checkpoint.ipynb | thefurorjuror/AGV-Task-Round | 08c03dca839abbccb91850d59415041a7eb6c227 | [
"MIT"
] | null | null | null | Task 3/code/.ipynb_checkpoints/2d_position-Constant_Vel-checkpoint.ipynb | thefurorjuror/AGV-Task-Round | 08c03dca839abbccb91850d59415041a7eb6c227 | [
"MIT"
] | null | null | null | Task 3/code/.ipynb_checkpoints/2d_position-Constant_Vel-checkpoint.ipynb | thefurorjuror/AGV-Task-Round | 08c03dca839abbccb91850d59415041a7eb6c227 | [
"MIT"
] | null | null | null | 67.250049 | 28,048 | 0.640589 | [
[
[
"import numpy as np\nimport math\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"file=open(\"kalmann.txt\",'r')\npos=np.loadtxt(\"kalmann.txt\",delimiter=',',usecols=[0,1])\nvel=np.loadtxt(\"kalmann.txt\",skiprows=1,delimiter=',',usecols=[2,3])\nvel=np.insert(vel,[0],0,axis=0)\n# pos\n# vel",
"_____no_output_____"
],
[
"# defining varibles\ndel_t=1\nsigma_ax=1\nsigma_ay=1\nsigma_x=5\nsigma_y=5\nsigma_vx=0.5\nsigma_vy=0.5\npos_x0,pos_y0=pos[0]\np=15",
"_____no_output_____"
],
[
"pred_state=[]\nest_state=[]\npred_p=[]\nest_p=[]",
"_____no_output_____"
],
[
"F=np.zeros((6,6))\nF[0,:3]=(1,del_t,0.5*del_t**2)\nF[1,:3]=(0,1,del_t)\nF[2,2]=1\nF[3,3:6]=(1,del_t,0.5*del_t**2)\nF[4,3:6]=(0,1,del_t)\nF[5,5]=1\nFT=np.transpose(F)\n# F",
"_____no_output_____"
],
[
"Q=np.zeros((6,6))\nQ[0,:3]=(0.25*del_t**4*sigma_ax**2,0.5*del_t**3*sigma_ax**2,0.5*del_t**2*sigma_ax**2)\nQ[1,:3]=(0,del_t**2*sigma_ax**2,del_t*sigma_ax**2)\nQ[2,:3]=(0,0,1*sigma_ax**2)\nQ[3,3:6]=(0.25*del_t**4*sigma_ay**2,0.5*del_t**3*sigma_ay**2,0.5*del_t**2*sigma_ay**2)\nQ[4,3:6]=(0,del_t**2*sigma_ay**2,del_t*sigma_ay**2)\nQ[5,3:6]=(0,0,1*sigma_ay**2)\n# Q=Q*sigma_ay**2\n# Q",
"_____no_output_____"
],
[
"R=np.zeros((4,4))\nR[0,0],R[1,1],R[2,2],R[3,3]=(sigma_x**2,sigma_vx**2,sigma_y**2,sigma_vy**2)\n# R",
"_____no_output_____"
],
[
"H=np.zeros((4,6))\nH[0,0],H[1,1],H[2,3],H[3,4] =(1,1,1,1)\nHT=np.transpose(H)\n# H",
"_____no_output_____"
],
[
"temp=np.zeros((6,1))\ntemp[:,0]=[pos_x0,0,0,pos_y0,0,0]\nest_state.append(temp)\n# est_state[0]",
"_____no_output_____"
],
[
"temp=np.diag((p,p,p,p,p,p))\nest_p.append(temp)\n# est_p[0]",
"_____no_output_____"
],
[
"def state_extrapolate(xn):\n xnext=np.dot(F,xn)\n return xnext\n ",
"_____no_output_____"
],
[
"def cov_extrapolate(pn):\n pnext=np.dot(np.dot(F,pn),FT) +Q\n return pnext",
"_____no_output_____"
],
[
"def measure(n):\n z=np.zeros((4,1))\n z[:,0]=(pos[n,0],vel[n,0],pos[n,1],vel[n,1])\n return z\n ",
"_____no_output_____"
],
[
"def K_gain(n):\n kn=np.dot(np.dot(pred_p[n-1],HT),(np.linalg.inv(np.dot(np.dot(H,pred_p[n-1]),HT)+R)))\n return kn\n ",
"_____no_output_____"
],
[
"def state_update(xn_1,kn,zn):\n xn=xn_1 + np.dot(kn,(zn-np.dot(H,xn_1)))\n return xn",
"_____no_output_____"
],
[
"def cov_update(pn_1,kn):\n idm=(np.identity(6)-np.dot(kn,H))\n idmT=np.transpose(idm)\n pn=np.dot(np.dot(idm,pn_1),idmT)+ np.dot(np.dot(kn,R),np.transpose(kn))\n return pn",
"_____no_output_____"
],
[
"pred_state.append(state_extrapolate(est_state[0]))\npred_p.append(cov_extrapolate(est_p[0]))\nfor i in range(1,len(vel)):\n zi=measure(i)\n ki=K_gain(1)\n est_state.append(state_update(est_state[i-1],ki,zi))\n est_p.append(cov_update(est_p[i-1],ki))\n pred_state.append(state_extrapolate(est_state[i]))\n pred_p.append(cov_extrapolate(est_p[i]))\n \n ",
"_____no_output_____"
],
[
"est_state",
"_____no_output_____"
],
[
"pred_state",
"_____no_output_____"
],
[
"i=np.arange(1,len(vel))\nar=[]\nbr=[]\nm=[]\nfor p in range(1,len(vel)):\n ar.append(pred_state[p][3])\n br.append(est_state[p][3])\n m.append(pos[p][1])\n# pred_state[2][0]\nplt.plot(i,ar,color='orange')\n# plt.plot(i,br,color='blue')\nplt.plot(i,m,color='red')",
"_____no_output_____"
],
[
"i=np.arange(1,len(vel))\nar=[]\nbr=[]\nm=[]\nfor p in range(1,len(vel)):\n ar.append(pred_state[p][0])\n br.append(est_state[p][0])\n m.append(pos[p][0])\n# pred_state[2][0]\nplt.plot(i,ar,color='orange')\n# plt.plot(i,br,color='blue')\nplt.plot(i,m,color='red')",
"_____no_output_____"
],
[
"# plt.plot(i,ar,color='orange')\nplt.plot(i,br,color='blue')\nplt.plot(i,m,color='red')",
"_____no_output_____"
],
[
"i=np.arange(1,len(vel))\nar=[]\nbr=[]\nm=[]\nfor p in range(1,len(vel)):\n ar.append(pred_state[p][0])\n br.append(pred_state[p][3])\n m.append(pos[p][0])\n# pred_state[2][0]\nplt.plot(ar,br,color='orange')\n# plt.plot(i,br)\n# plt.plot(i,m)",
"_____no_output_____"
],
[
"i=np.arange(1,len(vel))\nar=[]\nbr=[]\nm=[]\nfor p in range(1,len(vel)):\n ar.append(est_state[p][0])\n br.append(est_state[p][3])\n m.append(pos[p][0])\n# pred_state[2][0]\nplt.plot(ar,br,color='blue')\n# plt.plot(i,br)\n# plt.plot(i,m)",
"_____no_output_____"
],
[
"i=np.arange(1,len(vel))\nar=[]\nbr=[]\nm=[]\nfor p in range(1,len(vel)):\n ar.append(pos[p][0])\n br.append(pos[p][1])\n# m.append(pos[p][0])\n# pred_state[2][0]\nplt.plot(ar,br,color='red')\n# plt.plot(i,br)\n# plt.plot(i,m)",
"_____no_output_____"
],
[
"i=np.arange(1,len(vel))\nar=[]\nbr=[]\nar1=[]\nbr1=[]\nar2=[]\nbr2=[]\nm=[]\nfor p in range(1,len(vel)):\n ar.append(est_state[p][0])\n br.append(est_state[p][3]) \n ar1.append(pred_state[p][0])\n br1.append(pred_state[p][3])\n ar2.append(pos[p][0])\n br2.append(pos[p][1]) \n# m.append(pos[p][0])\n# pred_state[2][0]\nplt.plot(ar,br,color='blue')\nplt.plot(ar1,br1,color='orange')\nplt.plot(ar2,br2,color='red')\n# plt.plot(i,br)\n# plt.plot(i,m)",
"_____no_output_____"
],
[
"i=np.arange(1,len(vel))\nar=[]\nbr=[]\nm=[]\nfor p in range(1,len(vel)):\n ar.append(est_state[p][1])\n br.append(pred_state[p][1])\n m.append(vel[p][0])\n \nplt.plot(i,ar,color='blue')\nplt.plot(i,br,color='orange')\nplt.plot(i,m,color='red')\n",
"_____no_output_____"
],
[
"i=np.arange(1,len(vel))\nar=[]\nbr=[]\nm=[]\nfor p in range(1,len(vel)):\n ar.append(est_state[p][4])\n br.append(pred_state[p][4])\n m.append(vel[p][1])\n \nplt.plot(i,ar,color='blue')\nplt.plot(i,br,color='orange')\nplt.plot(i,m,color='red')",
"_____no_output_____"
],
[
"np.std(pred_state[:115][0])",
"_____no_output_____"
],
[
"temp=vel[10:20,1]\ntemp",
"_____no_output_____"
],
[
"np.std(temp)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc6314369cb0383b5d48fa885fddfabd2eaaa1f | 2,658 | ipynb | Jupyter Notebook | Week 2/HW2_1.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | Week 2/HW2_1.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | Week 2/HW2_1.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | 20.929134 | 385 | 0.555305 | [
[
[
"Created by: Drishika Nadella\n\nDate: 03 March 2021",
"_____no_output_____"
]
],
[
[
"def fact(x):\n if x>1:\n return x*fact(x-1)\n else:\n return x",
"_____no_output_____"
],
[
"num = int(input(\"Enter the number whose factorial you would like: \"))\nprint(fact(num))",
"Enter the number whose factorial you would like: 20\n2432902008176640000\n"
],
[
"print(fact(200))",
"788657867364790503552363213932185062295135977687173263294742533244359449963403342920304284011984623904177212138919638830257642790242637105061926624952829931113462857270763317237396988943922445621451664240254033291864131227428294853277524242407573903240321257405579568660226031904170324062351700858796178922222789623703897374720000000000000000000000000000000000000000000000000\n"
],
[
"real = float(input(\"Enter the number whose factorial you would like: \"))\nprint(fact(real))",
"Enter the number whose factorial you would like: 10\n3628800.0\n"
],
[
"print(fact(200.0))",
"inf\n"
]
],
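[
 [
  "# Added editorial check (not part of the original notebook): the overflow explained in the next cell\n# can be confirmed by comparing 200! (an exact Python int) with the largest double-precision float.\nimport sys\nfrom math import factorial\nprint(sys.float_info.max)\nprint(factorial(200) > sys.float_info.max)",
  "_____no_output_____"
 ]
],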
[
[
"The reason why it shows as 'inf' is because 200! is beyond the limit of floating points.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecc634b4ae78b83634f9b77711b6169da6bb0b53 | 8,641 | ipynb | Jupyter Notebook | notebooks/Pywedge_Classification.ipynb | taknev83/pywedge_classification_demo | afe0a2fa0528cd209c70c106053957d144109a1a | [
"Apache-2.0"
] | null | null | null | notebooks/Pywedge_Classification.ipynb | taknev83/pywedge_classification_demo | afe0a2fa0528cd209c70c106053957d144109a1a | [
"Apache-2.0"
] | null | null | null | notebooks/Pywedge_Classification.ipynb | taknev83/pywedge_classification_demo | afe0a2fa0528cd209c70c106053957d144109a1a | [
"Apache-2.0"
] | 1 | 2021-02-10T18:03:05.000Z | 2021-02-10T18:03:05.000Z | 22.099744 | 249 | 0.49242 | [
[
[
"%%HTML\n\n<html>\n<head>\n<style>\nh1 {\n text-shadow: 2px 2px 5px lightblue;\n}\n</style>\n</head>\n<body>\n\n<h1><center></center</h1>\n<h1><center>Pywedge - Classification Demo</center</h1>\n\n</body>\n</html>\n",
"_____no_output_____"
]
],
[
[
"### > Interactive Charts\n### > Interactive Baseline Models\n### > Interactive Hyperparameter Tuning\n### > Track Hyperparameters on-the-go!\n\n### [Git Hub](https://github.com/taknev83/pywedge) | [Docs](https://taknev83.github.io/pywedge-docs/) | [PyPi](https://pypi.org/project/pywedge/)\n\n#### Dataset - Subset of Bank Marketing Data Set (Classification)\n#### Source: https://archive.ics.uci.edu/ml/datasets/Bank+Marketing",
"_____no_output_____"
]
],
[
[
"import pywedge as pw",
"_____no_output_____"
],
[
"import pandas as pd\ntrain = pd.read_csv('https://raw.githubusercontent.com/taknev83/datasets/master/bank_train.csv')\ntest = pd.read_csv('https://raw.githubusercontent.com/taknev83/datasets/master/bank_test.csv')",
"_____no_output_____"
],
[
"mc = pw.Pywedge_Charts(train, c='Unnamed: 0', y = 'y')",
"_____no_output_____"
],
[
"charts = mc.make_charts()",
"_____no_output_____"
],
[
"blm = pw.baseline_model(train, test, c='Unnamed: 0', y='y')",
"_____no_output_____"
],
[
"blm.classification_summary()",
"_____no_output_____"
],
[
"pph = pw.Pywedge_HP(train, test, c='Unnamed: 0', y='y')",
"_____no_output_____"
],
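[
 "# Added editorial sketch (not part of the original notebook): per the HP_Tune user guide at the end of\n# this notebook, hyperparameter runs can be tracked by passing tracking=True when building Pywedge_HP.\n# The exact keyword placement is taken from the pywedge docs linked above; treat it as an assumption of\n# this sketch rather than something exercised in this demo.\npph_tracked = pw.Pywedge_HP(train, test, c='Unnamed: 0', y='y', tracking=True)",
 "_____no_output_____"
],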
[
"pph.HP_Tune_Classification()",
"_____no_output_____"
]
],
[
[
"### HP_Tune - User Guide:\n\n* Please enter multiple numerical values as comma seperates values, for eg. in Decision Tree Hyperpameter search space, multiple Max_Depth can be entered as 5, 10, 15 (Use of numpy notation is not yet supported)\n* Use ctrl + click to select multiple \n* Use n_jobs as -1 for faster hyperparameter search\n* The helper page tab provides the relavent estimator's web page for quick reference\n* Hyperparameters can be tracked on-the-go by passing tracking=True & invoke MLFlow user interface from command prompt, more details in the [docs page](https://taknev83.github.io/pywedge-docs/modules/#pywedge_interactive-hyperparameter-tuning)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecc63abb7289f2c36c5482eb20ef19109eeb1fcb | 342,753 | ipynb | Jupyter Notebook | .ipynb_checkpoints/WrodInfoTemplateDisplay-checkpoint.ipynb | hedgeli/SimpleOffLineASR_Demo_sw | 2bc6e5125d4d88f6e60c0c7810dce92cbb1c3f13 | [
"MIT"
] | 9 | 2020-07-02T01:49:43.000Z | 2021-07-24T11:07:58.000Z | .ipynb_checkpoints/WrodInfoTemplateDisplay-checkpoint.ipynb | hedgeli/SimpleOffLineASR_Demo_sw | 2bc6e5125d4d88f6e60c0c7810dce92cbb1c3f13 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/WrodInfoTemplateDisplay-checkpoint.ipynb | hedgeli/SimpleOffLineASR_Demo_sw | 2bc6e5125d4d88f6e60c0c7810dce92cbb1c3f13 | [
"MIT"
] | 4 | 2020-07-14T02:50:19.000Z | 2021-06-23T10:04:55.000Z | 1,127.476974 | 265,936 | 0.955592 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport librosa\nimport librosa.display\nimport scipy\nimport matplotlib\n# import spectrogram_sw as spec \n\nnp.seterr(divide='ignore', invalid='ignore')\n\n# 预设字体格式,并传给rc方法\nfont = {'family': 'SimHei', \"size\": 10}\nmatplotlib.rc('font', **font)\nmatplotlib.rcParams['axes.unicode_minus'] = False\n\nnp.set_printoptions(precision=2)",
"_____no_output_____"
],
[
"para_file = \"./ResFiles/Wav/Word_voice_tmptWordPara.npz\"\n\npars_pc = np.load(para_file)\nprint(pars_pc.files)\nprint(pars_pc[\"arr_0\"].shape)",
"['arr_0']\n(24,)\n"
],
[
"print(pars_pc['arr_0'][1]['specgram'].shape)\nplt.figure(figsize=(10,10))\nfor i in range(3):\n for j in range(4):\n plt.subplot(3,4,i*4 + j+1)\n plt.imshow(pars_pc['arr_0'][i*4 + j]['specgram'], \n origin='lower')\n plt.title(pars_pc['arr_0'][i*4 + j]['word'])\nplt.show()",
"(80, 80)\n"
],
[
"plt.figure(figsize=(10,10))\nfor i in range(3):\n for j in range(4):\n plt.subplot(3,4,i*4 + j+1)\n plt.imshow(pars_pc['arr_0'][i*4 + j]['spec_zoom_out'], \n origin='lower')\n plt.title(pars_pc['arr_0'][i*4 + j]['word'])\nplt.show()",
"_____no_output_____"
],
[
"para_file = \"./ResFiles/Wav/mic_开机关机启动停止前进后退左转右转提高降低加速减速WordPara.npz\"\n\npars_mic = np.load(para_file)\nprint(pars_mic.files)\nprint(pars_mic[\"arr_0\"].shape)",
"['arr_0']\n(24,)\n"
],
[
"plt.figure(figsize=(10,10))\nfor i in range(3):\n for j in range(4):\n plt.subplot(3,4,i*4 + j+1)\n plt.imshow(pars_mic['arr_0'][i*4 + j]['spec_zoom_out'], \n origin='lower')\n plt.title(pars_mic['arr_0'][i*4 + j]['word'])\nplt.show()",
"_____no_output_____"
],
[
"def sub_abs_dhash(arr1, arr2):\n arr = arr1 - arr2\n _v, _w = arr.shape\n if _v * _w > 0:\n _sum = np.sum(abs(arr))/(_v*_w)\n return _sum\n else:\n return 1.0\n\ndef cal_dhash_diff_rate(arr1, arr2, phoneme=2):\n _v1, _w1 = arr1.shape\n _v2, _w2 = arr2.shape\n if _v1 != _v2:\n print('vertical of cal_dhash_diff_rate(arr1, arr2) must be same')\n return []\n if _w1 == _w2:\n _phoneme_width = _w1//2\n _out1 = sub_abs_dhash(arr1[0:_phoneme_width], \n arr2[0:_phoneme_width])\n _out2 = sub_abs_dhash(arr1[_phoneme_width:], \n arr2[_phoneme_width:])\n# return max(_out1, _out2) # 不同度 \n return (_out1+_out2)/2\n if _w1 > _w2:\n _d = _w1 - _w2\n t_val = np.zeros(_d+1)\n for k in range(_d+1):\n t_val[k] = cal_dhash_diff_rate(arr1[:, k:k+_w2], arr2[:, 0:_w2])\n return np.min(t_val)\n else:\n _d = _w2 - _w1\n t_val = np.zeros(_d+1)\n for k in range(_d+1):\n t_val[k] = cal_dhash_diff_rate(arr1[:, 0:_w1], arr2[:, k:k+_w1])\n return np.min(t_val)\n \nsimi_arr = np.zeros((12,12))\nfor i in range(12):\n for j in range(12):\n _wi = pars_pc['arr_0'][i]['_zoom_width']\n _wj = pars_mic['arr_0'][j]['_zoom_width']\n simi_arr[i,j] = cal_dhash_diff_rate(pars_pc['arr_0'][i]['spec_dhash'][:,0:_wi],\n pars_mic['arr_0'][j]['spec_dhash'][:,0:_wj])\n \nplt.imshow(simi_arr)\nplt.show()\n\nprint(simi_arr)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc647af226bb0c749c02b6e76a2c013afe7900d | 10,999 | ipynb | Jupyter Notebook | example/random_request_network.ipynb | viachesl/SeQUeNCe | 6ee144d1b0af9f3208d329d1019f3d29a4dc0485 | [
"BSD-3-Clause"
] | null | null | null | example/random_request_network.ipynb | viachesl/SeQUeNCe | 6ee144d1b0af9f3208d329d1019f3d29a4dc0485 | [
"BSD-3-Clause"
] | null | null | null | example/random_request_network.ipynb | viachesl/SeQUeNCe | 6ee144d1b0af9f3208d329d1019f3d29a4dc0485 | [
"BSD-3-Clause"
] | null | null | null | 42.303846 | 443 | 0.626966 | [
[
[
"# Network with Applications\n\nIn this file, we'll demonstrate the simulation of a more complicated network topology with randomized applications. These applications will act on each node, first choosing a random other node from the network and then requesting a random number of entangled pairs between the local and distant nodes. The network topology, including hardware components, is shown below:\n\n<img src=\"./notebook_images/star_network.png\" width=\"700\"/>",
"_____no_output_____"
],
[
"## Example\n\nIn this example, we construct the network described above and add the random request app included in SeQUeNCe. We'll be building the topology from an external json file `star_network.json`.\n\n### Imports\nWe must first import the necessary tools from SeQUeNCe.\n- `Timeline` is the main simulation tool, providing an interface for the discrete-event simulation kernel.\n- `Topology` is a powerful class for creating and managing complex network topologies. We'll be using it to build our network and intefrace with specific nodes and node types.\n- `RandomRequestApp` is an example application included with SeQUeNCe. We will investigate its behavior when we add applications to our network.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom ipywidgets import interact\nimport time",
"_____no_output_____"
],
[
"from sequence.kernel.timeline import Timeline\nfrom sequence.topology.topology import Topology\nfrom sequence.app.random_request import RandomRequestApp",
"_____no_output_____"
]
],
[
[
"### Building the Simulation\n\nWe'll now construct the network and add our applications. This example follows the usual process to ensure that all tools function properly:\n1. Create the timeline for the simulation\n2. Create the simulated network topology. In this case, we are using an external JSON file to specify nodes and their connectivity.\n - This includes specifying hardware parameters in the `set_parameters` function, defined later\n3. Install custom protocols/applications and ensure all are set up properly\n4. Initialize and run the simulation\n5. Collect and display the desired metrics\n\nThe JSON file specifies that network nodes should be of type `QuantumRouter`, a node type defined by SeQUeNCe. This will automatically create all necessary hardware and protocol instances on the nodes, and the `Topology` class will automatically generate `BSMNode` instances between such nodes.\n\nTo construct an application, we need:\n- The node to attach the application to\n- The names (given as strings) of other possible nodes to generate links with\n- A seed for the internal random number generator of the application\n\nWe can get a list of all desired application nodes, in this case routers, from the `Topology` class with the `get_nodes_by_type` method. We then set an application on each one, with the other possible connections being every other node in the network. We also give a unique random seed `i` to each application.",
"_____no_output_____"
]
],
[
[
"def test(sim_time, qc_atten):\n \"\"\"\n sim_time: duration of simulation time (ms)\n qc_atten: quantum channel attenuation (dB/km)\n \"\"\"\n network_config = \"star_network.json\"\n \n tl = Timeline(sim_time * 1e9)\n tl.seed(0)\n\n network_topo = Topology(\"network_topo\", tl)\n network_topo.load_config(network_config)\n \n set_parameters(network_topo, qc_atten)\n \n # construct random request applications\n node_names = [node.name for node in network_topo.get_nodes_by_type(\"QuantumRouter\")]\n apps = []\n for i, name in enumerate(node_names):\n other_nodes = node_names[:] # copy node name list\n other_nodes.remove(name)\n app = RandomRequestApp(network_topo.nodes[name], other_nodes, i)\n apps.append(app)\n app.start()\n \n tl.init()\n tick = time.time()\n tl.run()\n print(\"execution time %.2f sec\" % (time.time() - tick))\n \n for app in apps:\n print(\"node \" + app.node.name)\n print(\"\\tnumber of wait times: \", len(app.get_wait_time()))\n print(\"\\twait times:\", app.get_wait_time())\n print(\"\\treservations: \", app.reserves)\n print(\"\\tthroughput: \", app.get_throughput())\n \n print(\"\\nReservations Table:\\n\")\n node_names = []\n start_times = []\n end_times = []\n memory_sizes = []\n for node in network_topo.get_nodes_by_type(\"QuantumRouter\"):\n node_name = node.name\n for reservation in node.network_manager.protocol_stack[1].accepted_reservation:\n s_t, e_t, size = reservation.start_time, reservation.end_time, reservation.memory_size\n if reservation.initiator != node.name and reservation.responder != node.name:\n size *= 2\n node_names.append(node_name)\n start_times.append(s_t)\n end_times.append(e_t)\n memory_sizes.append(size)\n log = {\"Node\": node_names, \"Start_time\": start_times, \"End_time\": end_times, \"Memory_size\": memory_sizes}\n df = pd.DataFrame(log)\n print(df)",
"_____no_output_____"
]
],
[
[
"### Setting parameters\n\nHere we define the `set_parameters` function we used earlier. This function will take a `Topology` as input and change many parameters to desired values.\n\nQuantum memories and detectors are hardware elements, and so parameters are changed by accessing the hardware included with the `QuantumRouter` and `BSMNode` node types. Many complex hardware elements, such as bsm devices or memory arrays, have methods to update parameters for all included hardware elements. This includes `update_memory_params` to change all memories in an array or `update_detector_params` to change all detectors.\n\nWe will also set the success probability and swapping degradation of the entanglement swapping protocol. This will be set in the Network management Module (specifically the reservation protocol), as this information is necessary to create and manage the rules for the Resource Management module.\n\nLastly, we'll update some parameters of the quantum channels. Quantum channels (and, similarly, classical channels) can be accessed from the `Topology` class as the `qchannels` field. Since these are individual hardware elements, we will set the parameters directly.",
"_____no_output_____"
]
],
[
[
"def set_parameters(topology, attenuation):\n # set memory parameters\n MEMO_FREQ = 2e3\n MEMO_EXPIRE = 0\n MEMO_EFFICIENCY = 1\n MEMO_FIDELITY = 0.9349367588934053\n for node in topology.get_nodes_by_type(\"QuantumRouter\"):\n node.memory_array.update_memory_params(\"frequency\", MEMO_FREQ)\n node.memory_array.update_memory_params(\"coherence_time\", MEMO_EXPIRE)\n node.memory_array.update_memory_params(\"efficiency\", MEMO_EFFICIENCY)\n node.memory_array.update_memory_params(\"raw_fidelity\", MEMO_FIDELITY)\n\n # set detector parameters\n DETECTOR_EFFICIENCY = 0.9\n DETECTOR_COUNT_RATE = 5e7\n DETECTOR_RESOLUTION = 100\n for node in topology.get_nodes_by_type(\"BSMNode\"):\n node.bsm.update_detectors_params(\"efficiency\", DETECTOR_EFFICIENCY)\n node.bsm.update_detectors_params(\"count_rate\", DETECTOR_COUNT_RATE)\n node.bsm.update_detectors_params(\"time_resolution\", DETECTOR_RESOLUTION)\n \n # set entanglement swapping parameters\n SWAP_SUCC_PROB = 0.90\n SWAP_DEGRADATION = 0.99\n for node in topology.get_nodes_by_type(\"QuantumRouter\"):\n node.network_manager.protocol_stack[1].set_swapping_success_rate(SWAP_SUCC_PROB)\n node.network_manager.protocol_stack[1].set_swapping_degradation(SWAP_DEGRADATION)\n \n # set quantum channel parameters\n ATTENUATION = attenuation\n QC_FREQ = 1e11\n for qc in topology.qchannels:\n qc.attenuation = ATTENUATION\n qc.frequency = QC_FREQ",
"_____no_output_____"
]
],
[
[
"### Running the Simulation\n\nAll that is left is to run the simulation with user input. Note that different hardware parameters or network topologies may cause the simulation to run for a very long time.",
"_____no_output_____"
]
],
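[
 [
  "# Added editorial sketch (not part of the original notebook): the next cell runs the experiment through\n# an ipywidgets control. If widgets are unavailable, the same sweep can be scripted directly with the\n# test() function defined above; the values below are example inputs only, and the call is commented out\n# because a full run can take a long time.\nfor atten in [0.0, 1e-5, 2e-5]:\n    print('would run: test(sim_time=1000, qc_atten=' + str(atten) + ')')\n    # test(sim_time=1000, qc_atten=atten)",
  "_____no_output_____"
 ]
],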
[
[
"interact(test, sim_time=50e3, qc_atten=[0, 1e-5, 2e-5])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc64806d5a1a933c12a6a63b85595f9c5b0cbdb | 289,658 | ipynb | Jupyter Notebook | notebook/doc2vec.ipynb | tkosht/wikiencoder | c1744e60e902949e1926c9efe0c24eb3ac5f00fd | [
"MIT"
] | null | null | null | notebook/doc2vec.ipynb | tkosht/wikiencoder | c1744e60e902949e1926c9efe0c24eb3ac5f00fd | [
"MIT"
] | null | null | null | notebook/doc2vec.ipynb | tkosht/wikiencoder | c1744e60e902949e1926c9efe0c24eb3ac5f00fd | [
"MIT"
] | null | null | null | 29.647697 | 47,286 | 0.314267 | [
[
[
"import sys\nsys.path.append(\".\")",
"_____no_output_____"
],
[
"import os\nos.getcwd()\nos.chdir(\"../\")",
"_____no_output_____"
],
[
"from gensim.models.doc2vec import Doc2Vec, TaggedDocument\nfrom project.wikitext import WikiTextLoader\nfrom project.vectorize import Vectorize",
"_____no_output_____"
],
[
"v = Vectorize(indir=\"data/tests/parsed\", batch_size=32)\nv.run()\n",
"_____no_output_____"
],
[
"from allennlp.data.tokenizers.word_tokenizer import SpacyWordSplitter\nimport spacy\nspacy.load('en_core_web_sm')",
"_____no_output_____"
],
[
"tokenizer = SpacyWordSplitter(pos_tags=True)\nsentence = tokenizer.split_words(\"This is a sentence.\")\nsentence\nword = sentence[0]\nstr(word)",
"_____no_output_____"
],
[
"import nltk\nnltk.download('punkt')",
"[nltk_data] Downloading package punkt to /home/take/nltk_data...\n[nltk_data] Unzipping tokenizers/punkt.zip.\n"
],
[
"from nltk import tokenize",
"_____no_output_____"
],
[
"wt = WikiTextLoader(\"data/tests/parsed\", batch_size=32, with_title=False, residual=True)",
"_____no_output_____"
],
[
"# tagdocs = [\n# TaggedDocument([tokenizer.split_words(s) for s in tokenize.sent_tokenize(doctxt)], [f\"{m:03d}-{k:03d}\"])\n# for m, (docs, paths) in enumerate(wt.iter())\n# for k, (doctxt, path) in enumerate(zip(docs, paths))\n# ]\ntagdocs = [\n TaggedDocument(doc, [f\"{m:03d}-{k:03d}\"])\n for m, (docs, paths) in enumerate(wt.iter())\n for k, (doc, path) in enumerate(zip(docs, paths))\n]\n\ntagdocs",
"_____no_output_____"
],
[
"td = tagdocs[0]\ntd.words",
"_____no_output_____"
],
[
"from functools import wraps\nimport time\ndef timer(func) :\n @wraps(func)\n def wrapper(*args, **kargs) :\n start = time.time()\n result = func(*args,**kargs)\n process_time = time.time() - start\n print(f\"elapsed time: {process_time} in {func.__name__}\")\n return result\n return wrapper\n",
"_____no_output_____"
],
[
"wt = WikiTextLoader(\"data/tests/parsed\", batch_size=32, do_tokenize=False, with_title=False, residual=True)\ndocs, paths = next(wt.iter())",
"_____no_output_____"
],
[
"from nltk import tokenize\n@timer\ndef nltk_word_toknize(docs):\n for _ in range(10):\n words = [\n [tokenize.word_tokenize(s) for s in tokenize.sent_tokenize(doctxt)]\n for doctxt in docs\n ]\nnltk_word_toknize(docs)",
"elapsed time: 0.7223348617553711 in nltk_word_toknize\n"
],
[
"spacy_tokenizer = SpacyWordSplitter(pos_tags=True)\n\n@timer\ndef spacy_word_toknize(docs):\n for _ in range(10):\n words = [\n [spacy_tokenizer.split_words(s) for s in tokenize.sent_tokenize(doctxt)]\n for doctxt in docs\n ]\nspacy_word_toknize(docs)",
"elapsed time: 14.493231534957886 in spacy_word_toknize\n"
],
[
"wt = WikiTextLoader(\"data/tests/parsed\", batch_size=32, do_tokenize=True, with_title=False, residual=True)\ndocs, paths = next(wt.iter())",
"_____no_output_____"
],
[
"docs",
"_____no_output_____"
],
[
"from itertools import chain\n@timer\ndef tagdocs(docs):\n for k in range(10):\n tagged = [\n TaggedDocument(list(chain.from_iterable(doc)), [f\"{k:03d}\"])\n for doc in docs\n ]\n return tagged\ntagged = tagdocs(docs)",
"elapsed time: 0.0014331340789794922 in tagdocs\n"
],
[
"m = Doc2Vec(tagged, vector_size=128, window=3, min_count=1,\n dm=1, hs=0, negative=10, dbow_words=1,\n workers=4)",
"_____no_output_____"
],
[
"m.wv.vocab.keys()",
"_____no_output_____"
],
[
"tg = tagged[0]\ntg.words[0]\n",
"_____no_output_____"
],
[
"from itertools import chain\nlist(chain.from_iterable(tg.words))\n",
"_____no_output_____"
],
[
"import nltk\nnltk.download('stopwords')\n",
"[nltk_data] Downloading package stopwords to /home/take/nltk_data...\n[nltk_data] Unzipping corpora/stopwords.zip.\n"
],
[
"import string\nfrom nltk.corpus import stopwords\npunc = set(list(string.punctuation))\nsw = set(stopwords.words(\"english\"))\nsw = sw.union(punc)\nsw",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc64d59a57506130cfeaf040425f1544879f618 | 109,946 | ipynb | Jupyter Notebook | MachineLearning_projects/tencentAdCtrPredict/3.feature_engineering_and_machine_learning.ipynb | zengqingxue/ML_cases | ca460838422ef93d6b3739bdf3ba2938b94870e3 | [
"Apache-2.0"
] | null | null | null | MachineLearning_projects/tencentAdCtrPredict/3.feature_engineering_and_machine_learning.ipynb | zengqingxue/ML_cases | ca460838422ef93d6b3739bdf3ba2938b94870e3 | [
"Apache-2.0"
] | null | null | null | MachineLearning_projects/tencentAdCtrPredict/3.feature_engineering_and_machine_learning.ipynb | zengqingxue/ML_cases | ca460838422ef93d6b3739bdf3ba2938b94870e3 | [
"Apache-2.0"
] | null | null | null | 47.513397 | 27,552 | 0.571772 | [
[
[
"## 特征工程与机器学习建模",
"_____no_output_____"
],
[
"### 自定义工具函数库",
"_____no_output_____"
]
],
[
[
"#coding=utf-8\nimport pandas as pd\nimport numpy as np\nimport scipy as sp\n\n\n#文件读取\ndef read_csv_file(f,logging=False):\n print(\"============================读取数据========================\",f) \n print(\"======================我是萌萌哒分界线========================\")\n data = pd.read_csv(f)\n if logging:\n print(data.head(5))\n print( f,\"包含以下列....\")\n print(data.columns.values)\n print( data.describe())\n print( data.info())\n return data\n\n#第一类编码\ndef categories_process_first_class(cate):\n cate = str(cate)\n if len(cate)==1:\n if int(cate)==0:\n return 0\n else:\n return int(cate[0])\n\n#第2类编码\ndef categories_process_second_class(cate):\n cate = str(cate)\n if len(cate)<3:\n return 0\n else:\n return int(cate[1:])\n\n#年龄处理,切段\ndef age_process(age):\n age = int(age)\n if age==0:\n return 0\n elif age<15:\n return 1\n elif age<25:\n return 2\n elif age<40:\n return 3\n elif age<60:\n return 4\n else:\n return 5\n\n#省份处理\ndef process_province(hometown):\n hometown = str(hometown)\n province = int(hometown[0:2])\n return province\n\n#城市处理\ndef process_city(hometown):\n hometown = str(hometown)\n if len(hometown)>1:\n province = int(hometown[2:])\n else:\n province = 0\n return province\n\n#几点钟\ndef get_time_day(t):\n t = str(t)\n t=int(t[0:2])\n return t\n\n#一天切成4段\ndef get_time_hour(t):\n t = str(t)\n t=int(t[2:4])\n if t<6:\n return 0\n elif t<12:\n return 1\n elif t<18:\n return 2\n else:\n return 3\n\n#评估与计算logloss\ndef logloss(act, pred):\n epsilon = 1e-15\n pred = sp.maximum(epsilon, pred)\n pred = sp.minimum(1-epsilon, pred)\n ll = sum(act*sp.log(pred) + sp.subtract(1,act)*sp.log(sp.subtract(1,pred)))\n ll = ll * -1.0/len(act)\n return ll",
"_____no_output_____"
]
],
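[
 [
  "# Added editorial sanity check (not part of the original notebook): quick examples of a few helper\n# functions defined above, using made-up inputs.\nprint(age_process(23))          # ages in [15, 25) map to bucket 2\nprint(get_time_day('170530'))   # clickTime string 170530 -> day 17\nprint(get_time_hour('170530'))  # hour 05 falls in the 00:00-06:00 bucket -> 0",
  "_____no_output_____"
 ]
],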
[
[
"### 特征工程+随机森林建模",
"_____no_output_____"
],
[
"#### import 库",
"_____no_output_____"
]
],
[
[
"#coding=utf-8\nfrom sklearn.preprocessing import Binarizer\nfrom sklearn.preprocessing import MinMaxScaler\nimport pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"#### 读取train_data和ad\n#### 特征工程",
"_____no_output_____"
]
],
[
[
"data_root = \"E:/E_datas/tencentCvrConvertPrediton_7_8\"\n#['label' 'clickTime' 'conversionTime' 'creativeID' 'userID' 'positionID' 'connectionType' 'telecomsOperator']\ntrain_data = read_csv_file(data_root+'/train.csv',logging=True)\n\n#['creativeID' 'adID' 'camgaignID' 'advertiserID' 'appID' 'appPlatform']\nad = read_csv_file(data_root+'/ad.csv',logging=True)",
"============================读取数据======================== E:/E_datas/tencentCvrConvertPrediton_7_8/train.csv\n======================我是萌萌哒分界线========================\n label clickTime conversionTime creativeID userID positionID \\\n0 0 170000 NaN 3089 2798058 293 \n1 0 170000 NaN 1259 463234 6161 \n2 0 170000 NaN 4465 1857485 7434 \n3 0 170000 NaN 1004 2038823 977 \n4 0 170000 NaN 1887 2015141 3688 \n\n connectionType telecomsOperator \n0 1 1 \n1 1 2 \n2 4 1 \n3 1 1 \n4 1 1 \nE:/E_datas/tencentCvrConvertPrediton_7_8/train.csv 包含以下列....\n['label' 'clickTime' 'conversionTime' 'creativeID' 'userID' 'positionID'\n 'connectionType' 'telecomsOperator']\n label clickTime conversionTime creativeID userID \\\ncount 3.749528e+06 3.749528e+06 93262.000000 3.749528e+06 3.749528e+06 \nmean 2.487300e-02 2.418317e+05 242645.358013 3.261575e+03 1.405349e+06 \nstd 1.557380e-01 3.958793e+04 39285.385532 1.829643e+03 8.088094e+05 \nmin 0.000000e+00 1.700000e+05 170005.000000 1.000000e+00 1.000000e+00 \n25% 0.000000e+00 2.116270e+05 211626.000000 1.540000e+03 7.058698e+05 \n50% 0.000000e+00 2.418390e+05 242106.000000 3.465000e+03 1.407062e+06 \n75% 0.000000e+00 2.722170e+05 272344.000000 4.565000e+03 2.105989e+06 \nmax 1.000000e+00 3.023590e+05 302359.000000 6.582000e+03 2.805118e+06 \n\n positionID connectionType telecomsOperator \ncount 3.749528e+06 3.749528e+06 3.749528e+06 \nmean 3.702799e+03 1.222590e+00 1.605879e+00 \nstd 1.923724e+03 5.744428e-01 8.491127e-01 \nmin 1.000000e+00 0.000000e+00 0.000000e+00 \n25% 2.579000e+03 1.000000e+00 1.000000e+00 \n50% 3.322000e+03 1.000000e+00 1.000000e+00 \n75% 4.896000e+03 1.000000e+00 2.000000e+00 \nmax 7.645000e+03 4.000000e+00 3.000000e+00 \n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 3749528 entries, 0 to 3749527\nData columns (total 8 columns):\n # Column Dtype \n--- ------ ----- \n 0 label int64 \n 1 clickTime int64 \n 2 conversionTime float64\n 3 creativeID int64 \n 4 userID int64 \n 5 positionID int64 \n 6 connectionType int64 \n 7 telecomsOperator int64 \ndtypes: float64(1), int64(7)\nmemory usage: 228.9 MB\nNone\n============================读取数据======================== E:/E_datas/tencentCvrConvertPrediton_7_8/ad.csv\n======================我是萌萌哒分界线========================\n creativeID adID camgaignID advertiserID appID appPlatform\n0 4079 2318 147 80 14 2\n1 4565 3593 632 3 465 1\n2 3170 1593 205 54 389 1\n3 6566 2390 205 54 389 1\n4 5187 411 564 3 465 1\nE:/E_datas/tencentCvrConvertPrediton_7_8/ad.csv 包含以下列....\n['creativeID' 'adID' 'camgaignID' 'advertiserID' 'appID' 'appPlatform']\n creativeID adID camgaignID advertiserID appID \\\ncount 6582.000000 6582.000000 6582.000000 6582.000000 6582.000000 \nmean 3291.500000 1786.341689 313.397144 44.381191 310.805682 \nstd 1900.204068 1045.890729 210.636055 24.091342 125.577377 \nmin 1.000000 1.000000 1.000000 1.000000 14.000000 \n25% 1646.250000 882.250000 131.000000 26.000000 205.000000 \n50% 3291.500000 1771.000000 274.000000 54.000000 389.000000 \n75% 4936.750000 2698.750000 512.000000 57.000000 421.000000 \nmax 6582.000000 3616.000000 720.000000 91.000000 472.000000 \n\n appPlatform \ncount 6582.000000 \nmean 1.448952 \nstd 0.497425 \nmin 1.000000 \n25% 1.000000 \n50% 1.000000 \n75% 2.000000 \nmax 2.000000 \n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6582 entries, 0 to 6581\nData columns (total 6 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 creativeID 6582 non-null int64\n 1 adID 6582 non-null int64\n 2 camgaignID 6582 non-null int64\n 3 advertiserID 6582 
non-null int64\n 4 appID 6582 non-null int64\n 5 appPlatform 6582 non-null int64\ndtypes: int64(6)\nmemory usage: 308.7 KB\nNone\n"
],
[
"#app\napp_categories = read_csv_file(data_root+'/app_categories.csv',logging=True)\napp_categories[\"app_categories_first_class\"] = app_categories['appCategory'].apply(categories_process_first_class)\napp_categories[\"app_categories_second_class\"] = app_categories['appCategory'].apply(categories_process_second_class)",
"============================读取数据======================== E:/E_datas/tencentCvrConvertPrediton_7_8/app_categories.csv\n======================我是萌萌哒分界线========================\n appID appCategory\n0 14 2\n1 25 203\n2 68 104\n3 75 402\n4 83 203\nE:/E_datas/tencentCvrConvertPrediton_7_8/app_categories.csv 包含以下列....\n['appID' 'appCategory']\n appID appCategory\ncount 217041.000000 217041.000000\nmean 137220.306472 161.856133\nstd 105340.872671 157.746571\nmin 14.000000 0.000000\n25% 54585.000000 0.000000\n50% 111520.000000 106.000000\n75% 195882.000000 301.000000\nmax 433269.000000 503.000000\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 217041 entries, 0 to 217040\nData columns (total 2 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 appID 217041 non-null int64\n 1 appCategory 217041 non-null int64\ndtypes: int64(2)\nmemory usage: 3.3 MB\nNone\n"
],
[
"app_categories.head()",
"_____no_output_____"
],
[
"user = read_csv_file(data_root+'/user.csv',logging=False)",
"============================读取数据======================== E:/E_datas/tencentCvrConvertPrediton_7_8/user.csv\n======================我是萌萌哒分界线========================\n"
],
[
"user.columns",
"_____no_output_____"
],
[
"user[user.age!=0].describe()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nuser.age.value_counts()",
"_____no_output_____"
],
[
"#user\nuser = read_csv_file(data_root+'/user.csv',logging=True)\nuser['age_process'] = user['age'].apply(age_process)\nuser[\"hometown_province\"] = user['hometown'].apply(process_province)\nuser[\"hometown_city\"] = user['hometown'].apply(process_city)\nuser[\"residence_province\"] = user['residence'].apply(process_province)\nuser[\"residence_city\"] = user['residence'].apply(process_city)",
"============================读取数据======================== E:/E_datas/tencentCvrConvertPrediton_7_8/user.csv\n======================我是萌萌哒分界线========================\n userID age gender education marriageStatus haveBaby hometown \\\n0 1 42 1 0 2 0 512 \n1 2 18 1 5 1 0 1403 \n2 3 0 2 4 0 0 0 \n3 4 21 2 5 3 0 607 \n4 5 22 2 0 0 0 0 \n\n residence \n0 503 \n1 1403 \n2 0 \n3 607 \n4 1301 \nE:/E_datas/tencentCvrConvertPrediton_7_8/user.csv 包含以下列....\n['userID' 'age' 'gender' 'education' 'marriageStatus' 'haveBaby'\n 'hometown' 'residence']\n userID age gender education marriageStatus \\\ncount 2.805118e+06 2.805118e+06 2.805118e+06 2.805118e+06 2.805118e+06 \nmean 1.402560e+06 2.038662e+01 1.294072e+00 1.889235e+00 9.870540e-01 \nstd 8.097680e+05 1.151120e+01 6.409864e-01 1.607085e+00 9.621890e-01 \nmin 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 \n25% 7.012802e+05 1.400000e+01 1.000000e+00 1.000000e+00 0.000000e+00 \n50% 1.402560e+06 2.000000e+01 1.000000e+00 2.000000e+00 1.000000e+00 \n75% 2.103839e+06 2.700000e+01 2.000000e+00 3.000000e+00 2.000000e+00 \nmax 2.805118e+06 8.000000e+01 2.000000e+00 7.000000e+00 3.000000e+00 \n\n haveBaby hometown residence \ncount 2.805118e+06 2.805118e+06 2.805118e+06 \nmean 2.848418e-01 6.750372e+02 9.571084e+02 \nstd 7.800834e-01 7.691699e+02 7.897154e+02 \nmin 0.000000e+00 0.000000e+00 0.000000e+00 \n25% 0.000000e+00 0.000000e+00 3.020000e+02 \n50% 0.000000e+00 4.030000e+02 7.170000e+02 \n75% 0.000000e+00 1.201000e+03 1.507000e+03 \nmax 6.000000e+00 3.401000e+03 3.401000e+03 \n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2805118 entries, 0 to 2805117\nData columns (total 8 columns):\n # Column Dtype\n--- ------ -----\n 0 userID int64\n 1 age int64\n 2 gender int64\n 3 education int64\n 4 marriageStatus int64\n 5 haveBaby int64\n 6 hometown int64\n 7 residence int64\ndtypes: int64(8)\nmemory usage: 171.2 MB\nNone\n"
],
[
"user.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2805118 entries, 0 to 2805117\nData columns (total 13 columns):\n # Column Dtype\n--- ------ -----\n 0 userID int64\n 1 age int64\n 2 gender int64\n 3 education int64\n 4 marriageStatus int64\n 5 haveBaby int64\n 6 hometown int64\n 7 residence int64\n 8 age_process int64\n 9 hometown_province int64\n 10 hometown_city int64\n 11 residence_province int64\n 12 residence_city int64\ndtypes: int64(13)\nmemory usage: 278.2 MB\n"
],
[
"user.head()",
"_____no_output_____"
],
[
"train_data.head()",
"_____no_output_____"
],
[
"train_data['clickTime_day'] = train_data['clickTime'].apply(get_time_day)\ntrain_data['clickTime_hour']= train_data['clickTime'].apply(get_time_hour)",
"_____no_output_____"
]
],
[
[
"### 合并数据",
"_____no_output_____"
]
],
[
[
"#train data\ntrain_data['clickTime_day'] = train_data['clickTime'].apply(get_time_day)\ntrain_data['clickTime_hour']= train_data['clickTime'].apply(get_time_hour)\n# train_data['conversionTime_day'] = train_data['conversionTime'].apply(get_time_day)\n# train_data['conversionTime_hour'] = train_data['conversionTime'].apply(get_time_hour)\n\n\n#test_data\ntest_data = read_csv_file(data_root+'/test.csv', True)\ntest_data['clickTime_day'] = test_data['clickTime'].apply(get_time_day)\ntest_data['clickTime_hour']= test_data['clickTime'].apply(get_time_hour)\n# test_data['conversionTime_day'] = test_data['conversionTime'].apply(get_time_day)\n# test_data['conversionTime_hour'] = test_data['conversionTime'].apply(get_time_hour)\n\n\ntrain_user = pd.merge(train_data,user,on='userID')\ntrain_user_ad = pd.merge(train_user,ad,on='creativeID')\ntrain_user_ad_app = pd.merge(train_user_ad,app_categories,on='appID')",
"============================读取数据======================== E:/E_datas/tencentCvrConvertPrediton_7_8/test.csv\n======================我是萌萌哒分界线========================\n instanceID label clickTime creativeID userID positionID \\\n0 1 -1 310000 3745 1164848 3451 \n1 2 -1 310000 2284 2127247 1613 \n2 3 -1 310000 1456 2769125 5510 \n3 4 -1 310000 4565 9762 4113 \n4 5 -1 310000 49 2513636 3615 \n\n connectionType telecomsOperator \n0 1 3 \n1 1 3 \n2 2 1 \n3 2 3 \n4 1 3 \nE:/E_datas/tencentCvrConvertPrediton_7_8/test.csv 包含以下列....\n['instanceID' 'label' 'clickTime' 'creativeID' 'userID' 'positionID'\n 'connectionType' 'telecomsOperator']\n instanceID label clickTime creativeID userID \\\ncount 338489.000000 338489.0 338489.000000 338489.000000 3.384890e+05 \nmean 169245.000000 -1.0 311479.490613 3001.534765 1.409519e+06 \nstd 97713.501971 0.0 580.393521 1869.336873 8.073083e+05 \nmin 1.000000 -1.0 310000.000000 4.000000 3.000000e+00 \n25% 84623.000000 -1.0 311053.000000 1248.000000 7.149930e+05 \n50% 169245.000000 -1.0 311536.000000 3012.000000 1.411134e+06 \n75% 253867.000000 -1.0 311951.000000 4565.000000 2.108981e+06 \nmax 338489.000000 -1.0 312359.000000 6580.000000 2.805117e+06 \n\n positionID connectionType telecomsOperator \ncount 338489.000000 338489.000000 338489.000000 \nmean 3640.126394 1.139015 1.629028 \nstd 1902.559504 0.511882 0.854993 \nmin 2.000000 0.000000 0.000000 \n25% 2436.000000 1.000000 1.000000 \n50% 3322.000000 1.000000 1.000000 \n75% 4881.000000 1.000000 2.000000 \nmax 7645.000000 4.000000 3.000000 \n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 338489 entries, 0 to 338488\nData columns (total 8 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 instanceID 338489 non-null int64\n 1 label 338489 non-null int64\n 2 clickTime 338489 non-null int64\n 3 creativeID 338489 non-null int64\n 4 userID 338489 non-null int64\n 5 positionID 338489 non-null int64\n 6 connectionType 338489 non-null int64\n 7 telecomsOperator 338489 non-null int64\ndtypes: int64(8)\nmemory usage: 20.7 MB\nNone\n"
],
[
"train_user_ad_app.head()",
"_____no_output_____"
],
[
"train_user_ad_app.columns",
"_____no_output_____"
]
],
[
[
"### 取出数据和label",
"_____no_output_____"
]
],
[
[
"#特征部分\nx_user_ad_app = train_user_ad_app.loc[:,['creativeID','userID','positionID',\n 'connectionType','telecomsOperator','clickTime_day','clickTime_hour','age', 'gender' ,'education',\n 'marriageStatus' ,'haveBaby' , 'residence' ,'age_process',\n 'hometown_province', 'hometown_city','residence_province', 'residence_city',\n 'adID', 'camgaignID', 'advertiserID', 'appID' ,'appPlatform' ,\n 'app_categories_first_class' ,'app_categories_second_class']]\n\nx_user_ad_app = x_user_ad_app.values\nx_user_ad_app = np.array(x_user_ad_app,dtype='int32')\n\n#标签部分\ny_user_ad_app =train_user_ad_app.loc[:,['label']].values",
"_____no_output_____"
]
],
[
[
"### 随机森林建模&&特征重要度排序",
"_____no_output_____"
]
],
[
[
"# %matplotlib inline\n# import matplotlib.pyplot as plt\n# print('Plot feature importances...')\n# ax = lgb.plot_importance(gbm, max_num_features=10)\n# plt.show()\n# 用RF 计算特征重要度\n\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import cross_val_score, train_test_split\n\nfeat_labels = np.array(['creativeID','userID','positionID',\n 'connectionType','telecomsOperator','clickTime_day','clickTime_hour','age', 'gender' ,'education',\n 'marriageStatus' ,'haveBaby' , 'residence' ,'age_process',\n 'hometown_province', 'hometown_city','residence_province', 'residence_city',\n 'adID', 'camgaignID', 'advertiserID', 'appID' ,'appPlatform' ,\n 'app_categories_first_class' ,'app_categories_second_class'])\n\nforest = RandomForestClassifier(n_estimators=100,\n random_state=0,\n n_jobs=-1)\n\nforest.fit(x_user_ad_app, y_user_ad_app.reshape(y_user_ad_app.shape[0],))\nimportances = forest.feature_importances_ \n\nindices = np.argsort(importances)[::-1] ",
"_____no_output_____"
],
[
"train_user_ad_app.shape",
"_____no_output_____"
],
[
"importances",
"_____no_output_____"
],
[
"# ['creativeID','userID','positionID',\n# 'connectionType','telecomsOperator','clickTime_day','clickTime_hour','age', 'gender' ,'education',\n# 'marriageStatus' ,'haveBaby' , 'residence' ,'age_process',\n# 'hometown_province', 'hometown_city','residence_province', 'residence_city',\n# 'adID', 'camgaignID', 'advertiserID', 'appID' ,'appPlatform' ,\n# 'app_categories_first_class' ,'app_categories_second_class']",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline\nfor f in range(x_user_ad_app.shape[1]):\n print(\"%2d) %-*s %f\" % (f + 1, 30, \n feat_labels[indices[f]], \n importances[indices[f]]))\n\nplt.title('Feature Importances')\nplt.bar(range(x_user_ad_app.shape[1]), \n importances[indices],\n color='lightblue', \n align='center')\n\nplt.xticks(range(x_user_ad_app.shape[1]), \n feat_labels[indices], rotation=90)\nplt.xlim([-1, x_user_ad_app.shape[1]])\nplt.tight_layout()\n#plt.savefig('./random_forest.png', dpi=300)\nplt.show()",
" 1) userID 0.166023\n 2) residence 0.099107\n 3) clickTime_day 0.077354\n 4) age 0.075498\n 5) positionID 0.065839\n 6) residence_province 0.063739\n 7) residence_city 0.057849\n 8) hometown_province 0.054218\n 9) education 0.048913\n10) hometown_city 0.048328\n11) clickTime_hour 0.039196\n12) telecomsOperator 0.033300\n13) marriageStatus 0.031278\n14) creativeID 0.027913\n15) adID 0.019010\n16) haveBaby 0.018649\n17) age_process 0.015707\n18) camgaignID 0.015615\n19) gender 0.012824\n20) advertiserID 0.008360\n21) app_categories_second_class 0.006968\n22) appID 0.005930\n23) connectionType 0.004271\n24) app_categories_first_class 0.003228\n25) appPlatform 0.000883\n"
]
],
[
[
"### 随机森林调参",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import GridSearchCV\nfrom sklearn.ensemble import RandomForestClassifier\nparam_grid = {\n #'n_estimators': [100],\n 'n_estimators': [10, 100, 500, 1000],\n 'max_features':[0.6, 0.7, 0.8, 0.9]\n }\n\nrf = RandomForestClassifier()\nrfc = GridSearchCV(rf, param_grid, scoring = 'neg_log_loss', cv=3, n_jobs=-1)\nrfc.fit(x_user_ad_app, y_user_ad_app.reshape(y_user_ad_app.shape[0],-1))\nprint(rfc.best_score_)\nprint(rfc.best_params_)",
"_____no_output_____"
]
],
[
[
"### Xgboost调参",
"_____no_output_____"
]
],
[
[
"import xgboost as xgb",
"_____no_output_____"
],
[
"import os\nimport numpy as np\nfrom sklearn.model_selection import GridSearchCV\nimport xgboost as xgb\nos.environ[\"OMP_NUM_THREADS\"] = \"8\" #并行训练\nrng = np.random.RandomState(4315)\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nparam_grid = {\n 'max_depth': [3, 4, 5, 7, 9],\n 'n_estimators': [10, 50, 100, 400, 800, 1000, 1200],\n 'learning_rate': [0.1, 0.2, 0.3],\n 'gamma':[0, 0.2],\n 'subsample': [0.8, 1],\n 'colsample_bylevel':[0.8, 1]\n }\n\nxgb_model = xgb.XGBClassifier()\nrgs = GridSearchCV(xgb_model, param_grid, n_jobs=-1)\nrgs.fit(X, y)\nprint(rgs.best_score_)\nprint(rgs.best_params_)",
"_____no_output_____"
]
],
[
[
"### 正负样本比",
"_____no_output_____"
]
],
[
[
"positive_num = train_user_ad_app[train_user_ad_app['label']==1].values.shape[0]\nnegative_num = train_user_ad_app[train_user_ad_app['label']==0].values.shape[0]\n\nnegative_num/float(positive_num)",
"_____no_output_____"
]
],
[
[
"**我们可以看到正负样本数量相差非常大,数据严重unbalanced**",
"_____no_output_____"
],
[
"我们用Bagging修正过后,处理不均衡样本的B(l)agging来进行训练和实验。",
"_____no_output_____"
]
],
[
[
"from blagging import BlaggingClassifier",
"_____no_output_____"
],
[
"help(BlaggingClassifier)",
"Help on class BlaggingClassifier in module blagging:\n\nclass BlaggingClassifier(BaseBagging, sklearn.base.ClassifierMixin)\n | A Bagging classifier.\n | \n | A Bagging classifier is an ensemble meta-estimator that fits base\n | classifiers each on random subsets of the original dataset and then\n | aggregate their individual predictions (either by voting or by averaging)\n | to form a final prediction. Such a meta-estimator can typically be used as\n | a way to reduce the variance of a black-box estimator (e.g., a decision\n | tree), by introducing randomization into its construction procedure and\n | then making an ensemble out of it.\n | \n | This algorithm encompasses several works from the literature. When random\n | subsets of the dataset are drawn as random subsets of the samples, then\n | this algorithm is known as Pasting [1]_. If samples are drawn with\n | replacement, then the method is known as Bagging [2]_. When random subsets\n | of the dataset are drawn as random subsets of the features, then the method\n | is known as Random Subspaces [3]_. Finally, when base estimators are built\n | on subsets of both samples and features, then the method is known as\n | Random Patches [4]_.\n | \n | Read more in the :ref:`User Guide <bagging>`.\n | \n | Parameters\n | ----------\n | base_estimator : object or None, optional (default=None)\n | The base estimator to fit on random subsets of the dataset.\n | If None, then the base estimator is a decision tree.\n | \n | n_estimators : int, optional (default=10)\n | The number of base estimators in the ensemble.\n | \n | max_samples : int or float, optional (default=1.0)\n | The number of samples to draw from X to train each base estimator.\n | - If int, then draw `max_samples` samples.\n | - If float, then draw `max_samples * X.shape[0]` samples.\n | \n | max_features : int or float, optional (default=1.0)\n | The number of features to draw from X to train each base estimator.\n | - If int, then draw `max_features` features.\n | - If float, then draw `max_features * X.shape[1]` features.\n | \n | bootstrap : boolean, optional (default=True)\n | Whether samples are drawn with replacement.\n | \n | bootstrap_features : boolean, optional (default=False)\n | Whether features are drawn with replacement.\n | \n | oob_score : bool\n | Whether to use out-of-bag samples to estimate\n | the generalization error.\n | \n | warm_start : bool, optional (default=False)\n | When set to True, reuse the solution of the previous call to fit\n | and add more estimators to the ensemble, otherwise, just fit\n | a whole new ensemble.\n | \n | .. 
versionadded:: 0.17\n | *warm_start* constructor parameter.\n | \n | n_jobs : int, optional (default=1)\n | The number of jobs to run in parallel for both `fit` and `predict`.\n | If -1, then the number of jobs is set to the number of cores.\n | \n | random_state : int, RandomState instance or None, optional (default=None)\n | If int, random_state is the seed used by the random number generator;\n | If RandomState instance, random_state is the random number generator;\n | If None, the random number generator is the RandomState instance used\n | by `np.random`.\n | \n | verbose : int, optional (default=0)\n | Controls the verbosity of the building process.\n | \n | Attributes\n | ----------\n | base_estimator_ : list of estimators\n | The base estimator from which the ensemble is grown.\n | \n | estimators_ : list of estimators\n | The collection of fitted base estimators.\n | \n | estimators_samples_ : list of arrays\n | The subset of drawn samples (i.e., the in-bag samples) for each base\n | estimator.\n | \n | estimators_features_ : list of arrays\n | The subset of drawn features for each base estimator.\n | \n | classes_ : array of shape = [n_classes]\n | The classes labels.\n | \n | n_classes_ : int or list\n | The number of classes.\n | \n | oob_score_ : float\n | Score of the training dataset obtained using an out-of-bag estimate.\n | \n | oob_decision_function_ : array of shape = [n_samples, n_classes]\n | Decision function computed with out-of-bag estimate on the training\n | set. If n_estimators is small it might be possible that a data point\n | was never left out during the bootstrap. In this case,\n | `oob_decision_function_` might contain NaN.\n | \n | References\n | ----------\n | \n | .. [1] L. Breiman, \"Pasting small votes for classification in large\n | databases and on-line\", Machine Learning, 36(1), 85-103, 1999.\n | \n | .. [2] L. Breiman, \"Bagging predictors\", Machine Learning, 24(2), 123-140,\n | 1996.\n | \n | .. [3] T. Ho, \"The random subspace method for constructing decision\n | forests\", Pattern Analysis and Machine Intelligence, 20(8), 832-844,\n | 1998.\n | \n | .. [4] G. Louppe and P. Geurts, \"Ensembles on Random Patches\", Machine\n | Learning and Knowledge Discovery in Databases, 346-361, 2012.\n | \n | Method resolution order:\n | BlaggingClassifier\n | BaseBagging\n | abc.NewBase\n | sklearn.ensemble.base.BaseEnsemble\n | sklearn.base.BaseEstimator\n | sklearn.base.MetaEstimatorMixin\n | sklearn.base.ClassifierMixin\n | __builtin__.object\n | \n | Methods defined here:\n | \n | __init__(self, base_estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=None, verbose=0)\n | \n | decision_function(*args, **kwargs)\n | Average of the decision functions of the base classifiers.\n | \n | Parameters\n | ----------\n | X : {array-like, sparse matrix} of shape = [n_samples, n_features]\n | The training input samples. Sparse matrices are accepted only if\n | they are supported by the base estimator.\n | \n | Returns\n | -------\n | score : array, shape = [n_samples, k]\n | The decision function of the input samples. The columns correspond\n | to the classes in sorted order, as they appear in the attribute\n | ``classes_``. 
Regression and binary classification are special\n | cases with ``k == 1``, otherwise ``k==n_classes``.\n | \n | predict(self, X)\n | Predict class for X.\n | \n | The predicted class of an input sample is computed as the class with\n | the highest mean predicted probability. If base estimators do not\n | implement a ``predict_proba`` method, then it resorts to voting.\n | \n | Parameters\n | ----------\n | X : {array-like, sparse matrix} of shape = [n_samples, n_features]\n | The training input samples. Sparse matrices are accepted only if\n | they are supported by the base estimator.\n | \n | Returns\n | -------\n | y : array of shape = [n_samples]\n | The predicted classes.\n | \n | predict_log_proba(self, X)\n | Predict class log-probabilities for X.\n | \n | The predicted class log-probabilities of an input sample is computed as\n | the log of the mean predicted class probabilities of the base\n | estimators in the ensemble.\n | \n | Parameters\n | ----------\n | X : {array-like, sparse matrix} of shape = [n_samples, n_features]\n | The training input samples. Sparse matrices are accepted only if\n | they are supported by the base estimator.\n | \n | Returns\n | -------\n | p : array of shape = [n_samples, n_classes]\n | The class log-probabilities of the input samples. The order of the\n | classes corresponds to that in the attribute `classes_`.\n | \n | predict_proba(self, X)\n | Predict class probabilities for X.\n | \n | The predicted class probabilities of an input sample is computed as\n | the mean predicted class probabilities of the base estimators in the\n | ensemble. If base estimators do not implement a ``predict_proba``\n | method, then it resorts to voting and the predicted class probabilities\n | of a an input sample represents the proportion of estimators predicting\n | each class.\n | \n | Parameters\n | ----------\n | X : {array-like, sparse matrix} of shape = [n_samples, n_features]\n | The training input samples. Sparse matrices are accepted only if\n | they are supported by the base estimator.\n | \n | Returns\n | -------\n | p : array of shape = [n_samples, n_classes]\n | The class probabilities of the input samples. The order of the\n | classes corresponds to that in the attribute `classes_`.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes defined here:\n | \n | __abstractmethods__ = frozenset([])\n | \n | ----------------------------------------------------------------------\n | Methods inherited from BaseBagging:\n | \n | fit(self, X, y)\n | Build a Bagging ensemble of estimators from the training\n | set (X, y).\n | \n | Parameters\n | ----------\n | X : {array-like, sparse matrix} of shape = [n_samples, n_features]\n | The training input samples. 
Sparse matrices are accepted only if\n | they are supported by the base estimator.\n | \n | y : array-like, shape = [n_samples]\n | The target values (class labels in classification, real numbers in\n | regression).\n | \n | \n | Returns\n | -------\n | self : object\n | Returns self.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from sklearn.ensemble.base.BaseEnsemble:\n | \n | __getitem__(self, index)\n | Returns the index'th estimator in the ensemble.\n | \n | __iter__(self)\n | Returns iterator over estimators in the ensemble.\n | \n | __len__(self)\n | Returns the number of estimators in the ensemble.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from sklearn.base.BaseEstimator:\n | \n | __getstate__(self)\n | \n | __repr__(self)\n | \n | __setstate__(self, state)\n | \n | get_params(self, deep=True)\n | Get parameters for this estimator.\n | \n | Parameters\n | ----------\n | deep : boolean, optional\n | If True, will return the parameters for this estimator and\n | contained subobjects that are estimators.\n | \n | Returns\n | -------\n | params : mapping of string to any\n | Parameter names mapped to their values.\n | \n | set_params(self, **params)\n | Set the parameters of this estimator.\n | \n | The method works on simple estimators as well as on nested objects\n | (such as pipelines). The latter have parameters of the form\n | ``<component>__<parameter>`` so that it's possible to update each\n | component of a nested object.\n | \n | Returns\n | -------\n | self\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from sklearn.base.BaseEstimator:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | ----------------------------------------------------------------------\n | Methods inherited from sklearn.base.ClassifierMixin:\n | \n | score(self, X, y, sample_weight=None)\n | Returns the mean accuracy on the given test data and labels.\n | \n | In multi-label classification, this is the subset accuracy\n | which is a harsh metric since you require for each sample that\n | each label set be correctly predicted.\n | \n | Parameters\n | ----------\n | X : array-like, shape = (n_samples, n_features)\n | Test samples.\n | \n | y : array-like, shape = (n_samples) or (n_samples, n_outputs)\n | True labels for X.\n | \n | sample_weight : array-like, shape = [n_samples], optional\n | Sample weights.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy of self.predict(X) wrt. y.\n\n"
],
[
"#处理unbalanced的classifier\nclassifier = BlaggingClassifier(n_jobs=-1)",
"_____no_output_____"
],
[
"classifier.fit(x_user_ad_app, y_user_ad_app)",
"/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py:526: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n"
],
[
"classifier.predict_proba(x_test_clean)",
"_____no_output_____"
]
],
[
[
"#### 预测",
"_____no_output_____"
]
],
[
[
"test_data = pd.merge(test_data,user,on='userID')\ntest_user_ad = pd.merge(test_data,ad,on='creativeID')\ntest_user_ad_app = pd.merge(test_user_ad,app_categories,on='appID')\n\nx_test_clean = test_user_ad_app.loc[:,['creativeID','userID','positionID',\n 'connectionType','telecomsOperator','clickTime_day','clickTime_hour','age', 'gender' ,'education',\n 'marriageStatus' ,'haveBaby' , 'residence' ,'age_process',\n 'hometown_province', 'hometown_city','residence_province', 'residence_city',\n 'adID', 'camgaignID', 'advertiserID', 'appID' ,'appPlatform' ,\n 'app_categories_first_class' ,'app_categories_second_class']].values\n\nx_test_clean = np.array(x_test_clean,dtype='int32')\n\nresult_predict_prob = []\nresult_predict=[]\nfor i in range(scale):\n result_indiv = clfs[i].predict(x_test_clean)\n result_indiv_proba = clfs[i].predict_proba(x_test_clean)[:,1]\n result_predict.append(result_indiv)\n result_predict_prob.append(result_indiv_proba)\n\n\nresult_predict_prob = np.reshape(result_predict_prob,[-1,scale])\nresult_predict = np.reshape(result_predict,[-1,scale])\n\nresult_predict_prob = np.mean(result_predict_prob,axis=1)\nresult_predict = max_count(result_predict)\n\n\nresult_predict_prob = np.array(result_predict_prob).reshape([-1,1])\n\n\ntest_data['prob'] = result_predict_prob\ntest_data = test_data.loc[:,['instanceID','prob']]\ntest_data.to_csv('predict.csv',index=False)\nprint \"prediction done!\"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc66b9efa2bdc2ab8302dbf10eaf49bb12bcf08 | 30,771 | ipynb | Jupyter Notebook | Working.ipynb | abhishirk/Traffic_classifier | 5f7a3589cd198008e0a3a7a571355da779e177a4 | [
"MIT"
] | null | null | null | Working.ipynb | abhishirk/Traffic_classifier | 5f7a3589cd198008e0a3a7a571355da779e177a4 | [
"MIT"
] | null | null | null | Working.ipynb | abhishirk/Traffic_classifier | 5f7a3589cd198008e0a3a7a571355da779e177a4 | [
"MIT"
] | null | null | null | 48.765452 | 10,636 | 0.685288 | [
[
[
"import pickle\n\n# TODO: Fill this in based on where you saved the training and testing data\n\ntraining_file = 'train.p'\nvalidation_file='valid.p'\ntesting_file = 'test.p'\n\nwith open(training_file, mode='rb') as f:\n train = pickle.load(f)\nwith open(validation_file, mode='rb') as f:\n valid = pickle.load(f)\nwith open(testing_file, mode='rb') as f:\n test = pickle.load(f)\n \nX_train, y_train = train['features'], train['labels']\nX_valid, y_valid = valid['features'], valid['labels']\nX_test, y_test = test['features'], test['labels']\nlabels=[x for x in train['labels']]\nlabels.extend([x for x in valid['labels']])\nlabels.extend([x for x in test['labels']])\nX_train_norm=[]\n##print(len(valid['labels']))\n##print(len(test['labels']))\n##print(len(labels))\nlabels_set=set(labels)\n##print(len(label_set))\n## X_train=(X_train-128)/128\n## X_valid=(X_valid-128)/128",
"_____no_output_____"
],
[
"### Replace each question mark with the appropriate value. \n### Use python, pandas or numpy methods rather than hard coding the results\n\n# TODO: Number of training examples\nn_train = len(X_train)\n\n# TODO: Number of validation examples\nn_validation = len(X_valid)\n\n# TODO: Number of testing examples.\nn_test = len(X_test)\n\n# TODO: What's the shape of a traffic sign image?\nimage_shape = X_train[0].shape ##(len(train['features'][0]),len(train['features'][0][0]),len(train['features'][0][0][0]))\n\n# TODO: How many unique classes/labels there are in the dataset.\nn_classes = len(labels_set)\n##print(X_train[0].shape)\nprint(\"Number of training examples =\", n_train)\nprint(\"Number of validation examples =\", n_validation)\nprint(\"Number of testing examples =\", n_test)\nprint(\"Image data shape =\", image_shape)\nprint(\"Number of classes =\", n_classes)",
"Number of training examples = 34799\nNumber of validation examples = 4410\nNumber of testing examples = 12630\nImage data shape = (32, 32, 3)\nNumber of classes = 43\n"
],
[
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n\ndef normalize(data):\n a=0.1\n b=0.9\n data = data.astype(np.int16)\n data=a+data*(b-a)/255\n ##data=(data-128)/128\n print(data.shape)\n return data\n\ndef im2gray(data):\n gray_data=[]\n for im in data:\n gray=cv2.cvtColor(im, cv2.COLOR_RGB2GRAY)\n gray_data.append(gray)\n gray_data=np.array(gray_data)\n print(gray_data.shape)\n ##data=0.2989 *data[:,:,:,0] + 0.5870 * data[:,:,:,1] + 0.1140 * data[:,:,:,2]\n plt.imshow(gray_data[len(gray_data)-1],cmap='gray')\n plt.savefig('./training_img_gr.jpg')\n gray_data=gray_data[...,np.newaxis]\n print(gray_data.shape)\n return gray_data",
"_____no_output_____"
]
],
[
[
"#### [Data Augmentation] Rotating and adding new images to the data",
"_____no_output_____"
]
],
[
[
"from scipy import ndimage\n\nprint(X_train.shape)\nprint(y_train.shape)\n\ndef rotate(image,angle):\n rotated = ndimage.rotate(image,angle)\n left=0\n right=rotated.shape[1]\n mid_x=(right-left)/2\n left=int(mid_x-16)\n right=left+32\n up=0\n bottom=rotated.shape[0]\n mid_y=(bottom-up)/2\n up=int(mid_y-16)\n bottom=up+32\n new_rot=rotated[left:right,up:bottom,:]\n return new_rot\n \n##plt.imshow(rotate(X_train[1079],45))\n\nlower_bound=1200\nbins=np.bincount(y_train)\naugment_x=[]\naugment_y=[]\nnum_examples = len(X_train)\nBATCH_SIZE = 128\nprint(\"Augmenting... on \"+str(num_examples)+\" examples\")\nprint()\nfor offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n if end>=num_examples:\n end=num_examples\n for i in range(offset,end):\n for ang in range(-15,40,30):\n temp=rotate(X_train[i],ang)\n augment_x.append(temp)\n augment_y.append(y_train[i])\n print(\"Finished till \"+str(end)+\" batch\")\n \n'''\nfor i in range(n_train):\n for ang in range(10,40,15):\n temp=rotate(X_train[i],ang)\n np.append(X_train,temp)\n np.append(y_train,y_train[i])\n'''\n\nprint(X_train.shape)\nprint(y_train.shape)\naugment_x=np.array(augment_x)\naugment_y=np.array(augment_y)\nprint(augment_x.shape)\nprint(augment_y.shape)\nprint(\"######################\")\nprint(X_train.shape)\nprint(y_train.shape)\nX_train=np.append(X_train,augment_x,axis=0)\ny_train=np.append(y_train,augment_y,axis=0)\nbins=np.bincount(y_train)\n## print(bins)\nprint(X_train.shape)\nprint(y_train.shape)",
"_____no_output_____"
]
],
[
[
"#### [Gray scaling]",
"_____no_output_____"
]
],
[
[
"X_train_gr=im2gray(X_train)\nX_valid_gr=im2gray(X_valid)\nX_test_gr=im2gray(X_test)\nprint(X_train_gr.shape)\nprint(X_valid_gr.shape)\nprint(X_test_gr.shape)",
"_____no_output_____"
]
],
[
[
"##### Normalizing Data\nBetween 0.1 to 0.9",
"_____no_output_____"
]
],
[
[
"X_train=normalize(X_train_gr)\nX_valid=normalize(X_valid_gr)\nX_test=normalize(X_test_gr)\nprint(X_train.shape)\nprint(X_valid.shape)\nprint(X_test.shape)",
"(104397, 32, 32, 3)\n(4410, 32, 32, 3)\n(104397, 32, 32, 3)\n(4410, 32, 32, 3)\n"
]
],
[
[
"### My Model",
"_____no_output_____"
]
],
[
[
"from tensorflow.contrib.layers import flatten\n\ndef LeNet(x): \n # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer\n mu = 0\n sigma = 0.1\n \n # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x20.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 20), mean = mu, stddev = sigma))\n conv1_b = tf.Variable(tf.zeros(20))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n\n # SOLUTION: Activation.\n conv1 = tf.nn.relu(conv1)\n\n # SOLUTION: Pooling. Input = 28x28x20. Output = 14x14x20.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Layer 2: Convolutional.14x14x20 Output = 12x12x40.\n conv2_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 20, 40), mean = mu, stddev = sigma))\n conv2_b = tf.Variable(tf.zeros(40))\n conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n \n # SOLUTION: Activation.\n conv2 = tf.nn.relu(conv2)\n \n # SOLUTION: Layer 3: Convolutional.12x12x40 Output = 10x10x80.\n conv3_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 40, 80), mean = mu, stddev = sigma))\n conv3_b = tf.Variable(tf.zeros(80))\n conv3 = tf.nn.conv2d(conv2, conv3_W, strides=[1, 1, 1, 1], padding='VALID') + conv3_b\n \n # SOLUTION: Activation.\n conv3 = tf.nn.relu(conv3)\n\n # SOLUTION: Pooling. Input = 10x10x80. Output = 5x5x80.\n conv3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Flatten. Input = 5x5x80. Output = 400.\n fc0 = flatten(conv3)\n \n # SOLUTION: Layer 3: Fully Connected. Input = 2000. Output = 200.\n fc1_W = tf.Variable(tf.truncated_normal(shape=(2000, 200), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(200))\n fc1 = tf.matmul(fc0, fc1_W) + fc1_b\n \n # SOLUTION: Activation.\n fc1 = tf.nn.relu(fc1)\n fc1 = tf.nn.dropout(fc1, keep_prob)\n \n w = tf.Variable(tf.truncated_normal(shape=(200, 120), mean = mu, stddev = sigma))\n b = tf.Variable(tf.zeros(120))\n fc1 = tf.matmul(fc1, w) + b\n\n # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.\n fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))\n fc2_b = tf.Variable(tf.zeros(84))\n fc2 = tf.matmul(fc1, fc2_W) + fc2_b\n \n # SOLUTION: Activation.\n fc2 = tf.nn.relu(fc2)\n\n # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.\n fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))\n fc3_b = tf.Variable(tf.zeros(43))\n logits = tf.add(tf.matmul(fc2, fc3_W),fc3_b, name=\"logits\")\n \n return logits\n\nprint('Model Created')",
"Model Created\n"
],
[
"### Train your model here.\n### Calculate and report the accuracy on the training and validation set.\n### Once a final model architecture is selected, \n### the accuracy on the test set should be calculated and reported as well.\n### Feel free to use as many code cells as needed.\n\n\nfrom sklearn.utils import shuffle\n\nX_train, y_train = shuffle(X_train, y_train)\n'''\nprint(len(X_train))\nprint(len(y_train))\nplt.imshow(X_train[56])\nprint(y_train[56])\n'''\nimport tensorflow as tf\n\nEPOCHS = 10\n\n\nx = tf.placeholder(tf.float32, (None, 32, 32, 1), name=\"x_inp\")\ny = tf.placeholder(tf.int32, (None), name=\"y_inp\")\none_hot_y = tf.one_hot(y, 43)\nkeep_prob = tf.placeholder(tf.float32, name=\"k_prob\")\nrate = 0.001\n\nlogits = LeNet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)\nloss_operation = tf.reduce_mean(cross_entropy, name=\"loss_operation\")## loss \noptimizer = tf.train.AdamOptimizer(learning_rate = rate) \ntraining_operation = optimizer.minimize(loss_operation)\n\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1), name=\"correct_prediction\")\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32),name=\"accuracy_operation\")\nsaver = tf.train.Saver()\nloss_valid=[]\nloss_train=[]\nepochs=[]\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n total_loss = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n loss,accuracy = sess.run([loss_operation,accuracy_operation], feed_dict={x: batch_x, y: batch_y,keep_prob:1.0}) \n total_accuracy += (accuracy * len(batch_x))\n total_loss += (loss * len(batch_x))\n return total_loss/num_examples,total_accuracy / num_examples\n\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training... on \"+str(num_examples)+\" examples\")\n print()\n for i in range(EPOCHS):\n total_loss = 0\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n loss,var=sess.run([loss_operation,training_operation], feed_dict={x: batch_x, y: batch_y,keep_prob:0.7})\n total_loss += (loss * len(batch_x))\n print(\"Training loss \"+str(total_loss/num_examples))\n loss_train.append(total_loss/num_examples)\n ##train_loss,train_accuracy=evaluate(X_train, y_train)\n validation_loss,validation_accuracy = evaluate(X_valid, y_valid)\n ##loss_train.append(train_loss)\n loss_valid.append(validation_loss)\n epochs.append(i)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n \n \n plt.plot(epochs,loss_valid,epochs,loss_train)\n plt.xlabel('EPOCHS')\n plt.ylabel('LOSS')\n plt.savefig('./losscurve.jpg')\n saver.save(sess, './lenet')\n print(\"Model saved\")\n test_loss,test_accuracy = evaluate(X_test, y_test)\n print(\"Testing Accuracy = {:.3f}\".format(test_accuracy))\n print()\n",
"Training... on 104397 examples\n\nTraining loss 2.596646621302469\nEPOCH 1 ...\nValidation Accuracy = 0.760\n\nTraining loss 0.6027613429084244\nEPOCH 2 ...\nValidation Accuracy = 0.865\n\nTraining loss 0.33995004802380185\nEPOCH 3 ...\nValidation Accuracy = 0.919\n\nTraining loss 0.23456010804667501\nEPOCH 4 ...\nValidation Accuracy = 0.923\n\nTraining loss 0.18670206516759885\nEPOCH 5 ...\nValidation Accuracy = 0.941\n\nTraining loss 0.15656836439155555\nEPOCH 6 ...\nValidation Accuracy = 0.950\n\nTraining loss 0.12944523934694377\nEPOCH 7 ...\nValidation Accuracy = 0.959\n\nTraining loss 0.12297662790395712\nEPOCH 8 ...\nValidation Accuracy = 0.951\n\nTraining loss 0.10635980921032614\nEPOCH 9 ...\nValidation Accuracy = 0.963\n\nTraining loss 0.0966269737152095\nEPOCH 10 ...\nValidation Accuracy = 0.958\n\n"
]
],
[
[
"#### Testing the model ",
"_____no_output_____"
]
],
[
[
"print(X_test.shape)\nprint(y_test.shape)\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, './lenet')\n print(\"Model restored.\")\n test_loss,test_accuracy = evaluate(X_test, y_test)\n print(\"Testing Accuracy = {:.3f}\".format(test_accuracy))\n print()",
"(12630, 32, 32, 3)\n(12630,)\nModel restored.\nTesting Accuracy = 0.016\n\n"
],
[
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\nimport pickle\n\ntesting_file = 'test_cust.p'\nX_test_google=[]\ny_test_google=[]\ndef customimgread(path,label):\n res=cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)\n y_test_google.append(label)\n return res\n\n'''\nx=customimgread('test\\\\2.jpg',1)\nprint(x.shape)\nplt.imshow(x)\n'''\nX_test_google.append(customimgread('test\\\\35.jpg',35))\nX_test_google.append(customimgread('test\\\\2.jpg',25))\nX_test_google.append(customimgread('test\\\\3.jpg',33))\nX_test_google.append(customimgread('test\\\\4.jpg',1))\nX_test_google.append(customimgread('test\\\\5.jpg',14))\nX_test_google.append(customimgread('test\\\\rounabout.jpg',40))\nX_test_google=np.asarray(X_test_google)\ny_test_google=np.asarray(y_test_google)\ntest_goog=(X_test_google,y_test_google)\n\nfilehandler = open(testing_file,\"wb\")\npickle.dump(test_goog,filehandler)\nfilehandler.close()\n\n# print(X_test_google.shape)\n# print(y_test_google.shape)\n# plt.imshow(t1)\n\n\ntesting_file = 'test_cust.p'\nwith open(testing_file, mode='rb') as f:\n X_test_cust,y_test_cust = pickle.load(f)\n\nprint(X_test_cust.shape)\nprint(y_test_cust.shape)\nplt.imshow(X_test_cust[5])\nprint(y_test_cust[5])",
"(6, 32, 32, 3)\n(6,)\n40\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecc677c41676626ecc74b2160aaecbf7381be11f | 3,092 | ipynb | Jupyter Notebook | PairWiseAlignments.ipynb | CompBiochBiophLab/IntroBioinfo | ddbfb45f6cbcede2e10b589e3d6dab840cccb484 | [
"MIT"
] | null | null | null | PairWiseAlignments.ipynb | CompBiochBiophLab/IntroBioinfo | ddbfb45f6cbcede2e10b589e3d6dab840cccb484 | [
"MIT"
] | null | null | null | PairWiseAlignments.ipynb | CompBiochBiophLab/IntroBioinfo | ddbfb45f6cbcede2e10b589e3d6dab840cccb484 | [
"MIT"
] | 4 | 2020-04-03T14:04:32.000Z | 2021-01-26T14:16:43.000Z | 25.138211 | 258 | 0.525873 | [
[
[
"modified from [here](https://towardsdatascience.com/pairwise-sequence-alignment-using-biopython-d1a9d0ba861f)",
"_____no_output_____"
]
],
[
[
"from Bio import pairwise2\nfrom Bio.pairwise2 import format_alignment\n\n# Define two sequences to be aligned\nX = \"ACGGGT\"\nY = \"GATTACCA\"\n\n\n",
"_____no_output_____"
]
],
[
[
"The name of the function determines the type of alignment\n\n`<alignment type>XX` \n\nwhere `<alignment type>`can be `local` or `global` and XX is a 2 character code indicating the parameters it takes. The first character indicates the parameters for matches (and mismatches), and the second indicates the parameters for gap penalties.\n\nThe match parameters are:\n\nCODE DESCRIPTION\n* x No parameters. Identical characters have score of 1, else 0.\n* m A match score is the score of identical chars, else mismatch score.\n* d A dictionary returns the score of any pair of characters.\n* c A callback function returns scores.\n\nThe gap penalty parameters are:\n\nCODE DESCRIPTION\n* x No gap penalties.\n* s Same open and extend gap penalties for both sequences.\n* d The sequences have different open and extend gap penalties.\n* c A callback function returns the gap penalties.",
"_____no_output_____"
]
],
[
[
"alignments = pairwise2.align.globalms(X, Y, 2, -1, -0.5, -0.1)\n\n# Use format_alignment method to format the alignments in the list\nfor a in alignments:\n print(format_alignment(*a))",
"ACGGG-T-----\n | | \n----GATTACCA\n Score=1.8\n\n-ACGGGT-----\n | | \nGA----TTACCA\n Score=1.8\n\nACGGG--T----\n | | \n----GATTACCA\n Score=1.8\n\n----ACGGGT--\n || \nGATTAC----CA\n Score=1.8\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc67924086c902e02f7d2231168681dfb054631 | 106,975 | ipynb | Jupyter Notebook | ImageBasedDataMiningContinuous.ipynb | afg1/jedi-colombia-project-complete | eb5f6149cf2dd22574aefec83d5122a5abba687b | [
"Unlicense"
] | null | null | null | ImageBasedDataMiningContinuous.ipynb | afg1/jedi-colombia-project-complete | eb5f6149cf2dd22574aefec83d5122a5abba687b | [
"Unlicense"
] | null | null | null | ImageBasedDataMiningContinuous.ipynb | afg1/jedi-colombia-project-complete | eb5f6149cf2dd22574aefec83d5122a5abba687b | [
"Unlicense"
] | null | null | null | 196.645221 | 41,880 | 0.887207 | [
[
[
"## ImageBasedDataMiningContinuous.ipynb\n‹ ImageBasedDataMining.ipynb › Copyright (C) ‹ 2019 › ‹ Andrew Green - [email protected] › This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\n\nThis program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.",
"_____no_output_____"
]
],
[
[
"\"\"\"\n| date | who | note |\n|----------|----------|-------------------------------------------------------|\n| 20190201 | afg | First version, adapted from MadagaSKA code |\n| 20190208 | afg | Updated code, completed up to point of plotting |\n| 20190208 | afg | Update paths in code to work with docker container |\n\"\"\";",
"_____no_output_____"
]
],
[
[
"This notebook shows how to do image based data mining against a continuous outcome. The outcome could be whatever you like, provided it is a continuous variable; some examples include weight loss, muscle area loss and feeding tube duration",
"_____no_output_____"
]
],
[
[
"## Import libraries and set up\n\nimport os\nimport time\nimport os.path\nimport numpy as np\ntry:\n from tqdm import tqdm_notebook as tqdm\n haveTQDM = True\nexcept:\n haveTQDM = False\nimport SimpleITK as sitk\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n## Make the notebook use full width of display\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\n",
"_____no_output_____"
],
[
"%%javascript\n// this cell stops the notebook from putting outut in scrolling frames, which I find really annoying\nIPython.OutputArea.prototype._should_scroll = function(lines){return false;}",
"_____no_output_____"
]
],
[
[
"Once again, we will use pandas to load the csv. The next cell contains all the pre-processing from the binary data mining, in a single cell, including the check that a dose distribution is available for each patient. By the end of the cell, we should have a set of clean data again.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nclinicalDataPath = \"/data/clinicalData.csv\"\n\nclinicalData = pd.read_csv(clinicalDataPath)\n\nccrtOnlyPatients = clinicalData[(clinicalData[\"Oncologic Treatment Summary\"].str.contains('^CCRT', regex=True)) & (clinicalData[\"Oncologic Treatment Summary\"].str.contains('\\+', regex=True) == False)]\nlen(ccrtOnlyPatients[\"Oncologic Treatment Summary\"])\n\nselectedPatients = ccrtOnlyPatients[ccrtOnlyPatients[\"Number of Fractions\"].astype(int) < 40]\nlen(selectedPatients[\"Number of Fractions\"])\n\ndef calculateBEDCorrection(df, early=True):\n earlyAlphaBeta = 10.0\n lateAlphaBeta = 03.0\n if early:\n df = df.assign(BEDfactor = lambda d : 1.0 + d[\"Dose/Fraction (Gy/fx)\"].astype(float)/earlyAlphaBeta)\n else:\n df = df.assign(BEDfactor = lambda d : 1.0 + d[\"Dose/Fraction (Gy/fx)\"].astype(float)/lateAlphaBeta)\n return df\n\nselectedPatients = calculateBEDCorrection(selectedPatients)\n\ndosesPath = \"/data/registeredDoses\"\navailableDoses = [\"HNSCC-01-{0}\".format(a.split('.')[0]) for a in os.listdir(dosesPath)]\n\navailablePatientsMask = selectedPatients['ID'].isin(availableDoses)\nprobeDose = sitk.GetArrayFromImage(sitk.ReadImage(os.path.join(dosesPath, \"{0:04d}.nii\".format(int(2)))))\n\nselectedPatients = selectedPatients.loc[availablePatientsMask]\n",
"_____no_output_____"
]
],
[
[
"Now we have our BED correction, we can start loading data ready to do mining. We will load each image using SimpleITK, convert it to a numpy array and put it into a bit numpy array. We will concurrently load the binary status from the clinical data as well and put that into a numpy array.\n\nIn the next cell, we will load all our data. Note that we pre-allocate the numpy array we will use to hold the data - this is a performance optimisation to make loading the data a bit quicker.",
"_____no_output_____"
]
],
[
[
"probeDose = sitk.GetArrayFromImage(sitk.ReadImage(os.path.join(dosesPath, \"{0:04d}.nii\".format(int(2)))))\n\ndoseArray = np.zeros((*probeDose.shape, len(selectedPatients)))\nstatusArray = np.zeros((len(selectedPatients),1))\n\nn = 0\nfor idx, pt in selectedPatients.iterrows():\n doseArray[...,n] = pt.BEDfactor * sitk.GetArrayFromImage(sitk.ReadImage(os.path.join(dosesPath, \"{0}.nii\".format(pt.ID.split(\"-\")[-1]))))\n n += 1",
"(123, 128, 128)\n(123, 128, 128, 48)\n"
]
],
[
[
"Now we have our data, and we have corrected all the doses tot eh same BED, we are ready to do continuous outcome image based data mining.\n\nFor this we need to select a suitable outcome variable - I suggest weight loss as a good one to start with. \n\nThe next cell defines a function that calculates the pearson correlation coefficient in each voxel of the dose distribution. To do this, we slightly modify the online calculation of variance used in the binary data mining to do online calculation of covariance. The formula for pearson's correlation coefficient is then:\n\n$ \\rho = \\frac{cov(X,Y)}{\\sigma_{x} \\sigma_{y}} $\n\n\n(note: strictly, this is for a population, but the estimates for variance and covariance we return are for a sample, so it will work)\n",
"_____no_output_____"
]
],
[
[
"def imagesCorrelation(doseData, continuousOutcome, mask=None):\n \"\"\"\n Calculate a per-voxel correlation coefficient between two images. Uses Welford's method to calculate mean, variance and covariance. \n \n Inputs:\n - doseData: the dose data, should be structured such that the number of patients in it is along the last axis\n - statuses: the outcome labels. 1 indicates an event, 0 indicates no event\n Returns:\n - rhoValues: an array of the same size as one of the images which contains the per-voxel rho values\n \"\"\"\n doseMean = np.zeros_like(doseData[...,0])\n doseStd = np.zeros_like(doseData[...,0])\n covariance = np.zeros_like(doseData[...,0])\n C = np.zeros_like(doseData[...,0])\n rho = np.zeros_like(doseData[...,0])\n \n \n outcomeMean = 0.0\n outcomeVar = 0.0\n doseMean[np.where(mask)] += doseData[...,0][np.where(mask)]\n outcomeMean += continuousOutcome[0]\n subjectCount = 1.0\n \n for n,y in zip(range(1, doseData.shape[-1]), continuousOutcome[1:]):\n x = doseData[...,n]\n subjectCount += 1.0\n dx = x - doseMean\n \n om = doseMean.copy()\n yom = outcomeMean.copy()\n\n doseMean[np.where(mask)] += dx[np.where(mask)]/subjectCount\n outcomeMean += (y - outcomeMean)/subjectCount\n\n doseStd[np.where(mask)] += ((x[np.where(mask)] - om[np.where(mask)])*(x[np.where(mask)] - doseMean[np.where(mask)]))\n outcomeVar += (y - yom)*(y - outcomeMean)\n\n C[np.where(mask)] += (dx[np.where(mask)] * (y - outcomeMean))\n \n doseStd[np.where(mask)] /= (subjectCount)\n outcomeVar /= (subjectCount)\n covariance[np.where(mask)] = C[np.where(mask)] / (subjectCount - 1) ## Bessel's correction for a sample\n\n rho[np.where(mask)] = covariance[np.where(mask)] / (np.sqrt(doseStd[np.where(mask)]) * np.sqrt(outcomeVar))\n# if mask is not None:\n# rho *= mask.astype(np.float64)\n return rho",
"_____no_output_____"
]
],
[
[
"This function is very similar to the one we used in the binary IBDM code, but this time it also calculates the covariance between dose in each voxel and the continuous outcome variable. We then use that covariance to calculate the correlation between dose and outcome.\n\nNow we can apply the mining to some data. The first thing we need to do is select a continuous outcome variable; we will try weight loss first - we need to create this variable from our database in the same way we did for the BEDfactor earlier.",
"_____no_output_____"
]
],
[
[
"probeDose = sitk.GetArrayFromImage(sitk.ReadImage(os.path.join(dosesPath, \"{0:04d}.nii\".format(int(2)))))\ndoseArray = np.zeros((*probeDose.shape, len(selectedPatients)))\nn = 0\nfor idx, pt in selectedPatients.iterrows():\n doseArray[...,n] = pt.BEDfactor * sitk.GetArrayFromImage(sitk.ReadImage(os.path.join(dosesPath, \"{0}.nii\".format(pt.ID.split(\"-\")[-1]))))\n n += 1\n \nselectedPatients = selectedPatients.assign(WeightLoss = lambda d : d[\"WeightStop\"] - d[\"WeightStart\"])\nweightLoss = selectedPatients.WeightLoss.values \n",
"_____no_output_____"
],
[
"mask = sitk.GetArrayFromImage(sitk.ReadImage(\"/data/0002_mask_ds.nii\")).astype(np.int16)\n\nrhoMap = imagesCorrelation(doseArray, weightLoss, mask=mask)\n\nreferenceAnatomy = sitk.GetArrayFromImage(sitk.ReadImage(\"/data/downsampledCTs/0002.nii\"))",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(10,10))\nanatomy = plt.imshow(referenceAnatomy[:,64,...], cmap='Greys_r')\nrhoMapOverlay = plt.imshow(rhoMap[:,64,...], alpha=0.5)\n_ = plt.axis('off')",
"_____no_output_____"
]
],
[
[
"Now we've got our correlation map with the true correspondence of weight loss to dose, we can compute the permutation distribution again to get the significance of the correlation.\n\nThis works just like before, we just rearrange the weight loss values and re-calculate the correlation coefficient and look at how the most extreme voxels behave",
"_____no_output_____"
]
],
[
[
"def doPermutation(doseData, outcome, mask=None):\n \"\"\"\n Permute the statuses and return the maximum t value for this permutation\n Inputs:\n - doseData: the dose data, should be structured such that the number of patients in it is along the last axis\n - outcome: the outcome values. Shuold be a continuous number. These will be permuted in this function to \n assess the null hypothesis of no dose interaction\n - mask: A mask outside which we will ignore the returned correlation\n Returns:\n - (tMin, tMax): the extreme values of the whole t-value map for this permutation\n \"\"\"\n poutcome = np.random.permutation(outcome)\n permT = imagesCorrelation(doseData, poutcome, mask)\n return (np.min(permT), np.max(permT))\n\n\ndef permutationTest(doseData, outcome, nperm=1000, mask=None):\n \"\"\"\n Perform a permutation test to get the global p-value and t-thresholds\n Inputs:\n - doseData: the dose data, should be structured such that the number of patients in it is along the last axis\n - outcome: the outcome labels. Should be a continuous number.\n - nperm: The number of permutations to calculate. Defaults to 1000 which is the minimum for reasonable accuracy\n - mask: A mask outside which we will ignore the returned correlation\n Returns:\n - globalPNeg: the global significance of the test for negative t-values\n - globalPPos: the global significance of the test for positive t-values\n - tThreshNeg: the list of minT from all the permutations, use it to set a significance threshold.\n - tThreshPos: the list of maxT from all the permutations, use it to set a significance threshold.\n \"\"\"\n tthresh = []\n gtCount = 1\n ltCount = 1\n trueT = imagesCorrelation(doseData, outcome, mask=mask)\n trueMaxT = np.max(trueT)\n trueMinT = np.min(trueT)\n if haveTQDM:\n for perm in tqdm(range(nperm)):\n tthresh.append(doPermutation(doseData, outcome, mask))\n if tthresh[-1][1] > trueMaxT:\n gtCount += 1.0\n if tthresh[-1][0] < trueMinT:\n ltCount += 1.0\n else:\n for perm in range(nperm):\n tthresh.append(doPermutation(doseData, outcome, mask))\n if tthresh[-1][1] > trueMaxT:\n gtCount += 1.0\n if tthresh[-1][0] < trueMinT:\n ltCount += 1.0\n \n globalpPos = gtCount / float(nperm)\n globalpNeg = ltCount / float(nperm)\n tthresh = np.array(tthresh)\n return (globalpNeg, globalpPos, sorted(tthresh[:,0]), sorted(tthresh[:,1]))",
"_____no_output_____"
]
],
[
[
"If we run this function with our data, we will get back the global significance and threshold values of $\\rho$. We can then use a contour plot at the 95th percentile to indicate regions of significance.\n\nLet's try doing it now - this is once again equivalent to the binary data mining, but with the continuous data mining code we wrote above. The function call is identical to the binary version, but because the content of the functions is different, it is now doing the calculation with continuous IBDM.\n\n*Warning: this cell will take a really long time to run! On my machine, it was about 13 minutes*",
"_____no_output_____"
]
],
[
[
"pNeg, pPos, threshNeg, threshPos = permutationTest(doseArray, weightLoss, nperm=100, mask=mask)",
"_____no_output_____"
],
[
"print(pNeg, pPos)\nprint(np.percentile(threshNeg, 10))\nprint(threshNeg)\nprint(np.min(rhoMap))\n\nprint(threshPos)\nprint(np.max(rhoMap))",
"0.01 0.85\n-0.5678511904799166\n[-0.6219988997229952, -0.6175109076226357, -0.6134544096890089, -0.6121589002562852, -0.5910111181363896, -0.5834143480176562, -0.578927972628363, -0.5772683707482428, -0.5701034260885904, -0.5684780428630538, -0.5677815402151236, -0.567112051274208, -0.5663344552054927, -0.5618815876344448, -0.5570916973689114, -0.5528503196099662, -0.5426539868773923, -0.5423036971071672, -0.5413197498875186, -0.5357858691601717, -0.5290405591943447, -0.5286249352917888, -0.5281143682691387, -0.5276382490274352, -0.5222747681754554, -0.5217438354606565, -0.5203435328826153, -0.5203420652859473, -0.5161209326497409, -0.5115959938480666, -0.5101799834370042, -0.509385687968707, -0.5074642602827084, -0.5069577884284177, -0.5067445160059888, -0.5054098906395075, -0.5038250244240755, -0.500749104972936, -0.49791810677018844, -0.49667674515174803, -0.4963699384184846, -0.49627667205388376, -0.4956437913183157, -0.49324123344749005, -0.4928835686157826, -0.49177137369837537, -0.4900995815649822, -0.4893313063545188, -0.48892091264420573, -0.4876952233186942, -0.4868397069915791, -0.48660366147002587, -0.4865930222580107, -0.4862768117913989, -0.48507651469482477, -0.48420269757786555, -0.4836388210286083, -0.4826712485167549, -0.4814986615992971, -0.4756493535258817, -0.4754330109385045, -0.47453227872412945, -0.4723912885521767, -0.4721632318160545, -0.46975396698765104, -0.4685400833654899, -0.46684621957115635, -0.46593763407035327, -0.46452621395729726, -0.46219120981134537, -0.45956874340046794, -0.4589351229167462, -0.4580773081418265, -0.45129522755029905, -0.4502150885586886, -0.44989134639234923, -0.4492305602588839, -0.4470673352195838, -0.44614816231599785, -0.44500421250308014, -0.4433542021865776, -0.44229394040713277, -0.4379825724380826, -0.4305976076993396, -0.4296902245577015, -0.42902006567429907, -0.42825416573132585, -0.4225599365688801, -0.41962719381198876, -0.418209611012009, -0.4153423378187456, -0.41204127072248337, -0.41143459929886067, -0.408528961987157, -0.40105449686727807, -0.3987316846351392, -0.39387531103892254, -0.39220604414803606, -0.39001188630039674, -0.3769132608651562]\n-0.6439104529954679\n[0.33108213363572875, 0.35277051325445113, 0.35554666356732184, 0.38105677401828114, 0.3830208282646315, 0.3835360189130155, 0.38675660708925963, 0.3876150532640003, 0.3885882379791939, 0.39832323750639725, 0.4019419918016668, 0.40563768543267653, 0.40597559577843684, 0.4116604304104511, 0.41309980279552805, 0.4135631336909694, 0.4219927370844362, 0.42219537807603946, 0.4235008006853942, 0.4255269129349205, 0.43277445643423224, 0.43586404228875136, 0.4421060022495082, 0.44481093158289964, 0.4495259967173965, 0.45123615589487376, 0.451304437361307, 0.4540236649746801, 0.4543970316134987, 0.45525272431784836, 0.45661428977110285, 0.461471552213069, 0.46247159201037175, 0.4632521038485004, 0.4657199193669504, 0.46697680155762133, 0.46720752424969464, 0.46847867223717243, 0.46911568293939554, 0.4706722969308973, 0.47382744246151176, 0.47396983801888093, 0.4740157978643539, 0.4745302720997035, 0.47464530852093045, 0.4748564205212178, 0.475505148662933, 0.475714073247409, 0.4771366259809781, 0.47789016894652675, 0.4820832284094369, 0.48235209430917114, 0.4842960300473896, 0.48469313087371324, 0.48646366124274276, 0.4884147759234943, 0.49569138296184134, 0.49593010482378025, 0.4961660476877367, 0.4964534438600978, 0.4999426453395259, 0.500801989349693, 0.5055646438597244, 0.5057746026826461, 0.5105476572462282, 0.5116446734198008, 0.5138925134335104, 
0.5147905302507828, 0.5151317211561134, 0.5160690231004603, 0.518284593125619, 0.5185966533524666, 0.5197450190578145, 0.5237723182306928, 0.5242564275541689, 0.527264966288114, 0.5305408076719184, 0.5318532782225502, 0.5405393421251731, 0.5419246313226787, 0.542009937998419, 0.5425402574551053, 0.5427217787254232, 0.5447018158982218, 0.5455990721687709, 0.5469385093997547, 0.5471721949048161, 0.5524806456937799, 0.5532880052767577, 0.566029375548983, 0.5666995550796005, 0.5723093719803845, 0.5728958438923695, 0.5740458384088846, 0.5754334269299971, 0.5797613000789289, 0.5797984973700215, 0.5825309725237796, 0.608215715951344, 0.646658703227222]\n0.42164476363354536\n"
]
],
[
[
"The usual threshold for saying a result is statstically sgnificant is p=<0.05. Unfortunately, in my example analysis we don't seem to have a globally significant result. Everything below here won't really work properly because there is not significant result in this case, however let's do it anyway so you can see what to do when you mine something else later and get a significant result!\n\n---\n\nWe also have our map of rho values, and the associated permutation test distribution, so we can plot the regions of significance overlaid on the rho-map and CT anatomy. To do this, we use matplotlib's imshow and contourf functions as below",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(10,10))\n\nax = fig.add_subplot(111)\n\n# First show the CT\nctImg = ax.imshow(referenceAnatomy[:,:,64], cmap='Greys_r')\n\n# Now add the t-map with some transparency\nrhomapImg = ax.imshow(rhoMap[:,:,64], alpha=0.3)\n\nplt.axis('off');\n\nneg_p005 = np.percentile(threshNeg, 0.05)\nneg_p010 = np.percentile(threshNeg, 0.10)\nneg_p015 = np.percentile(threshNeg, 0.15) ## Contour plot needs two levels, so we use p=0.05 & 0.10\npos_p005 = np.percentile(threshPos, 0.95)\npos_p010 = np.percentile(threshPos, 0.9)\npos_p015 = np.percentile(threshPos, 0.75)\n\n## Now do the contourplot at the 95% level for p=0.05\npos_contourplot = plt.contour(rhoMap[:,:,64], levels=[pos_p015, pos_p010, pos_p005], colors='r')\n# neg_contourplot = plt.contour(rhoMap[:,:,64], levels=[neg_p005, neg_p010, neg_p015], colors='g')",
"_____no_output_____"
]
],
[
[
"Now we have a complete pipeline to do image based data mining!\n\nNow use this pipeline to try mining against some of the other outcomes in the clinical data, for example:\n\n- Change in BMI between pre/post treatment\n- Change in skeletal muscle pre/post treatment\n- Change in other areas, e.g. fats\n- Skeletal muscle change against CT image density\n\nHave fun!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecc67b7544261d0af3d9c5764c36314225489266 | 11,106 | ipynb | Jupyter Notebook | docker/Notebooks/pseudo-distributed/pseudo-distributed.ipynb | aws-deepracer/ude-gym-bridge | 030c9eb56bbf4e892245b876ec06b58561199038 | [
"Apache-2.0"
] | 1 | 2022-03-25T07:20:46.000Z | 2022-03-25T07:20:46.000Z | docker/Notebooks/pseudo-distributed/pseudo-distributed.ipynb | aws-deepracer/ude-gym-bridge | 030c9eb56bbf4e892245b876ec06b58561199038 | [
"Apache-2.0"
] | null | null | null | docker/Notebooks/pseudo-distributed/pseudo-distributed.ipynb | aws-deepracer/ude-gym-bridge | 030c9eb56bbf4e892245b876ec06b58561199038 | [
"Apache-2.0"
] | null | null | null | 33.55287 | 133 | 0.571223 | [
[
[
"# Pre-requisite",
"_____no_output_____"
]
],
[
[
"%cd /Notebooks/pseudo-distributed/\n!git clone https://github.com/DLR-RM/stable-baselines3.git\n%cd /Notebooks/pseudo-distributed/stable-baselines3/\n!git checkout 58a9806\n!cp /Notebooks/pseudo-distributed/stable-baselines3.patch /Notebooks/pseudo-distributed/stable-baselines3/\n%cd /Notebooks/pseudo-distributed/stable-baselines3/\n!git apply --reject --whitespace=fix ./stable-baselines3.patch\n%cd /Notebooks/pseudo-distributed/",
"_____no_output_____"
],
[
"import numpy as np\nimport grpc\nimport os\nimport time\nimport gym\nfrom gym.spaces.space import Space\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport ude\nfrom ude import UDEEnvironment, RemoteEnvironmentAdapter, UDEToGymWrapper\nfrom typing import Union, Tuple, Dict, List, Any, Optional\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Customize the following settings\n\n1. HOSTNAME -> This will the name of the host where gym environment is running\n2. ENV_NAME -> UDE paper experiments are the following gym environment Hopper-v2, LunarLanderContinuous-v2, Pendulum-v1\n3. ALGO -> UDE paper experiments are the following PPO, DDPG, SAC algorithm",
"_____no_output_____"
]
],
[
[
"HOSTNAME = \"\" # Example: HOSTNAME = \"ec2-54-221-17-66.compute-1.amazonaws.com\"\nENV_NAME = \"Hopper-v2\" # This experiment is run for Hopper-v2, LunarLanderContinuous-v2, Pendulum-v1\nALGO = \"PPO\" # Supported are PPO, DDPG, SAC",
"_____no_output_____"
],
[
"PORT = 80\nBASE_PATH = \"/Notebooks/pseudo-distributed\"",
"_____no_output_____"
],
[
"remote_env_adapter = RemoteEnvironmentAdapter(HOSTNAME, port=PORT)\nude_env = UDEEnvironment(remote_env_adapter)\nude_env.side_channel.send(\"env\", ENV_NAME)\nude_env.reset()\nenv = UDEToGymWrapper(ude_env=ude_env, agent_name=\"agent0\")\n",
"_____no_output_____"
],
[
"%cd {BASE_PATH}/stable-baselines3\n\nfrom stable_baselines3 import PPO\nfrom stable_baselines3 import SAC\nfrom stable_baselines3 import DDPG\nfrom stable_baselines3.common.evaluation import evaluate_policy",
"_____no_output_____"
],
[
"def write_metrics(path, data):\n with open(path, \"a+\") as fp:\n fp.write(data)",
"_____no_output_____"
]
],
[
[
"# Intialize environment",
"_____no_output_____"
]
],
[
[
"model_path = \"{}/output/models/{}-MlpPolicy-pseudo-distributed-{}\".format(BASE_PATH, ALGO, ENV_NAME)\nexperiment_results_path = \"{}/output/experiment_results/{}-MlpPolicy-pseudo-distributed-{}\".format(BASE_PATH, ALGO, ENV_NAME)",
"_____no_output_____"
],
[
"model_path",
"_____no_output_____"
],
[
"%mkdir -p {model_path}\n%mkdir -p {experiment_results_path}",
"_____no_output_____"
],
[
"seed_list = [0, 1, 6, 7, 9]\ntotal_timesteps = 1000000\nevals_between_training_step = 1000",
"_____no_output_____"
]
],
[
[
"# Train with different seeds",
"_____no_output_____"
]
],
[
[
"for seed in seed_list:\n model_seed_path = \"{}/seed-{}\".format(model_path, seed)\n experiment_result_seed_path = \"{}/seed-{}.txt\".format(experiment_results_path, seed)\n step_experiment_result_seed_path = \"{}/step-seed-{}.txt\".format(experiment_results_path, seed)\n if ALGO == \"PPO\":\n model = PPO(policy=\"MlpPolicy\", env=env, verbose=0, seed=seed,\n metric_path = step_experiment_result_seed_path)\n elif ALGO == \"SAC\":\n model = SAC(policy=\"MlpPolicy\", env=env, verbose=0, seed=seed,\n metric_path = step_experiment_result_seed_path)\n elif ALGO == \"DDPG\":\n model = DDPG(policy=\"MlpPolicy\", env=env, verbose=0, seed=seed,\n metric_path = step_experiment_result_seed_path)\n else:\n raise Exception(\"Supported ALGO values are PPO, SAC, DDPG\")\n for i in range(total_timesteps//evals_between_training_step):\n model.increment_iteration_number()\n \n start_training_time = time.time()\n model.learn(total_timesteps=evals_between_training_step)\n total_training_time = time.time() - start_training_time\n if i % 10 == 0:\n model.save(model_seed_path)\n env.reset()\n start_eval_time = time.time()\n mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=5)\n total_eval_time = time.time() - start_eval_time\n data = \"{}|{}|{}|{}|{}|{}|{}|{}\\n\".format(seed, i,\n start_training_time, total_training_time,\n start_eval_time, total_eval_time,\n mean_reward, std_reward) \n write_metrics(experiment_result_seed_path, data)\n model.save(model_seed_path)\n del model",
"_____no_output_____"
]
],
[
[
"# Plot graphs",
"_____no_output_____"
]
],
[
[
"seeds_mean_reward_list = []\nseeds_timesteps_list = []\nfor seed in seed_list:\n experiment_result_seed_path = \"{}/seed-{}.txt\".format(experiment_results_path, seed)\n df = pd.read_csv(experiment_result_seed_path, sep=\"|\",\n names=[\"seed\", \"rollout\",\n \"start_training_time\", \"total_training_time\",\n \"start_eval_time\", \"total_eval_time\",\n \"mean_reward\", \"std_reward\"])\n df = pd.read_csv(experiment_result_seed_path, sep=\"|\",\n names=[\"seed\", \"rollouts\",\n \"start_training_time\", \"total_training_time\",\n \"start_eval_time\", \"total_eval_time\",\n \"mean_reward\", \"std_reward\"])\n df['timesteps'] = df['rollouts'] * evals_between_training_step\n df['cumulative_training_time'] = df['total_training_time'].cumsum()\n df['cumulative_evaluation_time'] = df['total_eval_time'].cumsum()\n seeds_mean_reward_list.append(df[\"mean_reward\"].to_numpy())\n seeds_timesteps_list.append(df[\"timesteps\"].to_numpy())\n \n # Plotting graphs\n fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20, 6))\n fig.suptitle(\"Seed = {}\".format(seed))\n \n ax1.set_title(\"mean_reward vs timesteps\")\n ax1.plot(df['timesteps'], df['mean_reward'])\n ax1.set(xlabel='timesteps', ylabel='mean_reward')\n \n ax2.set_title(\"mean_reward vs cumulative_training_time\")\n ax2.plot(df['cumulative_training_time'], df['mean_reward'])\n ax2.set(xlabel='cumulative_training_time (seconds)', ylabel='mean_reward')\n\n ax3.set_title(\"mean_reward vs cumulative_evaluation_time\")\n ax3.plot(df['cumulative_evaluation_time'], df['mean_reward'])\n ax3.set(xlabel='cumulative_evaluation_time (seconds)', ylabel='mean_reward')\n fig.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1, 1, figsize=(20, 6))\nfig.suptitle(\"Mean reward across all seeds\")\n\navg_reward_all_seeds = np.array(seeds_mean_reward_list).mean(axis=0)\nax.set_title(\"mean_reward vs timesteps\")\nax.plot(seeds_timesteps_list[0], avg_reward_all_seeds)\nax.set(xlabel='timesteps', ylabel='mean_reward')\nfig.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecc690f274f7692303dea6dd9e429807081ec784 | 4,089 | ipynb | Jupyter Notebook | notebooks/2.0-me-document-representation.ipynb | gefei-htw/academia_tag_recommender | aea56d36e16584824ef217d1f2caaee3414098f8 | [
"MIT"
] | null | null | null | notebooks/2.0-me-document-representation.ipynb | gefei-htw/academia_tag_recommender | aea56d36e16584824ef217d1f2caaee3414098f8 | [
"MIT"
] | null | null | null | notebooks/2.0-me-document-representation.ipynb | gefei-htw/academia_tag_recommender | aea56d36e16584824ef217d1f2caaee3414098f8 | [
"MIT"
] | 1 | 2021-01-29T19:41:47.000Z | 2021-01-29T19:41:47.000Z | 37.172727 | 490 | 0.649792 | [
[
[
"# Document Representation\n\nThis notebook evaluates methods for document representation using the [academia.stackexchange.com](https://academia.stackexchange.com/) data dump.\n\nThe process of document representation is usually split into three steps: Preprocessing, Transformation and Dimension Reduction.\n\n## Table of Contents\n- [Preprocessing](#preprocessing)\n- [Transformation](#transformation)\n- [Dimension Reduction](#dim_reduce)",
"_____no_output_____"
],
[
"<a id=preprocessing/>",
"_____no_output_____"
],
[
"## Preprocessing\n\nIn text preprocessing the defines which characters of the documents text will be part of the document representation. That decision is split into the following parts:\n\n* **Removing characters**\n\n In a preprocessing step parts of the text can be removed that should not be part of the document representation (e.g. html tags, numbers, punctuation).\n\n* **Tokenization**:\n\n In the tokenization step the text gets split into a vector of tokens. It needs to be decided where to split the text (e.g. spaces, punctuation) and what text should appear in the vector (e.g. minimum length). Furthermore it can be decided if unigrams (one word), bigrams (phrases consisting of two words) or even trigrams (phrases consisting of three words) should be tokens (called 'ngrams').\n\n* **Normalization**\n \n Another decision has to made regarding how to differentiate between tokens. It might be not relevant whether text includes *student* or *students*, so it might be a could option to choose the same representation for both of them. This step is called *Normalization*. There are different normalization algorithms that either stem words to their word stem (e.g. Porter Stemmer, Lancaster Stemmer) or choose a generell represation for words describing the same topic (Lemmatizer).\n\n* **Stop word removal**\n\n In natural speech there are words that appear often regardless of the meaning of the text. For text classification those might not be relevant (e.g. articles, pronouns) and thuse be removed from the document representation.",
"_____no_output_____"
],
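[
"A minimal sketch of these preprocessing steps (illustrative only, assuming NLTK and scikit-learn are available; the exact choices depend on the task):\n\n```python\nimport re\nfrom nltk.stem import PorterStemmer\nfrom sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS\n\nstemmer = PorterStemmer()\n\ndef tokenize(text):\n    # lowercase, split on non-letter characters, drop stop words and very short tokens, then stem\n    tokens = re.findall(r'[a-z]+', text.lower())\n    return [stemmer.stem(t) for t in tokens if len(t) > 2 and t not in ENGLISH_STOP_WORDS]\n\n# unigrams and bigrams built from the preprocessed tokens\nvectorizer = CountVectorizer(tokenizer=tokenize, ngram_range=(1, 2))\n```",
"_____no_output_____"
],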
[
"<a id=transformation/>",
"_____no_output_____"
],
[
"## Transformation\n\nThe process of text transformation (also called feature extraction) converts the tokens of the text into a vector representation to feed to a classifier. There are two often used approaches to text transformation:\n\n- [Bag of words](2.1-me-bag-of-words.ipynb)\n- [Embedding](2.2-me-embedding.ipynb)",
"_____no_output_____"
],
[
"<a id=dim_reduce/>",
"_____no_output_____"
],
[
"## Dimension Reduction\n\nEspecially the Bag of words approach can result in a vector of very high dimensionality. If each word in a corpus is used as a feature, the number of features used would be as high as the number of words in the corpus. Therefore is might be necessary to [reduce the dimensions](2.3-me-dimensionality-reduction.ipynb) (also called feature selection).",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecc694e5ec6eb8ad28f400370b92f64655989df3 | 11,584 | ipynb | Jupyter Notebook | object_detection/human_detection_export.ipynb | pcrete/humanet | a956c0903d4a6ebff987da723ce8e80ada692585 | [
"Apache-2.0"
] | 7 | 2019-10-25T12:35:02.000Z | 2022-03-24T02:14:43.000Z | object_detection/human_detection_export.ipynb | pcrete/humanet | a956c0903d4a6ebff987da723ce8e80ada692585 | [
"Apache-2.0"
] | null | null | null | object_detection/human_detection_export.ipynb | pcrete/humanet | a956c0903d4a6ebff987da723ce8e80ada692585 | [
"Apache-2.0"
] | 2 | 2019-11-30T03:22:37.000Z | 2020-10-06T07:17:36.000Z | 33.191977 | 131 | 0.517697 | [
[
[
"from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\nprint('importing libraries..')\n\nimport numpy as np\nimport os\nimport six.moves.urllib as urllib\nimport sys\nimport tarfile\nimport tensorflow as tf\nimport zipfile\nimport cv2\nimport pandas as pd\nimport difflib\nimport time\nimport plotly.plotly as py\nimport plotly.graph_objs as go\nfrom skimage.measure import compare_ssim as ssim\nfrom tqdm import tqdm_notebook\n\nfrom collections import defaultdict\nfrom io import StringIO\nfrom matplotlib import pyplot as plt\nfrom PIL import Image\n\nsys.path.insert(0, os.path.abspath(\"..\"))\nfrom utils import label_map_util\nfrom utils import visualization_utils as vis_util\n\n# MODEL_NAME = 'faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017'\nMODEL_NAME = 'faster_rcnn_resnet101_coco_11_06_2017'\n\nPATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'\nPATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')\n\nNUM_CLASSES = 90\n\nprint ('loading model..')\ndetection_graph = tf.Graph()\nwith detection_graph.as_default():\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:\n serialized_graph = fid.read()\n od_graph_def.ParseFromString(serialized_graph)\n tf.import_graph_def(od_graph_def, name='')\nprint('model loaded')\n\nlabel_map = label_map_util.load_labelmap(PATH_TO_LABELS)\ncategories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)\ncategory_index = label_map_util.create_category_index(categories)",
"_____no_output_____"
]
],
[
[
"## Human Detection Process",
"_____no_output_____"
]
],
[
[
"def detection(path):\n points_objs = {}\n frame_objs = {}\n start = time.time()\n \n with detection_graph.as_default():\n with tf.Session(graph=detection_graph) as sess:\n skip = 0\n cap = cv2.VideoCapture(path)\n id_frame = 1\n id_center = 0\n \n while(True):\n ret, frame = cap.read()\n if cv2.waitKey(1) & 0xFF == ord('q'): break\n skip = 0\n image_np = np.array(frame)\n if(image_np.shape == ()): break\n\n print('Frame ID:', id_frame, '\\tTime:', '{0:.2f}'.format(time.time()-start), 'seconds')\n frame_objs[id_frame] = image_np\n image_np_expanded = np.expand_dims(image_np, axis=0)\n\n\n image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')\n boxes = detection_graph.get_tensor_by_name('detection_boxes:0')\n scores = detection_graph.get_tensor_by_name('detection_scores:0')\n classes = detection_graph.get_tensor_by_name('detection_classes:0')\n num_detections = detection_graph.get_tensor_by_name('num_detections:0')\n\n (boxes, scores, classes, num_detections) = sess.run([boxes, scores, classes, num_detections],\n feed_dict={image_tensor: image_np_expanded})\n\n boxes = np.squeeze(boxes)\n classes = np.squeeze(classes).astype(np.int32)\n scores = np.squeeze(scores)\n\n count_boxes = 0\n thresh = 0.3\n max_boxes = 50\n\n for i, c in enumerate(classes):\n if (c == 1 and (scores[i] > thresh) and (count_boxes < max_boxes)):\n im_height = image_np.shape[0]\n im_width = image_np.shape[1]\n ymin, xmin, ymax, xmax = boxes[i]\n \n (left, right, top, bottom) = (int(xmin*im_width), int(xmax*im_width),\n int(ymin*im_height), int(ymax*im_height))\n points_objs[id_center] = {\n 'frame': id_frame,\n 'bbox': [left, top, right-left, bottom-top],\n 'score': scores[i]\n }\n count_boxes += 1\n\n id_center += 1\n id_frame += 1\n \n cap.release()\n cv2.destroyAllWindows()\n\n return points_objs, frame_objs",
"_____no_output_____"
]
],
[
[
"## Testing",
"_____no_output_____"
]
],
[
[
"points_objs, frame_objs = detection(path='../data/1_BEST/1_BEST.mp4')",
"Frame ID: 0 \tTime: 0.35 seconds\nFrame ID: 1 \tTime: 11.38 seconds\nFrame ID: 2 \tTime: 12.73 seconds\nFrame ID: 3 \tTime: 14.08 seconds\nFrame ID: 4 \tTime: 15.42 seconds\nFrame ID: 5 \tTime: 16.75 seconds\nFrame ID: 6 \tTime: 18.10 seconds\nFrame ID: 7 \tTime: 19.45 seconds\nFrame ID: 8 \tTime: 20.80 seconds\nFrame ID: 9 \tTime: 22.16 seconds\nFrame ID: 10 \tTime: 23.52 seconds\nFrame ID: 11 \tTime: 24.87 seconds\nFrame ID: 12 \tTime: 26.24 seconds\nFrame ID: 13 \tTime: 27.60 seconds\nFrame ID: 14 \tTime: 28.96 seconds\nFrame ID: 15 \tTime: 30.33 seconds\nFrame ID: 16 \tTime: 31.71 seconds\nFrame ID: 17 \tTime: 33.07 seconds\nFrame ID: 18 \tTime: 34.42 seconds\nFrame ID: 19 \tTime: 35.78 seconds\nFrame ID: 20 \tTime: 37.13 seconds\nFrame ID: 21 \tTime: 38.48 seconds\nFrame ID: 22 \tTime: 39.84 seconds\nFrame ID: 23 \tTime: 41.19 seconds\nFrame ID: 24 \tTime: 42.54 seconds\nFrame ID: 25 \tTime: 43.93 seconds\nFrame ID: 26 \tTime: 45.28 seconds\nFrame ID: 27 \tTime: 46.63 seconds\nFrame ID: 28 \tTime: 47.98 seconds\nFrame ID: 29 \tTime: 49.33 seconds\nFrame ID: 30 \tTime: 50.70 seconds\nFrame ID: 31 \tTime: 52.06 seconds\nFrame ID: 32 \tTime: 53.45 seconds\nFrame ID: 33 \tTime: 54.84 seconds\nFrame ID: 34 \tTime: 56.20 seconds\nFrame ID: 35 \tTime: 57.56 seconds\nFrame ID: 36 \tTime: 58.91 seconds\nFrame ID: 37 \tTime: 60.26 seconds\nFrame ID: 38 \tTime: 61.61 seconds\nFrame ID: 39 \tTime: 62.97 seconds\nFrame ID: 40 \tTime: 64.32 seconds\nFrame ID: 41 \tTime: 65.66 seconds\nFrame ID: 42 \tTime: 67.02 seconds\nFrame ID: 43 \tTime: 68.37 seconds\nFrame ID: 44 \tTime: 69.73 seconds\nFrame ID: 45 \tTime: 71.07 seconds\nFrame ID: 46 \tTime: 72.43 seconds\nFrame ID: 47 \tTime: 73.78 seconds\nFrame ID: 48 \tTime: 75.13 seconds\nFrame ID: 49 \tTime: 76.47 seconds\nFrame ID: 50 \tTime: 77.82 seconds\nFrame ID: 51 \tTime: 79.16 seconds\nFrame ID: 52 \tTime: 80.53 seconds\nFrame ID: 53 \tTime: 81.88 seconds\nFrame ID: 54 \tTime: 83.23 seconds\nFrame ID: 55 \tTime: 84.58 seconds\nFrame ID: 56 \tTime: 85.93 seconds\nFrame ID: 57 \tTime: 87.27 seconds\nFrame ID: 58 \tTime: 88.62 seconds\nFrame ID: 59 \tTime: 89.96 seconds\nFrame ID: 60 \tTime: 91.31 seconds\nFrame ID: 61 \tTime: 92.66 seconds\nFrame ID: 62 \tTime: 94.00 seconds\nFrame ID: 63 \tTime: 95.36 seconds\nFrame ID: 64 \tTime: 96.71 seconds\nFrame ID: 65 \tTime: 98.05 seconds\nFrame ID: 66 \tTime: 99.40 seconds\nFrame ID: 67 \tTime: 100.75 seconds\n"
],
[
"df_points = pd.DataFrame.from_dict(points_objs, orient='index')\ndf_points.head()",
"_____no_output_____"
],
[
"df_boxes = []\nfor left,top,width,height in df_points['bbox'].as_matrix():\n df_boxes.append([left,top,width,height])\ndf_boxes = pd.DataFrame.from_records(df_boxes)\ndf_boxes.head()",
"_____no_output_____"
],
[
"df_minus_ones = pd.DataFrame.from_records([[-1] for x in range(len(df_boxes))])\ndf_minus_ones.head()",
"_____no_output_____"
],
[
"df_MOT = pd.concat([df_points['frame'], \n df_minus_ones, \n df_boxes, \n df_points['score'], \n df_minus_ones, \n df_minus_ones, \n df_minus_ones], axis=1)\ndf_MOT.head(10)",
"_____no_output_____"
],
[
"df_MOT.to_csv('../deep_sort/MOT15/train/PETS09-S2L1/det/det.txt', header=None, index=False)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc69ce05f797c4b6636c1e32727a1f3b8d71f56 | 10,529 | ipynb | Jupyter Notebook | genius_crawler/aiohttp test.ipynb | crawftv/music_recommender | b0e1569e7d9271341efde7df07018e34390c03b7 | [
"BSD-2-Clause"
] | null | null | null | genius_crawler/aiohttp test.ipynb | crawftv/music_recommender | b0e1569e7d9271341efde7df07018e34390c03b7 | [
"BSD-2-Clause"
] | null | null | null | genius_crawler/aiohttp test.ipynb | crawftv/music_recommender | b0e1569e7d9271341efde7df07018e34390c03b7 | [
"BSD-2-Clause"
] | null | null | null | 56.005319 | 1,271 | 0.619432 | [
[
[
"%load_ext blackcellmagic",
"_____no_output_____"
],
[
"from decouple import config\nimport json\nimport requests\nimport sys\nimport asyncio\nfrom concurrent.futures import ThreadPoolExecutor\nimport nest_asyncio\nfrom itertools import zip_longest\nimport aiohttp\nfrom aiohttp import ClientSession\n\nnest_asyncio.apply()",
"_____no_output_____"
],
[
"CLIENT_SECRET= config(\"CLIENT_SECRET\")\nCLIENT_ID = config(\"CLIENT_ID\")\nCLIENT_ACCESS_TOKEN = config(\"CLIENT_ACCESS_TOKEN\")",
"_____no_output_____"
],
[
"import requests\n\n\ndef request_song_info(session, song_num, song_urls):\n base_url = \"https://api.genius.com/songs/\" + str(song_num)\n headers = {\"Authorization\": \"Bearer \" + CLIENT_ACCESS_TOKEN}\n response = requests.get(base_url, headers=headers)\n try:\n if response.json()[\"meta\"][\"status\"] == 200:\n song_urls.append(\n (\n song_num,\n response.json()[\"response\"][\"song\"][\"title\"],\n response.json()[\"response\"][\"song\"][\"url\"],\n )\n )\n else:\n pass\n except:\n pass",
"_____no_output_____"
],
[
"async def get_index_data_asynchronous(min_song, max_song, song_urls):\n \"\"\"\n 1. Establish an executor and number of workers\n 2. Establish the session\n 3. Establish the event loop\n 4. Create the task by list comprenhensions\n 5. Gather tasks.\n \"\"\"\n with ClientSession as session:\n loop = asyncio.get_event_loop()\n tasks = [\n loop.run_in_executor(\n executor, request_song_info, *(session, song_num, song_urls)\n )\n for song_num in range(min_song, max_song)\n ]\n for response in await asyncio.gather(*tasks):\n pass\n\n\ndef execute_async_index_event_loop(min_song, max_song, song_urls):\n \"\"\"\n This function does something analogous to compiling the get_data_asynchronously function,\n Then it executes loop.\n 1. Call the get_data_function\n 2. Get the event_loop\n 3. Run the tasks (Much easier to understand in python 3.7, \"ensure_future\" was changed to \"create_task\")\n 4. Edge_list and top_interactions will be passed to the next functions\n \"\"\"\n future = asyncio.create_task(\n get_index_data_asynchronous(min_song, max_song, song_urls)\n )\n loop = asyncio.get_event_loop()\n loop.run_until_complete(future)\n return song_urls",
"_____no_output_____"
],
[
"%%time\nsong_urls = []\nmin_song=0\ntest_song =100\ntest_song_urls = execute_async_index_event_loop(min_song,test_song,song_urls)",
"_____no_output_____"
],
[
"len(test_song_urls)",
"_____no_output_____"
],
[
"%%time\nmin_song = max([s[0] for s in song_urls])+1\nmax_song = 100000+min_song\nsong_urls = execute_async_index_event_loop(min_song, max_song,song_urls)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc6a54868e51615522e34b8c21dd20c8822da48 | 16,395 | ipynb | Jupyter Notebook | in_progress/Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb | rhaas80/nrpytutorial | 4398cd6b5a071c8fb8b2b584a01f07a4591dd5f4 | [
"BSD-2-Clause"
] | null | null | null | in_progress/Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb | rhaas80/nrpytutorial | 4398cd6b5a071c8fb8b2b584a01f07a4591dd5f4 | [
"BSD-2-Clause"
] | null | null | null | in_progress/Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb | rhaas80/nrpytutorial | 4398cd6b5a071c8fb8b2b584a01f07a4591dd5f4 | [
"BSD-2-Clause"
] | null | null | null | 44.072581 | 685 | 0.584081 | [
[
[
"<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-59152712-8\"></script>\n<script>\n window.dataLayer = window.dataLayer || [];\n function gtag(){dataLayer.push(arguments);}\n gtag('js', new Date());\n\n gtag('config', 'UA-59152712-8');\n</script>\n\n# Interpolating Metric Quantities on Cell Faces\n\n## Author: Patrick Nelson\n\n**Notebook Status:** <font color='green'><b>Validated</b></font>\n\n**Validation Notes:** This module will be self-validated against [its module](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_FCVAL.py) and will also be validated against the corresponding algorithm in the old `GiRaFFE` code in [this tutorial](Tutorial-Start_to_Finish-GiRaFFE_NRPy-FCVAL.ipynb).\n\n### This module presents the functionality of [GiRaFFE_NRPy_Metric_Face_Values.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py) .\nThis notebook presents the macros from the original `GiRaFFE` that provide the values of the metric gridfunctions interpolated to the cell faces along with the code needed to implement this in NRPy. \n",
"_____no_output_____"
],
[
"<a id='toc'></a>\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis notebook is organized as follows\n\n0. [Step 0](#prelim): Preliminaries\n1. [Step 1](#interpolator): The Interpolator\n 1. [Step 1.a](#macros): Interpolator coefficients and definition\n 1. [Step 1.b](#gf_struct): Create an array to easily define the gridfunctions we want to interpolate.\n 1. [Step 1.c](#loops): Define the loop parameters and body\n1. [Step 2](#code_validation): Code Validation against original C code\n1. [Step 3](#latex_pdf_output): Output this notebook to $\\LaTeX$-formatted PDF file",
"_____no_output_____"
],
[
"<a id='prelim'></a>\n\n# Step 0: Preliminaries \\[Back to [top](#toc)\\]\n$$\\label{prelim}$$\n\nThis first block of code just sets up a subdirectory within `GiRaFFE_standalone_Ccodes/` to which we will write the C code and adds core NRPy+ functionality to `sys.path`.",
"_____no_output_____"
]
],
[
[
"# Step 0: Add NRPy's directory to the path\n# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory\nimport os,sys\nnrpy_dir_path = os.path.join(\"..\")\nif nrpy_dir_path not in sys.path:\n sys.path.append(nrpy_dir_path)\n\nfrom outputC import outCfunction # NRPy+: Core C code output module\nimport cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\nCcodesdir = \"GiRaFFE_standalone_Ccodes/FCVAL\"\ncmd.mkdir(os.path.join(Ccodesdir))",
"_____no_output_____"
]
],
[
[
"<a id='interpolator'></a>\n\n# Step 1: The Interpolator \\[Back to [top](#toc)\\]\n$$\\label{interpolator}$$\n\nHere, we we will write the code necessary to interpolate the metric gridfunction $\\alpha, \\beta^i, \\gamma_{ij}$ onto the cell faces. These values will be necessary to compute fluxes of the Poynting vector and electric field through those faces.\n\n<a id='macros'></a>\n\n## Step 1.a: Interpolator coefficients and definition \\[Back to [top](#toc)\\]\n$$\\label{macros}$$\n\nFirst, we will define the functional form of our interpolator. At some point on our grid $i$, we will calculate the value of some gridfunction $Q$ at position $i-\\tfrac{1}{2}$ with\n$$\nQ_{i-1/2} = a_{i-2} Q_{i-2} +a_{i-1} Q_{i-1} +a_{i} Q_{i} +a_{i+1} Q_{i+1}\n$$\nand the coefficients we will use for it, \n\\begin{align}\na_{i-2} &= -0.0625 \\\\\na_{i-1} &= 0.5625 \\\\\na_{i} &= 0.5625 \\\\\na_{i+1} &= -0.0625 \\\\\n\\end{align}.",
"_____no_output_____"
]
],
[
[
"%%writefile $Ccodesdir/interpolate_metric_gfs_to_cell_faces.h\n// Side note: the following values could be used for cell averaged gfs:\n// am2=-1.0/12.0, am1=7.0/12.0, a0=7.0/12.0, a1=-1.0/12.0\n// However, since the metric gfs store the grid point values instead of the cell average,\n// the following coefficients should be used:\n// am2 = -1/16, am1 = 9/16, a0 = 9/16, a1 = -1/16\n// This will yield the third-order-accurate face values at m-1/2,\n// using values specified at {m-2,m-1,m,m+1}\n#define AM2 -0.0625\n#define AM1 0.5625\n#define A0 0.5625\n#define A1 -0.0625\n#define COMPUTE_FCVAL(METRICm2,METRICm1,METRIC,METRICp1) (AM2*(METRICm2) + AM1*(METRICm1) + A0*(METRIC) + A1*(METRICp1))\n",
"Overwriting GiRaFFE_standalone_Ccodes/FCVAL/interpolate_metric_gfs_to_cell_faces.h\n"
]
],
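[
[
"# Optional sanity check of the stencil above (illustrative sketch only, not part of the generated C code).\n# Assuming a uniform grid, the points {m-2, m-1, m, m+1} sit at -3/2, -1/2, +1/2, +3/2 relative to the m-1/2 face,\n# so the weights of the cubic (third-order) interpolant evaluated at that face should reproduce AM2, AM1, A0, A1.\nimport numpy as np\nx = np.array([-1.5, -0.5, 0.5, 1.5])\nV = np.vander(x, 4, increasing=True)                             # V[j,k] = x_j**k\nweights = np.linalg.solve(V.T, np.array([1.0, 0.0, 0.0, 0.0]))   # exactness for 1, x, x^2, x^3 at x = 0\nprint(weights)                                                   # expected: [-0.0625  0.5625  0.5625 -0.0625]",
"_____no_output_____"
]
],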
[
[
"<a id='gf_struct'></a>\n\n## Step 1.b: Create an array to easily define the gridfunctions we want to interpolate. \\[Back to [top](#toc)\\]\n$$\\label{gf_struct}$$\n\nWe will need to apply this interpolation to each gridpoint for several gridfunctions: the lapse $\\alpha$, the shift $\\beta^i$, and the three-metric $\\gamma_{ij}$. Consider that in NRPy+, each gridfunction is assigned an integer identifier with the C macro `#define`. So, the simplest (and shortest to write!) way to ensure we hit each of these is to create arrays that list each of these identifiers in order, so we can always hit the write gridfunction no matter where each gridfunction lies in the list. We use two arrays; the first identifies the usual gridfunctions from which we will read, and the second identifies the face-sampled gridfunctions to which we will write. ",
"_____no_output_____"
]
],
[
[
"%%writefile -a $Ccodesdir/interpolate_metric_gfs_to_cell_faces.h\n\nconst int metric_gfs_list[10] = {GAMMADD00GF,\n GAMMADD01GF,\n GAMMADD02GF,\n GAMMADD11GF,\n GAMMADD12GF,\n GAMMADD22GF,\n BETAU0GF,\n BETAU1GF,\n BETAU2GF,\n ALPHAGF};\n\nconst int metric_gfs_face_list[10] = {GAMMA_FACEDD00GF,\n GAMMA_FACEDD01GF,\n GAMMA_FACEDD02GF,\n GAMMA_FACEDD11GF,\n GAMMA_FACEDD12GF,\n GAMMA_FACEDD22GF,\n BETA_FACEU0GF,\n BETA_FACEU1GF,\n BETA_FACEU2GF,\n ALPHA_FACEGF};\n\nconst int num_metric_gfs = 10;",
"Appending to GiRaFFE_standalone_Ccodes/FCVAL/interpolate_metric_gfs_to_cell_faces.h\n"
]
],
[
[
"<a id='loops'></a>\n\n## Step 1.c: Define the loop parameters and body \\[Back to [top](#toc)\\]\n$$\\label{loops}$$\n\nNext, we will write the function that loops over the entire grid. One additional parameter to consider here is the direction in which we need to do the interpolation. This direction exactly corresponds to the parameter `flux_dirn` used in the calculation of the flux of the [Poynting vector](Tutorial-GiRaFFE_NRPy-Stilde-flux.ipynb) and [electric field](Tutorial-GiRaFFE_NRPy-Induction_Equation.ipynb).\n\nThe outermost loop will iterate over the gridfunctions we listed above. Nested inside of that, there will be three loops that go through the grid in the usual way. However, the upper bound will be a little unusual. Instead of covering all points or all interior points, we will write these loops to cover all interior points *and one extra past that*. This is because we have defined our interpolation on the $i-\\tfrac{1}{2}$ face of a cell, but any given calculation will require both that and an interpolation on the $i+\\tfrac{1}{2}$ face as well.",
"_____no_output_____"
]
],
[
[
"# FIXME: The old oop bounds are NGHOSTS to Nxx+NGHOSTS+1, but converting reconstructed velocities to\n# and from Valencia requires extra points, so we crank it to maximum.\ndesc = \"Interpolate metric gridfunctions to cell faces\"\nname = \"interpolate_metric_gfs_to_cell_faces\"\ninterp_Cfunc = outCfunction(\n outfile = \"returnstring\", desc=desc, name=name,\n params =\"const paramstruct *params,REAL *auxevol_gfs,const int flux_dirn\",\n preloop =\"\"\" int in_gf,out_gf;\n REAL Qm2,Qm1,Qp0,Qp1;\n\n\"\"\" ,\n body =\"\"\" for(int gf = 0;gf < num_metric_gfs;gf++) {\n in_gf = metric_gfs_list[gf];\n out_gf = metric_gfs_face_list[gf];\n for (int i2 = 2;i2 < Nxx_plus_2NGHOSTS2-1;i2++) {\n for (int i1 = 2;i1 < Nxx_plus_2NGHOSTS1-1;i1++) {\n for (int i0 = 2;i0 < Nxx_plus_2NGHOSTS0-1;i0++) {\n Qm2 = auxevol_gfs[IDX4S(in_gf,i0-2*kronecker_delta[flux_dirn][0],i1-2*kronecker_delta[flux_dirn][1],i2-2*kronecker_delta[flux_dirn][2])];\n Qm1 = auxevol_gfs[IDX4S(in_gf,i0-kronecker_delta[flux_dirn][0],i1-kronecker_delta[flux_dirn][1],i2-kronecker_delta[flux_dirn][2])];\n Qp0 = auxevol_gfs[IDX4S(in_gf,i0,i1,i2)];\n Qp1 = auxevol_gfs[IDX4S(in_gf,i0+kronecker_delta[flux_dirn][0],i1+kronecker_delta[flux_dirn][1],i2+kronecker_delta[flux_dirn][2])];\n auxevol_gfs[IDX4S(out_gf,i0,i1,i2)] = COMPUTE_FCVAL(Qm2,Qm1,Qp0,Qp1);\n }\n }\n }\n }\n#ifdef WORKAROUND_ENABLED\n for (int i2 = 2;i2 < Nxx_plus_2NGHOSTS2-1;i2++) {\n for (int i1 = 2;i1 < Nxx_plus_2NGHOSTS1-1;i1++) {\n for (int i0 = 2;i0 < Nxx_plus_2NGHOSTS0-1;i0++) {\n Qm2 = auxevol_gfs[IDX4S(PHIGF,i0-2*kronecker_delta[flux_dirn][0],i1-2*kronecker_delta[flux_dirn][1],i2-2*kronecker_delta[flux_dirn][2])];\n Qm1 = auxevol_gfs[IDX4S(PHIGF,i0-kronecker_delta[flux_dirn][0],i1-kronecker_delta[flux_dirn][1],i2-kronecker_delta[flux_dirn][2])];\n Qp0 = auxevol_gfs[IDX4S(PHIGF,i0,i1,i2)];\n Qp1 = auxevol_gfs[IDX4S(PHIGF,i0+kronecker_delta[flux_dirn][0],i1+kronecker_delta[flux_dirn][1],i2+kronecker_delta[flux_dirn][2])];\n auxevol_gfs[IDX4S(PSI6_TEMPGF,i0,i1,i2)] = COMPUTE_FCVAL(Qm2,Qm1,Qp0,Qp1);\n }\n }\n }\n#endif /*WORKAROUND_ENABLED*/\n\n\"\"\",\n rel_path_for_Cparams=os.path.join(\"../\"))\n\nwith open(os.path.join(Ccodesdir,\"interpolate_metric_gfs_to_cell_faces.h\"),\"a\") as file:\n file.write(interp_Cfunc)",
"_____no_output_____"
]
],
[
[
"<a id='code_validation'></a>\n\n# Step 2: Code Validation against `GiRaFFE_NRPy_FCVAL.py` \\[Back to [top](#toc)\\]\n$$\\label{code_validation}$$\n\nNow, we will confirm that the code we have written here is the same as that generated by the module [`GiRaFFE_NRPy_FCVAL.py`](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_FCVAL.py).",
"_____no_output_____"
]
],
[
[
"# Define the directory that we wish to validate against:\nvaldir = \"GiRaFFE_NRPy/GiRaFFE_Ccode_library/FCVAL/\"\n\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_Metric_Face_Values as FCVAL\nFCVAL.GiRaFFE_NRPy_FCVAL(valdir)\n\nimport difflib\nimport sys\n\nprint(\"Printing difference between original C code and this code...\")\n# Open the files to compare\nfile = \"interpolate_metric_gfs_to_cell_faces.h\"\n\nprint(\"Checking file \" + file)\nwith open(os.path.join(valdir,file)) as file1, open(os.path.join(Ccodesdir,file)) as file2:\n # Read the lines of each file\n file1_lines = file1.readlines()\n file2_lines = file2.readlines()\n num_diffs = 0\n for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(valdir+file), tofile=os.path.join(Ccodesdir+file)):\n sys.stdout.writelines(line)\n num_diffs = num_diffs + 1\n if num_diffs == 0:\n print(\"No difference. TEST PASSED!\")\n else:\n print(\"ERROR: Disagreement found with .py file. See differences above.\")\n sys.exit(1)",
"Printing difference between original C code and this code...\nChecking file interpolate_metric_gfs_to_cell_faces.h\nNo difference. TEST PASSED!\n"
]
],
[
[
"<a id='latex_pdf_output'></a>\n\n# Step 3: Output this notebook to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-GiRaFFE_NRPy-FCVAL.pdf](Tutorial-GiRaFFE_NRPy-FCVAL.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)",
"_____no_output_____"
]
],
[
[
"import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\ncmd.output_Jupyter_notebook_to_LaTeXed_PDF(\"Tutorial-GiRaFFE_NRPy-Metric_Face_Values\",location_of_template_file=os.path.join(\"..\"))",
"pdflatex: security risk: running with elevated privileges\npdflatex: security risk: running with elevated privileges\npdflatex: security risk: running with elevated privileges\nCreated Tutorial-GiRaFFE_NRPy-Metric_Face_Values.tex, and compiled LaTeX\n file to PDF file Tutorial-GiRaFFE_NRPy-Metric_Face_Values.pdf\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc6b71dcf1671613653436587e4f52252a0d9d2 | 299,413 | ipynb | Jupyter Notebook | notebooks/orthologs-conditions-strata.ipynb | mgalardini/2018koyeast | 8b82c567c3dfbfa3c1571911fe6b8bd59a681105 | [
"Apache-2.0"
] | null | null | null | notebooks/orthologs-conditions-strata.ipynb | mgalardini/2018koyeast | 8b82c567c3dfbfa3c1571911fe6b8bd59a681105 | [
"Apache-2.0"
] | null | null | null | notebooks/orthologs-conditions-strata.ipynb | mgalardini/2018koyeast | 8b82c567c3dfbfa3c1571911fe6b8bd59a681105 | [
"Apache-2.0"
] | 1 | 2019-01-16T13:22:11.000Z | 2019-01-16T13:22:11.000Z | 1,740.773256 | 218,560 | 0.955647 | [
[
[
"corr = ['../out/orthologs_condition_correlation_g%d.tsv' % x\n for x in range(4)]",
"_____no_output_____"
],
[
"# plotting imports\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set_style('white')\n\nplt.rc('font', size=12)",
"_____no_output_____"
],
[
"import itertools\nimport pandas as pd",
"_____no_output_____"
],
[
"rs = [pd.read_table(x)\n for x in corr]",
"_____no_output_____"
],
[
"plt.figure(figsize=(9, 9))\n\nfor i, r in enumerate(rs):\n plt.subplot(4, 1, i+1)\n \n sns.kdeplot(r[r['set'] == 'same']['correlation'],\n label='orthologs',\n lw=2.5,\n color='r',\n zorder=10)\n sns.kdeplot(r[r['set'] == 'shuffled']['correlation'],\n label='random gene pairs',\n color='grey',\n shade=True,\n alpha=0.3)\n plt.legend(title='gene set',\n frameon=True)\n plt.xlabel('s-score correlation\\n(across all orthologs)')\n plt.yticks([])\n plt.xlim(-1.05, 1.05)\n plt.axvline(0,\n color='grey',\n linestyle='dashed',\n zorder=0)\nsns.despine(left=True)\nplt.tight_layout();",
"_____no_output_____"
],
[
"plt.figure(figsize=(9, 12))\n\nfor i, r in enumerate(rs):\n plt.subplot(4, 1, i+1)\n\n strains = set(r['strain1']).union(r['strain2'])\n for s1, s2 in itertools.combinations(strains, 2):\n sns.kdeplot(r[(r['set'] == 'same') &\n (((r['strain1'] == s1) & ((r['strain2'] == s2))) |\n ((r['strain2'] == s1) & ((r['strain1'] == s2))))\n ]['correlation'],\n label=s1 + '/' + s2,\n lw=2.5)\n sns.kdeplot(r[(r['set'] == 'shuffled') &\n (((r['strain1'] == s1) & ((r['strain2'] == s2))) |\n ((r['strain2'] == s1) & ((r['strain1'] == s2))))\n ]['correlation'],\n color='grey',\n shade=True,\n alpha=0.3,\n zorder=0,\n label='_')\n\n plt.legend(title='strains pair',\n frameon=True)\n plt.xlabel('s-score correlation\\n(across all orthologs)')\n plt.yticks([])\n plt.xlim(-1.05, 1.05)\n plt.axvline(0,\n color='grey',\n linestyle='dashed',\n zorder=0)\n \nsns.despine(left=True)\nplt.tight_layout();",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc6e7b4829ed9f0f7869ffd955798d0f496f152 | 5,503 | ipynb | Jupyter Notebook | _notebooks/2021-04-23-check-kashubian-adjlike.ipynb | jimregan/notes | 24e374d326b4e96b7d31d08a808f9f19fe473e76 | [
"Apache-2.0"
] | 1 | 2021-08-25T08:08:45.000Z | 2021-08-25T08:08:45.000Z | _notebooks/2021-04-23-check-kashubian-adjlike.ipynb | jimregan/notes | 24e374d326b4e96b7d31d08a808f9f19fe473e76 | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-04-23-check-kashubian-adjlike.ipynb | jimregan/notes | 24e374d326b4e96b7d31d08a808f9f19fe473e76 | [
"Apache-2.0"
] | null | null | null | 26.080569 | 84 | 0.513538 | [
[
[
"# \"Checking a Kashubian adjective-like declension\"\n> \"Check if I haven't left anything out\"\n\n- toc: false\n- branch: master\n- comments: true\n- categories: [kashubian, declension]",
"_____no_output_____"
]
],
[
[
"def _list_to_check(pos):\n num = ['sg', 'pl']\n gen = ['mp', 'ma', 'mi', 'f', 'nt']\n cas = ['nom', 'gen', 'dat', 'acc', 'ins', 'loc', 'voc']\n out = []\n for n in num:\n for g in gen:\n for c in cas:\n out.append(f\"{pos}.{g}.{n}.{c}\")\n return out",
"_____no_output_____"
],
[
"len(_list_to_check('num.ord'))",
"_____no_output_____"
],
[
"dredzi = \"\"\"\ndrëdżi\tdrëdżi\tnum.ord.mp|ma|mi.sg.nom|voc\ndrëgô\tdrëdżi\tnum.ord.f.sg.nom|voc\ndrëdżé\tdrëdżi\tnum.ord.nt.sg.nom|acc|voc\ndrëgą\tdrëdżi\tnum.ord.f.sg.acc|ins\ndrëdżi\tdrëdżi\tnum.ord.f.sg.gen|dat|loc\ndrëdżim\tdrëdżi\tnum.ord.mp|ma|mi|nt.sg.loc|ins\ndrëdżé\tdrëdżi\tnum.ord.nt|f|mi|ma.pl.nom|acc|voc\ndrëdżich\tdrëdżi\tnum.ord.nt|f|mi|ma|mp.pl.gen|loc\ndrëdżima\tdrëdżi\tnum.ord.nt|f|mi|ma.pl.ins\ndrëdżégò\tdrëdżi\tnum.ord.nt|mi|ma|mp.sg.gen\ndrëdżégò\tdrëdżi\tnum.ord.ma|mp.sg.acc\ndrëdżémù\tdrëdżi\tnum.ord.nt|mi|ma|mp.sg.dat\ndrëdżi\tdrëdżi\tnum.ord.mp.pl.nom|voc\ndrëdżich\tdrëdżi\tnum.ord.mp.pl.acc\ndrëdżim\tdrëdżi\tnum.ord.nt|f|mi|ma|mp.pl.dat\n\"\"\"",
"_____no_output_____"
],
[
"def _do_expand(stack, todo):\n onward = []\n if not '.' in todo:\n return [f'{a}.{b}' for a in stack for b in todo.split('|')]\n cur, rest = todo.split('.', 1)\n if stack == []:\n onward = cur.split('|')\n return _do_expand(onward, rest)\n else:\n onward = [f'{a}.{b}' for a in stack for b in cur.split('|')]\n return _do_expand(onward, rest)\ndef expand_compressed(lines):\n output = []\n for i in lines:\n form, lemma, postag = i.split('\\t')\n newtags = _do_expand([], postag)\n output.extend([f\"{form}\\t{lemma}\\t{itag}\" for itag in newtags])\n return output",
"_____no_output_____"
],
[
"expand_compressed([l for l in dredzi.split('\\n') if l != ''])",
"_____no_output_____"
],
[
"vals = expand_compressed([l for l in dredzi.split('\\n') if l != ''])",
"_____no_output_____"
],
[
"tags = [a.split('\\t')[-1] for a in vals]",
"_____no_output_____"
],
[
"for tc in _list_to_check('num.ord'):\n if not tc in tags:\n print(tc)",
"num.ord.mi.sg.acc\nnum.ord.mp.pl.ins\n"
],
[
"dredzi = \"\"\"\ndrëdżi\tdrëdżi\tnum.ord.mp|ma|mi.sg.nom|voc\ndrëdżi\tdrëdżi\tnum.ord.mi.sg.acc\ndrëgô\tdrëdżi\tnum.ord.f.sg.nom|voc\ndrëdżé\tdrëdżi\tnum.ord.nt.sg.nom|acc|voc\ndrëgą\tdrëdżi\tnum.ord.f.sg.acc|ins\ndrëdżi\tdrëdżi\tnum.ord.f.sg.gen|dat|loc\ndrëdżim\tdrëdżi\tnum.ord.mp|ma|mi|nt.sg.loc|ins\ndrëdżé\tdrëdżi\tnum.ord.nt|f|mi|ma.pl.nom|acc|voc\ndrëdżich\tdrëdżi\tnum.ord.nt|f|mi|ma|mp.pl.gen|loc\ndrëdżima\tdrëdżi\tnum.ord.nt|f|mi|ma|mp.pl.ins\ndrëdżégò\tdrëdżi\tnum.ord.nt|mi|ma|mp.sg.gen\ndrëdżégò\tdrëdżi\tnum.ord.ma|mp.sg.acc\ndrëdżémù\tdrëdżi\tnum.ord.nt|mi|ma|mp.sg.dat\ndrëdżi\tdrëdżi\tnum.ord.mp.pl.nom|voc\ndrëdżich\tdrëdżi\tnum.ord.mp.pl.acc\ndrëdżim\tdrëdżi\tnum.ord.nt|f|mi|ma|mp.pl.dat\n\"\"\"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc6ebfdee72021a465169d7e61faaf2fb5cb083 | 431,159 | ipynb | Jupyter Notebook | LouisPortfolio-Analysis.ipynb | ChristopherDaigle/ConvexOptimizationWithPython | b7a1b5126b076d06ec03e906b19930bf324272ff | [
"Unlicense"
] | null | null | null | LouisPortfolio-Analysis.ipynb | ChristopherDaigle/ConvexOptimizationWithPython | b7a1b5126b076d06ec03e906b19930bf324272ff | [
"Unlicense"
] | null | null | null | LouisPortfolio-Analysis.ipynb | ChristopherDaigle/ConvexOptimizationWithPython | b7a1b5126b076d06ec03e906b19930bf324272ff | [
"Unlicense"
] | null | null | null | 613.312945 | 173,516 | 0.937424 | [
[
[
"# DATA ANALYSIS (PANDAS), VISUALIZATION (MATPLOTLIB), AND OPTIMIZATION (CVXPY) using daily stock prices",
"_____no_output_____"
],
[
"# 1. Some work with the data.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\n\ndstocks = pd.read_csv('Stocks2-closeP.csv')\nplt.figure()\ndstocks.plot(grid = True, figsize = [10,5]).axhline(y = 0, color = \"black\", lw = 1)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Compute and visualize growth rates of prices (from one day to the next)",
"_____no_output_____"
]
],
[
[
"dstocks.head()",
"_____no_output_____"
],
[
"stock = dstocks.iloc[:,[1,2,3,4,5,6,7,8]] # Select colum 1-7 from the DataFrame\n\n#Compute growth rates of that stock as [p2-p1]/p1\n# A lambda function is applied locally\nstock_change = stock.apply(lambda x: ((x - (x.shift(1))) / (x.shift(1))))\n\n# Plot the dataframe of growth rates\nstock_change.plot(grid = True, figsize = (12,8)).axhline(y = 0, color = \"black\", lw = 2)\n# Then save the figure as a pdf file\nplt.savefig('sample.pdf')",
"_____no_output_____"
],
[
"stock_change.describe()",
"_____no_output_____"
],
[
"#Extract the means\nstock_means = stock_change.describe().iloc[1].values\nstock_means",
"_____no_output_____"
],
[
"#Extract the variances\nstock_variance = stock_change.var()\nstock_variance",
"_____no_output_____"
],
[
"# Make covariance matrix\nstock_var = stock_change.cov()",
"_____no_output_____"
],
[
"# Correlations matrix prettyfied\n\nimport seaborn as sns\n\nf, ax = plt.subplots(figsize=(10, 5))\ncorr = stock_change.corr()\nsns.heatmap(round(corr,2), annot=True, ax=ax, cmap = \"coolwarm\", fmt='.2f', linewidths=.05)\nt = f.suptitle('Correlation matrix stocks', fontsize=12)",
"_____no_output_____"
],
[
"# Plot the dataframe means of the two highest growth rates\nstock_change[['VLO_close', 'IBM_close']].plot(grid = True, figsize = (12,8)).axhline(y = 0, color = \"black\", lw = 2)\n# Then save the figure as a pdf file\n#plt.savefig('Louis_Booth_Means.pdf')",
"_____no_output_____"
],
[
"# Plot the dataframe variances of the two highest growth rates\nstock_change[['VLO_close', 'AAL_close']].plot(grid = True, figsize = (12,8)).axhline(y = 0, color = \"black\", lw = 2)\n# Then save the figure as a pdf file\n#plt.savefig('Louis_Booth_Variances.pdf')",
"_____no_output_____"
]
],
[
[
"# 2. Optimal Portfolio",
"_____no_output_____"
]
],
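[
[
"The code below chooses the portfolio weights $w$ by solving the mean-variance problem\n\n$$\\max_{w} \\; \\mu^T w - \\gamma \\, w^T \\Sigma w \\quad \\text{s.t.} \\quad \\sum_i w_i = 1, \\; w \\ge 0,$$\n\nwhere $\\mu$ and $\\Sigma$ are the sample mean vector and covariance matrix of the daily growth rates computed above, and $\\gamma$ is the risk-aversion parameter.",
"_____no_output_____"
]
],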
[
[
"import numpy as np\nfrom cvxpy import *\n\nSigma = stock_var.values\nmu = stock_means\n\nw = Variable(8) # Decision variable\n\n\ngamma = 2 # Risk Parameter in the utility function\nret = mu*w\nrisk = quad_form(w, Sigma)\n\nobj = Maximize(ret - gamma*risk)\nconstraints = [sum(w) == 1 , w >=0]\n\n# Form the problem\nprob = Problem(obj, constraints)\nprob.solve()",
"_____no_output_____"
],
[
"port_data = []\nret_data = []\nrisk_data = []\nprob_data = []\n\ngamma_vals = [2,10]\nfor i in range(2):\n gamma = gamma_vals[i]\n prob.solve() # Solve the problem for a specific value of gamma\n risk_data.append(sqrt(risk).value) # optimal value of the risk (standard deviation)\n ret_data.append(ret.value) # optimal value of the return\n prob_data.append(prob.value) # maximum utility\n port_data.append(w.value) ",
"_____no_output_____"
],
[
"port_data",
"_____no_output_____"
],
[
"# Present your results nicely\n\nresults = pd.DataFrame({'Portfolio':['F', 'BA', 'RTN', 'VLO', 'AAL', 'CI', 'IBM', 'PG'], 'Risk Coeff 2':port_data[0], 'Risk Coeff 10':port_data[1]})\nresults",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecc70d779fa1c91d3e5d6da5d93de783a276f411 | 14,656 | ipynb | Jupyter Notebook | atmosphere/20210506_wekeo_webinar/20_Sentinel5P_TROPOMI_NO2_L2_retrieve.ipynb | Neilo99/wekeo-jupyter-lab | 80fa4a4663370718276a8625cbaf36d49ceaef59 | [
"MIT"
] | 31 | 2020-07-02T09:17:30.000Z | 2022-02-22T03:12:02.000Z | atmosphere/20210506_wekeo_webinar/20_Sentinel5P_TROPOMI_NO2_L2_retrieve.ipynb | Neilo99/wekeo-jupyter-lab | 80fa4a4663370718276a8625cbaf36d49ceaef59 | [
"MIT"
] | null | null | null | atmosphere/20210506_wekeo_webinar/20_Sentinel5P_TROPOMI_NO2_L2_retrieve.ipynb | Neilo99/wekeo-jupyter-lab | 80fa4a4663370718276a8625cbaf36d49ceaef59 | [
"MIT"
] | 33 | 2020-07-15T09:39:24.000Z | 2022-03-04T01:02:23.000Z | 27.705104 | 360 | 0.58911 | [
[
[
"<img src='./img/LogoWekeo_Copernicus_RGB_0.png' alt='Logo EU Copernicus EUMETSAT' align='right' width='20%'></img>",
"_____no_output_____"
],
[
"<a href=\"./00_index.ipynb\"><< Index</a><br>\n<a href=\"./11_CAMS_European_air_quality_forecast_NO2_load_browse.ipynb\"><< 11 - CAMS European air quality forecast - Nitrogen Dioxide - Load and browse</a><span style=\"float:right;\"><a href=\"./21_Sentinel5P_TROPOMI_NO2_L2_load_browse.ipynb\">21 - Sentinel-5P NO<sub>2</sub> - Load and browse >></a></span><br>",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\">\n<b>DATA RETRIEVE</b></div>",
"_____no_output_____"
],
[
"# Copernicus Sentinel-5 Precursor (Sentinel-5P) - NO<sub>2</sub>",
"_____no_output_____"
],
[
"The example below illustrates step-by-step how Copernicus Sentinel-5P NO<sub>2</sub> data can be retrieved from WEkEO with the help of the [Harmonized Data Access (HDA) API](https://wekeo.eu/hda-api).\n\nThe HDA API workflow is a six-step process:\n - [1. Search for datasets on WEkEO](#wekeo_search)\n - [2. Get the API request](#wekeo_api_request)\n - [3. Get your WEkEO API key](#wekeo_api_key)\n - [4. Initialise the WEkEO Harmonised Data Access request](#wekeo_hda_request)\n - [5. Load data descriptor file and request data](#wekeo_json)\n - [6. Download requested data](#wekeo_download)\n \nAll steps have to be performed in order to be able to retrieve data from WEkEO.",
"_____no_output_____"
],
[
"All HDA API functions needed to retrieve data are stored in the notebook [hda_api_functions](./hda_api_functions.ipynb).",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
],
[
"#### Load required libraries",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport json\nimport time\nimport base64\n\nimport requests\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"#### Load helper functions",
"_____no_output_____"
]
],
[
[
"# HDA API tools\nsys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.getcwd())),'wekeo-hda'))\nfrom hda_api_functions import * # this is the PYTHON version",
"_____no_output_____"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"### <a id='wekeo_search'></a>1. Search for datasets on WEkEO",
"_____no_output_____"
],
[
"Under [WEkEO DATA](https://www.wekeo.eu/data), you can search all datasets available on WEkEO. To add additional layers, you have to click on the `+` sign, which opens the `Catalogue` interface.\nThere are two search options:<br> \n- a `free keyword search`, and \n- a pre-defined `predefined keyword search`, that helps to filter the data based on `area`, `platform`, `data provider` and more.<br> \n\nUnder `PLATFORM`, you can select *`Sentinel-5P`* and retrieve the results. You can either directly add the data to the map or you can click on `Details`, which opens a dataset description.\n\nWhen you click on `Add to map...`, a window opens where you can select one specific variable of Sentinel-5P TROPOMI. \n\n<br>\n\n<div style='text-align:center;'>\n<figure><img src='./img/wekeo_interface_s5p_1.png' width='90%' />\n <figcaption><i>WEkEO interface to search for datasets</i></figcaption>\n</figure>\n</div>",
"_____no_output_____"
],
[
"### <a id='wekeo_api_request'></a>2. Get the API request",
"_____no_output_____"
],
[
"When a layer is added to the map, you can select the download icon, which opens an interface that allows you to tailor your data request.\nFor Sentinel-5P, the following information can be selected:\n* `Bounding box`\n* `Sensing start stop time`\n* `Processing level`\n* `Product type`\n\nOnce you made your selection, you can either directly requet the data or you can click on `Show API request`, which opens a window with the HDA API request for the specific data selection.\n\n\n<br>\n\n<div style='text-align:center;'>\n<figure><img src='./img/wekeo_interface_s5p_2.png' width='80%' />\n <figcaption><i>Sentinel-5P API request - Example</i></figcaption>\n</figure>\n</div>\n<br>",
"_____no_output_____"
],
[
"`Copy` the API request and save it as a `JSON` file. We did the same and you can open the `data descriptor` file for Sentinel-5P [here](./s5P_data_descriptor.json).",
"_____no_output_____"
],
[
"Each dataset on WEkEO is assigned a unique `datasetId`. Let us store the dataset ID for Sentinel-5P as a variable called `dataset_id` to be used later.",
"_____no_output_____"
]
],
[
[
"dataset_id = \"EO:ESA:DAT:SENTINEL-5P:TROPOMI\"",
"_____no_output_____"
]
],
[
[
"### <a id='wekeo_api_key'></a>3. Get the WEkEO API key",
"_____no_output_____"
],
[
"In order to interact with WEkEO's Harmonised Data Access API, each user gets assigned an `API key` and `API token`. You will need the API key in order to download data in a programmatic way.\n\nThe `api key` is generated by encoding your `username` and `password` to Base64. You can use the function [generate_api_key](./hda_api_functions.ipynb#generate_api_key) to programmatically generate your Base64-encoded api key. For this, you have to replace the 'username' and 'password' strings with your WEkEO username and password in the cell below.\n\nAlternatively, you can go to this [website](https://www.base64encode.org/) that allows you to manually encode your `username:password` combination. An example of an encoded key is `wekeo-test:wekeo-test`, which is encoded to `d2VrZW8tdGVzdDp3ZWtlby10ZXN0`.\n",
"_____no_output_____"
]
],
[
[
"user_name = '##########'\npassword = '##########'",
"_____no_output_____"
],
[
"api_key = generate_api_key(user_name, password)\napi_key",
"_____no_output_____"
]
],
[
[
"##### Alternative: enter manually the generated api key",
"_____no_output_____"
]
],
[
[
"#api_key = ",
"_____no_output_____"
]
],
[
[
"### <a id='wekeo_hda_request'></a>4. Initialise the Harmonised Data Access (HDA) API request",
"_____no_output_____"
],
[
"In order to initialise an API request, you have to initialise a dictionary that contains information on `dataset_id`, `api_key` and `download_directory_path`.\n\nPlease enter the path of the directory where the data shall be downloaded to.",
"_____no_output_____"
]
],
[
[
"# Enter here the directory path where you want to download the data to\ndownload_dir_path = './data/'",
"_____no_output_____"
]
],
[
[
"With `dataset_id`, `api_key` and `download_dir_path`, you can initialise the dictionary with the function [init](./hda_api_functions.ipynb#init).",
"_____no_output_____"
]
],
[
[
"hda_dict = init(dataset_id, api_key, download_dir_path)",
"_____no_output_____"
]
],
[
[
"#### Request access token",
"_____no_output_____"
],
[
"Once initialised, you can request an access token with the function [get_access_token](./hda_api_functions.ipynb#get_access_token). The access token is stored in the `hda_dict` dictionary.\n\nYou might need to accept the Terms and Conditions, which you can do with the function [acceptTandC](./hda_api_functions.ipynb#acceptTandC).",
"_____no_output_____"
]
],
[
[
"hda_dict = get_access_token(hda_dict)",
"_____no_output_____"
]
],
[
[
"#### Accept Terms and Conditions (if applicable)",
"_____no_output_____"
]
],
[
[
"hda_dict = acceptTandC(hda_dict)",
"_____no_output_____"
]
],
[
[
"### <a id='wekeo_json'></a>5. Load data descriptor file and request data",
"_____no_output_____"
],
[
"The Harmonised Data Access API can read your data request from a `JSON` file. In this JSON-based file, you can describe the dataset you are interested in downloading. The file is in principle a dictionary. The following keys can be defined:\n- `datasetID`: the dataset's collection ID\n- `stringChoiceValues`: type of dataset, e.g. 'processing level' or 'product type'\n- `dataRangeSelectValues`: time period you would like to retrieve data\n- `boundingBoxValues`: optional to define a subset of a global field\n\nYou can load the `JSON` file with `json.load()`.",
"_____no_output_____"
]
],
[
[
"with open('./s5p_data_descriptor.json', 'r') as f:\n data = json.load(f)\ndata",
"_____no_output_____"
]
],
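[
[
"# The descriptor is a plain Python dictionary; its top-level keys should correspond to the fields described above\nlist(data.keys())",
"_____no_output_____"
]
],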
[
[
"#### Initiate the request by assigning a job ID",
"_____no_output_____"
],
[
"The function [get_job_id](./hda_api_functions.ipynb#get_job_id) will launch your data request and your request is assigned a `job ID`.",
"_____no_output_____"
]
],
[
[
"hda_dict = get_job_id(hda_dict,data)",
"_____no_output_____"
]
],
[
[
"#### Build list of file names to be ordered and downloaded",
"_____no_output_____"
],
[
"The next step is to gather a list of file names available, based on your assigned `job ID`. The function [get_results_list](./hda_api_functions.ipynb#get_results_list) creates the list.",
"_____no_output_____"
]
],
[
[
"hda_dict = get_results_list(hda_dict)",
"_____no_output_____"
]
],
[
[
"#### Create an `order ID` for each file to be downloaded",
"_____no_output_____"
],
[
"The next step is to create an `order ID` for each file name to be downloaded. You can use the function [get_order_ids](./hda_api_functions.ipynb#get_order_ids).",
"_____no_output_____"
]
],
[
[
"hda_dict = get_order_ids(hda_dict)",
"_____no_output_____"
]
],
[
[
"### <a id='wekeo_download'></a>6. Download requested data",
"_____no_output_____"
],
[
"As a final step, you can use the function [download_data](./hda_api_functions.ipynb#download_data) to initialize the data download and to download each file that has been assigned an `order ID`. ",
"_____no_output_____"
]
],
[
[
"download_data(hda_dict)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"<a href=\"./00_index.ipynb\"><< Index</a><br>\n<a href=\"./11_CAMS_European_air_quality_forecast_NO2_load_browse.ipynb\"><< 11 - CAMS European air quality forecast - Nitrogen Dioxide - Load and browse</a><span style=\"float:right;\"><a href=\"./21_Sentinel5P_TROPOMI_NO2_L2_load_browse.ipynb\">21 - Sentinel-5P NO<sub>2</sub> - Load and browse >></a></span>",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
],
[
"<p><img src='./img/all_partners_wekeo.png' align='left' alt='Logo EU Copernicus' width='100%'></img><p>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecc70e68c7f33cf700f9fd0a549ed85a10a65b32 | 881,930 | ipynb | Jupyter Notebook | cs231n_assignments/assignment3/.ipynb_checkpoints/GANs-PyTorch-checkpoint.ipynb | ruishaopu561/Machine_Learning | b231ff33c48792173ec09bcf13fae3e06f693ec2 | [
"Apache-2.0"
] | null | null | null | cs231n_assignments/assignment3/.ipynb_checkpoints/GANs-PyTorch-checkpoint.ipynb | ruishaopu561/Machine_Learning | b231ff33c48792173ec09bcf13fae3e06f693ec2 | [
"Apache-2.0"
] | 15 | 2020-03-24T17:09:58.000Z | 2022-03-11T23:48:50.000Z | cs231n_assignments/assignment3/.ipynb_checkpoints/GANs-PyTorch-checkpoint.ipynb | ruishaopu561/Machine_Learning | b231ff33c48792173ec09bcf13fae3e06f693ec2 | [
"Apache-2.0"
] | null | null | null | 491.325905 | 85,220 | 0.937407 | [
[
[
"# Generative Adversarial Networks (GANs)\n\nSo far in CS231N, all the applications of neural networks that we have explored have been **discriminative models** that take an input and are trained to produce a labeled output. This has ranged from straightforward classification of image categories to sentence generation (which was still phrased as a classification problem, our labels were in vocabulary space and we’d learned a recurrence to capture multi-word labels). In this notebook, we will expand our repetoire, and build **generative models** using neural networks. Specifically, we will learn how to build models which generate novel images that resemble a set of training images.\n\n### What is a GAN?\n\nIn 2014, [Goodfellow et al.](https://arxiv.org/abs/1406.2661) presented a method for training generative models called Generative Adversarial Networks (GANs for short). In a GAN, we build two different neural networks. Our first network is a traditional classification network, called the **discriminator**. We will train the discriminator to take images, and classify them as being real (belonging to the training set) or fake (not present in the training set). Our other network, called the **generator**, will take random noise as input and transform it using a neural network to produce images. The goal of the generator is to fool the discriminator into thinking the images it produced are real.\n\nWe can think of this back and forth process of the generator ($G$) trying to fool the discriminator ($D$), and the discriminator trying to correctly classify real vs. fake as a minimax game:\n$$\\underset{G}{\\text{minimize}}\\; \\underset{D}{\\text{maximize}}\\; \\mathbb{E}_{x \\sim p_\\text{data}}\\left[\\log D(x)\\right] + \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$\nwhere $z \\sim p(z)$ are the random noise samples, $G(z)$ are the generated images using the neural network generator $G$, and $D$ is the output of the discriminator, specifying the probability of an input being real. In [Goodfellow et al.](https://arxiv.org/abs/1406.2661), they analyze this minimax game and show how it relates to minimizing the Jensen-Shannon divergence between the training data distribution and the generated samples from $G$.\n\nTo optimize this minimax game, we will aternate between taking gradient *descent* steps on the objective for $G$, and gradient *ascent* steps on the objective for $D$:\n1. update the **generator** ($G$) to minimize the probability of the __discriminator making the correct choice__. \n2. update the **discriminator** ($D$) to maximize the probability of the __discriminator making the correct choice__.\n\nWhile these updates are useful for analysis, they do not perform well in practice. Instead, we will use a different objective when we update the generator: maximize the probability of the **discriminator making the incorrect choice**. This small change helps to allevaiate problems with the generator gradient vanishing when the discriminator is confident. This is the standard update used in most GAN papers, and was used in the original paper from [Goodfellow et al.](https://arxiv.org/abs/1406.2661). \n\nIn this assignment, we will alternate the following updates:\n1. Update the generator ($G$) to maximize the probability of the discriminator making the incorrect choice on generated data:\n$$\\underset{G}{\\text{maximize}}\\; \\mathbb{E}_{z \\sim p(z)}\\left[\\log D(G(z))\\right]$$\n2. 
Update the discriminator ($D$), to maximize the probability of the discriminator making the correct choice on real and generated data:\n$$\\underset{D}{\\text{maximize}}\\; \\mathbb{E}_{x \\sim p_\\text{data}}\\left[\\log D(x)\\right] + \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$\n\n### What else is there?\nSince 2014, GANs have exploded into a huge research area, with massive [workshops](https://sites.google.com/site/nips2016adversarial/), and [hundreds of new papers](https://github.com/hindupuravinash/the-gan-zoo). Compared to other approaches for generative models, they often produce the highest quality samples but are some of the most difficult and finicky models to train (see [this github repo](https://github.com/soumith/ganhacks) that contains a set of 17 hacks that are useful for getting models working). Improving the stabiilty and robustness of GAN training is an open research question, with new papers coming out every day! For a more recent tutorial on GANs, see [here](https://arxiv.org/abs/1701.00160). There is also some even more recent exciting work that changes the objective function to Wasserstein distance and yields much more stable results across model architectures: [WGAN](https://arxiv.org/abs/1701.07875), [WGAN-GP](https://arxiv.org/abs/1704.00028).\n\n\nGANs are not the only way to train a generative model! For other approaches to generative modeling check out the [deep generative model chapter](http://www.deeplearningbook.org/contents/generative_models.html) of the Deep Learning [book](http://www.deeplearningbook.org). Another popular way of training neural networks as generative models is Variational Autoencoders (co-discovered [here](https://arxiv.org/abs/1312.6114) and [here](https://arxiv.org/abs/1401.4082)). Variatonal autoencoders combine neural networks with variationl inference to train deep generative models. These models tend to be far more stable and easier to train but currently don't produce samples that are as pretty as GANs.\n\nHere's an example of what your outputs from the 3 different models you're going to train should look like... note that GANs are sometimes finicky, so your outputs might not look exactly like this... this is just meant to be a *rough* guideline of the kind of quality you can expect:\n\n",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\nfrom torch.nn import init\nimport torchvision\nimport torchvision.transforms as T\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data import sampler\nimport torchvision.datasets as dset\nfrom torch.autograd import Variable\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\ndef show_images(images):\n images = np.reshape(images, [images.shape[0], -1]) # images reshape to (batch_size, D)\n sqrtn = int(np.ceil(np.sqrt(images.shape[0])))\n sqrtimg = int(np.ceil(np.sqrt(images.shape[1])))\n\n fig = plt.figure(figsize=(sqrtn, sqrtn))\n gs = gridspec.GridSpec(sqrtn, sqrtn)\n gs.update(wspace=0.05, hspace=0.05)\n\n for i, img in enumerate(images):\n ax = plt.subplot(gs[i])\n plt.axis('off')\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n ax.set_aspect('equal')\n plt.imshow(img.reshape([sqrtimg,sqrtimg]))\n return \n\ndef preprocess_img(x):\n return 2 * x - 1.0\n\ndef deprocess_img(x):\n return (x + 1.0) / 2.0\n\ndef rel_error(x,y):\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef count_params(model):\n \"\"\"Count the number of parameters in the current TensorFlow graph \"\"\"\n param_count = np.sum([np.prod(p.size()) for p in model.parameters()])\n return param_count\n\nanswers = dict(np.load('gan-checks-tf.npz'))",
"_____no_output_____"
]
],
[
[
"## Dataset\n GANs are notoriously finicky with hyperparameters, and also require many training epochs. In order to make this assignment approachable without a GPU, we will be working on the MNIST dataset, which is 60,000 training and 10,000 test images. Each picture contains a centered image of white digit on black background (0 through 9). This was one of the first datasets used to train convolutional neural networks and it is fairly easy -- a standard CNN model can easily exceed 99% accuracy. \n\nTo simplify our code here, we will use the PyTorch MNIST wrapper, which downloads and loads the MNIST dataset. See the [documentation](https://github.com/pytorch/vision/blob/master/torchvision/datasets/mnist.py) for more information about the interface. The default parameters will take 5,000 of the training examples and place them into a validation dataset. The data will be saved into a folder called `MNIST_data`. ",
"_____no_output_____"
]
],
[
[
"class ChunkSampler(sampler.Sampler):\n \"\"\"Samples elements sequentially from some offset. \n Arguments:\n num_samples: # of desired datapoints\n start: offset where we should start selecting from\n \"\"\"\n def __init__(self, num_samples, start=0):\n self.num_samples = num_samples\n self.start = start\n\n def __iter__(self):\n return iter(range(self.start, self.start + self.num_samples))\n\n def __len__(self):\n return self.num_samples\n\nNUM_TRAIN = 50000\nNUM_VAL = 5000\n\nNOISE_DIM = 96\nbatch_size = 128\n\nmnist_train = dset.MNIST('./cs231n/datasets/MNIST_data', train=True, download=True,\n transform=T.ToTensor())\nloader_train = DataLoader(mnist_train, batch_size=batch_size,\n sampler=ChunkSampler(NUM_TRAIN, 0))\n\nmnist_val = dset.MNIST('./cs231n/datasets/MNIST_data', train=True, download=True,\n transform=T.ToTensor())\nloader_val = DataLoader(mnist_val, batch_size=batch_size,\n sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))\n\n\nimgs = loader_train.__iter__().next()[0].view(batch_size, 784).numpy().squeeze()\nshow_images(imgs)",
"_____no_output_____"
]
],
[
[
"## Random Noise\nGenerate uniform noise from -1 to 1 with shape `[batch_size, dim]`.\n\nHint: use `torch.rand`.",
"_____no_output_____"
]
],
[
[
"def sample_noise(batch_size, dim):\n \"\"\"\n Generate a PyTorch Tensor of uniform random noise.\n\n Input:\n - batch_size: Integer giving the batch size of noise to generate.\n - dim: Integer giving the dimension of noise to generate.\n \n Output:\n - A PyTorch Tensor of shape (batch_size, dim) containing uniform\n random noise in the range (-1, 1).\n \"\"\"\n# return 2*(torch.rand(batch_size, dim)-0.5)\n#################################################################################\n return torch.Tensor(batch_size, dim).uniform_(-1, 1)\n",
"_____no_output_____"
]
],
[
[
"Make sure noise is the correct shape and type:",
"_____no_output_____"
]
],
[
[
"def test_sample_noise():\n batch_size = 3\n dim = 4\n torch.manual_seed(231)\n z = sample_noise(batch_size, dim)\n np_z = z.cpu().numpy()\n \n assert np_z.shape == (batch_size, dim)\n assert torch.is_tensor(z)\n assert np.all(np_z >= -1.0) and np.all(np_z <= 1.0)\n assert np.any(np_z < 0.0) and np.any(np_z > 0.0)\n print('All tests passed!')\n \ntest_sample_noise()",
"All tests passed!\n"
]
],
[
[
"## Flatten\n\nRecall our Flatten operation from previous notebooks... this time we also provide an Unflatten, which you might want to use when implementing the convolutional generator. We also provide a weight initializer (and call it for you) that uses Xavier initialization instead of PyTorch's uniform default.",
"_____no_output_____"
]
],
[
[
"class Flatten(nn.Module):\n def forward(self, x):\n N, C, H, W = x.size() # read in N, C, H, W\n return x.view(N, -1) # \"flatten\" the C * H * W values into a single vector per image\n \nclass Unflatten(nn.Module):\n \"\"\"\n An Unflatten module receives an input of shape (N, C*H*W) and reshapes it\n to produce an output of shape (N, C, H, W).\n \"\"\"\n def __init__(self, N=-1, C=128, H=7, W=7):\n super(Unflatten, self).__init__()\n self.N = N\n self.C = C\n self.H = H\n self.W = W\n def forward(self, x):\n print(x.view(self.N, self.C, self.H, self.W))\n return x.view(self.N, self.C, self.H, self.W)\n\ndef initialize_weights(m):\n if isinstance(m, nn.Linear) or isinstance(m, nn.ConvTranspose2d):\n init.xavier_uniform_(m.weight.data)",
"_____no_output_____"
]
],
[
[
"## CPU / GPU\nBy default all code will run on CPU. GPUs are not needed for this assignment, but will help you to train your models faster. If you do want to run the code on a GPU, then change the `dtype` variable in the following cell.",
"_____no_output_____"
]
],
[
[
"dtype = torch.FloatTensor\n#dtype = torch.cuda.FloatTensor ## UNCOMMENT THIS LINE IF YOU'RE ON A GPU!",
"_____no_output_____"
]
],
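[
[
"A quick optional check you can run before deciding whether to switch `dtype`: it only reports whether PyTorch can see a CUDA device and changes nothing by itself.",
"_____no_output_____"
]
],
[
[
"# Optional sanity check before switching dtype to the CUDA tensor type\nif torch.cuda.is_available():\n    print('CUDA device found:', torch.cuda.get_device_name(0))\nelse:\n    print('No CUDA device found; staying with the CPU FloatTensor')",
"_____no_output_____"
]
],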
[
[
"# Discriminator\nOur first step is to build a discriminator. Fill in the architecture as part of the `nn.Sequential` constructor in the function below. All fully connected layers should include bias terms. The architecture is:\n * Fully connected layer with input size 784 and output size 256\n * LeakyReLU with alpha 0.01\n * Fully connected layer with input_size 256 and output size 256\n * LeakyReLU with alpha 0.01\n * Fully connected layer with input size 256 and output size 1\n \nRecall that the Leaky ReLU nonlinearity computes $f(x) = \\max(\\alpha x, x)$ for some fixed constant $\\alpha$; for the LeakyReLU nonlinearities in the architecture above we set $\\alpha=0.01$.\n \nThe output of the discriminator should have shape `[batch_size, 1]`, and contain real numbers corresponding to the scores that each of the `batch_size` inputs is a real image.",
"_____no_output_____"
]
],
[
[
"def discriminator():\n \"\"\"\n Build and return a PyTorch model implementing the architecture above.\n \"\"\"\n model = nn.Sequential(\n Flatten(),\n nn.Linear(784, 256, bias=True),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.Linear(256, 256, bias=True),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.Linear(256, 1, bias=True)\n )\n return model",
"_____no_output_____"
]
],
[
[
"Test to make sure the number of parameters in the discriminator is correct:",
"_____no_output_____"
]
],
[
[
"def test_discriminator(true_count=267009):\n model = discriminator()\n cur_count = count_params(model)\n if cur_count != true_count:\n print('Incorrect number of parameters in discriminator. Check your achitecture.')\n else:\n print('Correct number of parameters in discriminator.') \n\ntest_discriminator()",
"Correct number of parameters in discriminator.\n"
]
],
[
[
"# Generator\nNow to build the generator network:\n * Fully connected layer from noise_dim to 1024\n * `ReLU`\n * Fully connected layer with size 1024 \n * `ReLU`\n * Fully connected layer with size 784\n * `TanH` (to clip the image to be in the range of [-1,1])",
"_____no_output_____"
]
],
[
[
"def generator(noise_dim=NOISE_DIM):\n \"\"\"\n Build and return a PyTorch model implementing the architecture above.\n \"\"\"\n model = nn.Sequential(\n nn.Linear(noise_dim, 1024),\n nn.ReLU(inplace=True),\n nn.Linear(1024, 1024),\n nn.ReLU(inplace=True),\n nn.Linear(1024, 784),\n nn.Tanh()\n )\n return model",
"_____no_output_____"
]
],
[
[
"Test to make sure the number of parameters in the generator is correct:",
"_____no_output_____"
]
],
[
[
"def test_generator(true_count=1858320):\n model = generator(4)\n cur_count = count_params(model)\n if cur_count != true_count:\n print('Incorrect number of parameters in generator. Check your achitecture.')\n else:\n print('Correct number of parameters in generator.')\n\ntest_generator()",
"Correct number of parameters in generator.\n"
]
],
[
[
"# GAN Loss\n\nCompute the generator and discriminator loss. The generator loss is:\n$$\\ell_G = -\\mathbb{E}_{z \\sim p(z)}\\left[\\log D(G(z))\\right]$$\nand the discriminator loss is:\n$$ \\ell_D = -\\mathbb{E}_{x \\sim p_\\text{data}}\\left[\\log D(x)\\right] - \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$\nNote that these are negated from the equations presented earlier as we will be *minimizing* these losses.\n\n**HINTS**: You should use the `bce_loss` function defined below to compute the binary cross entropy loss which is needed to compute the log probability of the true label given the logits output from the discriminator. Given a score $s\\in\\mathbb{R}$ and a label $y\\in\\{0, 1\\}$, the binary cross entropy loss is\n\n$$ bce(s, y) = -y * \\log(s) - (1 - y) * \\log(1 - s) $$\n\nA naive implementation of this formula can be numerically unstable, so we have provided a numerically stable implementation for you below.\n\nYou will also need to compute labels corresponding to real or fake and use the logit arguments to determine their size. Make sure you cast these labels to the correct data type using the global `dtype` variable, for example:\n\n\n`true_labels = torch.ones(size).type(dtype)`\n\nInstead of computing the expectation of $\\log D(G(z))$, $\\log D(x)$ and $\\log \\left(1-D(G(z))\\right)$, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing.",
"_____no_output_____"
]
],
[
[
"def bce_loss(input, target):\n \"\"\"\n Numerically stable version of the binary cross-entropy loss function.\n\n As per https://github.com/pytorch/pytorch/issues/751\n See the TensorFlow docs for a derivation of this formula:\n https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits\n\n Inputs:\n - input: PyTorch Tensor of shape (N, ) giving scores.\n - target: PyTorch Tensor of shape (N,) containing 0 and 1 giving targets.\n\n Returns:\n - A PyTorch Tensor containing the mean BCE loss over the minibatch of input data.\n \"\"\"\n neg_abs = - input.abs()\n loss = input.clamp(min=0) - input * target + (1 + neg_abs.exp()).log()\n return loss.mean()",
"_____no_output_____"
],
[
"# ℓD=−𝔼x∼pdata[logD(x)]−𝔼z∼p(z)[log(1−D(G(z)))]\ndef discriminator_loss(logits_real, logits_fake):\n \"\"\"\n Computes the discriminator loss described above.\n \n Inputs:\n - logits_real: PyTorch Tensor of shape (N,) giving scores for the real data.\n - logits_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.\n \n Returns:\n - loss: PyTorch Tensor containing (scalar) the loss for the discriminator.\n \"\"\"\n loss=0\n target_real = torch.ones(len(logits_real)).type(dtype)\n target_fake = torch.zeros(len(logits_fake)).type(dtype)\n loss+=bce_loss(logits_real, target_real)\n loss+=bce_loss(logits_fake, target_fake)\n return loss\n################################################################################\n# N, _ = logits_real.size()\n# loss = (bce_loss(logits_real, Variable(torch.ones(N)).type(dtype)) +\n# bce_loss(logits_fake, Variable(torch.zeros(N)).type(dtype)))\n# return loss\n\n# ℓG=−𝔼z∼p(z)[logD(G(z))]\ndef generator_loss(logits_fake):\n \"\"\"\n Computes the generator loss described above.\n\n Inputs:\n - logits_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.\n \n Returns:\n - loss: PyTorch Tensor containing the (scalar) loss for the generator.\n \"\"\"\n# target_fake = torch.zeros(len(logits_fake)).type(dtype)\n# loss=bce_loss(logits_fake, target_fake)\n# return loss\n\n#################################################################################\n N, _ = logits_fake.size()\n loss = bce_loss(logits_fake, Variable(torch.ones(N)).type(dtype))\n return loss\n",
"_____no_output_____"
]
],
[
[
"Test your generator and discriminator loss. You should see errors < 1e-7.",
"_____no_output_____"
]
],
[
[
"def test_discriminator_loss(logits_real, logits_fake, d_loss_true):\n d_loss = discriminator_loss(torch.Tensor(logits_real).type(dtype),\n torch.Tensor(logits_fake).type(dtype)).cpu().numpy()\n print(\"Maximum error in d_loss: %g\"%rel_error(d_loss_true, d_loss))\n\ntest_discriminator_loss(answers['logits_real'], answers['logits_fake'],\n answers['d_loss_true'])",
"Maximum error in d_loss: 2.83811e-08\n"
],
[
"def test_generator_loss(logits_fake, g_loss_true):\n g_loss = generator_loss(torch.Tensor(logits_fake).type(dtype)).cpu().numpy()\n print(\"Maximum error in g_loss: %g\"%rel_error(g_loss_true, g_loss))\n\ntest_generator_loss(answers['logits_fake'], answers['g_loss_true'])",
"Maximum error in g_loss: 4.4518e-09\n"
]
],
[
[
"# Optimizing our loss\nMake a function that returns an `optim.Adam` optimizer for the given model with a 1e-3 learning rate, beta1=0.5, beta2=0.999. You'll use this to construct optimizers for the generators and discriminators for the rest of the notebook.",
"_____no_output_____"
]
],
[
[
"def get_optimizer(model):\n \"\"\"\n Construct and return an Adam optimizer for the model with learning rate 1e-3,\n beta1=0.5, and beta2=0.999.\n \n Input:\n - model: A PyTorch model that we want to optimize.\n \n Returns:\n - An Adam optimizer for the model with the desired hyperparameters.\n \"\"\"\n optimizer = optim.Adam(model.parameters(), lr=1e-3, betas=(0.5, 0.999))\n return optimizer",
"_____no_output_____"
]
],
[
[
"# Training a GAN!\n\nWe provide you the main training loop... you won't need to change this function, but we encourage you to read through and understand it. ",
"_____no_output_____"
]
],
[
[
"def run_a_gan(D, G, D_solver, G_solver, discriminator_loss, generator_loss, show_every=250, \n batch_size=128, noise_size=96, num_epochs=10):\n \"\"\"\n Train a GAN!\n \n Inputs:\n - D, G: PyTorch models for the discriminator and generator\n - D_solver, G_solver: torch.optim Optimizers to use for training the\n discriminator and generator.\n - discriminator_loss, generator_loss: Functions to use for computing the generator and\n discriminator loss, respectively.\n - show_every: Show samples after every show_every iterations.\n - batch_size: Batch size to use for training.\n - noise_size: Dimension of the noise to use as input to the generator.\n - num_epochs: Number of epochs over the training dataset to use for training.\n \"\"\"\n iter_count = 0\n for epoch in range(num_epochs):\n for x, _ in loader_train:\n if len(x) != batch_size:\n continue\n D_solver.zero_grad()\n real_data = x.type(dtype)\n logits_real = D(2* (real_data - 0.5)).type(dtype)\n\n g_fake_seed = sample_noise(batch_size, noise_size).type(dtype)\n fake_images = G(g_fake_seed).detach()\n logits_fake = D(fake_images.view(batch_size, 1, 28, 28))\n\n d_total_error = discriminator_loss(logits_real, logits_fake)\n d_total_error.backward() \n D_solver.step()\n\n G_solver.zero_grad()\n g_fake_seed = sample_noise(batch_size, noise_size).type(dtype)\n fake_images = G(g_fake_seed)\n\n gen_logits_fake = D(fake_images.view(batch_size, 1, 28, 28))\n g_error = generator_loss(gen_logits_fake)\n g_error.backward()\n G_solver.step()\n\n if (iter_count % show_every == 0):\n print('Iter: {}, D: {:.4}, G:{:.4}'.format(iter_count,d_total_error.item(),g_error.item()))\n imgs_numpy = fake_images.data.cpu().numpy()\n show_images(imgs_numpy[0:16])\n plt.show()\n print()\n iter_count += 1",
"_____no_output_____"
],
[
"# Make the discriminator\nD = discriminator().type(dtype)\n\n# Make the generator\nG = generator().type(dtype)\n\n# Use the function you wrote earlier to get optimizers for the Discriminator and the Generator\nD_solver = get_optimizer(D)\nG_solver = get_optimizer(G)\n# Run it!\nrun_a_gan(D, G, D_solver, G_solver, discriminator_loss, generator_loss)",
"Iter: 0, D: 1.328, G:0.7202\n"
]
],
[
[
"Well that wasn't so hard, was it? In the iterations in the low 100s you should see black backgrounds, fuzzy shapes as you approach iteration 1000, and decent shapes, about half of which will be sharp and clearly recognizable as we pass 3000.",
"_____no_output_____"
],
[
"# Least Squares GAN\nWe'll now look at [Least Squares GAN](https://arxiv.org/abs/1611.04076), a newer, more stable alernative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement equation (9) in the paper, with the generator loss:\n$$\\ell_G = \\frac{1}{2}\\mathbb{E}_{z \\sim p(z)}\\left[\\left(D(G(z))-1\\right)^2\\right]$$\nand the discriminator loss:\n$$ \\ell_D = \\frac{1}{2}\\mathbb{E}_{x \\sim p_\\text{data}}\\left[\\left(D(x)-1\\right)^2\\right] + \\frac{1}{2}\\mathbb{E}_{z \\sim p(z)}\\left[ \\left(D(G(z))\\right)^2\\right]$$\n\n\n**HINTS**: Instead of computing the expectation, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing. When plugging in for $D(x)$ and $D(G(z))$ use the direct output from the discriminator (`scores_real` and `scores_fake`).",
"_____no_output_____"
]
],
[
[
"def ls_discriminator_loss(scores_real, scores_fake):\n \"\"\"\n Compute the Least-Squares GAN loss for the discriminator.\n \n Inputs:\n - scores_real: PyTorch Tensor of shape (N,) giving scores for the real data.\n - scores_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.\n \n Outputs:\n - loss: A PyTorch Tensor containing the loss.\n \"\"\"\n loss = 0.5*(torch.mean(pow(scores_real-1,2))+torch.mean(pow(scores_fake,2)))\n return loss\n\ndef ls_generator_loss(scores_fake):\n \"\"\"\n Computes the Least-Squares GAN loss for the generator.\n \n Inputs:\n - scores_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.\n \n Outputs:\n - loss: A PyTorch Tensor containing the loss.\n \"\"\"\n loss = 0.5*torch.mean(pow(scores_fake-1, 2))\n return loss",
"_____no_output_____"
]
],
[
[
"Before running a GAN with our new loss function, let's check it:",
"_____no_output_____"
]
],
[
[
"def test_lsgan_loss(score_real, score_fake, d_loss_true, g_loss_true):\n score_real = torch.Tensor(score_real).type(dtype)\n score_fake = torch.Tensor(score_fake).type(dtype)\n d_loss = ls_discriminator_loss(score_real, score_fake).cpu().numpy()\n g_loss = ls_generator_loss(score_fake).cpu().numpy()\n print(\"Maximum error in d_loss: %g\"%rel_error(d_loss_true, d_loss))\n print(\"Maximum error in g_loss: %g\"%rel_error(g_loss_true, g_loss))\n\ntest_lsgan_loss(answers['logits_real'], answers['logits_fake'],\n answers['d_loss_lsgan_true'], answers['g_loss_lsgan_true'])",
"Maximum error in d_loss: 1.64377e-08\nMaximum error in g_loss: 3.36961e-08\n"
]
],
[
[
"Run the following cell to train your model!",
"_____no_output_____"
]
],
[
[
"D_LS = discriminator().type(dtype)\nG_LS = generator().type(dtype)\n\nD_LS_solver = get_optimizer(D_LS)\nG_LS_solver = get_optimizer(G_LS)\n\nrun_a_gan(D_LS, G_LS, D_LS_solver, G_LS_solver, ls_discriminator_loss, ls_generator_loss)",
"Iter: 0, D: 0.5689, G:0.51\n"
]
],
[
[
"# Deeply Convolutional GANs\nIn the first part of the notebook, we implemented an almost direct copy of the original GAN network from Ian Goodfellow. However, this network architecture allows no real spatial reasoning. It is unable to reason about things like \"sharp edges\" in general because it lacks any convolutional layers. Thus, in this section, we will implement some of the ideas from [DCGAN](https://arxiv.org/abs/1511.06434), where we use convolutional networks \n\n#### Discriminator\nWe will use a discriminator inspired by the TensorFlow MNIST classification tutorial, which is able to get above 99% accuracy on the MNIST dataset fairly quickly. \n* Reshape into image tensor (Use Unflatten!)\n* Conv2D: 32 Filters, 5x5, Stride 1\n* Leaky ReLU(alpha=0.01)\n* Max Pool 2x2, Stride 2\n* Conv2D: 64 Filters, 5x5, Stride 1\n* Leaky ReLU(alpha=0.01)\n* Max Pool 2x2, Stride 2\n* Flatten\n* Fully Connected with output size 4 x 4 x 64\n* Leaky ReLU(alpha=0.01)\n* Fully Connected with output size 1",
"_____no_output_____"
]
],
[
[
"def build_dc_classifier():\n \"\"\"\n Build and return a PyTorch model for the DCGAN discriminator implementing\n the architecture above.\n \"\"\"\n return nn.Sequential(\n ###########################\n ######### TO DO ###########\n ###########################\n Unflatten(batch_size, 1, 28, 28),\n nn.Conv2d(1, 32, kernel_size=(5, 5), stride=1),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.MaxPool2d(kernel_size=(2, 2), stride=2),\n nn.Conv2d(32, 64, kernel_size=(5, 5), stride=1),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.MaxPool2d(kernel_size=(2, 2), stride=2),\n Flatten(),\n nn.Linear(4*4*64, 4*4*64),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.Linear(4*4*64, 1)\n )\n\ndata = next(enumerate(loader_train))[-1][0].type(dtype)\nb = build_dc_classifier().type(dtype)\nout = b(data)\nprint(out.size())",
"torch.Size([128, 1])\n"
]
],
[
[
"Check the number of parameters in your classifier as a sanity check:",
"_____no_output_____"
]
],
[
[
"def test_dc_classifer(true_count=1102721):\n model = build_dc_classifier()\n cur_count = count_params(model)\n if cur_count != true_count:\n print('Incorrect number of parameters in generator. Check your achitecture.')\n else:\n print('Correct number of parameters in generator.')\n\ntest_dc_classifer()",
"Correct number of parameters in generator.\n"
]
],
[
[
"#### Generator\nFor the generator, we will copy the architecture exactly from the [InfoGAN paper](https://arxiv.org/pdf/1606.03657.pdf). See Appendix C.1 MNIST. See the documentation for [tf.nn.conv2d_transpose](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). We are always \"training\" in GAN mode. \n* Fully connected with output size 1024\n* `ReLU`\n* BatchNorm\n* Fully connected with output size 7 x 7 x 128 \n* ReLU\n* BatchNorm\n* Reshape into Image Tensor of shape 7, 7, 128\n* Conv2D^T (Transpose): 64 filters of 4x4, stride 2, 'same' padding\n* `ReLU`\n* BatchNorm\n* Conv2D^T (Transpose): 1 filter of 4x4, stride 2, 'same' padding\n* `TanH`\n* Should have a 28x28x1 image, reshape back into 784 vector",
"_____no_output_____"
]
],
[
[
"def build_dc_generator(noise_dim=NOISE_DIM):\n \"\"\"\n Build and return a PyTorch model implementing the DCGAN generator using\n the architecture described above.\n \"\"\"\n return nn.Sequential(\n ###########################\n ######### TO DO ###########\n ###########################\n nn.Linear(noise_dim, 1024),\n nn.ReLU(),\n nn.BatchNorm1d(1024),\n nn.Linear(1024, 7*7*128),\n nn.ReLU(),\n nn.BatchNorm1d(7*7*128),\n Unflatten(batch_size, 128, 7, 7),\n nn.Conv2d(128, 64, kernel_size=(4, 4), stride=2, padding=1),\n nn.ReLU(),\n nn.BatchNorm2d(64),\n nn.Conv2d(64, 1, kernel_size=(4, 4), stride=2, padding=1),\n nn.Tanh(),\n Flatten()\n )\n\ntest_g_gan = build_dc_generator().type(dtype)\ntest_g_gan.apply(initialize_weights)\n\nfake_seed = torch.randn(batch_size, NOISE_DIM).type(dtype)\nfake_images = test_g_gan.forward(fake_seed)\nfake_images.size()",
"_____no_output_____"
]
],
[
[
"Check the number of parameters in your generator as a sanity check:",
"_____no_output_____"
]
],
[
[
"def test_dc_generator(true_count=6580801):\n model = build_dc_generator(4)\n cur_count = count_params(model)\n if cur_count != true_count:\n print('Incorrect number of parameters in generator. Check your achitecture.')\n else:\n print('Correct number of parameters in generator.')\n\ntest_dc_generator()",
"Correct number of parameters in generator.\n"
],
[
"D_DC = build_dc_classifier().type(dtype) \nD_DC.apply(initialize_weights)\nG_DC = build_dc_generator().type(dtype)\nG_DC.apply(initialize_weights)\n\nD_DC_solver = get_optimizer(D_DC)\nG_DC_solver = get_optimizer(G_DC)\n\nrun_a_gan(D_DC, G_DC, D_DC_solver, G_DC_solver, discriminator_loss, generator_loss, num_epochs=5)",
"_____no_output_____"
]
],
[
[
"## INLINE QUESTION 1\n\nWe will look at an example to see why alternating minimization of the same objective (like in a GAN) can be tricky business.\n\nConsider $f(x,y)=xy$. What does $\\min_x\\max_y f(x,y)$ evaluate to? (Hint: minmax tries to minimize the maximum value achievable.)\n\nNow try to evaluate this function numerically for 6 steps, starting at the point $(1,1)$, \nby using alternating gradient (first updating y, then updating x) with step size $1$. \nYou'll find that writing out the update step in terms of $x_t,y_t,x_{t+1},y_{t+1}$ will be useful.\n\nRecord the six pairs of explicit values for $(x_t,y_t)$ in the table below.",
"_____no_output_____"
],
[
"### Your answer:\n \n $y_0$ | $y_1$ | $y_2$ | $y_3$ | $y_4$ | $y_5$ | $y_6$ \n ----- | ----- | ----- | ----- | ----- | ----- | ----- \n 1 | | | | | | \n $x_0$ | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $x_5$ | $x_6$ \n 1 | | | | | | \n ",
"_____no_output_____"
],
[
"## INLINE QUESTION 2\nUsing this method, will we ever reach the optimal value? Why or why not?",
"_____no_output_____"
],
[
"### Your answer:",
"_____no_output_____"
],
[
"## INLINE QUESTION 3\nIf the generator loss decreases during training while the discriminator loss stays at a constant high value from the start, is this a good sign? Why or why not? A qualitative answer is sufficient",
"_____no_output_____"
],
[
"### Your answer:",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecc70f0cb07a6fc3e76167ffe0642854d68e0c2c | 26,093 | ipynb | Jupyter Notebook | medve/archive/news_parser-Copy1.ipynb | szekelydata/szekelydata | 811b9f0133d792a2b6131718f696ab1978b716e8 | [
"MIT"
] | 4 | 2017-04-02T23:34:22.000Z | 2017-05-12T07:12:59.000Z | medve/archive/news_parser-Copy1.ipynb | szekelydata/szekelydata | 811b9f0133d792a2b6131718f696ab1978b716e8 | [
"MIT"
] | 40 | 2017-04-02T23:26:11.000Z | 2017-05-11T12:19:22.000Z | medve/archive/news_parser-Copy1.ipynb | csaladenes/szekelydata | b90a3ff7daea32d6ac31a122cf6047e1bee658a1 | [
"MIT"
] | null | null | null | 33.409731 | 132 | 0.51098 | [
[
[
"import bs4 as bs, urllib, pandas as pd, numpy as np",
"_____no_output_____"
]
],
[
[
"Parse past X years",
"_____no_output_____"
]
],
[
[
"keyword='medve'\nbaseurl=u'https://szekelyhon.ro/kereses?op=search&src_words='",
"_____no_output_____"
],
[
"start='2002-12'\n#end='2018-11-01'\n#start='2018-10'\nend='2018-12'\ndates=[]\ndatelist = pd.date_range(start=pd.to_datetime(start), end=pd.to_datetime(end), freq='M').tolist()\nfor date in datelist:\n dates.append(str(date)[:10])",
"_____no_output_____"
],
[
"dates[:5]",
"_____no_output_____"
],
[
"def extractor(time1,time2):\n time1=dates[i]\n time2=dates[i+1]\n print('Parsing...',time1,'-',time2)\n url=baseurl+keyword+'&src_time1='+time1+'&src_time2='+time2\n html = urllib.request.urlopen(url).read()\n soup = bs.BeautifulSoup(html,'lxml')\n return soup.findAll(\"div\", {\"class\": \"cikkocka2c\"})",
"_____no_output_____"
],
[
"divs=[]\nfor i in range(len(dates)-1):\n time1=dates[i]\n time2=dates[i+1]\n divs.append(extractor(time1,time2))",
"Parsing... 2002-12-31 - 2003-01-31\nParsing... 2003-01-31 - 2003-02-28\nParsing... 2003-02-28 - 2003-03-31\nParsing... 2003-03-31 - 2003-04-30\nParsing... 2003-04-30 - 2003-05-31\nParsing... 2003-05-31 - 2003-06-30\nParsing... 2003-06-30 - 2003-07-31\nParsing... 2003-07-31 - 2003-08-31\nParsing... 2003-08-31 - 2003-09-30\nParsing... 2003-09-30 - 2003-10-31\nParsing... 2003-10-31 - 2003-11-30\nParsing... 2003-11-30 - 2003-12-31\nParsing... 2003-12-31 - 2004-01-31\nParsing... 2004-01-31 - 2004-02-29\nParsing... 2004-02-29 - 2004-03-31\nParsing... 2004-03-31 - 2004-04-30\nParsing... 2004-04-30 - 2004-05-31\nParsing... 2004-05-31 - 2004-06-30\nParsing... 2004-06-30 - 2004-07-31\nParsing... 2004-07-31 - 2004-08-31\nParsing... 2004-08-31 - 2004-09-30\nParsing... 2004-09-30 - 2004-10-31\nParsing... 2004-10-31 - 2004-11-30\nParsing... 2004-11-30 - 2004-12-31\nParsing... 2004-12-31 - 2005-01-31\nParsing... 2005-01-31 - 2005-02-28\nParsing... 2005-02-28 - 2005-03-31\nParsing... 2005-03-31 - 2005-04-30\nParsing... 2005-04-30 - 2005-05-31\nParsing... 2005-05-31 - 2005-06-30\nParsing... 2005-06-30 - 2005-07-31\nParsing... 2005-07-31 - 2005-08-31\nParsing... 2005-08-31 - 2005-09-30\nParsing... 2005-09-30 - 2005-10-31\nParsing... 2005-10-31 - 2005-11-30\nParsing... 2005-11-30 - 2005-12-31\nParsing... 2005-12-31 - 2006-01-31\nParsing... 2006-01-31 - 2006-02-28\nParsing... 2006-02-28 - 2006-03-31\nParsing... 2006-03-31 - 2006-04-30\nParsing... 2006-04-30 - 2006-05-31\nParsing... 2006-05-31 - 2006-06-30\nParsing... 2006-06-30 - 2006-07-31\nParsing... 2006-07-31 - 2006-08-31\nParsing... 2006-08-31 - 2006-09-30\nParsing... 2006-09-30 - 2006-10-31\nParsing... 2006-10-31 - 2006-11-30\nParsing... 2006-11-30 - 2006-12-31\nParsing... 2006-12-31 - 2007-01-31\nParsing... 2007-01-31 - 2007-02-28\nParsing... 2007-02-28 - 2007-03-31\nParsing... 2007-03-31 - 2007-04-30\nParsing... 2007-04-30 - 2007-05-31\nParsing... 2007-05-31 - 2007-06-30\nParsing... 2007-06-30 - 2007-07-31\nParsing... 2007-07-31 - 2007-08-31\nParsing... 2007-08-31 - 2007-09-30\nParsing... 2007-09-30 - 2007-10-31\nParsing... 2007-10-31 - 2007-11-30\nParsing... 2007-11-30 - 2007-12-31\nParsing... 2007-12-31 - 2008-01-31\nParsing... 2008-01-31 - 2008-02-29\nParsing... 2008-02-29 - 2008-03-31\nParsing... 2008-03-31 - 2008-04-30\nParsing... 2008-04-30 - 2008-05-31\nParsing... 2008-05-31 - 2008-06-30\nParsing... 2008-06-30 - 2008-07-31\nParsing... 2008-07-31 - 2008-08-31\nParsing... 2008-08-31 - 2008-09-30\nParsing... 2008-09-30 - 2008-10-31\nParsing... 2008-10-31 - 2008-11-30\nParsing... 2008-11-30 - 2008-12-31\nParsing... 2008-12-31 - 2009-01-31\nParsing... 2009-01-31 - 2009-02-28\nParsing... 2009-02-28 - 2009-03-31\nParsing... 2009-03-31 - 2009-04-30\nParsing... 2009-04-30 - 2009-05-31\nParsing... 2009-05-31 - 2009-06-30\nParsing... 2009-06-30 - 2009-07-31\nParsing... 2009-07-31 - 2009-08-31\nParsing... 2009-08-31 - 2009-09-30\nParsing... 2009-09-30 - 2009-10-31\nParsing... 2009-10-31 - 2009-11-30\nParsing... 2009-11-30 - 2009-12-31\nParsing... 2009-12-31 - 2010-01-31\nParsing... 2010-01-31 - 2010-02-28\nParsing... 2010-02-28 - 2010-03-31\nParsing... 2010-03-31 - 2010-04-30\nParsing... 2010-04-30 - 2010-05-31\nParsing... 2010-05-31 - 2010-06-30\nParsing... 2010-06-30 - 2010-07-31\nParsing... 2010-07-31 - 2010-08-31\nParsing... 2010-08-31 - 2010-09-30\nParsing... 2010-09-30 - 2010-10-31\nParsing... 2010-10-31 - 2010-11-30\nParsing... 2010-11-30 - 2010-12-31\nParsing... 2010-12-31 - 2011-01-31\nParsing... 2011-01-31 - 2011-02-28\nParsing... 
2011-02-28 - 2011-03-31\nParsing... 2011-03-31 - 2011-04-30\nParsing... 2011-04-30 - 2011-05-31\nParsing... 2011-05-31 - 2011-06-30\nParsing... 2011-06-30 - 2011-07-31\nParsing... 2011-07-31 - 2011-08-31\nParsing... 2011-08-31 - 2011-09-30\nParsing... 2011-09-30 - 2011-10-31\nParsing... 2011-10-31 - 2011-11-30\nParsing... 2011-11-30 - 2011-12-31\nParsing... 2011-12-31 - 2012-01-31\nParsing... 2012-01-31 - 2012-02-29\nParsing... 2012-02-29 - 2012-03-31\nParsing... 2012-03-31 - 2012-04-30\nParsing... 2012-04-30 - 2012-05-31\nParsing... 2012-05-31 - 2012-06-30\nParsing... 2012-06-30 - 2012-07-31\nParsing... 2012-07-31 - 2012-08-31\nParsing... 2012-08-31 - 2012-09-30\nParsing... 2012-09-30 - 2012-10-31\nParsing... 2012-10-31 - 2012-11-30\nParsing... 2012-11-30 - 2012-12-31\nParsing... 2012-12-31 - 2013-01-31\nParsing... 2013-01-31 - 2013-02-28\nParsing... 2013-02-28 - 2013-03-31\nParsing... 2013-03-31 - 2013-04-30\nParsing... 2013-04-30 - 2013-05-31\nParsing... 2013-05-31 - 2013-06-30\nParsing... 2013-06-30 - 2013-07-31\nParsing... 2013-07-31 - 2013-08-31\nParsing... 2013-08-31 - 2013-09-30\nParsing... 2013-09-30 - 2013-10-31\nParsing... 2013-10-31 - 2013-11-30\nParsing... 2013-11-30 - 2013-12-31\nParsing... 2013-12-31 - 2014-01-31\nParsing... 2014-01-31 - 2014-02-28\nParsing... 2014-02-28 - 2014-03-31\nParsing... 2014-03-31 - 2014-04-30\nParsing... 2014-04-30 - 2014-05-31\nParsing... 2014-05-31 - 2014-06-30\nParsing... 2014-06-30 - 2014-07-31\nParsing... 2014-07-31 - 2014-08-31\nParsing... 2014-08-31 - 2014-09-30\nParsing... 2014-09-30 - 2014-10-31\nParsing... 2014-10-31 - 2014-11-30\nParsing... 2014-11-30 - 2014-12-31\nParsing... 2014-12-31 - 2015-01-31\nParsing... 2015-01-31 - 2015-02-28\nParsing... 2015-02-28 - 2015-03-31\nParsing... 2015-03-31 - 2015-04-30\nParsing... 2015-04-30 - 2015-05-31\nParsing... 2015-05-31 - 2015-06-30\nParsing... 2015-06-30 - 2015-07-31\nParsing... 2015-07-31 - 2015-08-31\nParsing... 2015-08-31 - 2015-09-30\nParsing... 2015-09-30 - 2015-10-31\nParsing... 2015-10-31 - 2015-11-30\nParsing... 2015-11-30 - 2015-12-31\nParsing... 2015-12-31 - 2016-01-31\nParsing... 2016-01-31 - 2016-02-29\nParsing... 2016-02-29 - 2016-03-31\nParsing... 2016-03-31 - 2016-04-30\nParsing... 2016-04-30 - 2016-05-31\nParsing... 2016-05-31 - 2016-06-30\nParsing... 2016-06-30 - 2016-07-31\nParsing... 2016-07-31 - 2016-08-31\nParsing... 2016-08-31 - 2016-09-30\nParsing... 2016-09-30 - 2016-10-31\nParsing... 2016-10-31 - 2016-11-30\nParsing... 2016-11-30 - 2016-12-31\nParsing... 2016-12-31 - 2017-01-31\nParsing... 2017-01-31 - 2017-02-28\nParsing... 2017-02-28 - 2017-03-31\nParsing... 2017-03-31 - 2017-04-30\nParsing... 2017-04-30 - 2017-05-31\nParsing... 2017-05-31 - 2017-06-30\nParsing... 2017-06-30 - 2017-07-31\nParsing... 2017-07-31 - 2017-08-31\nParsing... 2017-08-31 - 2017-09-30\nParsing... 2017-09-30 - 2017-10-31\nParsing... 2017-10-31 - 2017-11-30\nParsing... 2017-11-30 - 2017-12-31\nParsing... 2017-12-31 - 2018-01-31\nParsing... 2018-01-31 - 2018-02-28\nParsing... 2018-02-28 - 2018-03-31\nParsing... 2018-03-31 - 2018-04-30\nParsing... 2018-04-30 - 2018-05-31\nParsing... 2018-05-31 - 2018-06-30\nParsing... 2018-06-30 - 2018-07-31\nParsing... 2018-07-31 - 2018-08-31\nParsing... 2018-08-31 - 2018-09-30\nParsing... 2018-09-30 - 2018-10-31\nParsing... 2018-10-31 - 2018-11-30\n"
],
[
"def date_hu_en(i):\n date=i[6:-4]\n if date=='augusztus': m='08'\n elif date=='december': m='12'\n elif date=='február': m='02'\n elif date=='január': m='01'\n elif date=='július': m='07'\n elif date=='június': m='06'\n elif date=='május': m='05'\n elif date=='március': m='03'\n elif date=='november': m='11'\n elif date==u'október': m='10'\n elif date==u'szeptember': m='09'\n elif date==u'április': m='04'\n else: return date\n return i[:4]+'-'+m+'-'+i[-3:-1]",
"_____no_output_____"
],
[
"def find_all(s, ch):\n return [i for i, letter in enumerate(s) if letter == ch]",
"_____no_output_____"
]
],
[
[
"Relevant = Medves cikk-e vagy sem: 1-igen, 0-nem biztos, -1:biztosan nem \nDeaths = Halalok szama (ha ismert) \nSeverity = Sulyossag: 0-mas jellegu hir, 1-latas, 2-allat-tamadas, 3-ember-tamadas \nDuplicate = 0: Eredeti cikk, 1: Masolat, 2: Osszegzes",
"_____no_output_____"
]
],
[
[
"def text_processor(title,content):\n relevant=0\n severity=0\n deaths=0\n tamadas=[u'támad',u'sebes']\n for i in tamadas:\n if i in title+content:\n relevant=1\n severity=2\n tamadas=[u'halál',u'áldozat',u'ölt ',u'pusztít']\n for i in tamadas:\n if i in title+content:\n relevant=1\n severity=3\n tamadas=[u'medve',u'medvé']\n for i in tamadas:\n if i in title.replace(',',' ').replace('.',' ').lower():\n relevant=1\n tamadas=[u'medvegyev',u'jegesmedvék',u'medvehagyma',u'aranymedve',u'szibéria',u' kupa',\n u'jégkorong',u'kosárlabda',u'labdarúgás',u'labdarúgó',u'steaua',\n u'c osztály',u'berlin',u'állatkert',u'medve-tó',u'oroszorsz',u' orosz ']\n for i in tamadas:\n if i in (title+content).replace(',',' ').replace('.',' ').lower():\n relevant=-1\n return relevant,severity,deaths",
"_____no_output_____"
],
[
"hirek=[]\ntagset=set()\nfor i in range(len(dates)-1):\n time2=dates[i+1]\n divgroup=divs[i]\n for div in divgroup:\n icat=''\n img=div.find('img')\n if img !=None: \n img=img['src']\n #infer image category from image link\n icats=find_all(img,'/')\n if len(icats)>4:\n icat=img[icats[3]+1:icats[4]]\n tags=div.find(\"div\", {\"class\": \"tags_con1\"})\n if tags!=None: \n tags=[j.text.strip() for j in tags.findAll('div')]\n idiv=div.find(\"div\", {\"class\": \"catinner\"})\n if idiv!=None:\n idiv=idiv.find('div')\n content=div.find('p')\n date=idiv.text[idiv.text.find('20'):idiv.text.find(',')]\n title=div.find('h2').text\n if content==None:\n sdiv=str(div)[::-1]\n content=sdiv[:sdiv.find('>a/<')].replace('\\r','').replace('\\t','').replace('\\n','')[::-1][:-6]\n else: content=content.text\n content=content.replace('</div><div class=\"clear\"></div></div><div class=\"clear\"></div>','')\n link=div.findAll('a')[-1]['href']\n #infer category from link\n cats=find_all(link,'/')\n if len(cats)>3:\n cat=link[cats[2]+1:cats[3]]\n else: cat=''\n #infer attack from plain text\n relevant,severity,deaths=text_processor(title,content)\n if tags!=None:\n notags=[u'Húsvét',u'Film',u'Egészségügy',u'Külföld',u'Színház',u'Ünnep']\n for notag in notags:\n if notag in tags:\n relevant=-1\n break\n if ((relevant>-1)&\\\n (cat not in ['sport','muvelodes','sms-e-mail-velemeny','tusvanyos'])&\\\n (title not in [u'Röviden'])):\n if tags!=None: \n tagset=tagset.union(set(tags))\n if 'medve' in tags:\n relevant=1\n hirek.append({'date':date_hu_en(date),\n 'hudate':date,\n 'title':title,\n 'image':img,\n 'tags':repr(tags),\n 'content':content,\n 'link':link,\n 'category':cat,\n 'icategory':icat,\n 'relevant':relevant,\n 'severity':severity,\n 'deaths':deaths,\n 'duplicate':0\n })",
"_____no_output_____"
]
],
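[
[
"Quick sanity check of the keyword rules in `text_processor` on a made-up headline (the title and content below are invented for illustration and are not real articles).",
"_____no_output_____"
]
],
[
[
"# Invented example, just to exercise the rules above\nsample_title = u'Medve támadt egy juhnyájra Csíkszereda határában'\nsample_content = u'A gazda szerint az állat több juhot megsebesített.'\n\n# Expected: (1, 2, 0) -> relevant, severity 2 (attack on animals), deaths are not extracted by these rules\nprint(text_processor(sample_title, sample_content))",
"_____no_output_____"
]
],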
[
[
"Összes medvés hír",
"_____no_output_____"
]
],
[
[
"df=pd.DataFrame().from_dict(hirek)\ndf['date']=pd.to_datetime(df['date'])\ndf=df.sort_values('date').drop_duplicates().reset_index(drop=True)",
"_____no_output_____"
],
[
"len(hirek)",
"_____no_output_____"
]
],
[
[
"Save to medve Excel. Manual curation",
"_____no_output_____"
]
],
[
[
"df.columns",
"_____no_output_____"
],
[
"dm=df[[ 'date', 'hudate', 'link','image', 'category','icategory','tags','title',\n 'content']]\ndc=df[['title','content','relevant', 'severity','deaths','duplicate']]",
"_____no_output_____"
],
[
"#save parsed data\ndm.to_excel('szekelyhon_medve.xlsx')",
"_____no_output_____"
],
[
"#save data for curation\n#1 if you dont have savedata yet\nexisting_savedata=False\nif not existing_savedata:\n dc.to_excel('szekelyhon_medve_curated.xlsx')\n#2 if you already have savedata\nelse:\n dc2=pd.read_excel('szekelyhon_medve_curated.xlsx')\n dc2.combine_first(dc).to_excel('szekelyhon_medve_curated.xlsx')",
"_____no_output_____"
],
[
"dr=pd.read_excel('data/szekelyhon_medve_curated_manual.xlsx')",
"_____no_output_____"
],
[
"dr=dr[['content','relevant','deaths','severity','duplicate']].set_index('content')",
"_____no_output_____"
],
[
"relevant=[]\ndeaths=[]\nseverity=[]\nduplicate=[]\nfor i in range(len(dc.index)):\n if dc.loc[i]['content']!='':\n if dc.loc[i]['content'] in dr.index:\n dk=dr.loc[dc.loc[i]['content']]\n else:\n dk=dc.loc[i]\n else:\n dk=dc.loc[i]\n relevant.append(np.array(dk['relevant']).flatten()[0])\n deaths.append(np.array(dk['deaths']).flatten()[0])\n severity.append(np.array(dk['severity']).flatten()[0])\n duplicate.append(np.array(dk['duplicate']).flatten()[0])",
"_____no_output_____"
],
[
"dc['relevant']=relevant\ndc['deaths']=deaths\ndc['severity']=severity\ndc['duplicate']=duplicate",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \"\"\"Entry point for launching an IPython kernel.\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:3: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n This is separate from the ipykernel package so we can avoid doing imports until\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n after removing the cwd from sys.path.\n"
],
[
"dc.to_excel('szekelyhon_medve_curated.xlsx')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc71a6ab8697ce3970052dc8b23c7e45abf6a92 | 20,903 | ipynb | Jupyter Notebook | notebooks/PLN_01.ipynb | izafreitas/machine_learning_dellead_mentoring | 17cceec340d3b61ad367e8e0fb47db193d6d4889 | [
"MIT"
] | null | null | null | notebooks/PLN_01.ipynb | izafreitas/machine_learning_dellead_mentoring | 17cceec340d3b61ad367e8e0fb47db193d6d4889 | [
"MIT"
] | null | null | null | notebooks/PLN_01.ipynb | izafreitas/machine_learning_dellead_mentoring | 17cceec340d3b61ad367e8e0fb47db193d6d4889 | [
"MIT"
] | null | null | null | 40.198077 | 2,923 | 0.548247 | [
[
[
"# Spacy for NLP",
"_____no_output_____"
],
[
"In this project, sentiment analysis was done using natural language processing on the online reviews prevalant for various items on amazon,yelp and imdb which were lablelled. Using the spacy package of python to preprocess the data before, each individual review has been tokenized, lemmatized, filtered for stop words and vectorized inorder to prepare the data viable for the machine learning model. A pipeline was created which vectorized the preprocessed data using count vectorization or tfidf vectorizer, which is then split into training and testing datasets and were then used to train the machine learning model (support vector machine) and evaluate.\n\nhttps://mahadev001.github.io/Mahadev-Upadhyayula/Sentiment%20Analysis%20via%20NLP/Sentiment%20Analysis%20using%20NLP%20with%20Spacy%20and%20%20SVM.html\n\n\nThe data set contains about 1000 online reviews each for various items on Amazon, Yelp and IMDB and of these reviews about 500 were labelled positive and 500 were labelled negative reviews. For each company, the data was given the text format which are needed to be added to a dataframe",
"_____no_output_____"
]
],
[
[
"import spacy\nimport pandas as pd\nimport numpy as np\nimport sklearn as sk\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
]
],
[
[
"# Importanto e explorando dados",
"_____no_output_____"
],
[
"Os dados de cada uma das empresas são armazenados separadamente em arquivos txt. Cada um desses arquivos foi importado separadamente e associado a campos-chave.",
"_____no_output_____"
],
[
"## Importanto e reunindo dados",
"_____no_output_____"
]
],
[
[
"data_yelp = pd.read_table('yelp_labelled.txt')\ndata_amazon = pd.read_table('amazon_cells_labelled.txt')\ndata_imdb = pd.read_table('imdb_labelled.txt')",
"_____no_output_____"
],
[
"combined_col=[data_amazon, data_imdb, data_yelp]",
"_____no_output_____"
],
[
"print(data_amazon).columns",
" So there is no way for me to plug it in here in the US unless I go by a converter. \\\n0 Good case, Excellent value. \n1 Great for the jawbone. \n2 Tied to charger for conversations lasting more... \n3 The mic is great. \n4 I have to jiggle the plug to get it to line up... \n.. ... \n994 The screen does get smudged easily because it ... \n995 What a piece of junk.. I lose more calls on th... \n996 Item Does Not Match Picture. \n997 The only thing that disappoint me is the infra... \n998 You can not answer calls with the unit, never ... \n\n 0 \n0 1 \n1 1 \n2 0 \n3 1 \n4 0 \n.. .. \n994 0 \n995 0 \n996 0 \n997 0 \n998 0 \n\n[999 rows x 2 columns]\n"
]
],
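[
[
"Note on the output above: because `read_table` assumes a header row by default, the very first review of each file ended up as the column header. An alternative load (shown below for comparison only; it is not used further) passes `header=None` and names the columns explicitly so that no review is lost.",
"_____no_output_____"
]
],
[
[
"# Alternative load that keeps the first review as data instead of treating it as a header\ndata_amazon_alt = pd.read_table('amazon_cells_labelled.txt', header=None, names=['Review', 'Label'])\nprint(data_amazon_alt.shape)",
"_____no_output_____"
]
],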
[
[
"### Adicionando cabeçalhos nas colunas de cada conjunto de dados",
"_____no_output_____"
]
],
[
[
"for colname in combined_col:\n colname.columns = [\"Review\", \"Label\"]\n for colname in combined_col:\n print(colname.columns)",
"Index(['Review', 'Label'], dtype='object')\nIndex(['A very, very, very slow-moving, aimless movie about a distressed, drifting young man. ', '0'], dtype='object')\nIndex(['Wow... Loved this place.', '1'], dtype='object')\nIndex(['Review', 'Label'], dtype='object')\nIndex(['Review', 'Label'], dtype='object')\nIndex(['Wow... Loved this place.', '1'], dtype='object')\nIndex(['Review', 'Label'], dtype='object')\nIndex(['Review', 'Label'], dtype='object')\nIndex(['Review', 'Label'], dtype='object')\n"
],
[
"# Para saber a qual empresa pertence o conjunto de dados foi adicionado um a coluna \"Company\" como chave\ncompany = [\"Amazon\", \"imdb\", \"yelp\"]\ncomb_data = pd.concat(combined_col, keys = company)",
"_____no_output_____"
],
[
"# Explorando a estrutura do novo dataframe\nprint(comb_data.shape)\ncomb_data.head",
"(2745, 2)\n"
],
[
"comb_data.to_csv(\"Sentiment_Analysis_Dataset\")\nprint(comb_data.columns)\nprint(comb_data.isnull().sum())",
"Index(['Review', 'Label'], dtype='object')\nReview 0\nLabel 0\ndtype: int64\n"
]
],
[
[
"# Pré-processamento dos dados usando Spacy e treinamento do modelo de aprendizado de máquina usando sklearn ¶\n",
"_____no_output_____"
],
[
"Neste estágio, o pacote Spacy do python é usado para lematizar e remover palavras de parada do conjunto de dados obtido.",
"_____no_output_____"
]
],
[
[
"import spacy\n#import en_core_web_sm\nfrom spacy.lang.en.stop_words import STOP_WORDS\n#nlp = en_core_web_sm.load()\n#nlp = spacy.load('en')\n# Para construir uma lista de stop words para filtragem\nstopwords = list(STOP_WORDS)\nprint(stopwords)",
"['further', 'then', 'though', 'five', 'will', 'your', 'latter', 'am', 'throughout', 'first', 'last', 'ours', 'becoming', 'everything', 'them', 'seem', 'him', 'side', 'whose', 'just', 'it', 'sometimes', 'anywhere', 'seeming', 're', 'ca', 'often', 'once', 'full', 'show', 'two', 'say', 'its', 'all', 'whereas', 'can', '’m', 'hundred', 'here', 'ourselves', 'thereupon', 'so', 'per', 'we', 'regarding', 'four', 'after', 'against', 'around', 'everyone', '‘d', 'take', 'bottom', 'anyone', 'front', 'nothing', 'since', 'thereby', 'under', 'using', 'onto', 'forty', 'put', 'indeed', 'something', 'serious', 'alone', 'from', 'also', 'through', 'somewhere', 'made', 'who', 'being', 'yours', 'each', 'if', 'why', 'thence', 'in', \"'ve\", 'please', 'enough', 'mostly', 'within', 'therein', 'and', 'themselves', 'were', 'every', 'that', 'while', '‘re', 'where', 'into', 'empty', 'those', 'three', 'whereupon', 'whither', 'less', 'sixty', \"n't\", 'as', 'ever', 'almost', 'us', 'whereby', 'herein', 'yet', 'to', 'could', 'six', 'everywhere', 'which', 'would', 'across', 'any', 'eleven', 'itself', 'have', 'yourselves', 'whereafter', 'she', 'my', 'same', 'anything', 'fifty', 'name', 'make', 'mine', 'out', 'during', 'call', 'over', 'whenever', 'next', 'other', 'nevertheless', 'an', 'seemed', 'still', 'anyhow', 'thus', 'he', 'with', 'thereafter', '’ve', 'many', 'on', 'move', 'down', 'together', 'twenty', 'hers', 'do', '‘ve', 'third', 'becomes', 'only', 'behind', 'myself', 'was', 'without', 'some', 'done', 'another', 'perhaps', 'his', 'part', \"'re\", 'such', 'always', 'least', 'very', 'well', 'used', \"'d\", 'above', 'became', 'what', 'nine', 'amongst', 'various', 'again', 'for', 'how', 'someone', 'sometime', 'noone', 'whoever', 'both', 'doing', 'too', 'you', 'this', 'although', 'never', 'me', 'twelve', 'until', '‘m', 'does', 'beside', 'hereafter', \"'m\", '’re', 'give', 'nobody', 'hence', 'much', 'a', 'her', 'not', 'has', 'up', 'is', 'none', 'n’t', '‘ll', 'when', 'already', 'formerly', 'get', 'the', 'below', 'hereupon', 'anyway', 'else', 'no', 'nor', 'become', 'there', 'whether', 'amount', 'beyond', 'moreover', 'whom', 'whatever', 'wherein', 'of', 'others', 'towards', 'back', 'otherwise', 'therefore', 'herself', 'by', 'except', 'namely', 'fifteen', '’s', 'because', 'really', 'had', 'these', '‘s', 'himself', 'might', 'see', 'their', 'n‘t', 'about', 'our', 'now', 'toward', 'few', 'unless', 'i', 'nowhere', 'yourself', 'meanwhile', 'more', 'must', 'whole', 'several', 'keep', 'somehow', 'either', 'thru', 'be', 'former', 'even', 'own', 'latterly', 'via', '’d', \"'s\", 'most', 'or', 'along', 'whence', 'they', 'neither', 'among', 'rather', 'off', 'did', 'elsewhere', 'between', 'than', 'at', 'eight', 'upon', 'ten', \"'ll\", 'afterwards', 'should', 'hereby', '’ll', 'quite', 'however', 'been', 'before', 'are', 'may', 'cannot', 'but', 'go', 'besides', 'one', 'top', 'beforehand', 'wherever', 'seems', 'due']\n"
],
[
"import string \npunctuations = string.punctuation\n# Criando um Parser Spacy\nfrom spacy.lang.en import English\nparser = English()",
"_____no_output_____"
],
[
"def my_tokenizer (sentence):\n mytokens = parser(sentence)\n mytokens = [ word.lema_.lower().strip() if word.lema_ != \"-PRON-\" else word.lower_ for word in mytokens]\n mytokens = [ word for word in mytokens if word not in stopwords and word not in punctuations]\n return mytokens",
"_____no_output_____"
],
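[
"# A minimal sanity check of the tokenizer defined above, assuming the cell has been run: it should\n# return lowercased, lemmatized tokens with stop words and punctuation removed. The sentence is an\n# arbitrary made-up example, not part of the project data.\nspacy_tokenizer(\"This was an absolutely wonderful product, I loved it!\")",
"_____no_output_____"
],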
[
"# Pacotes de ML \nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.base import TransformerMixin\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.svm import LinearSVC",
"_____no_output_____"
],
[
"# Customizando o transformador usando SpaCy\nclass preditors (TransformerMixin):\n def transform (self, X, **transform_params):\n return[clean_text (text) for text in X]\n def fit (self, X, y, **fit_params):\n return self\n# Função básica para limpar o texto\ndef clean_text (text):\n return text.strip().lower()",
"_____no_output_____"
],
[
"# Vetorization\nvectorizer = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range=(1,1))\nclassifier = LinearSVC()",
"_____no_output_____"
],
[
"# Usando o TF-IDF\ntfvectorizer = TfidfVectorizer(tokenizer = spacy_tokenizer)",
"_____no_output_____"
],
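[
"# Illustrative sketch of how the two vectorizers defined above differ, using a tiny made-up corpus\n# (the sentences are assumptions, not project data): CountVectorizer stores raw token counts while\n# TfidfVectorizer down-weights tokens that appear in many documents.\ntoy_corpus = [\"great phone great battery\", \"terrible battery\"]\nprint(CountVectorizer(tokenizer = spacy_tokenizer).fit_transform(toy_corpus).toarray())\nprint(TfidfVectorizer(tokenizer = spacy_tokenizer).fit_transform(toy_corpus).toarray())",
"_____no_output_____"
],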
[
"# Dividindo o dataset\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"# Colunas e rótulos\nX = comb_data['Review']\nylabels = comb_data['Label']\n\n#Dividindo em teste e treino\nX_train, X_test, y_train, y_test = train_test_split(X, ylabels, test_size=0.2, random_state=42)",
"_____no_output_____"
],
[
"# Criando a pipeline para limpar, tokenizar, vetorizar e classificar usando o \"CountVectorizer\"\npipe_countvect = Pipeline([(\"cleaner\", predictors()), ('vectorizer', vectorizer), ('classfier', classifier)])\n\n# Dando o fit nos dados - ajustando\npipe_countvect,fit(X_train, y_train)\n# Fazendo as predições com o conjunto de dados de teste\nsample_prediction = pipe_countvect.predict(X_test)\n\n# Resultado das predições\n# 1 = Positive review\n# 0 = Negative review\nfor (sample, pred) in zip (X_test, sample_prediction):\n print(sample, 'Prediction=>', pred)\n \n \n# Accuracy\nprint(\"Accuracy: \", pipe_countvect.score(X_test, y_test))\nprint(\"Accuracy: \", pipe_countvect.score(X_test, sample_prediction))\n\n# Accuracy\nprint(\"Accuracy: \", pipe_countvect.score(X_train, y_train))",
"_____no_output_____"
],
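[
"# Optional follow-up, assuming pipe_countvect, X_test, y_test and sample_prediction from the cells\n# above are still in memory: a fuller evaluation with a confusion matrix and per-class report.\nfrom sklearn.metrics import classification_report, confusion_matrix\nprint(confusion_matrix(y_test, sample_prediction))\nprint(classification_report(y_test, sample_prediction))",
"_____no_output_____"
],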
[
"# Outra review aleatória\npipe.predict([\"This was a great movie\"])",
"_____no_output_____"
],
[
"example = [\" I do enjoy my job\", \"What a poor product!, I will have to get a new one\", \"I feel amazing!\"]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc73fd99d62736de8931d441fa470d0f85d2162 | 311,468 | ipynb | Jupyter Notebook | notebooks/exploratory_data_analysis.ipynb | GoniaW/protagonist_tagger | 68451cf5e0fc671510188cd439f888ffbdad6722 | [
"BSD-3-Clause"
] | null | null | null | notebooks/exploratory_data_analysis.ipynb | GoniaW/protagonist_tagger | 68451cf5e0fc671510188cd439f888ffbdad6722 | [
"BSD-3-Clause"
] | null | null | null | notebooks/exploratory_data_analysis.ipynb | GoniaW/protagonist_tagger | 68451cf5e0fc671510188cd439f888ffbdad6722 | [
"BSD-3-Clause"
] | null | null | null | 633.065041 | 36,655 | 0.945333 | [
[
[
"import os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom spacy.lang.pl import Polish\n\nfrom tool.annotations_utils import read_annotations\nfrom tool.file_and_directory_management import read_file_to_list\nPATH_TO_TITLES = '..\\\\data\\\\novels_titles\\\\polish.txt'\nPATH_TO_CHARACTERS_LISTS = '..\\\\data\\\\lists_of_characters\\\\'\nPATH_TO_ANNOTATIONS = '..\\\\experiments\\\\polish\\\\'\nPATH_TO_GOLD_ANNOTATIONS = '..\\\\data\\\\testing_sets\\\\test_polish_gold_standard\\\\'",
"_____no_output_____"
],
[
"TITLES = read_file_to_list(PATH_TO_TITLES)\nfor i in range(len(TITLES)):\n TITLES[i] = '_'.join(TITLES[i].split(' '))\n\nTITLES",
"_____no_output_____"
],
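[
"# A small illustration of the annotation format the counting code below relies on, assuming the JSON\n# files referenced by PATH_TO_ANNOTATIONS exist locally: each record has a 'content' string and an\n# 'entities' list of [start, end, label] triples.\nsample = read_annotations(PATH_TO_ANNOTATIONS + f'{TITLES[0]}.json')\nprint(sample[0]['content'][:100])\nprint(sample[0]['entities'][:3])",
"_____no_output_____"
],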
[
"def count_characters_occurrences(annotations, characters):\n count_persons_annotations = dict.fromkeys(characters, 0)\n for a in annotations:\n entities = a['entities']\n for i in range(len(entities)):\n character = entities[i][2]\n count_persons_annotations[character] += 1\n\n df = pd.Series(count_persons_annotations).to_frame('count')\n return df.loc[df['count'] != 0]\n\n\ndef plot_novel_character_statistics(title):\n protagonists = read_file_to_list(PATH_TO_CHARACTERS_LISTS + title)\n protagonists.append('PERSON')\n pre_annotations = read_annotations(PATH_TO_ANNOTATIONS + f'{title}.json')\n gold_annotations = read_annotations(PATH_TO_GOLD_ANNOTATIONS + f'{title}.json')\n\n characters_count_pre_annotation = count_characters_occurrences(pre_annotations, protagonists)\n characters_count_gold_annotation = count_characters_occurrences(gold_annotations, protagonists)\n print(title)\n fig, ax = plt.subplots(1, 2, figsize=[16, 6])\n\n characters_count_pre_annotation.sort_values(by='count').plot.barh(ax=ax[0])\n ax[0].set_title('Pre-annotated data')\n characters_count_gold_annotation.sort_values(by='count').plot.barh(ax=ax[1])\n ax[1].set_title('Correctly annotated data')\n\n plt.subplots_adjust(wspace=0.5)\n plt.savefig('..\\\\experiments\\\\plots\\\\ksiega_dzungli_plot.png')\n #plt.show()",
"_____no_output_____"
]
],
[
[
"## Characters occurrences",
"_____no_output_____"
]
],
[
[
"for title in TITLES:\n plot_novel_character_statistics(title)",
"Hrabia_Monte_Christo\n"
]
],
[
[
" ## Books' statistics",
"_____no_output_____"
]
],
[
[
"plot_novel_character_statistics('Ksiega_dzungli')",
"Ksiega_dzungli\n"
],
[
"def count_word_average_length(doc, nlp):\n tokens_text = []\n for token in doc:\n tokens_text.append(token.text)\n\n words = []\n for token in tokens_text:\n if not (nlp.vocab[token].is_stop or nlp.vocab[token].is_punct):\n words.append(token)\n\n num_of_characters = 0\n for word in words:\n num_of_characters += len(word)\n\n return num_of_characters / len(words)\n\n\ndef count_sentence_average_character_length(sentences):\n num_of_characters = 0\n for sent in sentences:\n num_of_characters += len(sent)\n\n return num_of_characters / len(sentences)\n\n\ndef count_sentence_average_word_length(sentences, nlp):\n num_of_words = 0\n for sent in sentences:\n num_of_words += len(nlp(sent))\n\n return num_of_words / len(sentences)\n\ndef calculate_statistics(title, df):\n annotations = read_annotations(PATH_TO_GOLD_ANNOTATIONS + f'{title}.json')\n contents = [a['content'] for a in annotations]\n\n nlp = Polish()\n doc = nlp(' '.join(contents))\n title = title.replace('_', ' ')\n df.loc[title, 'average_word_length'] = count_word_average_length(doc, nlp)\n df.loc[title, 'average_sentence_length'] = count_sentence_average_character_length(contents)\n df.loc[title, 'average_num_of_words_in_sentence'] = count_sentence_average_word_length(contents, nlp)\n",
"_____no_output_____"
],
[
"statistics_df = pd.DataFrame()\nfor title in TITLES:\n calculate_statistics(title, statistics_df)\nstatistics_df",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1, 3, figsize=[18, 4])\nstatistics_df['average_word_length'].plot.bar(ax=ax[0])\nax[0].set_title('Average word length in chosen texts.')\nstatistics_df['average_sentence_length'].plot.bar(ax=ax[1])\nax[1].set_title('Average sentence length in chosen texts. (characters)')\nstatistics_df['average_num_of_words_in_sentence'].plot.bar(ax=ax[2])\nax[2].set_title('Average sentence length in chosen texts. (words)')\nplt.subplots_adjust(wspace=0.5, bottom=0.15)\nplt.savefig('..\\\\experiments\\\\plots\\\\general_statistics.png', bbox_inches='tight')\n#plt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecc742a25c93ea1a97bbf420a18f660631446f6c | 173,690 | ipynb | Jupyter Notebook | Liniar_regression.ipynb | dhires9196/ML_reinvent | 68df719219b12a9ebaade6c18c869abf3190e0bb | [
"MIT"
] | null | null | null | Liniar_regression.ipynb | dhires9196/ML_reinvent | 68df719219b12a9ebaade6c18c869abf3190e0bb | [
"MIT"
] | null | null | null | Liniar_regression.ipynb | dhires9196/ML_reinvent | 68df719219b12a9ebaade6c18c869abf3190e0bb | [
"MIT"
] | null | null | null | 103.325402 | 66,586 | 0.843065 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"df=pd.read_csv(r\"C:\\Users\\dhire\\Documents\\Machine_learning_Inuron\\ML_Live_Class\\data\\Advertising.csv\")\ndf.head()",
"_____no_output_____"
],
[
"x=df.drop(['sales'],axis=1)\nx.head()",
"_____no_output_____"
],
[
"y=df['sales']\ny",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.33, random_state=42)",
"_____no_output_____"
],
[
"X_train",
"_____no_output_____"
],
[
"from sklearn.linear_model import LinearRegression\nmodel=LinearRegression()",
"_____no_output_____"
],
[
"model.fit(X_train,y_train)",
"_____no_output_____"
],
[
"test_pred=model.predict(X_test)",
"_____no_output_____"
],
[
"test_pred",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_absolute_error,mean_squared_error",
"_____no_output_____"
],
[
"sns.histplot(data=df,x='sales',bins=30)",
"_____no_output_____"
],
[
"mean_absolute_error(y_test,test_pred)",
"_____no_output_____"
],
[
"mean_squared_error(y_test,test_pred)",
"_____no_output_____"
],
[
"np.sqrt(mean_squared_error(y_test,test_pred))",
"_____no_output_____"
],
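[
"# A quick context check, assuming df, y_test and test_pred from the cells above: comparing the RMSE\n# with the scale of the target makes the error metrics easier to interpret.\nprint(\"Mean sales:\", df['sales'].mean())\nprint(\"RMSE as a fraction of mean sales:\", np.sqrt(mean_squared_error(y_test, test_pred)) / df['sales'].mean())",
"_____no_output_____"
],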
[
"from sklearn.metrics import r2_score\nr2_score(y_test,test_pred)",
"_____no_output_____"
]
],
[
[
"#### Residual plot",
"_____no_output_____"
]
],
[
[
"test_residual=y_test-test_pred\ntest_residual",
"_____no_output_____"
],
[
"#residual plot\nsns.scatterplot(y_test,test_residual)\nplt.axhline(y=0,color='red',ls='--')",
"C:\\Users\\dhire\\anaconda3\\envs\\dhiraj_ml_march\\lib\\site-packages\\seaborn\\_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n"
],
[
"sns.distplot(test_residual,bins=20)",
"C:\\Users\\dhire\\anaconda3\\envs\\dhiraj_ml_march\\lib\\site-packages\\seaborn\\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n"
],
[
"model.coef_",
"_____no_output_____"
],
[
"y_hat=model.predict(x)\ny_hat",
"_____no_output_____"
],
[
"from IPython.core.pylabtools import figsize\nfig,axes=plt.subplots(nrows=1,ncols=3,figsize=(16,6))\naxes[0].plot(df['TV'],df['sales'],'o')\naxes[0].plot(df['TV'],y_hat,'o',color='red')\naxes[0].set_ylabel(\"sales\")\naxes[0].set_title(\"TV_spend\")\n\naxes[1].plot(df['radio'],df['sales'],'o')\naxes[1].plot(df['radio'],y_hat,'o',color='red')\naxes[1].set_ylabel(\"sales\")\naxes[1].set_title(\"radio_spend\")\n\naxes[2].plot(df['newspaper'],df['sales'],'o')\naxes[2].plot(df['newspaper'],y_hat,'o',color='red')\naxes[2].set_ylabel(\"sales\")\naxes[2].set_title(\"newspaper_spend\")",
"_____no_output_____"
],
[
"import os",
"_____no_output_____"
],
[
"os.getcwd()",
"_____no_output_____"
],
[
"from joblib import dump,load ##save the model in binary file",
"_____no_output_____"
],
[
"model_dir='models'\nos.makedirs(model_dir,exist_ok=True)\nfile_path=os.path.join(model_dir,'model.joblib')\ndump(model,file_path)",
"_____no_output_____"
],
[
"load_model=load(r'C:\\Users\\dhire\\Documents\\Machine_learning_Inuron\\ML_Live_Class\\models\\model.joblib')\n",
"_____no_output_____"
],
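[
"# Usage sketch for the reloaded model: it can be called exactly like the original estimator. The spend\n# values below (TV, radio, newspaper) are made-up numbers purely for illustration.\nload_model.predict([[149.0, 22.0, 12.0]])",
"_____no_output_____"
],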
[
"load_model.coef_",
"_____no_output_____"
]
],
[
[
"#### Polynomial regression",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
],
[
"x1=df.drop(['sales'],axis=1)",
"_____no_output_____"
],
[
"x1.head()",
"_____no_output_____"
],
[
"from sklearn.preprocessing import PolynomialFeatures",
"_____no_output_____"
],
[
"poly_conv=PolynomialFeatures(degree=2,include_bias=False)",
"_____no_output_____"
],
[
"poly_conv.fit(x1)",
"_____no_output_____"
],
[
"poly_feat=poly_conv.transform(x1)",
"_____no_output_____"
],
[
"poly_feat[0]",
"_____no_output_____"
],
[
"x1.iloc[0]",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(poly_feat, y, test_size=0.33, random_state=42)",
"_____no_output_____"
],
[
"model1=LinearRegression()",
"_____no_output_____"
],
[
"model1.fit(X_train,y_train)",
"_____no_output_____"
],
[
"test_predict=model1.predict(X_test)\ntest_predict",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_squared_error,mean_absolute_error",
"_____no_output_____"
],
[
"MAE=mean_absolute_error(y_test,test_predict)\nMAE",
"_____no_output_____"
],
[
"MSE=mean_squared_error(y_test,test_predict)\nMSE",
"_____no_output_____"
],
[
"RMSE=np.sqrt(MSE)\nRMSE",
"_____no_output_____"
],
[
"model1.coef_",
"_____no_output_____"
]
],
[
[
"##### Choose the best degree for polynomial",
"_____no_output_____"
]
],
[
[
"train_rmse_error=[]\ntest_rmse_error=[]\nfor d in range(1,10):\n poly_conv=PolynomialFeatures(degree=d,include_bias=False)\n poly_conv.fit(x1)\n poly_feat=poly_conv.transform(x1)\n X_train, X_test, y_train, y_test = train_test_split(poly_feat, y, test_size=0.33, random_state=42)\n model1=LinearRegression()\n model1.fit(X_train,y_train)\n train_pred=model1.predict(X_train)\n test_pred=model1.predict(X_test)\n train_RMSE=np.sqrt(mean_squared_error(y_train,train_pred))\n test_RMSE=np.sqrt(mean_squared_error(y_test,test_pred))\n train_rmse_error.append(train_RMSE)\n test_rmse_error.append(test_RMSE)\n ",
"_____no_output_____"
],
[
"train_rmse_error",
"_____no_output_____"
],
[
"test_rmse_error ##overfitting is happening 5th degree",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nplt.plot(range(1,10),train_rmse_error,label='Train_RMSE')\nplt.plot(range(1,10),test_rmse_error,label='Test_RMSE')\nplt.xlabel(\"model complexity/degree of polynomial\")\nplt.ylabel(\"RMSE\")\nplt.legend()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nplt.plot(range(1,6),train_rmse_error[0:5],label='Train_RMSE')\nplt.plot(range(1,6),test_rmse_error[0:5],label='Test_RMSE')\nplt.xlabel(\"model complexity/degree of polynomial\")\nplt.ylabel(\"RMSE\")\nplt.legend()",
"_____no_output_____"
],
[
"final_polyconverter=PolynomialFeatures(degree=3,include_bias=False)",
"_____no_output_____"
],
[
"full_converted_x=final_polyconverter.fit_transform(x)",
"_____no_output_____"
],
[
"final_model=LinearRegression()",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(full_converted_x, y, test_size=0.33, random_state=42)\nfinal_model.fit(X_train,y_train)",
"_____no_output_____"
],
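[
"# Usage sketch for the final degree-3 model, assuming final_model and final_polyconverter from the\n# cells above: the converter must be applied before predict because the model was fitted on polynomial\n# features. The campaign numbers are invented for illustration.\nnew_campaign = [[149.0, 22.0, 12.0]]\nfinal_model.predict(final_polyconverter.transform(new_campaign))",
"_____no_output_____"
],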
[
"model_dir='models'\nos.makedirs(model_dir,exist_ok=True)\nfile_path=os.path.join(model_dir,'poly.joblib')\ndump(final_model,file_path)",
"_____no_output_____"
],
[
"model_dir='models'\nos.makedirs(model_dir,exist_ok=True)\nfile_path=os.path.join(model_dir,'final_poly_converter.joblib')\ndump(final_polyconverter,file_path)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc7540162dea64a1f55e8756e0b4cabbf1aaafd | 20,125 | ipynb | Jupyter Notebook | examples/user_guide/09-Gridded_Datasets.ipynb | ppwadhwa/holoviews | e8e2ec08c669295479f98bb2f46bbd59782786bf | [
"BSD-3-Clause"
] | 864 | 2019-11-13T08:18:27.000Z | 2022-03-31T13:36:13.000Z | examples/user_guide/09-Gridded_Datasets.ipynb | ppwadhwa/holoviews | e8e2ec08c669295479f98bb2f46bbd59782786bf | [
"BSD-3-Clause"
] | 1,117 | 2019-11-12T16:15:59.000Z | 2022-03-30T22:57:59.000Z | examples/user_guide/09-Gridded_Datasets.ipynb | ppwadhwa/holoviews | e8e2ec08c669295479f98bb2f46bbd59782786bf | [
"BSD-3-Clause"
] | 180 | 2019-11-19T16:44:44.000Z | 2022-03-28T22:49:18.000Z | 33.766779 | 784 | 0.62723 | [
[
[
"# Gridded Datasets",
"_____no_output_____"
]
],
[
[
"import xarray as xr\nimport numpy as np\nimport holoviews as hv\nfrom holoviews import opts\nhv.extension('matplotlib')\n\nopts.defaults(opts.Scatter3D(color='Value', cmap='fire', edgecolor='black', s=50))",
"_____no_output_____"
]
],
[
[
"In the [Tabular Data](./08-Tabular_Datasets.ipynb) guide we covered how to work with columnar data in HoloViews. Apart from tabular or column based data there is another data format that is particularly common in the science and engineering contexts, namely multi-dimensional arrays. The gridded data interfaces allow working with grid-based datasets directly.\n\nGrid-based datasets have two types of dimensions:\n\n* they have coordinate or key dimensions, which describe the sampling of each dimension in the value arrays\n* they have value dimensions which describe the quantity of the multi-dimensional value arrays\n\nThere are many different types of gridded datasets, which each approximate or measure a surface or space at discretely specified coordinates. In HoloViews, gridded datasets are typically one of three possible types: Regular rectilinear grids, irregular rectilinear grids, and curvilinear grids. Regular rectilinear grids can be defined by 1D coordinate arrays specifying the spacing along each dimension, while the other types require grid coordinates with the same dimensionality as the underlying value arrays, specifying the full n-dimensional coordinates of the corresponding array value. HoloViews provides many different elements supporting regularly spaced rectilinear grids, but currently only QuadMesh supports irregularly spaced rectilinear and curvilinear grids.\n\nThe difference between uniform, rectilinear and curvilinear grids is best illustrated by the figure below:\n\n<figure>\n <img src=\"http://www.earthsystemmodeling.org/esmf_releases/public/ESMF_6_3_0r/ESMC_crefdoc/img9.png\" alt=\"grid-types\">\n <figcaption>Types of logically rectangular grid tiles. Red circles show the values needed to specify grid coordinates for each type. Reproduced from <a href=\"http://www.earthsystemmodeling.org/esmf_releases/public/ESMF_6_3_0r/ESMC_crefdoc/node5.html\">ESMF documentation</a></figcaption>\n</figure>\n\n\nIn this section we will first discuss how to work with the simpler rectilinear grids and then describe how to define a curvilinear grid with 2D coordinate arrays.\n\n## Declaring gridded data\n\nAll Elements that support a ColumnInterface also support the GridInterface. The simplest example of a multi-dimensional (or more precisely 2D) gridded dataset is an image, which has implicit or explicit x-coordinates, y-coordinates and an array representing the values for each combination of these coordinates. Let us start by declaring an Image with explicit x- and y-coordinates:",
"_____no_output_____"
]
],
[
[
"img = hv.Image((range(10), range(5), np.random.rand(5, 10)), datatype=['grid'])\nimg",
"_____no_output_____"
]
],
[
[
"In the above example we defined that there would be 10 samples along the x-axis, 5 samples along the y-axis and then defined a random ``5x10`` array, matching those dimensions. This follows the NumPy (row, column) indexing convention. When passing a tuple HoloViews will use the first gridded data interface, which stores the coordinates and value arrays as a dictionary mapping the dimension name to a NumPy array representing the data:",
"_____no_output_____"
]
],
[
[
"img.data",
"_____no_output_____"
]
],
[
[
"However HoloViews also ships with an interface for ``xarray`` and the [GeoViews](https://geoviews.org) library ships with an interface for ``iris`` objects, which are two common libraries for working with multi-dimensional datasets:",
"_____no_output_____"
]
],
[
[
"arr_img = img.clone(datatype=['image'])\nprint(type(arr_img.data))\n\ntry: \n xr_img = img.clone(datatype=['xarray'])\n\n print(type(xr_img.data)) \nexcept:\n print('xarray interface could not be imported.')",
"_____no_output_____"
]
],
[
[
"In the case of an Image HoloViews also has a simple image representation which stores the data as a single array and converts the x- and y-coordinates to a set of bounds:",
"_____no_output_____"
]
],
[
[
"print(\"Array type: %s with bounds %s\" % (type(arr_img.data), arr_img.bounds))",
"_____no_output_____"
]
],
[
[
"To summarize the constructor accepts a number of formats where the value arrays should always match the shape of the coordinate arrays:\n\n 1. A simple np.ndarray along with (l, b, r, t) bounds\n 2. A tuple of the coordinate and value arrays\n 3. A dictionary of the coordinate and value arrays indexed by their dimension names\n 3. XArray DataArray or XArray Dataset\n 4. An Iris cube",
"_____no_output_____"
],
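[
"# A minimal illustration of format 3 from the list above: the same image declared from a dictionary of\n# coordinate and value arrays keyed by dimension name, mirroring the tuple-based example shown earlier.\nhv.Image({'x': np.arange(10), 'y': np.arange(5), 'z': np.random.rand(5, 10)}, kdims=['x', 'y'], vdims='z')",
"_____no_output_____"
],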
[
"# Working with a multi-dimensional dataset",
"_____no_output_____"
],
[
"A gridded Dataset may have as many dimensions as desired, however individual Element types only support data of a certain dimensionality. Therefore we usually declare a ``Dataset`` to hold our multi-dimensional data and take it from there.",
"_____no_output_____"
]
],
[
[
"dataset3d = hv.Dataset((range(3), range(5), range(7), np.random.randn(7, 5, 3)),\n ['x', 'y', 'z'], 'Value')\ndataset3d",
"_____no_output_____"
]
],
[
[
"This is because even a 3D multi-dimensional array represents volumetric data which we can display easily only if it contains few samples. In this simple case we can get an overview of what this data looks like by casting it to a ``Scatter3D`` Element (which will help us visualize the operations we are applying to the data:",
"_____no_output_____"
]
],
[
[
"hv.Scatter3D(dataset3d)",
"_____no_output_____"
]
],
[
[
"### Indexing",
"_____no_output_____"
],
[
"In order to explore the dataset we therefore often want to define a lower dimensional slice into the array and then convert the dataset:",
"_____no_output_____"
]
],
[
[
"dataset3d.select(x=1).to(hv.Image, ['y', 'z']) + hv.Scatter3D(dataset3d.select(x=1))",
"_____no_output_____"
]
],
[
[
"### Groupby\n\nAnother common method to apply to our data is to facet or animate the data using ``groupby`` operations. HoloViews provides a convenient interface to apply ``groupby`` operations and select which dimensions to visualize. ",
"_____no_output_____"
]
],
[
[
"(dataset3d.to(hv.Image, ['y', 'z'], 'Value', ['x']) +\nhv.HoloMap({x: hv.Scatter3D(dataset3d.select(x=x)) for x in range(3)}, kdims='x'))",
"_____no_output_____"
]
],
[
[
"### Aggregating",
"_____no_output_____"
],
[
"Another common operation is to aggregate the data with a function thereby reducing a dimension. You can either ``aggregate`` the data by passing the dimensions to aggregate or ``reduce`` a specific dimension. Both have the same function:",
"_____no_output_____"
]
],
[
[
"hv.Image(dataset3d.aggregate(['x', 'y'], np.mean)) + hv.Image(dataset3d.reduce(z=np.mean))",
"_____no_output_____"
]
],
[
[
"By aggregating the data we can reduce it to any number of dimensions we want. We can for example compute the spread of values for each z-coordinate and plot it using a ``Spread`` and ``Curve`` Element. We simply aggregate by that dimension and pass the aggregation functions we want to apply:",
"_____no_output_____"
]
],
[
[
"hv.Spread(dataset3d.aggregate('z', np.mean, np.std)) * hv.Curve(dataset3d.aggregate('z', np.mean))",
"_____no_output_____"
]
],
[
[
"It is also possible to generate lower-dimensional views into the dataset which can be useful to summarize the statistics of the data along a particular dimension. A simple example is a box-whisker of the ``Value`` for each x-coordinate. Using the ``.to`` conversion interface we declare that we want a ``BoxWhisker`` Element indexed by the ``x`` dimension showing the ``Value`` dimension. Additionally we have to ensure to set ``groupby`` to an empty list because by default the interface will group over any remaining dimension.",
"_____no_output_____"
]
],
[
[
"dataset3d.to(hv.BoxWhisker, 'x', 'Value', groupby=[])",
"_____no_output_____"
]
],
[
[
"Similarly we can generate a ``Distribution`` Element showing the ``Value`` dimension, group by the 'x' dimension and then overlay the distributions, giving us another statistical summary of the data:",
"_____no_output_____"
]
],
[
[
"dataset3d.to(hv.Distribution, 'Value', [], groupby='x').overlay()",
"_____no_output_____"
]
],
[
[
"## Categorical dimensions",
"_____no_output_____"
],
[
"The key dimensions of the multi-dimensional arrays do not have to represent continuous values, we can display datasets with categorical variables as a ``HeatMap`` Element:",
"_____no_output_____"
]
],
[
[
"heatmap = hv.HeatMap((['A', 'B', 'C'], ['a', 'b', 'c', 'd', 'e'], np.random.rand(5, 3)))\nheatmap + hv.Table(heatmap)",
"_____no_output_____"
]
],
[
[
"## Non-uniform rectilinear grids\n\nAs discussed above, there are two main types of grids handled by HoloViews. So far, we have mainly dealt with uniform, rectilinear grids, but we can use the ``QuadMesh`` element to work with non-uniform rectilinear grids and curvilinear grids.\n\nIn order to define a non-uniform, rectilinear grid we can declare explicit irregularly spaced x- and y-coordinates. In the example below we specify the x/y-coordinate bin edges of the grid as arrays of shape ``M+1`` and ``N+1`` and a value array (``zs``) of shape ``NxM``:",
"_____no_output_____"
]
],
[
[
"n = 8 # Number of bins in each direction\nxs = np.logspace(1, 3, n)\nys = np.linspace(1, 10, n)\nzs = np.arange((n-1)**2).reshape(n-1, n-1)\nprint('Shape of x-coordinates:', xs.shape)\nprint('Shape of y-coordinates:', ys.shape)\nprint('Shape of value array:', zs.shape)\nhv.QuadMesh((xs, ys, zs))",
"_____no_output_____"
]
],
[
[
"## Curvilinear grids\n\nTo define a curvilinear grid the x/y-coordinates of the grid should be defined as 2D arrays of shape ``NxM`` or ``N+1xM+1``, i.e. either as the bin centers or the bin edges of each 2D bin.",
"_____no_output_____"
]
],
[
[
"n=20\ncoords = np.linspace(-1.5,1.5,n)\nX,Y = np.meshgrid(coords, coords);\nQx = np.cos(Y) - np.cos(X)\nQy = np.sin(Y) + np.sin(X)\nZ = np.sqrt(X**2 + Y**2)\n\nprint('Shape of x-coordinates:', Qx.shape)\nprint('Shape of y-coordinates:', Qy.shape)\nprint('Shape of value array:', Z.shape)\n\nqmesh = hv.QuadMesh((Qx, Qy, Z))\nqmesh",
"_____no_output_____"
]
],
[
[
"## Working with xarray data types\nAs demonstrated previously, `Dataset` comes with support for the `xarray` library, which offers a powerful way to work with multi-dimensional, regularly spaced data. In this example, we'll load an example dataset, turn it into a HoloViews `Dataset` and visualize it. First, let's have a look at the xarray dataset's contents:",
"_____no_output_____"
]
],
[
[
"xr_ds = xr.tutorial.open_dataset(\"air_temperature\").load()\nxr_ds",
"_____no_output_____"
]
],
[
[
"It is trivial to turn this xarray Dataset into a Holoviews `Dataset` (the same also works for DataArray):",
"_____no_output_____"
]
],
[
[
"hv_ds = hv.Dataset(xr_ds)[:, :, \"2013-01-01\"]\nprint(hv_ds)",
"_____no_output_____"
]
],
[
[
"We have used the usual slice notation in order to select one single day in the rather large dataset. Finally, let's visualize the dataset by converting it to a `HoloMap` of `Images` using the `to()` method. We need to specify which of the dataset's key dimensions will be consumed by the images (in this case \"lat\" and \"lon\"), where the remaing key dimensions will be associated with the HoloMap (here: \"time\"). We'll use the slice notation again to clip the longitude.",
"_____no_output_____"
]
],
[
[
"airtemp = hv_ds.to(hv.Image, kdims=[\"lon\", \"lat\"], dynamic=False)\nairtemp[:, 220:320, :].opts(colorbar=True, fig_size=200)",
"_____no_output_____"
]
],
[
[
"Here, we have explicitly specified the default behaviour `dynamic=False`, which returns a HoloMap. Note, that this approach immediately converts all available data to images, which will take up a lot of RAM for large datasets. For these situations, use `dynamic=True` to generate a [DynamicMap](./07-Live_Data.ipynb) instead. Additionally, [xarray features dask support](http://xarray.pydata.org/en/stable/dask.html), which is helpful when dealing with large amounts of data.\n\nIt is also possible to render curvilinear grids with xarray, and here we will load one such example. The dataset below defines a curvilinear grid of air temperatures varying over time. The curvilinear grid can be identified by the fact that the ``xc`` and ``yc`` coordinates are defined as two-dimensional arrays:",
"_____no_output_____"
]
],
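[
[
"# A minimal sketch of the dynamic=True variant mentioned above: the same conversion then returns a\n# DynamicMap, which renders frames lazily instead of materializing every image up front. It needs a\n# live notebook session to display, so treat this purely as an illustration.\nhv_ds.to(hv.Image, kdims=[\"lon\", \"lat\"], dynamic=True)",
"_____no_output_____"
]
],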
[
[
"rasm = xr.tutorial.open_dataset(\"rasm\").load()\nrasm.coords",
"_____no_output_____"
]
],
[
[
"To simplify the example we will select a single timepoint and add explicit coordinates for the x and y dimensions:",
"_____no_output_____"
]
],
[
[
"rasm = rasm.isel(time=0, x=slice(0, 200)).assign_coords(x=np.arange(200), y=np.arange(205))\nrasm.coords",
"_____no_output_____"
]
],
[
[
"Now that we have defined both rectilinear and curvilinear coordinates we can visualize the difference between the two by explicitly defining which set of coordinates to use:",
"_____no_output_____"
]
],
[
[
"hv.QuadMesh(rasm, ['x', 'y']) + hv.QuadMesh(rasm, ['xc', 'yc'])",
"_____no_output_____"
]
],
[
[
"\n\nAdditional examples of visualizing xarrays in the context of geographical data can be found in the GeoViews documentation: [Gridded Datasets I](http://geoviews.org/user_guide/Gridded_Datasets_I.html) and\n[Gridded Datasets II](http://geoviews.org/user_guide/Gridded_Datasets_II.html). These guides also contain useful information on the interaction between xarray data structures and HoloViews Datasets in general.",
"_____no_output_____"
],
[
"# API",
"_____no_output_____"
],
[
"## Accessing the data",
"_____no_output_____"
],
[
"In order to be able to work with data in different formats Holoviews defines a general interface to access the data. The dimension_values method allows returning underlying arrays.\n\n#### Key dimensions (coordinates)\n\nBy default ``dimension_values`` will return the expanded columnar format of the data:",
"_____no_output_____"
]
],
[
[
"heatmap.dimension_values('x')",
"_____no_output_____"
]
],
[
[
"To access just the unique coordinates along a dimension simply supply the ``expanded=False`` keyword:",
"_____no_output_____"
]
],
[
[
"heatmap.dimension_values('x', expanded=False)",
"_____no_output_____"
]
],
[
[
"Finally we can also get a non-flattened, expanded coordinate array returning a coordinate array of the same shape as the value arrays",
"_____no_output_____"
]
],
[
[
"heatmap.dimension_values('x', flat=False)",
"_____no_output_____"
]
],
[
[
"#### Value dimensions",
"_____no_output_____"
],
[
"When accessing a value dimension the method will similarly return a flat view of the data:",
"_____no_output_____"
]
],
[
[
"heatmap.dimension_values('z')",
"_____no_output_____"
]
],
[
[
"We can pass the ``flat=False`` argument to access the multi-dimensional array:",
"_____no_output_____"
]
],
[
[
"heatmap.dimension_values('z', flat=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc758da92e7cf4b17bdf6829ddcd5a239477249 | 583,491 | ipynb | Jupyter Notebook | tutorials/dnn_text_to_speech/.ipynb_checkpoints/dnn_text_to_speech-checkpoint.ipynb | pritishyuvraj/text-to-speech | ad571a9aa4f22bba3ba85a1bc566652e280d2ea2 | [
"MIT"
] | null | null | null | tutorials/dnn_text_to_speech/.ipynb_checkpoints/dnn_text_to_speech-checkpoint.ipynb | pritishyuvraj/text-to-speech | ad571a9aa4f22bba3ba85a1bc566652e280d2ea2 | [
"MIT"
] | null | null | null | tutorials/dnn_text_to_speech/.ipynb_checkpoints/dnn_text_to_speech-checkpoint.ipynb | pritishyuvraj/text-to-speech | ad571a9aa4f22bba3ba85a1bc566652e280d2ea2 | [
"MIT"
] | null | null | null | 1,594.237705 | 558,752 | 0.956611 | [
[
[
"%pylab inline\nrcParams[\"figure.figsize\"] = (16,5)\n\nfrom nnmnkwii.datasets import FileDataSource, FileSourceDataset\nfrom nnmnkwii.datasets import MemoryCacheFramewiseDataset\nfrom nnmnkwii.preprocessing import trim_zeros_frames, remove_zeros_frames\nfrom nnmnkwii.preprocessing import minmax, meanvar, minmax_scale, scale\nfrom nnmnkwii import paramgen\nfrom nnmnkwii.io import hts\nfrom nnmnkwii.frontend import merlin as fe\nfrom nnmnkwii.postfilters import merlin_post_filter\n\nfrom os.path import join, expanduser, basename, splitext, basename, exists\nimport os\nfrom glob import glob\nimport numpy as np\nfrom scipy.io import wavfile\nfrom sklearn.model_selection import train_test_split\nimport pyworld\nimport pysptk\nimport librosa\nimport librosa.display\nimport IPython\nfrom IPython.display import Audio",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"DATA_ROOT = \"./data/slt_arctic_full_data\"\ntest_size = 0.112\nrandom_state = 1234",
"_____no_output_____"
],
[
"# Data Specification\nmgc_dim = 180\nlf0_dim = 3\nvuv_dim = 1\nbap_dim = 3\n\nduration_linguistic_dim = 416\nacoustic_linguistic_dim = 425\nduration_dim = 5\nacoustic_dim = mgc_dim + lf0_dim + vuv_dim + bap_dim\n\nfs = 16000\nframe_period = 5\nhop_length = 80\nfftlen = 1024\nalpha = 0.41\n\nmgc_start_idx = 0 \nlf0_start_idx = 180\nvuv_start_idx = 183\nbap_start_idx = 184\n\nwindows = [\n (0, 0, np.array([1.0])),\n (1, 1, np.array([-0.5, 0.0, 0.5])),\n (1, 1, np.array([1.0, -2.0, 1.0]))\n]",
"_____no_output_____"
],
[
"class BinaryFileSource(FileDataSource):\n def __init__(self, data_root, dim, train):\n self.data_root = data_root\n self.dim = dim \n self.train = train\n \n def collect_files(self):\n files = sorted(glob(join(self.data_root, \"*.bin\")))\n files = files[:len(files)-5]\n train_files, test_files = train_test_split(files,\n test_size = test_size,\n random_state = random_state)\n if self.train:\n return train_files\n else:\n return test_files\n def collect_features(self, path):\n return np.fromfile(path, dtype=np.float32).reshape(-1, self.dim)\n ",
"_____no_output_____"
],
[
"X = {\"duration\":{}, \"acoustic\": {}}\nY = {\"duration\": {}, \"acoustic\": {}}\nutt_lengths = {\"duration\": {}, \"acoustic\":{}}\nfor ty in [\"duration\", \"acoustic\"]:\n for phase in [\"train\", \"test\"]:\n train = phase == \"train\"\n x_dim = duration_linguistic_dim if ty==\"duration\" else acoustic_linguistic_dim\n y_dim = duration_dim if ty == \"duration\" else acoustic_dim \n X[ty][phase] = FileSourceDataset(BinaryFileSource(join(DATA_ROOT,\n \"X_{}\".format(ty)),\n dim = x_dim,\n train=train))\n Y[ty][phase] = FileSourceDataset(BinaryFileSource(join(DATA_ROOT,\n \"Y_{}\".format(ty)),\n dim = y_dim,\n train=train))\n utt_lengths[ty][phase] = [len(x) for x in X[ty][phase]]",
"_____no_output_____"
],
[
"print(\"Total number of utterances:\", len(utt_lengths[\"duration\"][\"train\"]))\nprint(\"Total number of frames:\", np.sum(utt_lengths[\"duration\"][\"train\"]))\nhist(utt_lengths[\"duration\"][\"train\"], bins = 64)",
"Total number of utterances: 1000\nTotal number of frames: 32006\n"
],
[
"print(\"Total number of utterances: \", len(utt_lengths[\"acoustic\"][\"train\"]))\nprint(\"Total number of frames: \", np.sum(utt_lengths[\"acoustic\"][\"train\"]))\nhist(utt_lengths[\"acoustic\"][\"train\"], bins=64)",
"Total number of utterances: 1000\nTotal number of frames: 534363\n"
],
[
"def vis_utterance(X, Y, lengths, idx):\n x = X[idx][:lengths[idx]]\n y = Y[idx][:lengths[idx]]\n \n figure(figsize=(16, 20))\n subplot(4, 1, 1)\n librosa.display.specshow(x.T, sr=fs, hop_length=hop_length, x_axis = \"time\")\n \n subplot(4, 1, 2)\n logsp = np.log(pysptk.mc2sp(y[:, mgc_start_idx:mgc_dim//len(windows)], alpha=alpha, fftlen=fftlen))\n librosa.display.specshow(logsp.T, sr=fs, hop_length=hop_length, x_axis = \"time\", y_axis = \"linear\")\n \n subplot(4, 1, 3)\n lf0 = y[:, mgc_start_idx]\n vuv = y[:, vuv_start_idx]\n plot(lf0, linewidth=2, label=\"Continuous log-f0\")\n plot(vuv, linewidth=2, label=\"Voiced/unvoiced flag\")\n legend(prop={\"size\":14}, loc=\"upper right\")\n \n subplot(4, 1, 4)\n bap = y[:, bap_start_idx:bap_start_idx+bap_dim//len(windows)]\n bap = np.ascontiguousarray(bap).astype(np.float64)\n aperiodicity = pyworld.decode_aperiodicity(bap, fs, fftlen)\n librosa.display.specshow(aperiodicity.T, sr=fs, hop_length=hop_length, x_axis=\"time\", y_axis=\"linear\")\n \n \n ",
"_____no_output_____"
],
[
"idx = 0\nvis_utterance(X[\"acoustic\"][\"train\"], Y[\"acoustic\"][\"train\"], utt_lengths[\"acoustic\"][\"train\"], idx)",
"_____no_output_____"
],
[
"X_min = {}\nX_max = {}\nY_mean = {}\nY_var = {}\nY_scale = {}\n\nfor typ in [\"acoustic\", \"duration\"]:\n X_min[typ], X_max[typ] = minmax(X[typ][\"train\"], utt_lengths[typ][\"train\"])\n Y_mean[typ], Y_var[typ] = meanvar(Y[typ][\"train\"], utt_lengths[typ][\"train\"])\n Y_scale[typ] = np.sqrt(Y_var[typ])",
"_____no_output_____"
],
[
"idx = 0\ntyp = \"acoustic\"\nx = X[typ][\"train\"][idx][:utt_lengths[typ][\"train\"][idx]]\nx = minmax_scale(x, X_min[typ], X_max[typ], feature_range=(0.01, 0.99))\nlibrosa.display.specshow(x.T, sr=fs, hop_length=hop_length, x_axis=\"time\")\ncolorbar()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc76af83e44a7243b0832b8cb4d12f861bd8d63 | 560,778 | ipynb | Jupyter Notebook | GANiversity.ipynb | sebaspv/GANiversity | a4ae97eb634f0a86dbb4e85d0234e5b0687adb74 | [
"MIT"
] | null | null | null | GANiversity.ipynb | sebaspv/GANiversity | a4ae97eb634f0a86dbb4e85d0234e5b0687adb74 | [
"MIT"
] | null | null | null | GANiversity.ipynb | sebaspv/GANiversity | a4ae97eb634f0a86dbb4e85d0234e5b0687adb74 | [
"MIT"
] | null | null | null | 1,168.2875 | 517,438 | 0.953267 | [
[
[
"<a href=\"https://colab.research.google.com/github/sebaspv/GANiversity/blob/main/GANiversity.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"# import the necessary libraries\r\nimport tensorflow as tf\r\nimport tensorflow.keras as keras\r\n \r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\n\r\nimport zipfile\r\nfrom IPython import display\r\nimport warnings\r\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
],
[
"def plot_results(images, n_cols=None):\r\n '''visualizes fake images'''\r\n display.clear_output(wait=False) \r\n \r\n n_cols = n_cols or len(images)\r\n n_rows = (len(images) - 1) // n_cols + 1\r\n \r\n if images.shape[-1] == 1:\r\n images = np.squeeze(images, axis=-1)\r\n \r\n plt.figure(figsize=(n_cols*2, n_rows*2))\r\n \r\n for index, image in enumerate(images):\r\n plt.subplot(n_rows, n_cols, index + 1)\r\n plt.imshow(image)\r\n plt.axis(\"off\")",
"_____no_output_____"
],
[
"IMAGE_SIZE = 64\r\nBATCH_SIZE = 16",
"_____no_output_____"
],
[
"data_generator = tf.keras.preprocessing.image.ImageDataGenerator(\r\n rescale = 1./255,\r\n zoom_range = 0.08,\r\n width_shift_range = 0.05,\r\n height_shift_range = 0.05,\r\n shear_range = 0.05,\r\n rotation_range = 5,\r\n horizontal_flip = True\r\n)",
"_____no_output_____"
],
[
"!ls",
"drive sample_data\n"
],
[
"image_dataset_generator = data_generator.flow_from_directory(\r\n target_size = (IMAGE_SIZE, IMAGE_SIZE),\r\n color_mode = 'rgb',\r\n class_mode = None,\r\n batch_size = BATCH_SIZE,\r\n directory = 'drive/MyDrive/logos/'\r\n)",
"Found 474 images belonging to 1 classes.\n"
],
[
"len(image_dataset_generator) # we have 15 batches of data, with 32 images each.",
"_____no_output_____"
],
[
"# plot example image\r\nfor i in image_dataset_generator:\r\n plt.imshow(i[1])\r\n break",
"_____no_output_____"
],
[
"# create generator\r\nrandom_dimensions = 32\r\ngenerator = keras.models.Sequential([\r\nkeras.layers.Dense(2*2*1024, input_shape = [random_dimensions]),\r\nkeras.layers.Reshape([2, 2, 1024]),\r\nkeras.layers.BatchNormalization(momentum = 0.5),\r\nkeras.layers.Conv2DTranspose(256, 5, 2, 'same', activation = 'selu'), # 4*4*512\r\nkeras.layers.Conv2DTranspose(128, 5, 2, 'same', activation = 'selu'), # 8*8*256\r\nkeras.layers.Conv2DTranspose(64, 5, 2, 'same', activation = 'selu'), # 16*16*128\r\nkeras.layers.Conv2DTranspose(32, 5, 2, 'same', activation = 'selu'), # 32*32*128\r\nkeras.layers.Conv2DTranspose(3, 5, 2, 'same', activation = 'tanh') # output (64*64*3)\r\n])",
"_____no_output_____"
],
[
"generator.summary()",
"Model: \"sequential_34\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_22 (Dense) (None, 4096) 135168 \n_________________________________________________________________\nreshape_11 (Reshape) (None, 2, 2, 1024) 0 \n_________________________________________________________________\nbatch_normalization_11 (Batc (None, 2, 2, 1024) 4096 \n_________________________________________________________________\nconv2d_transpose_55 (Conv2DT (None, 4, 4, 256) 6553856 \n_________________________________________________________________\nconv2d_transpose_56 (Conv2DT (None, 8, 8, 128) 819328 \n_________________________________________________________________\nconv2d_transpose_57 (Conv2DT (None, 16, 16, 64) 204864 \n_________________________________________________________________\nconv2d_transpose_58 (Conv2DT (None, 32, 32, 32) 51232 \n_________________________________________________________________\nconv2d_transpose_59 (Conv2DT (None, 64, 64, 3) 2403 \n=================================================================\nTotal params: 7,770,947\nTrainable params: 7,768,899\nNon-trainable params: 2,048\n_________________________________________________________________\n"
],
[
"discriminator = keras.models.Sequential([\r\nkeras.layers.Conv2D(32, 5, 2, 'same', input_shape = [64, 64, 3], activation = keras.layers.LeakyReLU(alpha = 0.2)), # 64*32*32\r\nkeras.layers.Conv2D(64, 5, 2, 'same', activation = keras.layers.LeakyReLU(alpha = 0.2)), #16*16*128\r\nkeras.layers.Conv2D(128, 5, 2, 'same', activation = keras.layers.LeakyReLU(alpha = 0.2)), # 8*8*256\r\nkeras.layers.Conv2D(256, 5, 2, 'same', activation = keras.layers.LeakyReLU(alpha = 0.2)), # 4*4*512\r\nkeras.layers.Flatten(),\r\nkeras.layers.Dense(1, activation = 'sigmoid')\r\n])",
"_____no_output_____"
],
[
"discriminator.summary()",
"Model: \"sequential_35\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_48 (Conv2D) (None, 32, 32, 32) 2432 \n_________________________________________________________________\nconv2d_49 (Conv2D) (None, 16, 16, 64) 51264 \n_________________________________________________________________\nconv2d_50 (Conv2D) (None, 8, 8, 128) 204928 \n_________________________________________________________________\nconv2d_51 (Conv2D) (None, 4, 4, 256) 819456 \n_________________________________________________________________\nflatten_11 (Flatten) (None, 4096) 0 \n_________________________________________________________________\ndense_23 (Dense) (None, 1) 4097 \n=================================================================\nTotal params: 1,082,177\nTrainable params: 1,082,177\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"discriminator.compile(optimizer = keras.optimizers.Adam(lr = 0.0001, beta_1=0.5), loss = 'binary_crossentropy')\r\ndiscriminator.trainable = False",
"_____no_output_____"
],
[
"# compile all of the layers\r\ngan = keras.models.Sequential([generator, discriminator])\r\ngan.compile(loss = 'binary_crossentropy', optimizer = keras.optimizers.Adam(lr = 0.0001, beta_1=0.5))",
"_____no_output_____"
],
[
"def train_gan(gan, dataset, random_normal_dimensions, n_epochs=50):\r\n # get the two sub networks from the GAN model\r\n generator, discriminator = gan.layers\r\n number_of_batches = len(dataset)\r\n for epoch in range(n_epochs):\r\n print(\"Epoch {}/{}\".format(epoch + 1, n_epochs)) \r\n for i in range(number_of_batches):\r\n real_images = dataset.next()\r\n real_batch_size = real_images.shape[0]\r\n # random noise\r\n noise = tf.random.normal(shape = [real_batch_size, random_normal_dimensions])\r\n # Use the noise to generate fake images\r\n fake_images = generator(noise)\r\n mixed_images = tf.concat([fake_images, real_images], axis = 0)\r\n discriminator_labels = tf.constant([[0.]]*real_batch_size + [[.9]]*real_batch_size)\r\n # Ensure that the discriminator is trainable\r\n discriminator.trainable = True\r\n discriminator.train_on_batch(mixed_images, discriminator_labels)\r\n # PHASE 2 OF TRAINING\r\n noise = tf.random.normal([real_batch_size, random_normal_dimensions])\r\n # label all generated images to be \"real\"\r\n generator_labels = tf.constant([[1.]]*real_batch_size)\r\n # Freeze the discriminator\r\n discriminator.trainable = False\r\n # Train the GAN on the noise with the labels all set to be true\r\n gan.train_on_batch(noise, generator_labels)\r\n plot_results(np.array(fake_images), 16) \r\n plt.show()\r\n return fake_images",
"_____no_output_____"
],
[
"# you can change the number of epochs\r\nEPOCHS = 10\r\n \r\n# run the training loop and collect images\r\nfake_images = train_gan(gan, image_dataset_generator, random_dimensions, EPOCHS)",
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n"
],
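[
"# A small follow-up, assuming `generator`, `random_dimensions` and `plot_results` from the cells above\r\n# are still in scope: sample a fresh batch of noise and plot what the trained generator produces,\r\n# independent of the training loop.\r\nnoise = tf.random.normal(shape=[16, random_dimensions])\r\ngenerated = generator(noise, training=False)\r\nplot_results(np.array(generated), 8)\r\nplt.show()",
"_____no_output_____"
],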
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc76ea728f9d5f6a120741928e6f30f17a6508e | 129,035 | ipynb | Jupyter Notebook | web_scraping/Lending_Data_Analysis.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | 2 | 2021-02-13T05:52:05.000Z | 2022-02-08T09:52:35.000Z | web_scraping/Lending_Data_Analysis.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | null | null | null | web_scraping/Lending_Data_Analysis.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | null | null | null | 73.903207 | 385 | 0.795133 | [
[
[
"Let's get data from <a href=\"http://www.lendingclub.com\">Lending Club</a>. We'll try to create a scatter plot of loan interest rate versus FICO score to see if there is a relationship between the two.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nloansData = pd.read_csv('https://spark-public.s3.amazonaws.com/dataanalysis/loansData.csv')",
"_____no_output_____"
]
],
[
[
"Now, let's take a look at the data. We'll take a peek at the first 5 records using Panda's dataframe's head() method:",
"_____no_output_____"
]
],
[
[
"loansData.head()",
"_____no_output_____"
]
],
[
[
"The problem I see immediately is that the interest rate has the % symbol at the end. This will pose a problem when plotting since the value will be read or viewed as a string/text instead of a decimal number. There is also a problem with the FICO score. It is a range instead of a single value. Since the ranges seem pretty tight, let's just use the first value in the range.",
"_____no_output_____"
],
[
"#### Now, let's remove the percent symbol from the interest rate and replace it with a \"blank\" space. Then read it as a float type value:",
"_____no_output_____"
]
],
[
[
"irate = [float(rate.replace('%','')) for rate in loansData[\"Interest.Rate\"]]",
"_____no_output_____"
]
],
[
[
"I used Python's list comprehension to create a Python list containing interest rates without the percent symbol. Now, let's see if I successfully did that. Let's look at the first 50 values:",
"_____no_output_____"
]
],
[
[
"print(irate[:50])",
"[8.9, 12.12, 21.98, 9.99, 11.71, 15.31, 7.9, 17.14, 14.33, 6.91, 19.72, 14.27, 21.67, 8.9, 7.62, 15.65, 12.12, 10.37, 9.76, 9.99, 21.98, 19.05, 17.99, 11.99, 16.82, 7.9, 14.42, 15.31, 8.59, 7.9, 21.0, 12.12, 16.49, 15.8, 13.55, 7.9, 7.9, 7.62, 8.9, 12.49, 17.27, 11.14, 19.13, 21.74, 17.27, 11.86, 10.38, 23.91, 7.49, 12.12]\n"
]
],
[
[
"#### Looks like it worked!",
"_____no_output_____"
],
[
"#### Now, let's get the first or lowest value of the FICO score range:",
"_____no_output_____"
]
],
[
[
"fico_score = [int(fico.split('-')[0]) for fico in loansData[\"FICO.Range\"]]\nprint(fico_score[:50])",
"[735, 715, 690, 695, 695, 670, 720, 705, 685, 715, 670, 665, 670, 735, 725, 730, 695, 740, 730, 760, 665, 695, 665, 695, 670, 705, 675, 675, 765, 760, 685, 685, 720, 685, 675, 780, 720, 830, 715, 660, 670, 720, 660, 660, 675, 715, 710, 670, 785, 705]\n"
]
],
[
[
"#### Now that we have \"cleaned up\" our data, we can now do a scatter plot of interest rate vs FICO score:",
"_____no_output_____"
]
],
[
[
"scatter(fico_score, irate)\ntitle(\"Loan Interest Rate vs FICO Score\", weight=\"bold\")\nylabel(\"Interest Rate (%)\", weight=\"bold\")\nxlabel(\"FICO Score\", weight=\"bold\")\nshow()",
"_____no_output_____"
],
[
"hist(irate)\ntitle(\"Loan Interest Rate Histogram\", weight=\"bold\")\nxlabel(\"Interest Rate %\")\nylabel(\"Frequency/Qty\")\nshow()",
"_____no_output_____"
],
[
"hist(fico_score)\ntitle(\"Histogram of FICO Score\", weight=\"bold\")\nxlabel(\"FICO Score\")\nylabel(\"Frequency/Qty\")\nshow()",
"_____no_output_____"
]
],
[
[
"#### Let's see what the breakdown of the loan purposes were:",
"_____no_output_____"
]
],
[
[
"grouped = loansData.groupby(\"Loan.Purpose\")[\"Loan.Purpose\"].count()\ngrouped.sort(ascending=False)\ngrouped",
"_____no_output_____"
]
],
[
[
"#### Looks like most loans were for debt consolidation and credit card debt.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecc772281b30b6653c34942f4030f13e51ff1525 | 2,535 | ipynb | Jupyter Notebook | chapter1/homework/computer/3-15/201611680283.ipynb | hpishacker/python_tutorial | 9005f0db9dae10bdc1d1c3e9e5cf2268036cd5bd | [
"MIT"
] | 76 | 2017-09-26T01:07:26.000Z | 2021-02-23T03:06:25.000Z | chapter1/homework/computer/3-15/201611680283.ipynb | hpishacker/python_tutorial | 9005f0db9dae10bdc1d1c3e9e5cf2268036cd5bd | [
"MIT"
] | 5 | 2017-12-10T08:40:11.000Z | 2020-01-10T03:39:21.000Z | chapter1/homework/computer/3-15/201611680283.ipynb | hacker-14/python_tutorial | 4a110b12aaab1313ded253f5207ff263d85e1b56 | [
"MIT"
] | 112 | 2017-09-26T01:07:30.000Z | 2021-11-25T19:46:51.000Z | 20.119048 | 68 | 0.469822 | [
[
[
"#练习2:仿照实践1,写出由用户指定整数个数,并由用户输入多个整数,并求和的代码。\nn = int(input('请输入一个正整数表示你想要输入整数的个数,以回车结束。'))\ni = 0\ntotal = 0\nwhile i < n:\n i = i + 1\n a = int(input('请输入一个正整数,以回车结束。'))\n total = total + a \nprint(total)",
"请输入一个正整数表示你想要输入整数的个数,以回车结束。2\n请输入一个正整数,以回车结束。1\n请输入一个正整数,以回车结束。2\n3\n"
],
[
"#练习3:用户可以输入的任意多个数字,直到用户不想输入为止。\nprint('请输入任意多数字,不想输入时输入数字零结束')\ntotal = 0\nn = 0\nwhile n < 1:\n a = int(input('请输入一个正整数,以回车结束。'))\n total = total + a \n if a==0:\n break; \nprint(total)\nprint('结束')",
"请输入任意多数字,不想输入时输入数字零结束\n请输入一个正整数,以回车结束。1\n请输入一个正整数,以回车结束。2\n请输入一个正整数,以回车结束。3\n请输入一个正整数,以回车结束。0\n6\n结束\n"
],
[
"#练习4:用户可以输入的任意多个数字,直到输入所有数字的和比当前输入数字小,且输入所有数字的积比当前输入数字的平方小。\ntotal = 0\nmult = 1\nn = 0\nwhile n < 1:\n a = int(input('请输入一个正整数,以回车结束。'))\n total = total + a \n mult =mult * a\n if total < a and mult < a ** 2:\n break; \nprint('结束')",
"请输入一个正整数,以回车结束。1\n请输入一个正整数,以回车结束。-1\n请输入一个正整数,以回车结束。2\n请输入一个正整数,以回车结束。-3\n请输入一个正整数,以回车结束。0\n请输入一个正整数,以回车结束。2\n结束\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ecc775a1e4afa40878ffc7a1580c7b02c386b678 | 304,114 | ipynb | Jupyter Notebook | Task1_will_my_question_get_answered.ipynb | utkueray/CQA | 38f5957fb982845810a5f230f9ab18a171bc9fb7 | [
"MIT"
] | 1 | 2020-08-31T07:20:01.000Z | 2020-08-31T07:20:01.000Z | Task1_will_my_question_get_answered.ipynb | zseda/CQA | 38f5957fb982845810a5f230f9ab18a171bc9fb7 | [
"MIT"
] | 5 | 2021-03-30T13:40:28.000Z | 2021-09-22T19:09:19.000Z | Task1_will_my_question_get_answered.ipynb | zseda/CQA | 38f5957fb982845810a5f230f9ab18a171bc9fb7 | [
"MIT"
] | 2 | 2020-06-12T20:20:41.000Z | 2021-03-09T16:32:06.000Z | 37.19139 | 426 | 0.362019 | [
[
[
"ENS 491-492- GRADUATION PROJECT TASK 1\n\nWill my question get answered? \n\nClassification task will be applied on stackexchange AI data",
"_____no_output_____"
],
[
"Importing libraries",
"_____no_output_____"
]
],
[
[
"import gensim\nimport pandas as pd\nimport warnings\nfrom bs4 import BeautifulSoup\nimport numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n!pip install category_encoders\nfrom sklearn.metrics import mean_squared_error \nfrom sklearn import preprocessing\nimport xml.etree.ElementTree as et\nimport re\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nfrom itertools import product\nfrom os.path import join\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn import preprocessing\n",
"Requirement already satisfied: category_encoders in c:\\users\\utku\\anaconda3\\lib\\site-packages (2.1.0)\nRequirement already satisfied: statsmodels>=0.6.1 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from category_encoders) (0.10.0)\nRequirement already satisfied: scipy>=0.19.0 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from category_encoders) (1.2.1)\nRequirement already satisfied: patsy>=0.4.1 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from category_encoders) (0.5.1)\nRequirement already satisfied: numpy>=1.11.3 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from category_encoders) (1.16.4)\nRequirement already satisfied: pandas>=0.21.1 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from category_encoders) (0.24.2)\nRequirement already satisfied: scikit-learn>=0.20.0 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from category_encoders) (0.23.1)\nRequirement already satisfied: six in c:\\users\\utku\\anaconda3\\lib\\site-packages (from patsy>=0.4.1->category_encoders) (1.12.0)\nRequirement already satisfied: pytz>=2011k in c:\\users\\utku\\anaconda3\\lib\\site-packages (from pandas>=0.21.1->category_encoders) (2019.1)\nRequirement already satisfied: python-dateutil>=2.5.0 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from pandas>=0.21.1->category_encoders) (2.8.0)\nRequirement already satisfied: threadpoolctl>=2.0.0 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from scikit-learn>=0.20.0->category_encoders) (2.0.0)\nRequirement already satisfied: joblib>=0.11 in c:\\users\\utku\\anaconda3\\lib\\site-packages (from scikit-learn>=0.20.0->category_encoders) (0.13.2)\n"
]
],
[
[
"İmporting nltk library for text data cleaning process",
"_____no_output_____"
]
],
[
[
"# Setup\n!pip install -q wordcloud\nimport wordcloud\n\nimport nltk\nnltk.download('stopwords')\nnltk.download('wordnet')\nnltk.download('punkt')\nnltk.download('averaged_perceptron_tagger') \n\n\nimport matplotlib.pyplot as plt\nimport io\nimport unicodedata\nimport string\n",
"[nltk_data] Downloading package stopwords to\n[nltk_data] C:\\Users\\Utku\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n[nltk_data] Downloading package wordnet to\n[nltk_data] C:\\Users\\Utku\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package wordnet is already up-to-date!\n[nltk_data] Downloading package punkt to\n[nltk_data] C:\\Users\\Utku\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n[nltk_data] Downloading package averaged_perceptron_tagger to\n[nltk_data] C:\\Users\\Utku\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package averaged_perceptron_tagger is already up-to-\n[nltk_data] date!\n"
]
],
[
[
" \nData Gathering and Implementation of Data Frame\n\n",
"_____no_output_____"
]
],
[
[
"xtree = et.parse('Posts.xml')",
"_____no_output_____"
],
[
"xroot = xtree.getroot()",
"_____no_output_____"
],
[
"dfCols = [\"Closed Date\", \"Favorite Count\", \"Comment Count\", \"Answer Count\", \"Tags\", \"Title\",\n \"Last Activity Date\", \"Owner User ID\", \"Body\", \"View Count\", \"Score\", \"Creation Date\", \"Post Type ID\", \n \"ID\", \"Parent ID\", \"Last Edit Date\", \"Last Editor User ID\", \"Accepted Answer ID\"]\ndfRows = []",
"_____no_output_____"
],
[
"for node in xroot:\n closedDate = node.attrib.get(\"ClosedDate\")\n favCount = node.attrib.get(\"FavoriteCount\")\n commentCount = node.attrib.get(\"CommentCount\")\n ansCount = node.attrib.get(\"AnswerCount\")\n tags = node.attrib.get(\"Tags\")\n title = node.attrib.get(\"Title\")\n lastActDate = node.attrib.get(\"LastActivityDate\")\n ownerUserID = node.attrib.get(\"OwnerUserId\")\n body = node.attrib.get(\"Body\")\n viewCount = node.attrib.get(\"ViewCount\") \n score = node.attrib.get(\"Score\") \n creationDate = node.attrib.get(\"CreationDate\") \n postTypeID = node.attrib.get(\"PostTypeId\") \n ID = node.attrib.get(\"Id\") \n parentID = node.attrib.get(\"ParentId\") \n lastEditDate = node.attrib.get(\"LastEditDate\") \n lastEditorUserID = node.attrib.get(\"LastEditorUserId\") \n acceptedAnswerID = node.attrib.get(\"AcceptedAnswerID\")\n \n dfRows.append({\"Closed Date\": closedDate, \"Favorite Count\": favCount, \"Comment Count\": commentCount,\n \"Answer Count\": ansCount, \"Tags\": tags, \"Title\": title, \"Last Activity Date\": lastActDate,\n \"Owner User ID\": ownerUserID, \"Body\": body, \"View Count\": viewCount, \"Score\": score, \n \"Creation Date\": creationDate, \"Post Type ID\": postTypeID, \"ID\": ID, \"Parent ID\": parentID,\n \"Last Edit Date\": lastEditDate, \"Last Editor User ID\": lastEditorUserID, \"Accepted Answer ID\": acceptedAnswerID})",
"_____no_output_____"
],
[
"out = pd.DataFrame(dfRows, columns=dfCols)\n",
"_____no_output_____"
],
[
"out = out.fillna(0)",
"_____no_output_____"
]
],
[
[
"Changing the data types of the parameters",
"_____no_output_____"
]
],
[
[
"out['Creation Date'] = pd.to_datetime(out['Creation Date'])\nout['Last Activity Date'] = pd.to_datetime(out['Last Activity Date'])\nout['Last Edit Date'] = pd.to_datetime(out['Last Edit Date'])\nout['Comment Count'] = out['Comment Count'].astype(int)\nout['Owner User ID'] = out['Owner User ID'].astype(int)\nout['Post Type ID'] = out['Post Type ID'].astype(int)\nout['Score'] = out['Score'].astype(int)\nout['Favorite Count'] = out['Favorite Count'].astype(int)\nout['Answer Count'] = out['Answer Count'].astype(int)\nout['View Count'] = out['View Count'].astype(int)\n",
"_____no_output_____"
],
[
"out.dtypes\n",
"_____no_output_____"
]
],
[
[
"Categorizing Time Data into Morning/Midday/Evening/Night\n\n**According to UTC**\n\n05:00-10:00 Morning -->1 24:00 - 05:00 NY Night\n\n11:00 16:00 Midday -->2 06:00 - 11:00 NY Morning\n\n17:00 22:00 Afternoon -->3 12:00 - 17:00 NY Midday\n\n23:00 04:00 Night -->4 18:00 - 23:00 NY Evening",
"_____no_output_____"
]
],
[
[
"def get_part_of_day(x):\n return (\n \"1\" if 5 <= int(x.strftime('%H')) <= 10\n else\n \"2\" if 11<= int(x.strftime('%H'))<= 16\n else\n \"3\" if 17 <= int(x.strftime('%H')) <= 22\n else\n \"4\"\n )\n \nout[\"Creation Date\"] = out[\"Creation Date\"].apply(get_part_of_day)\nout.head(5)",
"_____no_output_____"
]
],
[
[
"One Hot Encoding implemented on the date data that had been encoded with label encoding\n",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import OneHotEncoder\n\nohe = OneHotEncoder(categories = \"auto\",sparse = False, drop = \"first\")\n# One-Hot-Encode Class column of df\ntemp = ohe.fit_transform(out[[\"Creation Date\"]])\n\n\n# Converting into dataframe\nohe_column = pd.DataFrame(temp, columns = [\"CreationDate_1\",\"CreationDate_2\",\"CreationDate_3\"])\n#containating ohe column\nout= pd.concat([out,ohe_column],axis = 1)\n",
"_____no_output_____"
]
],
[
[
"Demonstration of dataframe after one hot encoding applied",
"_____no_output_____"
]
],
[
[
"out.tail(3)",
"_____no_output_____"
]
],
[
[
"Those who has Post Type Id= 1 are questions in this task only questions data will be used",
"_____no_output_____"
]
],
[
[
"out = out[(out['Post Type ID'] == 1)]",
"_____no_output_____"
],
[
"out=out.drop(axis=1,columns=[\"Post Type ID\"])",
"_____no_output_____"
]
],
[
[
"Since our question is to classify whether the question will get answered , we have labeled data according to it is answered or not by checking answer counts of the questions ",
"_____no_output_____"
]
],
[
[
"out['IsAnswered'] = np.where(out['Answer Count']>0, '1', '0')",
"_____no_output_____"
]
],
[
[
"Tag count is calculated and used as a parameter in dataframe \n",
"_____no_output_____"
]
],
[
[
"out['Tag Count'] = out['Tags'].str.count('<')",
"_____no_output_____"
]
],
[
[
"q (questions abbreviation) data frame consist of the parameters that will be used from Posts.xml data\n",
"_____no_output_____"
]
],
[
[
"q = out[[\"CreationDate_1\",\"CreationDate_2\",\"CreationDate_3\",'Owner User ID','Title','Body','ID','Tags','Tag Count','IsAnswered']]\n",
"_____no_output_____"
]
],
[
[
"10 parameters will be used from Posts.xml data and ID (question id) will be removed later .It was kept in this stage to identify questions .",
"_____no_output_____"
]
],
[
[
"print(q.shape)",
"(5805, 10)\n"
],
[
"q",
"_____no_output_____"
]
],
[
[
"xml to dict library was used to convert data to data frame, since some failure accured using xml e-tree library ",
"_____no_output_____"
]
],
[
[
"!pip install xmltodict\nimport xmltodict\npath = 'Users.xml'\nxmlDict = xmltodict.parse(et.tostring(et.parse(path).getroot()))",
"Requirement already satisfied: xmltodict in c:\\users\\utku\\anaconda3\\lib\\site-packages (0.12.0)\n"
],
[
"a = (list(xmlDict.items())[0])[1]\nb= (list(a.items())[0])[1]",
"_____no_output_____"
]
],
[
[
"This is the initial version of the dataframe",
"_____no_output_____"
]
],
[
[
"users= pd.DataFrame.from_dict(b)\nusers.columns = users.columns.str[1:]\nusers.head(5)",
"_____no_output_____"
]
],
[
[
"Nan Values are filled with \"0\" before any preprocessing",
"_____no_output_____"
]
],
[
[
"users.fillna(0)",
"_____no_output_____"
]
],
[
[
"Users.xml dataset will be merged with Posts.xml data set \n\n* They are merged from their joint parameter user id \n* user id is \"Id\" parameter in User.xml data \"Owner User ID\" in Posts.xml data\n* Both parameters first converted into int type \n* There were 3909 unique ids which indicated there are 3909 different users who have asked questions in this data sets \n* So the data frames were merged according to them\n\n\n",
"_____no_output_____"
]
],
[
[
"users['Id'] = users['Id'].astype(int)",
"_____no_output_____"
],
[
"posts=q",
"_____no_output_____"
],
[
"posts['Owner User ID']=posts ['Owner User ID'].astype(int)",
"_____no_output_____"
],
[
"posts['Owner User ID'].nunique()",
"_____no_output_____"
],
[
"users['Id'].nunique()",
"_____no_output_____"
],
[
"intersection=set(users['Id']).intersection(set(posts['Owner User ID']))",
"_____no_output_____"
],
[
"len(intersection)",
"_____no_output_____"
]
],
[
[
"New id was defined as \"Owner User ID\" from Posts.xml and \"Id\" User.xml\n",
"_____no_output_____"
]
],
[
[
"posts['New Id']=posts['Owner User ID']",
"_____no_output_____"
],
[
"users['New Id']=users['Id']",
"_____no_output_____"
]
],
[
[
"To eliminate confusion on naming of parameters between user and posts data, parameter names have changed such as CreationDate to User Creation Date",
"_____no_output_____"
]
],
[
[
"users ['User Creation Date']=users ['CreationDate']\nusers ['User Last Access Date']=users ['LastAccessDate']\n",
"_____no_output_____"
]
],
[
[
"2 data sets were merged from their joint parameter (user id)",
"_____no_output_____"
]
],
[
[
"new=posts.merge(users,on=\"New Id\")",
"_____no_output_____"
]
],
[
[
"Also unnecessary Id parameters and Url parameters were eliminated.\nCreation Date and LastAccessDate were dropped since their updated with other parameters name above",
"_____no_output_____"
]
],
[
[
"newposts=new.drop(axis=1,columns=[\"Owner User ID\",\"AccountId\",\"DisplayName\",\"CreationDate\",\"Id\",\"LastAccessDate\",\"Location\",\"ProfileImageUrl\",\"WebsiteUrl\",\"AboutMe\"])",
"_____no_output_____"
]
],
[
[
"Since our task is to make a prediction at the time the question asked, we did not use posts down votes and upvotes .Thus user downvotes upvotes are the only ones. For ease of use We have changed the parameter names.",
"_____no_output_____"
]
],
[
[
"newposts['User DownVotes']=newposts['DownVotes']\nnewposts['User UpVotes']=newposts['UpVotes']\nnewposts['User Reputation']=newposts['Reputation']\nnewposts['User Views']=newposts['Views']\n",
"_____no_output_____"
],
[
"newposts=newposts.drop(axis=1,columns=[\"DownVotes\",\"UpVotes\",\"Reputation\",\"Views\"])",
"_____no_output_____"
],
[
"newposts.head()",
"_____no_output_____"
],
[
"newposts['User Creation Date'] = pd.to_datetime(newposts['User Creation Date'])\nnewposts['User Last Access Date'] = pd.to_datetime(newposts['User Last Access Date'])\nnewposts['New Id'] = newposts['New Id'].astype(int)\nnewposts['User DownVotes'] = newposts['User DownVotes'].astype(int)\nnewposts['User UpVotes'] = newposts['User UpVotes'].astype(int)\nnewposts['User Reputation'] = newposts['User Reputation'].astype(int)\nnewposts['User Views'] = newposts['User Views'].astype(int)\nnewposts['ID'] = newposts['ID'].astype(int)\nnewposts['IsAnswered']=newposts['IsAnswered'].astype(int)\n",
"_____no_output_____"
],
[
"newposts.dtypes\n",
"_____no_output_____"
]
],
[
[
"Label encoding on dates implemented on User creation date and user last acces date also.",
"_____no_output_____"
]
],
[
[
"def get_part_of_day(x):\n return (\n \"1\" if 5 <= int(x.strftime('%H')) <= 10\n else\n \"2\" if 11<= int(x.strftime('%H'))<= 16\n else\n \"3\" if 17 <= int(x.strftime('%H')) <= 22\n else\n \"4\"\n )\n \nnewposts[\"User Creation Date\"] = newposts[\"User Creation Date\"].apply(get_part_of_day)\nnewposts[\"User Last Access Date\"] = newposts[\"User Last Access Date\"].apply(get_part_of_day)\n",
"_____no_output_____"
]
],
[
[
"one hot encoding implemented on user.xml date format data",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import OneHotEncoder\n\nohe = OneHotEncoder(categories = \"auto\",sparse = False, drop = \"first\")\n# One-Hot-Encode Class column of df\ntemp = ohe.fit_transform(newposts[[\"User Creation Date\"]])\n\n\n# Converting into dataframe\nohe_column = pd.DataFrame(temp, columns = [\"UserCreationDate_1\",\"UserCreationDate_2\",\"UserCreationDate_3\"])\n#containating ohe column\nnewposts= pd.concat([newposts,ohe_column],axis = 1)\n",
"_____no_output_____"
],
[
"from sklearn.preprocessing import OneHotEncoder\n\nohe = OneHotEncoder(categories = \"auto\",sparse = False, drop = \"first\")\n# One-Hot-Encode Class column of df\ntemp = ohe.fit_transform(newposts[[\"User Last Access Date\"]])\n\n\n# Converting into dataframe\nohe_column = pd.DataFrame(temp, columns = [\"UserLastAccessDate_1\",\"UserLastAccessDate_2\",\"UserLastAccessDate_3\"])\n#containating ohe column\nnewposts= pd.concat([newposts,ohe_column],axis = 1)\n",
"_____no_output_____"
],
[
"newposts=newposts.drop(axis=1, columns=[\"User Creation Date\",\"User Last Access Date\"])\n",
"_____no_output_____"
]
],
[
[
" xml tags are eliminated from tags to be used and preprocessing applied on text before they are added to data frame",
"_____no_output_____"
]
],
[
[
"newposts['Tags'] = newposts['Tags'].str.replace('<', '')\nnewposts['Tags'] = newposts['Tags'].str.replace('>', ' ')\nnewposts['Tags'] = newposts['Tags'].str.replace('-', ' ')\nnewposts['Tags'] = newposts['Tags'].str.split(' ')",
"_____no_output_____"
],
[
"newposts['Tags']",
"_____no_output_____"
],
[
"newposts['Tags'] = newposts['Tags'].apply(lambda x: ' '.join(map(str, x)))",
"_____no_output_____"
],
[
"newposts=newposts[newposts['Tags'].notna()]",
"_____no_output_____"
],
[
"newposts = newposts.reset_index()",
"_____no_output_____"
],
[
"del newposts['index']",
"_____no_output_____"
]
],
[
[
"Count vectorizer method applied on tags data\n",
"_____no_output_____"
]
],
[
[
"def wm2df(wm, feat_names):\n \n # create an index for each row\n doc_names = ['Doc{:d}'.format(idx) for idx, _ in enumerate(wm)]\n df = pd.DataFrame(data=wm.toarray(), index=doc_names,\n columns=feat_names)\n return(df)",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import CountVectorizer\nnewposts.to_pickle(\"CQA_FRONTEND/static/data/newposts\") \ncvec = CountVectorizer()\ncorpus = newposts['Tags'].tolist()\ntags_vec = cvec.fit_transform(corpus)\ntokens = cvec.get_feature_names()\nwm2df(tags_vec, tokens)\nnewposts.insert(4,'TagsVec', tags_vec) ",
"_____no_output_____"
],
[
"word_df = pd.DataFrame(tags_vec.toarray(),columns=tokens)",
"_____no_output_____"
],
[
"tags_added=pd.concat([newposts, word_df], axis=1)",
"_____no_output_____"
],
[
"tags_added=tags_added.drop(axis=1,columns=[\"TagsVec\"])",
"_____no_output_____"
],
[
"tags_added= tags_added.drop(axis = 1, columns = ['Tags'])",
"_____no_output_____"
],
[
"tags_added= tags_added.drop(axis = 1, columns = ['New Id'])",
"_____no_output_____"
]
],
[
[
"Demonstration of data frame after tags added",
"_____no_output_____"
]
],
[
[
"tags_added.tail(1)",
"_____no_output_____"
]
],
[
[
"Body(body of question) data is cleaned from xml tags",
"_____no_output_____"
]
],
[
[
"content = []\n\nfor data in tags_added['Body']:\n cleantext = BeautifulSoup(data, \"lxml\").text\n content.append(cleantext)\ntags_added.insert(5,\"Bodynew\", content)\ntags_added= tags_added.drop(axis = 1, columns = ['Body'])",
"_____no_output_____"
]
],
[
[
"Content parameter = Title of question + Body of question ",
"_____no_output_____"
]
],
[
[
"tags_added['Content'] = tags_added[['Title', 'Bodynew']].apply(lambda x: ' '.join(x), axis=1)",
"_____no_output_____"
],
[
"tags_added['Content'] = tags_added['Content'].str.lower().str.split()",
"_____no_output_____"
],
[
"tags_added['Content']",
"_____no_output_____"
]
],
[
[
"Doc2vec method implemented on Content data",
"_____no_output_____"
]
],
[
[
"# Set file names for train and test data\n\ndef read_corpus(fname, tokens_only=False):\n for i, line in enumerate(fname):\n tokens = gensim.utils.simple_preprocess(line)\n if tokens_only:\n yield tokens\n else:\n # For training data, add tags\n yield gensim.models.doc2vec.TaggedDocument(tokens, [i])\n\n",
"_____no_output_____"
],
[
"tags_added = tags_added[tags_added['Content'].notna()]",
"_____no_output_____"
],
[
"tags_added_fix = tags_added.reset_index()",
"_____no_output_____"
],
[
"del tags_added_fix['index']",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nvecList = []\nfor index, row in tags_added_fix.iterrows():\n train_corpus = list(read_corpus(row['Content']))\n \n model = gensim.models.doc2vec.Doc2Vec(vector_size=100, min_count=1, epochs=30)\n model.build_vocab(train_corpus)\n model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)\n vector = model.infer_vector(row['Content'])\n #print(vector)\n vecList.append(vector)",
"_____no_output_____"
]
],
[
[
"Vector size is defined as 100. Each vector has been added to the data frame for corresponding questions",
"_____no_output_____"
]
],
[
[
"col_list_content = ['content' + str(x) for x in range(0,100)]\n",
"_____no_output_____"
],
[
"doc2vecdf = pd.DataFrame(vecList,columns=col_list_content)\n",
"_____no_output_____"
],
[
"final_train=pd.concat([tags_added_fix, doc2vecdf], axis=1)",
"_____no_output_____"
],
[
"final_train.sort_index(axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"To make a better prediction data is shuffled \n",
"_____no_output_____"
]
],
[
[
"shuffled_final=final_train.sample(frac=1)\n",
"_____no_output_____"
],
[
"shuffled_final = shuffled_final.reset_index()",
"_____no_output_____"
],
[
"shuffled_final=shuffled_final.drop(axis=1, columns=[\"index\",\"ID\",\"Content\",\"Bodynew\",\"Title\"])",
"_____no_output_____"
],
[
"shuffled_final=shuffled_final.fillna(0.0) ",
"_____no_output_____"
],
[
"shuffled_final.sort_index(axis=1, inplace=True)",
"_____no_output_____"
],
[
"shuffled_final.replace(-9223372036854775808,0)",
"_____no_output_____"
]
],
[
[
" This is where we define the test set, for the user interface it will be dynamic parameter and updated for each question will be asked , however at this stage we have define the question manually.",
"_____no_output_____"
]
],
[
[
"derived_question_body =\"As far I know, the RNN accepts a sequence as input and can produce as a sequence as output.Are there neural networks that accept graphs or trees as inputs, so that to represent the relationships between the nodes of the graph or tree?\"\nderived_title=\"Are there neural networks that accept graphs or trees as inputs?\"\nderived_creation_date=\"2016-08-02T15:41:22.020\"\nderived_tags=\"<neural-networks><graphs>\"\nderived_userReputation=\"344\"\nderived_userCreationDate=\"2016-08-02T15:38:36.723\"\nderived_userLastAccessDate=\"2019-11-30T19:21:45.687\"\nderived_userViews=\"30\" \nderived_userUpVotes=\"0\" \nderived_userDownVotes=\"0\"\n# create a new data frame \n\ndf = pd.DataFrame({'Question': [derived_question_body],'Title' :[derived_title],'Creation Date':[derived_creation_date],'Tags':[derived_tags],'User Reputation': [derived_userReputation],\"User Creation Date\" : [derived_userCreationDate],\"User Last Access Date\" : [derived_userLastAccessDate],\"User Views\": [derived_userViews],\"User UpVotes\": [derived_userUpVotes],\"User DownVotes\": [derived_userDownVotes]}) \n \ndf",
"_____no_output_____"
]
],
[
[
"Same preprocessing applied on Date format data \n\n* Label encoding\n* One hot encoding\n\nSince there is only 1 question on test set that means there will be only 1 part of the day to encode . So one-hot-encoding would not be able to create columns for each part of the day . We have written a function to convert label encoded parameter to one hot encoded parameters.\n\n",
"_____no_output_____"
]
],
[
[
"df['Creation Date'] = pd.to_datetime(df['Creation Date'])\ndf['User Creation Date'] = pd.to_datetime(df['User Creation Date'])\ndf['User Last Access Date'] = pd.to_datetime(df['User Last Access Date'])\ndf['User DownVotes'] = df['User DownVotes'].astype(int)\ndf['User UpVotes'] = df['User UpVotes'].astype(int)\ndf['User Reputation'] = df['User Reputation'].astype(int)\ndf['User Views'] = df['User Views'].astype(int)\n\n",
"_____no_output_____"
],
[
"df= df.fillna(0)",
"_____no_output_____"
]
],
[
[
"Label encoding",
"_____no_output_____"
]
],
[
[
"def get_part_of_day(x):\n return (\n \"1\" if 5 <= int(x.strftime('%H')) <= 10\n else\n \"2\" if 11<= int(x.strftime('%H'))<= 16\n else\n \"3\" if 17 <= int(x.strftime('%H')) <= 22\n else\n \"4\"\n )\n \ndf[\"Creation Date\"] = df[\"Creation Date\"].apply(get_part_of_day)\ndf[\"User Creation Date\"] = df[\"User Creation Date\"].apply(get_part_of_day)\ndf[\"User Last Access Date\"] = df[\"User Last Access Date\"].apply(get_part_of_day)\n",
"_____no_output_____"
]
],
[
[
"One hot encoding function for the date format data",
"_____no_output_____"
]
],
[
[
"def encoder(x):\n a = 0.0\n b = 0.0\n c = 0.0\n\n if(x == \"1\"):\n return a,b,c\n elif (x == \"2\"):\n a = 1.0\n return a,b,c\n elif (x == \"3\"):\n b = 1.0\n return a,b,c\n else:\n c = 1.0\n return a,b,c\n\ndf[\"CreationDate_1\"], df[\"CreationDate_2\"], df[\"CreationDate_3\"] = zip(*df['Creation Date'].map(encoder))\ndf[\"UserCreationDate_1\"], df[\"UserCreationDate_2\"], df[\"UserCreationDate_3\"] = zip(*df['User Creation Date'].map(encoder))\ndf[\"UserLastAccessDate_1\"], df[\"UserLastAccessDate_2\"], df[\"UserLastAccessDate_3\"] = zip(*df['User Last Access Date'].map(encoder))",
"_____no_output_____"
]
],
[
[
"Tag count calculated ",
"_____no_output_____"
]
],
[
[
"df['Tag Count'] = df['Tags'].str.count('<')",
"_____no_output_____"
]
],
[
[
"Tags data is cleaned from xml format",
"_____no_output_____"
]
],
[
[
"df['Tags'] = df['Tags'].str.replace('<', '')\ndf['Tags'] = df['Tags'].str.replace('>', ' ')\ndf['Tags'] = df['Tags'].str.replace('-', ' ')\ndf['Tags'] = df['Tags'].str.split(' ')",
"_____no_output_____"
]
],
[
[
"To create same data frame for the test data , we have used the tokens that we created from train set's tags data. Updated dataframe according to the existence of tag",
"_____no_output_____"
]
],
[
[
"alltags = pd.DataFrame(0, index=np.arange(1), columns=tokens)\nfor tag in df.iloc[0]['Tags']:\n alltags[tag] = 1\n\n",
"_____no_output_____"
],
[
"alltags\nq_tags_added=pd.concat([df, alltags], axis=1)",
"_____no_output_____"
],
[
"q_tags_added = q_tags_added.iloc[:, :-1]",
"_____no_output_____"
]
],
[
[
"Data frame after tags are added ",
"_____no_output_____"
]
],
[
[
"q_tags_added",
"_____no_output_____"
]
],
[
[
"Doc2vec method implemented in same manner as train set\nbody and title of the question used together to create vectoral representation of question.",
"_____no_output_____"
]
],
[
[
"#words kolonu title ile bodynin birleşmiş hali, \nq_tags_added['Content'] = q_tags_added[['Title', 'Question']].apply(lambda x: ' '.join(x), axis=1)",
"_____no_output_____"
],
[
"q_tags_added['Content'] = q_tags_added['Content'].str.lower().str.split()",
"_____no_output_____"
],
[
"# Set file names for train and test data\n\ndef read_corpus(fname, tokens_only=False):\n for i, line in enumerate(fname):\n tokens = gensim.utils.simple_preprocess(line)\n if tokens_only:\n yield tokens\n else:\n # For training data, add tags\n yield gensim.models.doc2vec.TaggedDocument(tokens, [i])\n import numpy as np",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\n# choose question posts and take body and convert list and with the help of corpus function each question will be presented as documents.\n\n#test_corpus = list(read_corpus(b[b[\"PostTypeId\"]==1][\"Body\"], tokens_only=True))\nq_vecList = []\nfor index, row in q_tags_added.iterrows():\n\n train_corpus = list(read_corpus(row['Content']))\n model = gensim.models.doc2vec.Doc2Vec(vector_size=100, min_count=1, epochs=30)\n model.build_vocab(train_corpus)\n model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)\n vector = model.infer_vector(row['Content'])\n #print(vector)\n q_vecList.append(vector)",
"_____no_output_____"
],
[
"q_col_list = ['content' + str(x) for x in range(0,100)]",
"_____no_output_____"
],
[
"q_doc2vecdf = pd.DataFrame(q_vecList,columns=q_col_list)",
"_____no_output_____"
],
[
"final_test=pd.concat([q_tags_added, q_doc2vecdf], axis=1)",
"_____no_output_____"
],
[
"final_test=final_test.drop(axis=1,columns=[\"Content\",\"Question\",\"Title\",\"Creation Date\",\"Tags\",\"User Creation Date\",\"User Last Access Date\"])",
"_____no_output_____"
],
[
"final_test=final_test.reset_index()",
"_____no_output_____"
],
[
"del final_test['index']",
"_____no_output_____"
],
[
"final_test=final_test.replace(-9223372036854775808,0)",
"_____no_output_____"
],
[
"final_test.sort_index(axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"For each question their corresponding 100 columns vector representation added to dataframe . This is final version of test data",
"_____no_output_____"
]
],
[
[
"final_test",
"_____no_output_____"
],
[
"shuffled_final.head(1)",
"_____no_output_____"
]
],
[
[
"There are total 5046 questions when we merged user.xml data and posts.xml data . 3811 of them is answered ,1235 of them is not answered",
"_____no_output_____"
]
],
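[
[
"In addition to the raw counts computed in the next cell, the class share is worth noting (a small sketch using the shuffled_final frame built above): about 75% of the questions are answered, so always predicting 'answered' is the naive baseline to beat.\n\n    shuffled_final['IsAnswered'].value_counts(normalize=True)",
"_____no_output_____"
]
],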
[
[
"shuffled_final.groupby(\"IsAnswered\").count()",
"_____no_output_____"
],
[
"shuffled_final.to_pickle(\"CQA_FRONTEND/static/data/shuffled_perc\") # where to save it, usually as a .pkl",
"_____no_output_____"
],
[
"xtrain = shuffled_final.drop(axis = 1, columns=['IsAnswered'])\nlabels = shuffled_final['IsAnswered']",
"_____no_output_____"
]
],
[
[
"For the question that has been used as test question: The model predicted that, the question will be answered with 83% probability. This is the final model result for the user interface implementation. Different models have been tried and results of them has shown in the report. This model worked with ≈ 80% accuracy when the data is divided into train and test set.",
"_____no_output_____"
]
],
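[
[
"The train/test accuracy mentioned above is not computed in this notebook. A minimal sketch of how it could be checked, reusing the xtrain and labels defined earlier and the same hyperparameters as the final model below (a sketch, not the project's original evaluation code):\n\n    from sklearn.model_selection import train_test_split\n    from sklearn.metrics import accuracy_score\n\n    X_tr, X_te, y_tr, y_te = train_test_split(xtrain, labels, test_size=0.2, random_state=50)\n    clf = RandomForestClassifier(bootstrap=True, n_estimators=100, min_samples_leaf=1,\n                                 random_state=50, max_depth=50, min_samples_split=20)\n    clf.fit(X_tr, y_tr)\n    accuracy_score(y_te, clf.predict(X_te))",
"_____no_output_____"
]
],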
[
[
"# fit final model\nrfm=RandomForestClassifier(bootstrap = True,n_estimators=100,min_samples_leaf=1,\n random_state=50, max_depth= 50,min_samples_split= 20)\n\nrfm.fit(xtrain,labels)\n\nynew = rfm.predict_proba(final_test)\n# show the inputs and predicted outputs\nfor i in range(len(final_test)):\n\tprint(\" Predicted=%s\" % (ynew[i]))",
" Predicted=[0.25950955 0.74049045]\n"
]
],
[
[
"The printed results show that\n\n* The question is predicted to be answered with ≈ 81% probability\n* The question is predicted to be not answered with ≈ 19% probability",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
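[
[
"To quantify whatever relationship the scatter plot shows between the two variables, a quick check (a hedged sketch using the lists built above) is the Pearson correlation coefficient:\n\n    import numpy as np\n    np.corrcoef(fico_score, irate)[0, 1]",
"_____no_output_____"
]
],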
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecc786c9c8a623e86ec01b9415a361b801958a62 | 10,989 | ipynb | Jupyter Notebook | Data Science Course/1. Programming/3. Python (with solutions)/Module 1 - Python Intro and Numpy/Practice Solution/01-Python Crash Course Exercises - Solutions.ipynb | tensorbored/career-now-program | 3c942fb1c4cd3d7f3bd4a30436b2735577d45dcd | [
"MIT"
] | null | null | null | Data Science Course/1. Programming/3. Python (with solutions)/Module 1 - Python Intro and Numpy/Practice Solution/01-Python Crash Course Exercises - Solutions.ipynb | tensorbored/career-now-program | 3c942fb1c4cd3d7f3bd4a30436b2735577d45dcd | [
"MIT"
] | null | null | null | Data Science Course/1. Programming/3. Python (with solutions)/Module 1 - Python Intro and Numpy/Practice Solution/01-Python Crash Course Exercises - Solutions.ipynb | tensorbored/career-now-program | 3c942fb1c4cd3d7f3bd4a30436b2735577d45dcd | [
"MIT"
] | null | null | null | 20.387755 | 163 | 0.48503 | [
[
[
"\n<center> <h1 style=\"background-color:#975be5; color:white\"><br>02-NumPy Arrays<br></h1></center>\n____",
"_____no_output_____"
],
[
"<div align=\"right\">\n <b><a href=\"https://keytodatascience.com/\">KeytoDataScience.com </a></b>\n</div>",
"_____no_output_____"
],
[
"## Exercises\n\nAnswer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.",
"_____no_output_____"
],
[
"### What is 7 to the power of 4?",
"_____no_output_____"
]
],
[
[
"7**4",
"_____no_output_____"
]
],
[
[
"### Split this string:\n\nSplit this string\n\n s = \"Hi there Sam!\"\n \ninto a list.",
"_____no_output_____"
]
],
[
[
"s = 'Hi there Sam!'",
"_____no_output_____"
],
[
"s.split()",
"_____no_output_____"
]
],
[
[
"### Print the following string\n\nGiven the variables:\n\n planet = \"Earth\"\n diameter = 12742\n\n**Use .format() to print the following string:**\n\n `The diameter of Earth is 12742 kilometers.`",
"_____no_output_____"
]
],
[
[
"planet = \"Earth\"\ndiameter = 12742",
"_____no_output_____"
],
[
"print(\"The diameter of {} is {} kilometers.\".format(planet,diameter))",
"The diameter of Earth is 12742 kilometers.\n"
]
],
[
[
"### Fetch the word \"hello\"\n\n**PART 1:**\n\nGiven this nested list, use indexing to grab the word \"hello\"",
"_____no_output_____"
]
],
[
[
"lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]",
"_____no_output_____"
],
[
"lst[3][1][2][0]",
"_____no_output_____"
]
],
[
[
"**PART 2:** \n\nGiven this nest dictionary grab the word \"hello\". Be prepared, this will be annoying/tricky",
"_____no_output_____"
]
],
[
[
"d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}",
"_____no_output_____"
],
[
"d['k1'][3]['tricky'][3]['target'][3]",
"_____no_output_____"
]
],
[
[
"### What is the main difference between a tuple and a list?",
"_____no_output_____"
]
],
[
[
"# Tuple is immutable",
"_____no_output_____"
]
],
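[
[
"A small illustration of that difference (a sketch; the exact error text can vary by Python version):\n\n    my_list = [1, 2, 3]\n    my_list[0] = 10    # fine, lists are mutable\n    my_tuple = (1, 2, 3)\n    # my_tuple[0] = 10 # would raise TypeError: 'tuple' object does not support item assignment",
"_____no_output_____"
]
],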
[
[
"### Create a function - Grabs email domain\n\nCreate a function that grabs the email website domain from a string in the form:\n\n [email protected]\n \n**So for example, passing \"[email protected]\" would return: domain.com**",
"_____no_output_____"
]
],
[
[
"def domainGet(email):\n return email.split('@')[-1]",
"_____no_output_____"
],
[
"domainGet('[email protected]')",
"_____no_output_____"
]
],
[
[
"### Create a function - Check if a word is present\n\nCreate a basic function that returns True if the word 'dog' is contained in the input string. \n\nDon't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.",
"_____no_output_____"
]
],
[
[
"def findDog(st):\n return 'dog' in st.lower().split()",
"_____no_output_____"
],
[
"findDog('Is there a dog here?')",
"_____no_output_____"
]
],
[
[
"### Create a function - Counts the occurance of a word\n\nCreate a function that counts the number of times the word \"dog\" occurs in a string.",
"_____no_output_____"
]
],
[
[
"def countDog(st):\n count = 0\n for word in st.lower().split():\n if word == 'dog':\n count += 1\n return count",
"_____no_output_____"
],
[
"countDog('This dog runs faster than the other dog dude!')",
"_____no_output_____"
]
],
[
[
"### Remove words that don't start with 's'\n\nUse lambda expressions and the filter() function to filter out words from a list that don't start with the **letter 's'**. For example:\n\n seq = ['soup','dog','salad','cat','great']\n\n**should be filtered down to:**\n\n ['soup','salad']",
"_____no_output_____"
]
],
[
[
"seq = ['soup','dog','salad','cat','great']",
"_____no_output_____"
],
[
"list(filter(lambda word: word[0]=='s',seq))",
"_____no_output_____"
]
],
[
[
"## Final Problem\n\n*You are driving a little too fast, and a police officer stops you. \n\nWrite a function to return one of 3 possible results: **\"No ticket\", \"Small ticket\", or \"Big Ticket\".**\n\n- If your speed is 60 or less, the result is \"No Ticket\".\n- If speed is between 61 and 80 inclusive, the result is \"Small Ticket\".\n- If speed is 81 or more, the result is \"Big Ticket\". \n\nUnless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 units higher in all cases.",
"_____no_output_____"
]
],
[
[
"def caught_speeding(speed, is_birthday):\n \n if is_birthday:\n speeding = speed - 5\n else:\n speeding = speed\n \n if speeding > 80:\n return 'Big Ticket'\n elif speeding > 60:\n return 'Small Ticket'\n else:\n return 'No Ticket'",
"_____no_output_____"
],
[
"caught_speeding(81,True)",
"_____no_output_____"
],
[
"caught_speeding(81,False)",
"_____no_output_____"
]
],
[
[
"\n<center> <h1 style=\"background-color:#975be5; color:white\"><br>Great Job!<br></h1><br></center>\n____",
"_____no_output_____"
],
[
"<div align=\"right\">\n <b><a href=\"https://keytodatascience.com/\">KeytoDataScience.com</a></b>\n</div>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ecc7a3e071c4f0ce23756fdf84ecda2cddbed290 | 239,329 | ipynb | Jupyter Notebook | timeseries_v4.ipynb | Lbrady1025/OPTN-analysis | 843f7bb4bcdb16d80267edbe6304cc7a2ea65e9c | [
"CC-BY-3.0"
] | 2 | 2021-01-22T16:15:09.000Z | 2021-03-01T15:43:27.000Z | timeseries_v4.ipynb | Lbrady1025/OPTN-analysis | 843f7bb4bcdb16d80267edbe6304cc7a2ea65e9c | [
"CC-BY-3.0"
] | null | null | null | timeseries_v4.ipynb | Lbrady1025/OPTN-analysis | 843f7bb4bcdb16d80267edbe6304cc7a2ea65e9c | [
"CC-BY-3.0"
] | null | null | null | 204.031543 | 51,879 | 0.746646 | [
[
[
"#Import Dependencies\nimport pandas as pd\nimport numpy as np\nimport statsmodels.api as sm\nfrom statsmodels.tsa.api import VAR\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model",
"_____no_output_____"
],
[
"# Load dataset\ndf = pd.read_csv('timeseries_testv4.csv')\ndf.head()\n",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
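[
"# The cells below use a frame named `data` with 'Removed' and 'Waiting_List' columns,\n# but it is never defined in this notebook. Assumption: it comes from the same CSV,\n# in which case something like the following would recreate it.\ndata = df[['Removed', 'Waiting_List']]",
"_____no_output_____"
],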
[
"np.asarray(data)",
"_____no_output_____"
],
[
"# Declare Variables\nX = df[['Diabetes','Obesity','Overdose','Pct_O']]\ny = df['Total_Adj']",
"_____no_output_____"
],
[
"# Make a small forecast\npred = model_fit.forecast(model_fit.y, steps=1)\nprint(pred)",
"[[33554.05900626 33578.92109699]]\n"
],
[
"# Import Granger's Causality Test\nfrom statsmodels.tsa.stattools import grangercausalitytests\ngranger_test = sm.tsa.stattools.grangercausalitytests(data, maxlag=2, verbose=True)\ngranger_test",
"\nGranger Causality\nnumber of lags (no zero) 1\nssr based F test: F=0.1055 , p=0.7484 , df_denom=22, df_num=1\nssr based chi2 test: chi2=0.1199 , p=0.7291 , df=1\nlikelihood ratio test: chi2=0.1196 , p=0.7294 , df=1\nparameter F test: F=0.1055 , p=0.7484 , df_denom=22, df_num=1\n\nGranger Causality\nnumber of lags (no zero) 2\nssr based F test: F=0.2840 , p=0.7559 , df_denom=19, df_num=2\nssr based chi2 test: chi2=0.7175 , p=0.6985 , df=2\nlikelihood ratio test: chi2=0.7070 , p=0.7022 , df=2\nparameter F test: F=0.2840 , p=0.7559 , df_denom=19, df_num=2\n"
],
[
"# Time to Split the data Determine validity of nobs\nnobs = 4\ndata_train, data_test = data[0:-nobs], data[-nobs:]",
"_____no_output_____"
],
[
"# Check for stationarity ADF (ADF Test) / unit root test\nfrom statsmodels.tsa.stattools import adfuller\n\ndef adf_test(ts, signif=0.05):\n datatest = adfuller(ts, autolag='AIC')\n adf = pd.Series(datatest[0:4], index=['Test Statistic', 'p-value','# Lags','# Observations'])\n for key, value in datatest[4].items():\n adf['Critical Value (%s)'%key] = value\n print (adf)\n\n p = adf['p-value']\n\n if p <= signif:\n print(f\" Series is Stationary\")\n else:\n print(f\" Series is Non-Stationary\")\n\n# apply adf test on the series\nadf_test(data_train[\"Removed\"])\nadf_test(data_train[\"Waiting_List\"])",
"Test Statistic -2.635286\np-value 0.085921\n# Lags 9.000000\n# Observations 12.000000\nCritical Value (1%) -4.137829\nCritical Value (5%) -3.154972\nCritical Value (10%) -2.714477\ndtype: float64\n Series is Non-Stationary\nTest Statistic -4.247470\np-value 0.000547\n# Lags 8.000000\n# Observations 13.000000\nCritical Value (1%) -4.068854\nCritical Value (5%) -3.127149\nCritical Value (10%) -2.701730\ndtype: float64\n Series is Stationary\n"
],
[
"#Make data stationary 1st difference\ndata_differenced = data_train.diff().dropna()\n# stationary test again with differenced data\nadf_test(data_differenced[\"Removed\"])",
"Test Statistic -3.256404\np-value 0.016943\n# Lags 0.000000\n# Observations 20.000000\nCritical Value (1%) -3.809209\nCritical Value (5%) -3.021645\nCritical Value (10%) -2.650713\ndtype: float64\n Series is Stationary\n"
],
[
"# Modeling & Fitting, also determin maxlags: 9 appears to be best\nmodel = VAR(data_differenced)\nresults = model.fit(maxlags=9, ic='aic')\nresults.summary()\n",
"_____no_output_____"
],
[
"# Forecasting\nlag_order = results.k_ar\nresults.forecast(data.values[-lag_order:], 5)",
"_____no_output_____"
],
[
"# Plot with Forecasts\nresults.plot_forecast(5)",
"_____no_output_____"
],
[
"# Evaluating Forecast Error Variance Decomposition (FEVD)\nfevd = results.fevd(5)\nfevd.summary()",
"FEVD for Removed\n Removed Waiting_List\n0 1.000000 0.000000\n1 0.668267 0.331733\n2 0.628909 0.371091\n3 0.567532 0.432468\n4 0.622384 0.377616\n\nFEVD for Waiting_List\n Removed Waiting_List\n0 0.493203 0.506797\n1 0.461294 0.538706\n2 0.713009 0.286991\n3 0.556513 0.443487\n4 0.616783 0.383217\n\n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc7b97011dbe095f1bf828c762e17a5203291a9 | 48,161 | ipynb | Jupyter Notebook | 10_tensorflow2_estimator/2_use_estimator.ipynb | xiabai84/tensorflow_for_enginner | 8036239050e1142aa708429d5c3febff4cfb9a88 | [
"MIT"
] | 1 | 2019-04-12T12:36:23.000Z | 2019-04-12T12:36:23.000Z | 10_tensorflow2_estimator/2_use_estimator.ipynb | xiabai84/tensorflow_for_enginner | 8036239050e1142aa708429d5c3febff4cfb9a88 | [
"MIT"
] | null | null | null | 10_tensorflow2_estimator/2_use_estimator.ipynb | xiabai84/tensorflow_for_enginner | 8036239050e1142aa708429d5c3febff4cfb9a88 | [
"MIT"
] | null | null | null | 148.645062 | 7,311 | 0.723698 | [
[
[
"### Estimator API",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nfrom tensorflow import keras\nimport pandas as pd\nimport os",
"_____no_output_____"
],
[
"train_file = \"./data/titanic/train.csv\"\neval_file = \"./data/titanic/eval.csv\"\n\ntrain_df = pd.read_csv(train_file)\neval_df = pd.read_csv(eval_file)\n\n# dataframe pip function remove column from dataset and take the removed column\ny_train = train_df.pop('survived')\ny_eval = eval_df.pop('survived')\n\nprint(train_df.head())\nprint(eval_df.head())",
" sex age n_siblings_spouses parch ... class deck embark_town alone\n0 male 22.0 1 0 ... Third unknown Southampton n\n1 female 38.0 1 0 ... First C Cherbourg n\n2 female 26.0 0 0 ... Third unknown Southampton y\n3 female 35.0 1 0 ... First C Southampton n\n4 male 28.0 0 0 ... Third unknown Queenstown y\n\n[5 rows x 9 columns]\n sex age n_siblings_spouses parch ... class deck embark_town alone\n0 male 35.0 0 0 ... Third unknown Southampton y\n1 male 54.0 0 0 ... First E Southampton y\n2 female 58.0 0 0 ... First C Southampton y\n3 female 55.0 0 0 ... Second unknown Southampton y\n4 male 34.0 0 0 ... Second D Southampton y\n\n[5 rows x 9 columns]\n"
],
[
"# discrete value\ncategorical_columns = ['sex', 'n_siblings_spouses', 'parch', 'class',\n 'deck', 'embark_town', 'alone']\nnumeric_columns = ['age', 'fare']\n\nfeature_columns = []\n\nfor categorical_column in categorical_columns:\n vocab = train_df[categorical_column].unique()\n \n print(categorical_column, vocab)\n \n feature_columns.append(\n # indicator_column -> one_hot discrete value\n tf.feature_column.indicator_column(\n # fill feature_column with vocab data (name, possible value)\n tf.feature_column.categorical_column_with_vocabulary_list(\n categorical_column,\n vocab)))\n\nfor numeric_column in numeric_columns: \n feature_columns.append(\n # for numeric value just add them directly to feature_column\n tf.feature_column.numeric_column(numeric_column, dtype=tf.float32)\n )",
"sex ['male' 'female']\nn_siblings_spouses [1 0 3 4 2 5 8]\nparch [0 1 2 5 3 4]\nclass ['Third' 'First' 'Second']\ndeck ['unknown' 'C' 'G' 'A' 'B' 'D' 'F' 'E']\nembark_town ['Southampton' 'Cherbourg' 'Queenstown' 'unknown']\nalone ['n' 'y']\n"
],
[
"def make_dataset(data_df, label_df, epochs = 10, shuffle = True,\n batch_size = 32):\n dataset = tf.data.Dataset.from_tensor_slices(\n (dict(data_df), label_df))\n \n if shuffle == True:\n dataset = dataset.shuffle(10000)\n dataset = dataset.repeat(epochs).batch(batch_size)\n return dataset",
"_____no_output_____"
],
[
"model = keras.models.Sequential([\n keras.layers.DenseFeatures(feature_columns=feature_columns),\n keras.layers.Dense(100, activation = 'relu'),\n keras.layers.Dense(100, activation = 'relu'),\n keras.layers.Dense(2, activation = 'softmax')\n])\n\nmodel.compile(loss = \"sparse_categorical_crossentropy\",\n optimizer = keras.optimizers.SGD(learning_rate=0.01),\n metrics = [\"accuracy\"])",
"_____no_output_____"
],
[
"output_dir = 'baseline_model'\n\nif not os.path.exists(output_dir):\n os.mkdir(output_dir)\n\nbaseline_estimator = tf.estimator.BaselineClassifier(\n model_dir = output_dir,\n n_classes = 2\n)\n\nbaseline_estimator.train(input_fn = lambda: make_dataset(\n train_df, y_train, epochs = 100\n))",
"WARNING: Logging before flag parsing goes to stderr.\nW0724 10:00:28.642052 140406365710144 deprecation.py:323] From /home/bai/.virtualenvs/tensorflow2/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.\nW0724 10:00:28.773476 140406365710144 deprecation.py:323] From /home/bai/.virtualenvs/tensorflow2/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/head/base_head.py:574: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\nW0724 10:00:28.845140 140406365710144 deprecation.py:323] From /home/bai/.virtualenvs/tensorflow2/lib/python3.6/site-packages/tensorflow/python/ops/nn_impl.py:182: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nW0724 10:00:28.929767 140406365710144 deprecation.py:506] From /home/bai/.virtualenvs/tensorflow2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/ftrl.py:142: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nCall initializer instance with the dtype argument instead of passing it to the constructor\n2019-07-24 10:00:29.059423: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\n2019-07-24 10:00:29.081751: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2208000000 Hz\n2019-07-24 10:00:29.082048: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3a00dd0 executing computations on platform Host. Devices:\n2019-07-24 10:00:29.082080: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>\n2019-07-24 10:00:29.112805: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1483] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.\n"
],
[
"baseline_estimator.evaluate(input_fn = lambda: make_dataset(\n eval_df, y_eval, epochs = 1, shuffle = False, batch_size = 20))",
"_____no_output_____"
],
[
"# 2. use estimator (alternative to fit)\nestimator = keras.estimator.model_to_estimator(model)\n\n# input_fn -> must be a function i.g. a lambda function\n# return \n# a. (features, labels)\n# b. dataset(feature, label)\nestimator.train(\n input_fn = lambda: make_dataset(train_df, y_train, epochs = 100)\n)",
"_____no_output_____"
]
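,
[
"# A sketch of evaluating the converted estimator, mirroring the baseline evaluation above\n# (assumes the training cell above has been run).\nestimator.evaluate(\n    input_fn = lambda: make_dataset(eval_df, y_eval, epochs = 1, shuffle = False)\n)",
"_____no_output_____"
]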
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc7cb56b1c9bed3f40fd7e46d4e32977495f090 | 40,121 | ipynb | Jupyter Notebook | notebooks/1-cvillafraz-process-data.ipynb | cvillafraz/personal-linkedin-eda | 92037a4eb1d246328355ae1a70ec69481edaae95 | [
"MIT"
] | null | null | null | notebooks/1-cvillafraz-process-data.ipynb | cvillafraz/personal-linkedin-eda | 92037a4eb1d246328355ae1a70ec69481edaae95 | [
"MIT"
] | null | null | null | notebooks/1-cvillafraz-process-data.ipynb | cvillafraz/personal-linkedin-eda | 92037a4eb1d246328355ae1a70ec69481edaae95 | [
"MIT"
] | null | null | null | 47.201176 | 2,849 | 0.484135 | [
[
[
"# Read and process LinkedIn data\n",
"_____no_output_____"
]
],
[
[
"%load_ext nb_black\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"## Import libraries\n",
"_____no_output_____"
]
],
[
[
"import personal_linkedin_eda.utils.paths as path\nimport personal_linkedin_eda.utils.preprocess as prep\nimport pandas as pd\nimport numpy as np\nimport nltk\n\nnltk.download(\"stopwords\")",
"[nltk_data] Downloading package stopwords to\n[nltk_data] /home/cvillafraz/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n"
]
],
[
[
"## Load data and delete unnecessary columns\n",
"_____no_output_____"
]
],
[
[
"df_certifications = pd.read_csv(path.data_raw_dir(\"Certifications.csv\"))\ndf_connections = pd.read_csv(path.data_raw_dir(\"Connections.csv\"), header=2)\ndf_messages = pd.read_csv(path.data_raw_dir(\"messages.csv\"))\ndf_queries = pd.read_csv(path.data_raw_dir(\"SearchQueries.csv\"))",
"_____no_output_____"
],
[
"# Print all columns from each dataset\nprint(f\"Certifications columns: {df_certifications.columns}\\n\")\nprint(f\"Connections columns: {df_connections.columns}\\n\")\nprint(f\"Messages columns: {df_messages.columns}\\n\")\nprint(f\"Queries columns: {df_queries.columns}\")",
"Certifications columns: Index(['Name', 'Url', 'Authority', 'Started On', 'Finished On',\n 'License Number'],\n dtype='object')\n\nConnections columns: Index(['First Name', 'Last Name', 'Email Address', 'Company', 'Position',\n 'Connected On'],\n dtype='object')\n\nMessages columns: Index(['CONVERSATION ID', 'CONVERSATION TITLE', 'FROM', 'SENDER PROFILE URL',\n 'TO', 'DATE', 'SUBJECT', 'CONTENT', 'FOLDER'],\n dtype='object')\n\nQueries columns: Index(['Time', 'Search Query'], dtype='object')\n"
],
[
"df_certifications.drop(columns=[\"Url\", \"License Number\"], inplace=True)\ndf_connections.drop(columns=[\"First Name\", \"Last Name\", \"Email Address\"], inplace=True)\ndf_messages.drop(\n columns=[\"SENDER PROFILE URL\", \"FOLDER\", \"CONVERSATION ID\"], inplace=True\n)",
"_____no_output_____"
]
],
[
[
"## Check the structure of the datasets\n",
"_____no_output_____"
]
],
[
[
"df_certifications.head()",
"_____no_output_____"
],
[
"df_connections.head()",
"_____no_output_____"
],
[
"df_messages.head()",
"_____no_output_____"
],
[
"df_queries.head()",
"_____no_output_____"
]
],
[
[
"## Remove unnecessary/null values\n",
"_____no_output_____"
]
],
[
[
"print(f\"Certifications null values:\\n {df_certifications.isna().sum()}\\n\")\nprint(f\"Connections null values:\\n {df_connections.isna().sum()}\\n\")\nprint(f\"Messages null values:\\n {df_messages.isna().sum()}\\n\")\nprint(f\"Queries null values:\\n {df_queries.isna().sum()}\\n\")",
"Certifications null values:\n Name 0\nAuthority 0\nStarted On 0\nFinished On 27\ndtype: int64\n\nConnections null values:\n Company 4\nPosition 4\nConnected On 0\ndtype: int64\n\nMessages null values:\n CONVERSATION TITLE 184\nFROM 3\nTO 3\nDATE 0\nSUBJECT 184\nCONTENT 3\ndtype: int64\n\nQueries null values:\n Time 0\nSearch Query 0\ndtype: int64\n\n"
],
[
"df_connections.dropna(inplace=True)\ndf_messages.dropna(inplace=True, subset=[\"CONTENT\"])\n### Remove promo messages from LinkedIn itself\ndf_messages = df_messages[\n ~df_messages[\"FROM\"].str.contains(\"from linkedin|linkedin premium\", case=False)\n]",
"_____no_output_____"
]
],
[
[
"## Text preprocessing\n",
"_____no_output_____"
],
[
"### Normalize common position names\n",
"_____no_output_____"
]
],
[
[
"position_patterns = [\n (\n df_connections[\"Position\"].str.contains(\"Full Stack|Web Developer\", case=False),\n \"Full Stack Developer\",\n ),\n (\n df_connections[\"Position\"].str.contains(\n \"Frontend|Front End|Front-end\", case=False\n ),\n \"Front-end Developer\",\n ),\n (\n df_connections[\"Position\"].str.contains(\n \"Backend|Back-end|Software\", case=False\n ),\n \"Software Engineer\",\n ),\n (\n df_connections[\"Position\"].str.contains(\n \"Ciencia de datos|Data Scientist\", case=False\n ),\n \"Data Scientist\",\n ),\n (\n df_connections[\"Position\"].str.contains(\n \"CEO|Chief Executive Officer|Business Owner\", case=False\n ),\n \"CEO\",\n ),\n (\n df_connections[\"Position\"].str.contains(\"Founder\", regex=False, case=False),\n \"Founder\",\n ),\n]",
"_____no_output_____"
],
[
"position_criteria, position_values = zip(*position_patterns)\ndf_connections[\"Position Normalized\"] = np.select(\n position_criteria, position_values, None\n)\n# Replace \"None\" values with original position\ndf_connections[\"Position Normalized\"] = df_connections[\n \"Position Normalized\"\n].combine_first(df_connections[\"Position\"])",
"_____no_output_____"
]
],
[
[
"### Remove stopwords and normalize messages and search queries\n",
"_____no_output_____"
]
],
[
[
"remove_stopwords = prep.remove_stopwords\nnormalize_text = prep.normalize_text\ndf_messages[\"CONTENT\"] = df_messages[\"CONTENT\"].apply(\n lambda str: remove_stopwords(normalize_text(str))\n)\ndf_messages[\"SUBJECT\"] = df_messages[\"SUBJECT\"].apply(\n lambda string: remove_stopwords(normalize_text(string)) if isinstance(string, str) else None\n)\ndf_queries[\"Search Query\"] = df_queries[\"Search Query\"].apply(\n lambda str: normalize_text(str)\n)",
"_____no_output_____"
]
],
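[
[
"The cell above calls `normalize_text` and `remove_stopwords` from the local `personal_linkedin_eda.utils.preprocess` module, whose source is not shown in this notebook. The sketch below is only an illustration of what such helpers might look like, consistent with how they are used here; the function names come from this notebook, but the bodies (lowercasing, punctuation stripping, NLTK English and Spanish stopword lists) are assumptions rather than the project's actual implementation.\n",
"_____no_output_____"
]
],
[
[
"# Illustrative sketch only -- the real helpers live in\n# personal_linkedin_eda.utils.preprocess and may differ from this.\nimport re\nfrom nltk.corpus import stopwords\n\n# Assumption: messages mix English and Spanish, so both stopword lists are used\n_STOPWORDS = set(stopwords.words('english')) | set(stopwords.words('spanish'))\n\ndef normalize_text(text):\n    # Lowercase and replace anything that is not a letter, digit or space\n    return re.sub('[^a-z0-9 ]', ' ', text.lower())\n\ndef remove_stopwords(text):\n    # Keep only tokens that are not in the stopword set\n    return ' '.join(tok for tok in text.split() if tok not in _STOPWORDS)",
"_____no_output_____"
]
],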
[
[
"## Save data\n",
"_____no_output_____"
]
],
[
[
"df_certifications.to_parquet(path.data_processed_dir(\"certifications_clean.parquet\"))\ndf_connections.to_parquet(path.data_processed_dir(\"connections_clean.parquet\"))\ndf_messages.to_parquet(path.data_processed_dir(\"messages_clean.parquet\"))\ndf_queries.to_parquet(path.data_processed_dir(\"queries_clean.parquet\"))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc7d05f32357f1e9dfebecd31a4eb50887aa61b | 477,303 | ipynb | Jupyter Notebook | 1-Introduction/01-defining-data-science/.ipynb_checkpoints/notebook-checkpoint.ipynb | spakinrinwa/Data-Science-For-Beginners | 7477ff53513b8b9d7e7d483e433543a6a6fffee2 | [
"MIT"
] | null | null | null | 1-Introduction/01-defining-data-science/.ipynb_checkpoints/notebook-checkpoint.ipynb | spakinrinwa/Data-Science-For-Beginners | 7477ff53513b8b9d7e7d483e433543a6a6fffee2 | [
"MIT"
] | null | null | null | 1-Introduction/01-defining-data-science/.ipynb_checkpoints/notebook-checkpoint.ipynb | spakinrinwa/Data-Science-For-Beginners | 7477ff53513b8b9d7e7d483e433543a6a6fffee2 | [
"MIT"
] | null | null | null | 1,139.147971 | 304,582 | 0.95554 | [
[
[
"# Challenge: Analyzing Text about Data Science\n\nIn this example, let's do a simple exercise that covers all steps of a traditional data science process. You do not have to write any code, you can just click on the cells below to execute them and observe the result. As a challenge, you are encouraged to try this code out with different data. \n\n## Goal\n\nIn this lesson, we have been discussing different concepts related to Data Science. Let's try to discover more related concepts by doing some **text mining**. We will start with a text about Data Science, extract keywords from it, and then try to visualize the result.\n\nAs a text, I will use the page on Data Science from Wikipedia:",
"_____no_output_____"
]
],
[
[
"url = 'https://en.wikipedia.org/wiki/Data_science'",
"_____no_output_____"
]
],
[
[
"## Step 1: Getting the Data\n\nFirst step in every data science process is getting the data. We will use `requests` library to do that:",
"_____no_output_____"
]
],
[
[
"import requests\n\ntext = requests.get(url).content.decode('utf-8')\nprint(text[:1000])",
"<!DOCTYPE html>\n<html class=\"client-nojs\" lang=\"en\" dir=\"ltr\">\n<head>\n<meta charset=\"UTF-8\"/>\n<title>Data science - Wikipedia</title>\n<script>document.documentElement.className=\"client-js\";RLCONF={\"wgBreakFrames\":false,\"wgSeparatorTransformTable\":[\"\",\"\"],\"wgDigitTransformTable\":[\"\",\"\"],\"wgDefaultDateFormat\":\"dmy\",\"wgMonthNames\":[\"\",\"January\",\"February\",\"March\",\"April\",\"May\",\"June\",\"July\",\"August\",\"September\",\"October\",\"November\",\"December\"],\"wgRequestId\":\"1a117efe-079a-4eff-8d2a-d4616da1bc52\",\"wgCSPNonce\":false,\"wgCanonicalNamespace\":\"\",\"wgCanonicalSpecialPageName\":false,\"wgNamespaceNumber\":0,\"wgPageName\":\"Data_science\",\"wgTitle\":\"Data science\",\"wgCurRevisionId\":1066671640,\"wgRevisionId\":1066671640,\"wgArticleId\":35458904,\"wgIsArticle\":true,\"wgIsRedirect\":false,\"wgAction\":\"view\",\"wgUserName\":null,\"wgUserGroups\":[\"*\"],\"wgCategories\":[\"CS1 maint: others\",\"Articles with short description\",\"Short description matches Wikidata\",\"Use dmy dates from August 2021\",\"Information science\",\"Computer \n"
]
],
[
[
"## Step 2: Transforming the Data\n\nThe next step is to convert the data into the form suitable for processing. In our case, we have downloaded HTML source code from the page, and we need to convert it into plain text.\n\nThere are many ways this can be done. We will use the simplest built-in [HTMLParser](https://docs.python.org/3/library/html.parser.html) object from Python. We need to subclass the `HTMLParser` class and define the code that will collect all text inside HTML tags, except `<script>` and `<style>` tags.",
"_____no_output_____"
]
],
[
[
"from html.parser import HTMLParser\n\nclass MyHTMLParser(HTMLParser):\n script = False\n res = \"\"\n def handle_starttag(self, tag, attrs):\n if tag.lower() in [\"script\",\"style\"]:\n self.script = True\n def handle_endtag(self, tag):\n if tag.lower() in [\"script\",\"style\"]:\n self.script = False\n def handle_data(self, data):\n if str.strip(data)==\"\" or self.script:\n return\n self.res += ' '+data.replace('[ edit ]','')\n\nparser = MyHTMLParser()\nparser.feed(text)\ntext = parser.res\nprint(text[:1000])",
" Data science - Wikipedia Data science From Wikipedia, the free encyclopedia Jump to navigation Jump to search Interdisciplinary field of study focused on deriving knowledge and insights from data Not to be confused with information science . The existence of Comet NEOWISE (here depicted as a series of red dots) was discovered by analyzing astronomical survey data acquired by a space telescope , the Wide-field Infrared Survey Explorer . Part of a series on Machine learning and data mining Problems Classification Clustering Regression Anomaly detection Data Cleaning AutoML Association rules Reinforcement learning Structured prediction Feature engineering Feature learning Online learning Semi-supervised learning Unsupervised learning Learning to rank Grammar induction Supervised learning ( classification • regression ) Decision trees Ensembles Bagging Boosting Random forest k -NN Linear regression Naive Bayes Artificial neural networks Logistic regression Perceptron Relevance v\n"
]
],
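[
[
"As noted above, there are many ways to strip HTML. For comparison, below is a hedged alternative using the third-party BeautifulSoup library instead of the built-in `HTMLParser`; it is not used in the rest of this notebook and would require installing `beautifulsoup4` first.",
"_____no_output_____"
]
],
[
[
"# Alternative sketch (optional): the same extraction using BeautifulSoup.\n# Requires: pip install beautifulsoup4\nfrom bs4 import BeautifulSoup\n\nsoup = BeautifulSoup(requests.get(url).content, 'html.parser')\n# Drop script/style elements, mirroring the HTMLParser subclass above\nfor tag in soup(['script', 'style']):\n    tag.decompose()\nalt_text = soup.get_text(separator=' ')\nprint(alt_text[:200])",
"_____no_output_____"
]
],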
[
[
"## Step 3: Getting Insights\n\nThe most important step is to turn our data into some form from which we can draw insights. In our case, we want to extract keywords from the text, and see which keywords are more meaningful.\n\nWe will use Python library called [RAKE](https://github.com/aneesha/RAKE) for keyword extraction. First, let's install this library in case it is not present: ",
"_____no_output_____"
]
],
[
[
"import sys\n!{sys.executable} -m pip install nlp_rake",
"Requirement already satisfied: nlp_rake in c:\\users\\user\\anaconda3\\lib\\site-packages (0.0.2)\nRequirement already satisfied: langdetect>=1.0.8 in c:\\users\\user\\anaconda3\\lib\\site-packages (from nlp_rake) (1.0.9)\nRequirement already satisfied: numpy>=1.14.4 in c:\\users\\user\\anaconda3\\lib\\site-packages (from nlp_rake) (1.19.1)\nRequirement already satisfied: pyrsistent>=0.14.2 in c:\\users\\user\\anaconda3\\lib\\site-packages (from nlp_rake) (0.16.0)\nRequirement already satisfied: regex>=2018.6.6 in c:\\users\\user\\anaconda3\\lib\\site-packages (from nlp_rake) (2020.6.8)\nRequirement already satisfied: six in c:\\users\\user\\anaconda3\\lib\\site-packages (from langdetect>=1.0.8->nlp_rake) (1.15.0)\n"
]
],
[
[
"The main functionality is available from `Rake` object, which we can customize using some parameters. In our case, we will set the minimum length of a keyword to 5 characters, minimum frequency of a keyword in the document to 3, and maximum number of words in a keyword - to 2. Feel free to play around with other values and observe the result.",
"_____no_output_____"
]
],
[
[
"import nlp_rake\nextractor = nlp_rake.Rake(max_words=2,min_freq=3,min_chars=5)\nres = extractor.apply(text)\nres",
"_____no_output_____"
]
],
[
[
"\nWe obtained a list terms together with associated degree of importance. As you can see, the most relevant disciplines, such as machine learning and big data, are present in the list at top positions.\n\n## Step 4: Visualizing the Result\n\nPeople can interpret the data best in the visual form. Thus it often makes sense to visualize the data in order to draw some insights. We can use `matplotlib` library in Python to plot simple distribution of the keywords with their relevance:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\ndef plot(pair_list):\n k,v = zip(*pair_list)\n plt.bar(range(len(k)),v)\n plt.xticks(range(len(k)),k,rotation='vertical')\n plt.show()\n\nplot(res)",
"_____no_output_____"
]
],
[
[
"There is, however, even better way to visualize word frequencies - using **Word Cloud**. We will need to install another library to plot the word cloud from our keyword list.",
"_____no_output_____"
]
],
[
[
"!{sys.executable} -m pip install wordcloud",
"Collecting wordcloud\n Downloading wordcloud-1.8.1-cp37-cp37m-win_amd64.whl (154 kB)\nRequirement already satisfied: matplotlib in c:\\users\\user\\anaconda3\\lib\\site-packages (from wordcloud) (3.2.1)\nRequirement already satisfied: numpy>=1.6.1 in c:\\users\\user\\anaconda3\\lib\\site-packages (from wordcloud) (1.19.1)\nRequirement already satisfied: pillow in c:\\users\\user\\anaconda3\\lib\\site-packages (from wordcloud) (7.1.2)\nRequirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\user\\anaconda3\\lib\\site-packages (from matplotlib->wordcloud) (1.2.0)\nRequirement already satisfied: python-dateutil>=2.1 in c:\\users\\user\\anaconda3\\lib\\site-packages (from matplotlib->wordcloud) (2.8.1)\nRequirement already satisfied: cycler>=0.10 in c:\\users\\user\\anaconda3\\lib\\site-packages (from matplotlib->wordcloud) (0.10.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in c:\\users\\user\\anaconda3\\lib\\site-packages (from matplotlib->wordcloud) (2.4.7)\nRequirement already satisfied: six>=1.5 in c:\\users\\user\\anaconda3\\lib\\site-packages (from python-dateutil>=2.1->matplotlib->wordcloud) (1.15.0)\nInstalling collected packages: wordcloud\nSuccessfully installed wordcloud-1.8.1\n"
]
],
[
[
"`WordCloud` object is responsible for taking in either original text, or pre-computed list of words with their frequencies, and returns and image, which can then be displayed using `matplotlib`:",
"_____no_output_____"
]
],
[
[
"from wordcloud import WordCloud\nimport matplotlib.pyplot as plt\n\nwc = WordCloud(background_color='white',width=800,height=600)\nplt.figure(figsize=(15,7))\nplt.imshow(wc.generate_from_frequencies({ k:v for k,v in res }))",
"_____no_output_____"
]
],
[
[
"We can also pass in the original text to `WordCloud` - let's see if we are able to get similar result:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(15,7))\nplt.imshow(wc.generate(text))",
"_____no_output_____"
],
[
"wc.generate(text).to_file('images/ds_wordcloud.png')",
"_____no_output_____"
]
],
[
[
"You can see that word cloud now looks more impressive, but it also contains a lot of noise (eg. unrelated words such as `Retrieved on`). Also, we get fewer keywords that consist of two words, such as *data scientist*, or *computer science*. This is because RAKE algorithm does much better job at selecting good keywords from text. This example illustrates the importance of data pre-processing and cleaning, because clear picture at the end will allow us to make better decisions.\n\nIn this exercise we have gone through a simple process of extracting some meaning from Wikipedia text, in the form of keywords and word cloud. This example is quite simple, but it demonstrates well all typical steps a data scientist will take when working with data, starting from data acquisition, up to visualization.\n\nIn our course we will discuss all those steps in detail. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecc7d1d4ad208096309ce74ce83555091334d9e4 | 386,854 | ipynb | Jupyter Notebook | Chapter08/54_Training_SSD.ipynb | aihill/Modern-Computer-Vision-with-PyTorch | b957acb0c42b11f4f2bff26d64c95e9a7bdfa46f | [
"MIT"
] | null | null | null | Chapter08/54_Training_SSD.ipynb | aihill/Modern-Computer-Vision-with-PyTorch | b957acb0c42b11f4f2bff26d64c95e9a7bdfa46f | [
"MIT"
] | null | null | null | Chapter08/54_Training_SSD.ipynb | aihill/Modern-Computer-Vision-with-PyTorch | b957acb0c42b11f4f2bff26d64c95e9a7bdfa46f | [
"MIT"
] | null | null | null | 555.02726 | 140,326 | 0.937351 | [
[
[
"<a href=\"https://colab.research.google.com/github/PacktPublishing/Hands-On-Computer-Vision-with-PyTorch/blob/master/Chapter08/Training_SSD.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import os\nif not os.path.exists('open-images-bus-trucks'):\n !pip install -q torch_snippets\n !wget --quiet https://www.dropbox.com/s/agmzwk95v96ihic/open-images-bus-trucks.tar.xz\n !tar -xf open-images-bus-trucks.tar.xz\n !rm open-images-bus-trucks.tar.xz\n !git clone https://github.com/sizhky/ssd-utils/\n%cd ssd-utils",
"Cloning into 'ssd-utils'...\nremote: Enumerating objects: 9, done.\u001b[K\nremote: Counting objects: 100% (9/9), done.\u001b[K\nremote: Compressing objects: 100% (8/8), done.\u001b[K\nremote: Total 9 (delta 0), reused 0 (delta 0), pack-reused 0\u001b[K\nUnpacking objects: 100% (9/9), done.\n/content/ssd-utils/ssd-utils\n"
],
[
"from torch_snippets import *\nDATA_ROOT = '../open-images-bus-trucks/'\nIMAGE_ROOT = f'{DATA_ROOT}/images'\nDF_RAW = df = pd.read_csv(f'{DATA_ROOT}/df.csv')\n\ndf = df[df['ImageID'].isin(df['ImageID'].unique().tolist())]\n\nlabel2target = {l:t+1 for t,l in enumerate(DF_RAW['LabelName'].unique())}\nlabel2target['background'] = 0\ntarget2label = {t:l for l,t in label2target.items()}\nbackground_class = label2target['background']\nnum_classes = len(label2target)\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'",
"_____no_output_____"
],
[
"import collections, os, torch\nfrom PIL import Image\nfrom torchvision import transforms\nnormalize = transforms.Normalize(\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]\n)\ndenormalize = transforms.Normalize(\n mean=[-0.485/0.229, -0.456/0.224, -0.406/0.255],\n std=[1/0.229, 1/0.224, 1/0.255]\n)\n\ndef preprocess_image(img):\n img = torch.tensor(img).permute(2,0,1)\n img = normalize(img)\n return img.to(device).float()\n \nclass OpenDataset(torch.utils.data.Dataset):\n w, h = 300, 300\n def __init__(self, df, image_dir=IMAGE_ROOT):\n self.image_dir = image_dir\n self.files = glob.glob(self.image_dir+'/*')\n self.df = df\n self.image_infos = df.ImageID.unique()\n logger.info(f'{len(self)} items loaded')\n \n def __getitem__(self, ix):\n # load images and masks\n image_id = self.image_infos[ix]\n img_path = find(image_id, self.files)\n img = Image.open(img_path).convert(\"RGB\")\n img = np.array(img.resize((self.w, self.h), resample=Image.BILINEAR))/255.\n data = df[df['ImageID'] == image_id]\n labels = data['LabelName'].values.tolist()\n data = data[['XMin','YMin','XMax','YMax']].values\n data[:,[0,2]] *= self.w\n data[:,[1,3]] *= self.h\n boxes = data.astype(np.uint32).tolist() # convert to absolute coordinates\n return img, boxes, labels\n\n def collate_fn(self, batch):\n images, boxes, labels = [], [], []\n for item in batch:\n img, image_boxes, image_labels = item\n img = preprocess_image(img)[None]\n images.append(img)\n boxes.append(torch.tensor(image_boxes).float().to(device)/300.)\n labels.append(torch.tensor([label2target[c] for c in image_labels]).long().to(device))\n images = torch.cat(images).to(device)\n return images, boxes, labels\n def __len__(self):\n return len(self.image_infos)",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\ntrn_ids, val_ids = train_test_split(df.ImageID.unique(), test_size=0.1, random_state=99)\ntrn_df, val_df = df[df['ImageID'].isin(trn_ids)], df[df['ImageID'].isin(val_ids)]\nlen(trn_df), len(val_df)\n\ntrain_ds = OpenDataset(trn_df)\ntest_ds = OpenDataset(val_df)\n\ntrain_loader = DataLoader(train_ds, batch_size=4, collate_fn=train_ds.collate_fn, drop_last=True)\ntest_loader = DataLoader(test_ds, batch_size=4, collate_fn=test_ds.collate_fn, drop_last=True)",
"2020-10-13 10:38:19.093 | INFO | __main__:__init__:25 - 13702 items loaded\n2020-10-13 10:38:19.138 | INFO | __main__:__init__:25 - 1523 items loaded\n"
],
[
"def train_batch(inputs, model, criterion, optimizer):\n model.train()\n N = len(train_loader)\n images, boxes, labels = inputs\n _regr, _clss = model(images)\n loss = criterion(_regr, _clss, boxes, labels)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n return loss\n \[email protected]_grad()\ndef validate_batch(inputs, model, criterion):\n model.eval()\n images, boxes, labels = inputs\n _regr, _clss = model(images)\n loss = criterion(_regr, _clss, boxes, labels)\n return loss",
"_____no_output_____"
],
[
"from model import SSD300, MultiBoxLoss\nfrom detect import *",
"_____no_output_____"
],
[
"n_epochs = 3\n\nmodel = SSD300(num_classes, device)\noptimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-5)\ncriterion = MultiBoxLoss(priors_cxcy=model.priors_cxcy, device=device)\n\nlog = Report(n_epochs=n_epochs)\nlogs_to_print = 5",
"Downloading: \"https://download.pytorch.org/models/vgg16-397923af.pth\" to /root/.cache/torch/hub/checkpoints/vgg16-397923af.pth\n"
],
[
"for epoch in range(n_epochs):\n _n = len(train_loader)\n for ix, inputs in enumerate(train_loader):\n loss = train_batch(inputs, model, criterion, optimizer)\n pos = (epoch + (ix+1)/_n)\n log.record(pos, trn_loss=loss.item(), end='\\r')\n\n _n = len(test_loader)\n for ix,inputs in enumerate(test_loader):\n loss = validate_batch(inputs, model, criterion)\n pos = (epoch + (ix+1)/_n)\n log.record(pos, val_loss=loss.item(), end='\\r')",
"_____no_output_____"
],
[
"image_paths = Glob(f'{DATA_ROOT}/images/*')\nimage_id = choose(test_ds.image_infos)\nimg_path = find(image_id, test_ds.files)\noriginal_image = Image.open(img_path, mode='r')\noriginal_image = original_image.convert('RGB')",
"2020-10-13 10:39:28.949 | INFO | torch_snippets.loader:Glob:178 - 15225 files found at ../open-images-bus-trucks//images/*\n"
],
[
"image_paths = Glob(f'{DATA_ROOT}/images/*')\nfor _ in range(3):\n image_id = choose(test_ds.image_infos)\n img_path = find(image_id, test_ds.files)\n original_image = Image.open(img_path, mode='r')\n bbs, labels, scores = detect(original_image, model, min_score=0.9, max_overlap=0.5,top_k=200, device=device)\n labels = [target2label[c.item()] for c in labels]\n label_with_conf = [f'{l} @ {s:.2f}' for l,s in zip(labels,scores)]\n print(bbs, label_with_conf)\n show(original_image, bbs=bbs, texts=label_with_conf, text_sz=10)\n",
"[[35, 34, 212, 123]] ['Truck @ 1.00']\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc7e7a88b5768fcd0a0e6d33b61ac463e99e994 | 9,522 | ipynb | Jupyter Notebook | Github/Github_Get_weekly_commits_from_repository.ipynb | vivard/awesome-notebooks | 899558bcc2165bb2155f5ab69ac922c6458e1799 | [
"BSD-3-Clause"
] | null | null | null | Github/Github_Get_weekly_commits_from_repository.ipynb | vivard/awesome-notebooks | 899558bcc2165bb2155f5ab69ac922c6458e1799 | [
"BSD-3-Clause"
] | null | null | null | Github/Github_Get_weekly_commits_from_repository.ipynb | vivard/awesome-notebooks | 899558bcc2165bb2155f5ab69ac922c6458e1799 | [
"BSD-3-Clause"
] | null | null | null | 28.255193 | 302 | 0.561647 | [
[
[
"# Github - Get weekly commits from repository\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Github/Github_Get_weekly_commits_from_repository.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>",
"_____no_output_____"
],
[
"**Tags:** #github #commits #stats #naas_drivers #plotly #linechart",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport plotly.express as px\nfrom naas_drivers import github\nimport naas",
"_____no_output_____"
]
],
[
[
"## Setup Github\n**How to find your personal access token on Github?**\n\n- First we need to create a personal access token to get the details of our organization from here: https://github.com/settings/tokens\n- You will be asked to select scopes for the token. Which scopes you choose will determine what information and actions you will be able to perform against the API.\n- You should be careful with the ones prefixed with write:, delete: and admin: as these might be quite destructive.\n- You can find description of each scope in docs here (https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps).",
"_____no_output_____"
]
],
[
[
"# Github repository url\nREPO_URL = \"https://github.com/jupyter-naas/awesome-notebooks\"\n\n# Github token\nGITHUB_TOKEN = \"ghp_fUYP0Z5i29AG4ggX8owctGnHUoVHi******\"",
"_____no_output_____"
]
],
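[
[
"Hard-coding a personal access token in a notebook makes it easy to leak. A common alternative, sketched below, is to read the token from an environment variable; the variable name `GITHUB_TOKEN` is an assumption for illustration, not something this template requires.",
"_____no_output_____"
]
],
[
[
"# Optional sketch: prefer an environment variable over a pasted token\n# (assumes something like `export GITHUB_TOKEN=...` was run beforehand)\nimport os\n\nGITHUB_TOKEN = os.environ.get(\"GITHUB_TOKEN\", GITHUB_TOKEN)",
"_____no_output_____"
]
],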
[
[
"## Model",
"_____no_output_____"
],
[
"### Get commits from repository url",
"_____no_output_____"
]
],
[
[
"df_commits = github.connect(GITHUB_TOKEN).repositories.get_commits(REPO_URL)\ndf_commits",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Get weekly commits",
"_____no_output_____"
]
],
[
[
"def get_weekly_commits(df):\n # Exclude Github commits\n df = df[(df.COMMITTER_EMAIL.str[-10:] != \"github.com\")]\n \n # Groupby and count\n df = df.groupby(pd.Grouper(freq='W', key='AUTHOR_DATE')).agg({\"ID\": \"count\"}).reset_index()\n df[\"WEEKS\"] = df[\"AUTHOR_DATE\"].dt.strftime(\"W%U-%Y\")\n \n # Cleaning\n df = df.rename(columns={\"ID\": \"NB_COMMITS\"})\n return df\n\ndf_weekly = get_weekly_commits(df_commits)\ndf_weekly",
"_____no_output_____"
]
],
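[
[
"The weekly grouping above relies on `pd.Grouper` with `freq='W'`. As a small illustrative aside (not part of the original template), the snippet below shows how that frequency bins dates, assuming the pandas default alias `'W'` = `'W-SUN'`, i.e. weekly bins that end on Sunday.",
"_____no_output_____"
]
],
[
[
"# Illustrative aside: how pd.Grouper(freq='W') bins dates (default 'W-SUN')\nimport pandas as pd\n\ndemo = pd.DataFrame({\n    'AUTHOR_DATE': pd.to_datetime(['2021-11-01', '2021-11-03', '2021-11-10']),\n    'ID': ['a', 'b', 'c'],\n})\n# Expected: 2 rows in the week ending 2021-11-07, 1 in the week ending 2021-11-14\nprint(demo.groupby(pd.Grouper(freq='W', key='AUTHOR_DATE')).agg({'ID': 'count'}))",
"_____no_output_____"
]
],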
[
[
"### Plot a bar chart of weekly commit activity",
"_____no_output_____"
]
],
[
[
"def create_barchart(df, repository):\n # Get repository\n repository = repository.split(\"/\")[-1]\n \n # Calc commits\n commits = df.NB_COMMITS.sum()\n \n # Create fig\n fig = px.bar(df,\n title=f\"Github - {repository} : Weekly user commits <br><span style='font-size: 13px;'>Total commits: {commits}</span>\",\n x=\"WEEKS\",\n y=\"NB_COMMITS\",\n labels={\n 'WEEKS':'Weeks committed',\n 'NB_COMMITS':\"Nb. commits\"\n })\n fig.update_traces(marker_color='black')\n fig.update_layout(\n plot_bgcolor=\"#ffffff\",\n width=1200,\n height=800,\n font=dict(family=\"Arial\", size=14, color=\"black\"),\n paper_bgcolor=\"white\",\n margin_pad=10,\n )\n fig.show()\n return fig\n\nfig = create_barchart(df_weekly, REPO_URL)",
"_____no_output_____"
]
],
[
[
"### Save and export html",
"_____no_output_____"
]
],
[
[
"output_path = f\"{REPO_URL.split('/')[-1]}_weekly_commits.html\"\nfig.write_html(output_path)\nnaas.asset.add(output_path, params={\"inline\": True})",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc7f787b15ca484a30c699c9a8c4ed3d4735bd8 | 5,742 | ipynb | Jupyter Notebook | doc/posts/hs/udata-python.ipynb | xinetzone/d2py | 657362a0451921ef5a7b05b4a8378f7379063cdf | [
"Apache-2.0"
] | 3 | 2022-03-09T14:08:42.000Z | 2022-03-10T04:17:17.000Z | doc/posts/hs/udata-python.ipynb | xinetzone/d2py | 657362a0451921ef5a7b05b4a8378f7379063cdf | [
"Apache-2.0"
] | 3 | 2021-11-07T13:11:26.000Z | 2022-03-19T03:28:48.000Z | doc/posts/hs/udata-python.ipynb | xinetzone/d2py | 657362a0451921ef5a7b05b4a8378f7379063cdf | [
"Apache-2.0"
] | 1 | 2022-03-15T14:18:32.000Z | 2022-03-15T14:18:32.000Z | 26.831776 | 93 | 0.412574 | [
[
[
"```{post} 2021/11/24 00:00\n:category: Python\n:tags: basic, udata\n:excerpt: 1\n```\n\n# Python `udata` 取数\n\n\n1. 登录平台,[获取Token](https://udata.hs.net/help/292)\n2. 在数据页面,获取接口名称、请求参数,并查看返回参数及代码示例;\n3. 编写 Python 脚本,并执行,如下所示:",
"_____no_output_____"
]
],
[
[
"# 引入 hs_udata 模块中 set_token 和 stock_list\nfrom hs_udata import set_token, stock_list\n# 设置 Token\nset_token(token='Xg6Mx3LZo2HACYGJ-ir825yGFKXJwZh5O4hY8g2HDtep4uGTwqYPHupLKIte6Hp_')\ndata = stock_list() # 获取 股票列表数据,返回格式为dataframe\ndata.head() # 打印数据前5行",
"_____no_output_____"
]
],
[
[
"## 导出数据",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.extend(['../../../'])",
"_____no_output_____"
],
[
"from d2py.utils.file import mkdir",
"_____no_output_____"
],
[
"save_root = 'data'\nmkdir(save_root)\n\ndata.to_excel(f'{save_root}/股票列表.xlsx') # 写出Excel文件\ndata.to_csv(f'{save_root}/股票列表.csv',sep=',',encoding='utf_8_sig') # 写出CSV文件\ndata.to_csv(f'{save_root}/股票列表.txt',sep=' ',encoding='utf_8_sig') # 写出TXT文件",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecc7fa7b5bb8dce7d15a1e4db8e6accc0e2ca93e | 94,142 | ipynb | Jupyter Notebook | 3. Landmark Detection and Tracking.ipynb | jlandman71/cvnd-slam | a7cc0aaac82676881d1a91dab3a254ab639497f2 | [
"MIT"
] | null | null | null | 3. Landmark Detection and Tracking.ipynb | jlandman71/cvnd-slam | a7cc0aaac82676881d1a91dab3a254ab639497f2 | [
"MIT"
] | null | null | null | 3. Landmark Detection and Tracking.ipynb | jlandman71/cvnd-slam | a7cc0aaac82676881d1a91dab3a254ab639497f2 | [
"MIT"
] | null | null | null | 109.850642 | 31,032 | 0.810733 | [
[
[
"# Project 3: Implement SLAM \n\n---\n\n## Project Overview\n\nIn this project, you'll implement SLAM for robot that moves and senses in a 2 dimensional, grid world!\n\nSLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem. \n\nUsing what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`. \n> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world\n\nYou can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:\n```\nmu = matrix([[Px0],\n [Py0],\n [Px1],\n [Py1],\n [Lx0],\n [Ly0],\n [Lx1],\n [Ly1]])\n```\n\nYou can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider a `nx1` matrix to be a vector.\n\n## Generating an environment\n\nIn a real SLAM problem, you may be given a map that contains information about landmark locations, and in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some numer of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.\n\n---",
"_____no_output_____"
],
[
"## Create the world\n\nUse the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds! \n\n`data` holds the sensors measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.\n\n#### Helper functions\n\nYou will be working with the `robot` class that may look familiar from the first notebook, \n\nIn fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom helpers import make_data\n\n# your implementation of slam should work with the following inputs\n# feel free to change these input values and see how it responds!\n\n# world parameters\nnum_landmarks = 5 # number of landmarks 5\nN = 20 # time steps 20\nworld_size = 100.0 # size of world (square)\n\n# robot parameters\nmeasurement_range = 50.0 # range at which we can sense landmarks\nmotion_noise = 2.0 # noise in robot motion 2.0\nmeasurement_noise = 2.0 # noise in the measurements 2.0\ndistance = 20.0 # distance by which robot (intends to) move each iteratation \n\n\n# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks\ndata = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)",
" \nLandmarks: [[68, 7], [87, 78], [58, 45], [18, 39], [78, 20]]\nRobot: [x=32.76195 y=30.78946]\n"
]
],
[
[
"### A note on `make_data`\n\nThe function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:\n1. Instantiating a robot (using the robot class)\n2. Creating a grid world with landmarks in it\n\n**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**\n\nThe `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track a robot over time and find the determine the location of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.\n\n\nIn `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:\n```\nmeasurement = data[i][0]\nmotion = data[i][1]\n```\n",
"_____no_output_____"
]
],
[
[
"# print out some stats about the data\ntime_step = 18\n\nprint('Example measurements: \\n', data[time_step][0])\nprint('\\n')\nprint('Example motion: \\n', data[time_step][1])",
"Example measurements: \n [[0, 29.43162426811542, -7.149054013161365], [2, 19.709182720982618, 31.345789529898084], [3, -20.388659938214204, 24.881488613640986], [4, 36.85843473008569, 3.5459614235641497]]\n\n\nExample motion: \n [-8.792025581550421, 17.963860558726317]\n"
]
],
[
[
"Try changing the value of `time_step`, you should see that the list of measurements varies based on what in the world the robot sees after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location of landmarks. The motion of the robot always is a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam.",
"_____no_output_____"
],
[
"## Initialize Constraints\n\nOne of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values the define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.\n\n<img src='images/motion_constraint.png' width=50% height=50% />\n\n\nIn *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.\n\n<img src='images/constraints2D.png' width=50% height=50% />\n\nYou may also choose to create two of each omega and xi (one for x and one for y positions).",
"_____no_output_____"
],
[
"### TODO: Write a function that initializes omega and xi\n\nComplete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct intial constraints of the correct size and starting values.\n\n*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*",
"_____no_output_____"
]
],
[
[
"def initialize_constraints(N, num_landmarks, world_size):\n ''' This function takes in a number of time steps N, number of landmarks, and a world_size,\n and returns initialized constraint matrices, omega and xi.'''\n \n ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable\n \n size = 2*N + 2*num_landmarks\n \n ## TODO: Define the constraint matrix, Omega, with two initial \"strength\" values\n ## for the initial x, y location of our robot\n omega = np.zeros((size,size))\n \n omega[0,0] = 1.0\n omega[1,1] = 1.0\n \n ## TODO: Define the constraint *vector*, xi\n ## you can assume that the robot starts out in the middle of the world with 100% confidence\n xi = np.zeros((size,1))\n \n xi[0] = world_size/2.0\n xi[1] = world_size/2.0\n \n return omega, xi\n ",
"_____no_output_____"
]
],
[
[
"### Test as you go\n\nIt's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.\n\nBelow, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.\n\n**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final smal function.\n\nThis code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.",
"_____no_output_____"
]
],
[
[
"# import data viz resources\nimport matplotlib.pyplot as plt\nfrom pandas import DataFrame\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
],
[
"# define a small N and world_size (small for ease of visualization)\nN_test = 5\nnum_landmarks_test = 2\nsmall_world = 10\n\n# initialize the constraints\ninitial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)",
"_____no_output_____"
],
[
"# define figure size\nplt.rcParams[\"figure.figsize\"] = (10,7)\n\n# display omega\nsns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)",
"_____no_output_____"
],
[
"# define figure size\nplt.rcParams[\"figure.figsize\"] = (1,7)\n\n# display xi\nsns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)",
"_____no_output_____"
]
],
[
[
"---\n## SLAM inputs \n\nIn addition to `data`, your slam function will also take in:\n* N - The number of time steps that a robot will be moving and sensing\n* num_landmarks - The number of landmarks in the world\n* world_size - The size (w/h) of your world\n* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`\n* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`\n\n#### A note on noise\n\nRecall that `omega` holds the relative \"strengths\" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.\n\n### TODO: Implement Graph SLAM\n\nFollow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation! \n\n#### Updating with motion and measurements\n\nWith a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\\mu = \\Omega^{-1}\\xi$\n\n**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**",
"_____no_output_____"
]
],
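[
[
"Before filling in `slam`, it can help to see the final linear-algebra step $\\mu = \\Omega^{-1}\\xi$ on a toy case. The tiny 1D system below (an initial pose of 50 and a single motion of +10, each constraint with weight 1) is purely illustrative and is not part of the project data.",
"_____no_output_____"
]
],
[
[
"# Toy illustration (not project data): 1D world, initial pose x0 = 50,\n# one motion dx = +10 linking x0 and x1, all constraints with weight 1.\nimport numpy as np\n\nomega_toy = np.array([[2., -1.],   # x0: initial-position constraint + motion\n                      [-1., 1.]])  # x1: motion constraint only\nxi_toy = np.array([[50. - 10.],    # +50 from the start, -10 from the motion\n                   [10.]])\nmu_toy = np.linalg.inv(omega_toy) @ xi_toy\nprint(mu_toy)  # expected: [[50.], [60.]]",
"_____no_output_____"
]
],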
[
[
"## TODO: Complete the code to implement SLAM\n\n## slam takes in 6 arguments and returns mu, \n## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations\ndef slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):\n \n ## TODO: Use your initilization to create constraint matrices, omega and xi\n omega, xi = initialize_constraints(N, num_landmarks, world_size)\n \n #print(\"omega\",omega)\n #print(\"xi\",xi)\n \n ## TODO: Iterate through each time step in the data\n ## get all the motion and measurement data as you iterate\n \n for step, data_step in enumerate(data):\n measurements = data_step[0]\n motion = data_step[1]\n \n x_idx = 2*step\n y_idx = x_idx + 1\n \n x_next_idx = x_idx + 2\n y_next_idx = y_idx + 2\n \n for measurement in measurements:\n \n ## TODO: update the constraint matrix/vector to account for all *measurements*\n ## this should be a series of additions that take into account the measurement noise\n \n m_index = measurement[0]\n m_x_idx = 2*N + 2*m_index\n m_y_idx = m_x_idx + 1\n \n omega[x_idx, x_idx] += 1.0/measurement_noise\n omega[x_idx, m_x_idx] += -1.0/measurement_noise\n xi[x_idx] += -measurement[1]/measurement_noise\n \n omega[m_x_idx, m_x_idx] += 1.0/measurement_noise\n omega[m_x_idx, x_idx] += -1.0/measurement_noise\n xi[m_x_idx] += measurement[1]/measurement_noise\n \n omega[y_idx, y_idx] += 1.0/measurement_noise\n omega[y_idx, m_y_idx] += -1.0/measurement_noise\n xi[y_idx] += -measurement[2]/measurement_noise\n \n omega[m_y_idx, m_y_idx] += 1.0/measurement_noise\n omega[m_y_idx, y_idx] += -1.0/measurement_noise\n xi[m_y_idx] += measurement[2]/measurement_noise\n \n ## TODO: update the constraint matrix/vector to account for all *motion* and motion noise\n \n omega[x_idx, x_idx] += 1.0/motion_noise\n omega[x_idx, x_next_idx] += -1.0/motion_noise\n xi[x_idx] += -motion[0]/motion_noise\n \n omega[x_next_idx, x_next_idx] += 1.0/motion_noise\n omega[x_next_idx, x_idx] += -1.0/motion_noise\n xi[x_next_idx] += motion[0]/motion_noise\n \n omega[y_idx, y_idx] += 1.0/motion_noise\n omega[y_idx, y_next_idx] += -1.0/motion_noise\n xi[y_idx] += -motion[1]/motion_noise\n \n omega[y_next_idx, y_next_idx] += 1.0/motion_noise\n omega[y_next_idx, y_idx] += -1.0/motion_noise\n xi[y_next_idx] += motion[1]/motion_noise\n \n ## TODO: After iterating through all the data\n ## Compute the best estimate of poses and landmark positions\n ## using the formula, omega_inverse * Xi\n \n #print(\"omega\",omega)\n #print(\"xi\",xi)\n \n omega_inv = np.linalg.inv(np.matrix(omega))\n mu = omega_inv * xi\n \n return mu # return `mu`\n",
"_____no_output_____"
]
],
[
[
"## Helper functions\n\nTo check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmarks locations and returns those as their own, separate lists. \n\nThen, we define a function that nicely print out these lists; both of these we will call, in the next step.\n",
"_____no_output_____"
]
],
[
[
"# a helper function that creates a list of poses and of landmarks for ease of printing\n# this only works for the suggested constraint architecture of interlaced x,y poses\ndef get_poses_landmarks(mu, N):\n # create a list of poses\n poses = []\n for i in range(N):\n poses.append((mu[2*i].item(), mu[2*i+1].item()))\n\n # create a list of landmarks\n landmarks = []\n for i in range(num_landmarks):\n landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))\n\n # return completed lists\n return poses, landmarks\n",
"_____no_output_____"
],
[
"def print_all(poses, landmarks):\n print('\\n')\n print('Estimated Poses:')\n for i in range(len(poses)):\n print('['+', '.join('%.3f'%p for p in poses[i])+']')\n print('\\n')\n print('Estimated Landmarks:')\n for i in range(len(landmarks)):\n print('['+', '.join('%.3f'%l for l in landmarks[i])+']')\n",
"_____no_output_____"
]
],
[
[
"## Run SLAM\n\nOnce you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!\n\n### What to Expect\n\nThe `data` that is generated is random, but you did specify the number, `N`, or time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate a position for. Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.\n\nWith these values in mind, you should expect to see a result that displays two lists:\n1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.\n2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length. \n\n#### Landmark Locations\n\nIf you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).",
"_____no_output_____"
]
],
[
[
"# call your implementation of slam, passing in the necessary parameters\nmu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)\n\n# print out the resulting landmarks and poses\nif(mu is not None):\n # get the lists of poses and landmarks\n # and print them out\n poses, landmarks = get_poses_landmarks(mu, N)\n print_all(poses, landmarks)",
"\n\nEstimated Poses:\n[50.000, 50.000]\n[61.145, 32.627]\n[73.582, 16.058]\n[84.397, 0.878]\n[94.894, 18.398]\n[77.462, 24.076]\n[58.105, 30.860]\n[37.974, 38.095]\n[18.480, 44.819]\n[0.342, 51.686]\n[15.445, 64.107]\n[29.493, 76.847]\n[44.215, 91.071]\n[24.640, 88.551]\n[4.999, 86.560]\n[14.489, 68.431]\n[23.662, 50.545]\n[31.087, 32.446]\n[38.586, 15.519]\n[29.794, 33.483]\n\n\nEstimated Landmarks:\n[67.267, 8.207]\n[86.709, 78.490]\n[57.338, 46.477]\n[17.051, 40.115]\n[77.888, 21.346]\n"
]
],
[
[
"## Visualize the constructed world\n\nFinally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the positon of landmarks, created from only motion and measurement data!\n\n**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**",
"_____no_output_____"
]
],
[
[
"# import the helper function\nfrom helpers import display_world\n\n# Display the final world!\n\n# define figure size\nplt.rcParams[\"figure.figsize\"] = (20,20)\n\n# check if poses has been created\nif 'poses' in locals():\n # print out the last pose\n print('Last pose: ', poses[-1])\n # display the last position of the robot *and* the landmark positions\n display_world(int(world_size), poses[-1], landmarks)",
"Last pose: (29.794051303453998, 33.4832135711395)\n"
]
],
[
[
"### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?\n\nYou can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters.",
"_____no_output_____"
],
[
"**Answer**: \nFinal pose = [29.794, 33.483] and true pose = [x=32.76195 y=30.78946]\n\nSo the final pose is within 10% of the true pose position.\nThe difference is caused by both motion noise and measurement noise. Because of these the measurements and motions as known to the robot differ from the true landmark positions and true motions.\n\nIf we increased the number of measurements or decreased the noise then the estimates of the landmarks and the poses would become more accurate.\n",
"_____no_output_____"
],
[
"## Testing\n\nTo confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close-to or exactly** identical to the given results. If there are minor discrepancies it could be a matter of floating point accuracy or in the calculation of the inverse matrix.\n\n### Submit your project\n\nIf you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!",
"_____no_output_____"
]
],
[
[
"# Here is the data and estimated outputs for test case 1\n\ntest_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]\n\n## Test Case 1\n##\n# Estimated Pose(s):\n# [50.000, 50.000]\n# [37.858, 33.921]\n# [25.905, 18.268]\n# [13.524, 2.224]\n# [27.912, 16.886]\n# [42.250, 30.994]\n# [55.992, 44.886]\n# [70.749, 59.867]\n# [85.371, 75.230]\n# [73.831, 92.354]\n# [53.406, 96.465]\n# [34.370, 100.134]\n# [48.346, 83.952]\n# [60.494, 68.338]\n# [73.648, 53.082]\n# [86.733, 38.197]\n# [79.983, 20.324]\n# [72.515, 2.837]\n# [54.993, 13.221]\n# [37.164, 22.283]\n\n\n# Estimated Landmarks:\n# [82.679, 13.435]\n# [70.417, 74.203]\n# [36.688, 61.431]\n# [18.705, 66.136]\n# [20.437, 16.983]\n\n\n### Uncomment the following three lines for test case 1 and compare the output to the values above ###\n\nmu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)\nposes, landmarks = get_poses_landmarks(mu_1, 20)\nprint_all(poses, landmarks)",
"\n\nEstimated Poses:\n[50.000, 50.000]\n[37.973, 33.652]\n[26.185, 18.155]\n[13.745, 2.116]\n[28.097, 16.783]\n[42.384, 30.902]\n[55.831, 44.497]\n[70.857, 59.699]\n[85.697, 75.543]\n[74.011, 92.434]\n[53.544, 96.454]\n[34.525, 100.080]\n[48.623, 83.953]\n[60.197, 68.107]\n[73.778, 52.935]\n[87.132, 38.538]\n[80.303, 20.508]\n[72.798, 2.945]\n[55.245, 13.255]\n[37.416, 22.317]\n\n\nEstimated Landmarks:\n[82.956, 13.539]\n[70.495, 74.141]\n[36.740, 61.281]\n[18.698, 66.060]\n[20.635, 16.875]\n"
],
[
"# Here is the data and estimated outputs for test case 2\n\ntest_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]] \n\n\n## Test Case 2\n##\n# Estimated Pose(s):\n# [50.000, 50.000]\n# [69.035, 45.061]\n# [87.655, 38.971]\n# [76.084, 55.541]\n# [64.283, 71.684]\n# [52.396, 87.887]\n# [44.674, 68.948]\n# [37.532, 49.680]\n# [31.392, 30.893]\n# [24.796, 12.012]\n# [33.641, 26.440]\n# [43.858, 43.560]\n# [54.735, 60.659]\n# [65.884, 77.791]\n# [77.413, 94.554]\n# [96.740, 98.020]\n# [76.149, 99.586]\n# [70.211, 80.580]\n# [64.130, 61.270]\n# [58.183, 42.175]\n\n\n# Estimated 
Landmarks:\n# [76.777, 42.415]\n# [85.109, 76.850]\n# [13.687, 95.386]\n# [59.488, 39.149]\n# [69.283, 93.654]\n\n\n### Uncomment the following three lines for test case 2 and compare to the values above ###\n\nmu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)\nposes, landmarks = get_poses_landmarks(mu_2, 20)\nprint_all(poses, landmarks)\n",
"\n\nEstimated Poses:\n[50.000, 50.000]\n[69.181, 45.665]\n[87.743, 39.703]\n[76.270, 56.311]\n[64.317, 72.176]\n[52.257, 88.154]\n[44.059, 69.401]\n[37.002, 49.918]\n[30.924, 30.955]\n[23.508, 11.419]\n[34.180, 27.133]\n[44.155, 43.846]\n[54.806, 60.920]\n[65.698, 78.546]\n[77.468, 95.626]\n[96.802, 98.821]\n[75.957, 99.971]\n[70.200, 81.181]\n[64.054, 61.723]\n[58.107, 42.628]\n\n\nEstimated Landmarks:\n[76.779, 42.887]\n[85.065, 77.438]\n[13.548, 95.652]\n[59.449, 39.595]\n[69.263, 94.240]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
ecc8038f36a4eee35f3b919a87c7cb118eeb74cd | 722,214 | ipynb | Jupyter Notebook | Notebooks/1.Manual_curation/Refinement_1b_Duplicate_Reactions.ipynb | AgileBioFoundry/Rt_IFO0880 | c0cb9c57f662543f5db0ca80ce4990f841f39e8a | [
"CC-BY-4.0"
] | null | null | null | Notebooks/1.Manual_curation/Refinement_1b_Duplicate_Reactions.ipynb | AgileBioFoundry/Rt_IFO0880 | c0cb9c57f662543f5db0ca80ce4990f841f39e8a | [
"CC-BY-4.0"
] | null | null | null | Notebooks/1.Manual_curation/Refinement_1b_Duplicate_Reactions.ipynb | AgileBioFoundry/Rt_IFO0880 | c0cb9c57f662543f5db0ca80ce4990f841f39e8a | [
"CC-BY-4.0"
] | null | null | null | 65.739487 | 2,996 | 0.547943 | [
[
[
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nfrom matplotlib import colors\nimport csv\nimport numpy as np\nimport pandas as pd\nimport cobra",
"_____no_output_____"
],
[
"cobra.__version__",
"_____no_output_____"
],
[
"Annotation = pd.read_excel('../../Data/R_toruloides_Data_for_Reconstruction.xlsx',\n sheet_name='Annotation', index_col=0)\nAnnotation.index = Annotation.index.map(str)\nAnnotation = Annotation.fillna('')\nTranscriptomics = pd.read_excel('../../Data/R_toruloides_Data_for_Reconstruction.xlsx',\n sheet_name='Transcriptomics', header=[0,1,2,3], index_col=0)\nTranscriptomics.index = Transcriptomics.index.map(str)\nProteomics = pd.read_excel('../../Data/R_toruloides_Data_for_Reconstruction.xlsx',\n sheet_name='Proteomics', header=[0,1,2], index_col=0)\nProteomics.index = Proteomics.index.map(str)\nFitness = pd.read_excel('../../Data/R_toruloides_Data_for_Reconstruction.xlsx',\n sheet_name='Fitness', index_col=0)\nFitness.index = Fitness.index.map(str)",
"_____no_output_____"
],
[
"def background_gradient(s, cmap='seismic', text_color_threshold=0.408):\n lim = max(abs(s.min().min()),abs(s.max().max()))\n rng = 2.0*lim\n norm = colors.Normalize(-lim - (rng * 0.2), lim + (rng * 0.2))\n rgbas = plt.cm.get_cmap(cmap)(norm(s.values))\n def relative_luminance(rgba):\n r, g, b = (x / 12.92 if x <= 0.03928 else ((x + 0.055) / 1.055 ** 2.4) for x in rgba[:3])\n return 0.2126 * r + 0.7152 * g + 0.0722 * b\n def css(rgba):\n dark = relative_luminance(rgba) < text_color_threshold\n text_color = '#f1f1f1' if dark else '#000000'\n return 'background-color: {b};color: {c};'.format(b=colors.rgb2hex(rgba), c=text_color)\n\n if s.ndim == 1:\n return [css(rgba) for rgba in rgbas]\n else:\n return pd.DataFrame([[css(rgba) for rgba in row] for row in rgbas], index=s.index, columns=s.columns)\n\ndef Show_Data(x):\n display(Transcriptomics.loc[x].style.background_gradient(cmap='Reds', low=0.2, high=0.2, axis=None))\n temp = [y for y in x if y in Proteomics.index]\n display(Proteomics.loc[temp].style.background_gradient(cmap='Reds', low=0.2, high=0.2, axis=None))\n temp = [y for y in x if y in Fitness.index]\n display(Fitness.loc[temp].style.apply(background_gradient, cmap='seismic', axis=None))\n return;",
"_____no_output_____"
],
[
"eco = cobra.io.load_json_model('../../Data/BiGG_Models/iML1515.json')\nsce = cobra.io.load_json_model('../../Data/BiGG_Models/iMM904.json')\nhsa = cobra.io.load_json_model('../../Data/BiGG_Models/RECON1.json')\nhsa2 = cobra.io.load_json_model('../../Data/BiGG_Models/Recon3D.json')\nptri = cobra.io.load_json_model('../../Data/BiGG_Models/iLB1027_lipid.json')\ncre = cobra.io.load_json_model('../../Data/BiGG_Models/iRC1080.json')",
"_____no_output_____"
],
[
"model = cobra.io.load_json_model(\"IFO0880_GPR_1a.json\")",
"_____no_output_____"
],
[
"print(len(model.genes))\nprint(len([x for x in model.genes if not x.id[0].isalpha()]))\nmodel",
"1362\n1149\n"
]
],
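[
[
"# Optional helper (a sketch added during editing, not part of the original curation workflow):\n# the audit cells below repeatedly list every reaction attached to a gene, so this wrapper\n# captures that pattern for interactive follow-up. It only uses cobra attributes that the\n# notebook already relies on (model.genes, Gene.reactions, Reaction.gene_reaction_rule).\ndef show_gene_reactions(gene_ids, m=model):\n    for g in gene_ids:\n        if g in m.genes:\n            for r in sorted(m.genes.get_by_id(g).reactions, key=lambda x: x.id):\n                print(r.id, r.reaction, r.gene_reaction_rule)\n        else:\n            print(g, 'no reactions in model')\n        print()\n\n# Example: show_gene_reactions(['16330', '16404'])",
"_____no_output_____"
]
],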
[
[
"### Duplicate reactions",
"_____no_output_____"
]
],
[
[
"duplicated = set()\nfor i, r in enumerate(model.reactions):\n temp = 0\n for j, r2 in enumerate(model.reactions):\n if j > i and r2.id not in duplicated:\n if r.reactants == r2.reactants and r.products == r2.products:\n if temp == 0:\n temp = 1\n duplicated.add(r.id)\n print(r.id, r.reaction, r.gene_reaction_rule)\n duplicated.add(r2.id)\n print(r2.id, r2.reaction, r2.gene_reaction_rule)\n if temp == 1:\n print()",
"ARD dhmtp_c + o2_c --> 2kmb_c + for_c + 2.0 h_c 16330\nACDO dhmtp_c + o2_c --> 2kmb_c + for_c + h_c 16330\n\nyli_R1488 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1487 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1489 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1490 atp_c + btn_c --> btamp_c + ppi_c 16404\n\nyli_R0002 akg_c + gln__L_c + h_c + nadph_c --> glu__L_c + nadp_c 15713\nGLUSy akg_c + gln__L_c + h_c + nadph_c --> 2.0 glu__L_c + nadp_c (PP_5075 and 15713) or (b3213 and 15713)\n\nyli_R1425 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 12898\nyli_R1378 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 10040\nyli_R1377 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 10205\n\nyli_R1466 2ippm_m + h2o_m <=> 3c2hmp_m 14914\nyli_R7859 2ippm_m + h2o_m <=> 3c2hmp_m 14914\n\nyli_R8859 3c3hmp_m <=> 2ippm_m + h2o_m 14914\nyli_R1465 3c3hmp_m <=> 2ippm_m + h2o_m 14914\n\nCERH124_copy2 cer1_24_c + h_c + nadph_c + o2_c --> cer2_24_c + h2o_c + nadp_c 9664\nCERH124_copy1 cer1_24_c + h_c + nadph_c + o2_c --> cer2_24_c + h2o_c + nadp_c 15314\n\nFMNAT atp_c + fmn_c + h_c --> fad_c + ppi_c 11542\nAFAT atp_c + fmn_c + 2.0 h_c --> fad_c + ppi_c 11542 or 9298\n\nCERH126_copy2 cer1_26_c + h_c + nadph_c + o2_c --> cer2_26_c + h2o_c + nadp_c 9664\nCERH126_copy1 cer1_26_c + h_c + nadph_c + o2_c --> cer2_26_c + h2o_c + nadp_c 15314\n\nALDD2x acald_c + h2o_c + nad_c --> ac_c + 2.0 h_c + nadh_c 12042 or 13426 or 15814 or 16323\nALDD2x_copy1 acald_c + h2o_c + nad_c --> ac_c + 2.0 h_c + nadh_c 12042 or 13426 or 15814\n\nGTPCI gtp_c + h2o_c --> ahdt_c + for_c + h_c 10332\nGTPCI_2 gtp_c + h2o_c --> ahdt_c + for_c + 2.0 h_c 10332\n\nGCC2cm dhlam_m + nad_m <=> h_m + lpam_m + nadh_m 10040 and 10205 and 12898 and 15184\nGCC2cm_copy2 dhlam_m + nad_m --> h_m + lpam_m + nadh_m (10007 and 10040 and 12116) or (10040 and 12116 and 9274)\nGCC2cm_copy1 dhlam_m + nad_m <=> h_m + lpam_m + nadh_m 10040 and 10205 and 12898 and 15184\n\nGTHOr gthox_c + h_c + nadph_c <=> 2.0 gthrd_c + nadp_c 15482 or (15038 and 15482) or (15482 and 16549) or (15482 and 8790)\nyli_R0291 gthox_c + h_c + nadph_c --> gthrd_c + nadp_c 15482\n\nUREASE atp_c + hco3_c + urea_c <=> adp_c + allphn_c + h_c + pi_c 9326\nURCB atp_c + hco3_c + urea_c --> adp_c + allphn_c + 2.0 h_c + pi_c 9326\n\nPTPATi atp_c + h_c + pan4p_c --> dpcoa_c + ppi_c 14849\nAPPAT atp_c + 2.0 h_c + pan4p_c <=> dpcoa_c + ppi_c 14849\n\nyli_R1435 2.0 accoa_x --> aacoa_x + coa_x 8678 or 8885\nACACT1x 2.0 accoa_x <=> aacoa_x + coa_x 8678 or 8885\n\nyli_R0034 chtn_c + h2o_c --> acgam_c 13082\nCHTNASE chtn_c + 2.0 h2o_c --> 3.0 acgam_c 13082\n\nyli_R1375 4.0 h_c + pyr_c + thmpp_c --> 2ahethmpp_c + co2_c 13630 and 13948\nyli_R0357 4.0 h_c + pyr_c + thmpp_c --> 2ahethmpp_c + co2_c 15791 or (13630 and 13948)\n\nARGSS asp__L_c + atp_c + citr__L_c --> amp_c + argsuc_c + h_c + ppi_c 16196\nARGSS_1 asp__L_c + atp_c + citr__L_c --> amp_c + argsuc_c + 2.0 h_c + ppi_c 16196\n\nPRAGSr atp_c + gly_c + pram_c <=> adp_c + gar_c + h_c + pi_c 14259\nPPRGL atp_c + gly_c + pram_c --> adp_c + gar_c + 2.0 h_c + pi_c 14259\n\nAIRCr air_c + co2_c <=> 5aizc_c + h_c 12132\nPRAIC air_c + co2_c <=> 5aizc_c + 2.0 h_c 12132\n\nyli_R0224 8.0 coa_m + 8.0 h2o_m + 8.0 nad_m + nadph_m + 7.0 o2_m + yli_M04625_m --> 9.0 accoa_m + 7.0 h2o2_m + 7.0 h_m + 8.0 nadh_m + nadp_m (12742 and 13813 and 14805) or (12742 and 14805 and 9065) or (12752 and 13813 and 14805) or (12752 and 14805 and 9065) or (13813 and 14805 and 9700) or (14805 and 9065 and 9700)\nyli_R0223 8.0 coa_m + 8.0 h2o_m + 8.0 nad_m + 
2.0 nadph_m + 8.0 o2_m + yli_M04625_m --> 9.0 accoa_m + 8.0 h2o2_m + 6.0 h_m + 8.0 nadh_m + 2.0 nadp_m (12742 and 13813 and 14805) or (12742 and 14805 and 9065) or (12752 and 13813 and 14805) or (12752 and 14805 and 9065) or (13813 and 14805 and 9700) or (14805 and 9065 and 9700)\n\nC4STMO1 44mzym_c + 3.0 h_c + 3.0 nadph_c + 3.0 o2_c --> 4mzym_int1_c + 4.0 h2o_c + 3.0 nadp_c 16640\n44MZYMMO 44mzym_c + 2.0 h_c + 3.0 nadph_c + 3.0 o2_c <=> 4mzym_int1_c + 4.0 h2o_c + 3.0 nadp_c 15314\n\nFAO182p_evenodd 8.0 coa_x + 8.0 h2o_x + 8.0 nad_x + nadph_x + 7.0 o2_x + ocdycacoa_x --> 9.0 accoa_x + 7.0 h2o2_x + 7.0 h_x + 8.0 nadh_x + nadp_x (10293 and 11362 and 12742 and 13228 and 13813) or (10293 and 11362 and 12742 and 13228 and 9065) or (10293 and 11362 and 12752 and 13228 and 13813) or (10293 and 11362 and 12752 and 13228 and 9065) or (10293 and 11362 and 13228 and 13813 and 9700) or (10293 and 11362 and 13228 and 9065 and 9700)\nFAO182p_eveneven 8.0 coa_x + 8.0 h2o_x + 8.0 nad_x + 2.0 nadph_x + 8.0 o2_x + ocdycacoa_x --> 9.0 accoa_x + 8.0 h2o2_x + 6.0 h_x + 8.0 nadh_x + 2.0 nadp_x (10293 and 11362 and 12742 and 13228 and 13813) or (10293 and 11362 and 12742 and 13228 and 9065) or (10293 and 11362 and 12752 and 13228 and 13813) or (10293 and 11362 and 12752 and 13228 and 9065) or (10293 and 11362 and 13228 and 13813 and 9700) or (10293 and 11362 and 13228 and 9065 and 9700)\n\nyli_R1510 1ag3p_SC_r + acoa_r --> coa_r + pa_EC_r 10427 or 16030 or 9746\nyli_R1523 1ag3p_SC_r + acoa_r --> coa_r + pa_EC_r 16030\n\nNMNAT atp_c + h_c + nmn_c --> nad_c + ppi_c 10430\nANNAT atp_c + 2.0 h_c + nmn_c <=> nad_c + ppi_c 10430\n\nyli_R0848 4.0 h_m + pyr_m + thmpp_m --> 2ahethmpp_m + co2_m (13630 and 13948) or (15685 and 9800)\nPDHam1mi h_m + pyr_m + thmpp_m --> 2ahethmpp_m + co2_m (13630 and 13948) or (13948 and 15791) or (15685 and 9800)\n\nyli_R1513 h2o_r + pa_EC_r --> dag_hs_r + pi_r 12485 or 13087\nyli_R1393 h2o_r + 0.01 pa_EC_r --> 0.01 dag_hs_r + pi_r 12485\n\nNNATr atp_c + h_c + nicrnt_c <=> dnad_c + ppi_c 10430\nNNATr_copy1 atp_c + h_c + nicrnt_c --> dnad_c + ppi_c 14638\nNNATr_copy2 atp_c + h_c + nicrnt_c <=> dnad_c + ppi_c 10430\n\nyli_R1508 glyald_c + h2o_c + nad_c --> glyc__R_c + h_c + nadh_c 12042 or 13426 or 16323\nGLYALDDr glyald_c + h2o_c + nad_c <=> glyc__R_c + 2.0 h_c + nadh_c 12042 or 13426\n\nHISTD h2o_c + histd_c + 2.0 nad_c --> 3.0 h_c + his__L_c + 2.0 nadh_c 11646\nHDH h2o_c + histd_c + 2.0 nad_c --> 4.0 h_c + his__L_c + 2.0 nadh_c 11646\n\nGUAD gua_c + h2o_c + h_c --> nh4_c + xan_c 9050 or 9708\nGUAD_1 gua_c + h2o_c + 2.0 h_c --> nh4_c + xan_c 9050 or 9708\n\nPANTS ala_B_c + atp_c + pant__R_c --> amp_c + h_c + pnto__R_c + ppi_c 10475\nPBAL ala_B_c + atp_c + pant__R_c --> amp_c + 2.0 h_c + pnto__R_c + ppi_c 10475\n\nNADDP h2o_c + nad_c --> amp_c + 2.0 h_c + nmn_c 12434\nNPH h2o_c + nad_c --> amp_c + 3.0 h_c + nmn_c 15385\n\nSTARCH300DEGR2A 49.0 h2o_h + 250.0 pi_h + starch300_h --> 50.0 Glc_aD_h + 250.0 g1p_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGRA 74.0 h2o_h + 225.0 pi_h + starch300_h --> 75.0 Glc_aD_h + 225.0 g1p_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or 
(CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\n\nHDC 2.0 h_c + his__L_c --> co2_c + hista_c 10722 or 9434 or 9435\nHISDC h_c + his__L_c --> co2_c + hista_c 10104\n\nSTARCH300DEGR2B 49.0 h2o_h + 250.0 pi_h + starch300_h --> 250.0 g1p_h + 50.0 glc__bD_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGRB 74.0 h2o_h + 225.0 pi_h + starch300_h --> 225.0 g1p_h + 75.0 glc__bD_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\n\n"
]
],
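[
[
"# Alternative sketch (an editing addition, not from the original notebook): group candidate\n# duplicates in a single pass by keying each reaction on its reactant and product ID sets.\n# This matches the pairwise comparison above in spirit (it ignores ordering, reversibility\n# and stoichiometric coefficients) while avoiding the quadratic loop.\nfrom collections import defaultdict\n\ngroups = defaultdict(list)\nfor r in model.reactions:\n    key = (frozenset(m.id for m in r.reactants), frozenset(m.id for m in r.products))\n    groups[key].append(r)\n\nfor rxns in groups.values():\n    if len(rxns) > 1:\n        for r in rxns:\n            print(r.id, r.reaction, r.gene_reaction_rule)\n        print()",
"_____no_output_____"
]
],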
[
[
"#### ARD",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('16330').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"ACDO dhmtp_c + o2_c --> 2kmb_c + for_c + h_c 16330\nARD dhmtp_c + o2_c --> 2kmb_c + for_c + 2.0 h_c 16330\nARD1 dhmtp_c + o2_c --> co_c + for_c + h_c + mtpp_c 16330\nDKMPPD2 dkmpp_c + 3.0 h2o_c --> 2kmb_c + for_c + 6.0 h_c + pi_c YEL038W and 16330\nyli_R1495 dhmtp_c + o2_c --> co_c + for_c + mtpp_c 16330\n"
],
[
"# 5mta -> 5mdr1p -> 5mdru1p -> dkmpp\nfor r in sorted(model.metabolites.get_by_id('5mta_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('5mdru1p_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('dkmpp_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('dhmtp_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('2kmb_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('mtpp_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"ACPCS amet_c --> 1acpc_c + 5mta_c + h_c 8613\nACPT16018111Z 12dgr16018111Z_c + amet_c --> 5mta_c + dghs16018111Z_c + h_c 10576\nACPT1601819Z 12dgr1601819Z_c + amet_c --> 5mta_c + dghs1601819Z_c + h_c 10576\nACPT18111Z18111Z 12dgr18111Z18111Z_c + amet_c --> 5mta_c + dghs18111Z18111Z_c + h_c 10576\nACPT18111Z1819Z 12dgr18111Z1819Z_c + amet_c --> 5mta_c + dghs18111Z1819Z_c + h_c 10576\nACPT1819Z18111Z 12dgr1819Z18111Z_c + amet_c --> 5mta_c + dghs1819Z18111Z_c + h_c 10576\nACPT1819Z1819Z 12dgr1819Z1819Z_c + amet_c --> 5mta_c + dghs1819Z1819Z_c + h_c 10576\nMTAP 5mta_c + pi_c --> 5mdr1p_c + ade_c 14521 or 8372\nSPMS ametam_c + ptrc_c --> 5mta_c + h_c + spmd_c 10577 or 16833\nSPRMS ametam_c + spmd_c --> 5mta_c + h_c + sprm_c 10577 or 16833\n\nMDRPD 5mdru1p_c --> dkmpp_c + h2o_c 15829\nMTRI 5mdr1p_c <=> 5mdru1p_c 13385 or 15595\n\nDKMPPD2 dkmpp_c + 3.0 h2o_c --> 2kmb_c + for_c + 6.0 h_c + pi_c YEL038W and 16330\nMDRPD 5mdru1p_c --> dkmpp_c + h2o_c 15829\n\nACDO dhmtp_c + o2_c --> 2kmb_c + for_c + h_c 16330\nARD dhmtp_c + o2_c --> 2kmb_c + for_c + 2.0 h_c 16330\nARD1 dhmtp_c + o2_c --> co_c + for_c + h_c + mtpp_c 16330\nyli_R1495 dhmtp_c + o2_c --> co_c + for_c + mtpp_c 16330\n\nACDO dhmtp_c + o2_c --> 2kmb_c + for_c + h_c 16330\nARD dhmtp_c + o2_c --> 2kmb_c + for_c + 2.0 h_c 16330\nDKMPPD2 dkmpp_c + 3.0 h2o_c --> 2kmb_c + for_c + 6.0 h_c + pi_c YEL038W and 16330\nUNK3 2kmb_c + glu__L_c --> akg_c + met__L_c 12407 or 14281 or 14908 or 15839 or 8936\n\nARD1 dhmtp_c + o2_c --> co_c + for_c + h_c + mtpp_c 16330\nyli_R1495 dhmtp_c + o2_c --> co_c + for_c + mtpp_c 16330\n"
],
[
"temp = ['8372','14521','13385','15595','15829','11455','16330','12407','14281','14908','15839','8936']\ndisplay(Annotation.loc[temp])\nShow_Data(temp)",
"_____no_output_____"
]
],
[
[
"S-methyl-5'-thioadenosine degradation \nMTAP 5mta_c + pi_c --> 5mdr1p_c + ade_c 14521 or 8372 \n8372 cysk MEU1 \n14521 extr uridine phosphorylase (also catalyze deoxy) -> not MTAP\n\nMTRI 5mdr1p_c <=> 5mdru1p_c 13385 or 15595 \n13385 cyto mtnA/MRI1 \n15595 cito GCN3 translation initiation factor eIF-2B subunit alpha EIF2B1 -> not MTRI\n\nMDRPD 5mdru1p_c --> dkmpp_c + h2o_c 15829 \n15829 cyto_pero mtnB/MDE1\n\nDKMPPD2 dkmpp_c + 3.0 h2o_c --> 2kmb_c + for_c + 6.0 h_c + pi_c YEL038W and 16330 \nACRS, 2OH3K5MPPISO, ACDO lumped, incorrect stoichiometry\n\nDKMPPD3_1 - dkmpp + h2o -> dhmtp + pi + h by mtnC (ACRS, 2OH3K5MPPISO lumped) \nACRS - dkmpp -> hkmpp + h by mtnW/mtnC \n2OH3K5MPPISO - hkmpp + h2o -> dhmtp + pi by mtnX/mtnC \n11455 cyto mtnX blast to 11455\n\nACDO dhmtp_c + o2_c --> 2kmb_c + for_c + h_c 16330, fe2+ requiring \nARD dhmtp_c + o2_c --> 2kmb_c + for_c + 2.0 h_c 16330, incorrect stoichiometry \nARD1 dhmtp_c + o2_c --> co_c + for_c + h_c + mtpp_c 16330, ni2+ requiring \n16330 cyto_nucl mtnD/ADI1 is fe2+ requiring and produce 2kmb and for -> keep ACDO\n\nUNK3 2kmb_c + glu__L_c --> akg_c + met__L_c 12407 or 14281 or 14908 or 15839 or 8936 \nUNK3 reaction is different in the pathway page, it is two step. \n2kmb + gln_L -> 2ogm + met_L by mtnE \n2ogm + h2o -> akg + nh4 by mtnU \nThe enzyme page 2.6.1.57 is consistent with UNK3, but 2.6.1.57 is not this reaction. \nBlast of known enzymes results in mixed results \nKEGG says this reaction is possible by 2.6.1.5 or 2.6.1.57 \n12407 cyto_nucl ARO8 tryptophan aminotransferase 2.6.1.27 \n14281 mito AAT1/AAT2 aspartate aminotransferase, mitochondrial 2.6.1.1 \n14908 cyto ARO8 aromatic amino acid aminotransferase I / 2-aminoadipate transaminase 2.6.1.57/2.6.1.39/2.6.1.27/2.6.1.5 \n15839 cysk ARO8 aromatic amino acid aminotransferase I / 2-aminoadipate transaminase 2.6.1.57/2.6.1.39/2.6.1.27/2.6.1.5 \n8936 cyto AAT2 aspartate aminotransferase, cytoplasmic 2.6.1.1 \nKeep genes for now\n\nChange MTAP genes to '8372' \nChange MTRI genes to '13385' \nAdd ACRS and 2OH3K5MPPISO, change genes '11455' \nRemove DKMPPD2, ARD, ARD1, yli_R1495",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('MTAP').gene_reaction_rule = '8372'\nmodel.reactions.get_by_id('MTRI').gene_reaction_rule = '13385'\nr1 = cre.reactions.get_by_id('ACRS').copy()\nr1.gene_reaction_rule = '11455'\nr2 = cre.reactions.get_by_id('2OH3K5MPPISO').copy()\nr2.gene_reaction_rule = '11455'\nmodel.add_reactions([r1,r2])\nmodel.remove_reactions(['DKMPPD2','ARD','ARD1','yli_R1495'],remove_orphans=True)",
"_____no_output_____"
],
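[
"# Sanity check (a sketch; it assumes metabolite formulas and charges are populated in the\n# source models): several duplicates above differed only in proton stoichiometry, so confirm\n# the retained and newly added reactions are balanced. check_mass_balance() returns an empty\n# dict when a reaction is mass- and charge-balanced.\nfor rid in ['MTAP', 'MTRI', 'MDRPD', 'ACRS', '2OH3K5MPPISO', 'ACDO']:\n    print(rid, model.reactions.get_by_id(rid).check_mass_balance())",
"_____no_output_____"
],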
[
"for r in sorted(model.genes.get_by_id('8372').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('13385').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15829').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('11455').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('16330').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"MTAP 5mta_c + pi_c --> 5mdr1p_c + ade_c 8372\n\nMTRI 5mdr1p_c <=> 5mdru1p_c 13385\nMTRK 5mtr_c + atp_c --> 5mdr1p_c + adp_c + h_c 13385\n\nMDRPD 5mdru1p_c --> dkmpp_c + h2o_c 15829\n\n2OH3K5MPPISO h2o_c + hkmpp_c --> dhmtp_c + pi_c 11455\nACRS dkmpp_c --> h_c + hkmpp_c 11455\n\nACDO dhmtp_c + o2_c --> 2kmb_c + for_c + h_c 16330\n"
],
[
"for r in sorted(model.metabolites.get_by_id('5mtr_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('5mdr1p_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"MTRK 5mtr_c + atp_c --> 5mdr1p_c + adp_c + h_c 13385\n\nMTAP 5mta_c + pi_c --> 5mdr1p_c + ade_c 8372\nMTRI 5mdr1p_c <=> 5mdru1p_c 13385\nMTRK 5mtr_c + atp_c --> 5mdr1p_c + adp_c + h_c 13385\n"
]
],
[
[
"MTRK is catalyzed by mtnK/MTK1 \nBlast of uniprot reviewed mtnK seqs results in no hit",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('MTRK').remove_from_model(remove_orphans=True)",
"_____no_output_____"
]
],
[
[
"#### btn",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('16404').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"BACCL atp_c + btn_c + h_c --> btamp_c + ppi_c 16404\nBACCLm atp_m + btn_m + h_m --> btamp_m + ppi_m 16404\nBTNPL apoC_Lys_c + btamp_c --> amp_c + apoC_Lys_btn_c + h_c 16404\nBTNPLm apoC_Lys_m + btamp_m --> amp_m + apoC_Lys_btn_m + h_m 16404\nyli_R1487 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1488 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1489 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1490 atp_c + btn_c --> btamp_c + ppi_c 16404\n"
],
[
"for r in sorted(model.metabolites.get_by_id('btn_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('btamp_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('apoC_Lys_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('apoC_Lys_btn_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"BACCL atp_c + btn_c + h_c --> btamp_c + ppi_c 16404\nBTS5 2fe2s_c + amet_c + dtbt_c --> 2fe1s_c + btn_c + dad_5_c + h_c + met__L_c 15908\nBTSr dtbt_c + s_c <=> btn_c + 2.0 h_c 15908\nyli_R1487 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1488 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1489 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1490 atp_c + btn_c --> btamp_c + ppi_c 16404\n\nBACCL atp_c + btn_c + h_c --> btamp_c + ppi_c 16404\nBTNPL apoC_Lys_c + btamp_c --> amp_c + apoC_Lys_btn_c + h_c 16404\nyli_R1487 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1488 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1489 atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1490 atp_c + btn_c --> btamp_c + ppi_c 16404\n\nBTNPL apoC_Lys_c + btamp_c --> amp_c + apoC_Lys_btn_c + h_c 16404\n\nBTNPL apoC_Lys_c + btamp_c --> amp_c + apoC_Lys_btn_c + h_c 16404\n"
],
[
"for r in sorted(model.metabolites.get_by_id('btn_m').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('btamp_m').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('apoC_Lys_m').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('apoC_Lys_btn_m').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"BACCLm atp_m + btn_m + h_m --> btamp_m + ppi_m 16404\n\nBACCLm atp_m + btn_m + h_m --> btamp_m + ppi_m 16404\nBTNPLm apoC_Lys_m + btamp_m --> amp_m + apoC_Lys_btn_m + h_m 16404\n\nBTNPLm apoC_Lys_m + btamp_m --> amp_m + apoC_Lys_btn_m + h_m 16404\n\nBTNPLm apoC_Lys_m + btamp_m --> amp_m + apoC_Lys_btn_m + h_m 16404\n"
],
[
"temp = ['12736','12731','15908','16404']\ndisplay(Annotation.loc[temp])\nShow_Data(temp)",
"_____no_output_____"
],
[
"for x in temp:\n if x in model.genes:\n for r in sorted(model.genes.get_by_id(x).reactions, key=lambda x: x.id):\n print(r, r.gene_reaction_rule)\n else:\n print(x, 'no reactions')\n print()",
"AOXSr2: ala__L_c + pimACP_c --> 8aonn_c + ACP_c + co2_c 12736\n\nAMAOTr: 8aonn_c + amet_c <=> amob_c + dann_c 12731\n\nBTS5: 2fe2s_c + amet_c + dtbt_c --> 2fe1s_c + btn_c + dad_5_c + h_c + met__L_c 15908\nBTSr: dtbt_c + s_c <=> btn_c + 2.0 h_c 15908\n\nBACCL: atp_c + btn_c + h_c --> btamp_c + ppi_c 16404\nBACCLm: atp_m + btn_m + h_m --> btamp_m + ppi_m 16404\nBTNPL: apoC_Lys_c + btamp_c --> amp_c + apoC_Lys_btn_c + h_c 16404\nBTNPLm: apoC_Lys_m + btamp_m --> amp_m + apoC_Lys_btn_m + h_m 16404\nyli_R1487: atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1488: atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1489: atp_c + btn_c --> btamp_c + ppi_c 16404\nyli_R1490: atp_c + btn_c --> btamp_c + ppi_c 16404\n\n"
]
],
[
[
"Biotin \n16404 extr 12 cyto 6 cyto_nucl 5.5 mito 4 BPL1 biotin---protein ligase \nrequired for acetyl-CoA carboxylase (ACC1, cyto), should it be part of that reaction? \nBACCL has correct stoichiomtery\n\nbioF is peroxisomal http://www.jbc.org/content/286/35/30455 \npimACP or pimcoa works as a substrate for 8aonn synthesis \nIt is not clear how pimelate or pimcoa is synthesized \npimeloyl-CoA biosynthesis in peroxisome? beta-oxidation? http://www.jbc.org/content/286/49/42133 \nCheck genes near 12731-12736, maybe a cluster \nIn E. coli, it is fatty acid synthesis https://www.nature.com/articles/nchembio.420 \nIn B. subtilis, it is from long chain acyl-acp by C450 enzyme https://www.ncbi.nlm.nih.gov/pubmed/11368323 \nThe rest of the biotin biosynthesis is mitochondrial \n15908 mito BIO2 biotin synthase -> check 2Fe-2S ferredoxin metabolites \nBTS5 has correct stoichiomtery\n\nRemove BTSr, yli_R1487, yli_R1488, yli_R1489, yli_R1490, BACCLm, BTNPLm",
"_____no_output_____"
]
],
[
[
"m = model.metabolites.get_by_id('pimACP_c')\nm.id = 'pimcoa_c'\nm.name = 'Pimeloyl-CoA'\nm.formula = 'C28H41N7O19P3S'\nm.charge = -5\n\nr = model.reactions.get_by_id('AOXSr2').copy()\nr.id = 'AOXSp'\nmodel.add_reactions([r])\nr.add_metabolites({'ACP_c': -1.0, 'coa_c': 1.0, 'h_c': -1.0})\nfor m in r.metabolites:\n if not m.id.replace('_c','_x') in model.metabolites:\n m2 = m.copy()\n m2.id = m.id.replace('_c','_x')\n m2.compartment = 'x'\n model.add_metabolites([m2])\n r.add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_x'): r.get_coefficient(m.id)})\n\nr = sce.reactions.get_by_id('DBTS').copy()\nr.gene_reaction_rule = '12731'\nmodel.add_reactions([r])\n\nfor x in ['AMAOTr','DBTS','BTS5']:\n r = model.reactions.get_by_id(x)\n r.id = r.id + 'm'\n for m in r.metabolites:\n if not m.id.replace('_c','_m') in model.metabolites:\n m2 = m.copy()\n m2.id = m.id.replace('_c','_m')\n m2.compartment = 'm'\n model.add_metabolites([m2])\n r.add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_m'): r.get_coefficient(m.id)})\n\nmodel.remove_reactions(['AOXSr2','BTSr','yli_R1487','yli_R1488','yli_R1489','yli_R1490'],remove_orphans=True)",
"_____no_output_____"
]
],
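[
[
"# Optional refactor (a sketch, not part of the original workflow): the cell above relocalizes\n# reactions by swapping each metabolite for a copy with a different compartment suffix. This\n# helper captures that pattern; it assumes the plain '_c'/'_m'/'_x' suffix convention used in\n# this model and mirrors the add_metabolites calls already used above.\ndef move_reaction(rxn, old_suffix, new_suffix, new_compartment):\n    for met in list(rxn.metabolites):\n        new_id = met.id.replace(old_suffix, new_suffix)\n        if new_id == met.id:\n            continue\n        if new_id not in model.metabolites:\n            m2 = met.copy()\n            m2.id = new_id\n            m2.compartment = new_compartment\n            model.add_metabolites([m2])\n        coeff = rxn.get_coefficient(met.id)\n        rxn.add_metabolites({met.id: -coeff, new_id: coeff})\n\n# Hypothetical usage (do not re-run on reactions that were already moved above):\n# move_reaction(model.reactions.get_by_id('AOXSp'), '_c', '_x', 'x')",
"_____no_output_____"
]
],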
[
[
"#### GLUSy",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('15713').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('12248').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9856').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"GLUDy glu__L_c + h2o_c + nadp_c <=> akg_c + h_c + nadph_c + nh4_c 12248 or (b3213 and 15713)\nGLUS akg_h + gln__L_h + h_h + nadh_h --> 2.0 glu__L_h + nad_h 15713\nGLUS_ferr akg_h + 2.0 fdxrd_h + gln__L_h --> 2.0 fdxox_h + 2.0 glu__L_h + 2.0 h_h (CRv4_Au5_s16_g6229_t1 and 15713) or (CRv4_Au5_s17_g7064_t1 and 15713) or (CRv4_Au5_s3_g10824_t1 and 15713) or (CRv4_Au5_s6_g13523_t1 and 15713) or (CRv4_Au5_s7_g14133_t1 and 15713)\nGLUS_nadph akg_h + gln__L_h + h_h + nadph_h --> 2.0 glu__L_h + nadp_h 15713\nGLUSx akg_c + gln__L_c + h_c + nadh_c --> 2.0 glu__L_c + nad_c 15713\nGLUSy akg_c + gln__L_c + h_c + nadph_c --> 2.0 glu__L_c + nadp_c (PP_5075 and 15713) or (b3213 and 15713)\nyli_R0002 akg_c + gln__L_c + h_c + nadph_c --> glu__L_c + nadp_c 15713\n\nGDHm glu__L_m + h2o_m + nad_m <=> akg_m + h_m + nadh_m + nh4_m 12248\nGLUDy glu__L_c + h2o_c + nadp_c <=> akg_c + h_c + nadph_c + nh4_c 12248 or (b3213 and 15713)\nGLUDym glu__L_m + h2o_m + nadp_m <=> akg_m + h_m + nadph_m + nh4_m 12248\n\nGLUDxi glu__L_c + h2o_c + nad_c --> akg_c + h_c + nadh_c + nh4_c 9856\n"
]
],
[
[
"15713 cyto GLT1 glutamate synthase NAD -> GLUSx \n12248 cyto GDH1, GDH3 glutamate dehydrogenase NADP -> GLUDy \n9856 cyto GDH2 glutamate dehydrogenase NAD -> GLUDxi\n\nChange GLUDy genes to '12248' \nRemove GLUS, GLUS_ferr, GLUS_nadph, GLUSy, yli_R0002, GDHm, GLUDym",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('GLUDy').gene_reaction_rule = '12248'\nmodel.remove_reactions(['GLUS','GLUS_ferr','GLUS_nadph','GLUSy','yli_R0002','GDHm','GLUDym'],remove_orphans=True)",
"_____no_output_____"
]
],
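[
[
"# Quick sanity check (a sketch): after consolidating the glutamate dehydrogenase/synthase\n# reactions, confirm the model still reaches its objective. This assumes the objective and\n# medium shipped with the loaded model are unchanged; a value near zero would flag a broken\n# nitrogen assimilation route.\nprint(model.slim_optimize())",
"_____no_output_____"
]
],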
[
[
"#### mlthf / GCC2cm",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.metabolites.get_by_id('mlthf_x').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"yli_R1377 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 10205\nyli_R1378 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 10040\nyli_R1387 gly_x + h2o_x + mlthf_x <=> ser__L_x + thf_x 9667\nyli_R1425 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 12898\n"
],
[
"for r in sorted(model.genes.get_by_id('10205').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('10040').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9667').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('12898').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"GCC2am gly_m + h_m + lpam_m <=> alpam_m + co2_m 10040 and 10205 and 12898 and 15184\nGCC2bim alpam_m + thf_m --> dhlam_m + mlthf_m + nh4_m 10040 and 10205 and 12898 and 15184\nGCC2cm dhlam_m + nad_m <=> h_m + lpam_m + nadh_m 10040 and 10205 and 12898 and 15184\nGCC2cm_copy1 dhlam_m + nad_m <=> h_m + lpam_m + nadh_m 10040 and 10205 and 12898 and 15184\nGCCam gly_m + h_m + lpro_m <=> alpro_m + co2_m 10205 or (10040 and 10205 and 12898 and 15184)\nGCCbim alpro_m + thf_m --> dhlpro_m + mlthf_m + nh4_m 12898 or (10040 and 10205 and 12898 and 15184)\nGCCcm dhlpro_m + nad_m <=> h_m + lpro_m + nadh_m 10040 or (10040 and 10205 and 12898 and 15184)\nGLYCL gly_c + nad_c + thf_c --> co2_c + mlthf_c + nadh_c + nh4_c 10040 and 10205 and 12898 and 15184\nGLYCLm gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 12898 or (CRv4_Au5_s12_g4121_t1 and 14894) or (12898 and 14894) or (10040 and 10205 and 12898 and 15184)\nGLYDHD gly_m + lpro_m --> alpro_m + co2_m 10205 and 15184\nTHFATm h2o_m + methf_m --> 5fthf_m + h_m 12898 or (10205 and 12898 and 13630 and 13948 and 15184)\nyli_R1377 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 10205\n\n2OXOADOXm 2oxoadp_m + coa_m + nad_m --> co2_m + glutcoa_m + nadh_m (PDHX and 10007 and 10040 and 12116) or (PDHX and 10040 and 12116 and 9274) or (Pdhx and 10007 and 10040 and 12116) or (Pdhx and 10040 and 12116 and 9274)\nAKGDH akg_c + coa_c + nad_c --> co2_c + nadh_c + succoa_c (10007 and 10040 and 12116) or (10040 and 12116 and 9274)\nAKGDam akg_m + h_m + lpam_m <=> co2_m + sdhlam_m 10007 or 9274 or (10007 and 10040 and 12116) or (10040 and 12116 and 9274)\nAKGDbm coa_m + sdhlam_m --> dhlam_m + succoa_m (10007 and 10040 and 12116) or (10040 and 12116 and 9274)\nAKGDm akg_m + coa_m + nad_m --> co2_m + nadh_m + succoa_m (PDHX and 10007 and 10040 and 12116) or (PDHX and 10040 and 12116 and 9274) or (Pdhx and 10007 and 10040 and 12116) or (Pdhx and 10040 and 12116 and 9274)\nGCC2am gly_m + h_m + lpam_m <=> alpam_m + co2_m 10040 and 10205 and 12898 and 15184\nGCC2bim alpam_m + thf_m --> dhlam_m + mlthf_m + nh4_m 10040 and 10205 and 12898 and 15184\nGCC2cm dhlam_m + nad_m <=> h_m + lpam_m + nadh_m 10040 and 10205 and 12898 and 15184\nGCC2cm_copy1 dhlam_m + nad_m <=> h_m + lpam_m + nadh_m 10040 and 10205 and 12898 and 15184\nGCC2cm_copy2 dhlam_m + nad_m --> h_m + lpam_m + nadh_m (10007 and 10040 and 12116) or (10040 and 12116 and 9274)\nGCCam gly_m + h_m + lpro_m <=> alpro_m + co2_m 10205 or (10040 and 10205 and 12898 and 15184)\nGCCbim alpro_m + thf_m --> dhlpro_m + mlthf_m + nh4_m 12898 or (10040 and 10205 and 12898 and 15184)\nGCCcm dhlpro_m + nad_m <=> h_m + lpro_m + nadh_m 10040 or (10040 and 10205 and 12898 and 15184)\nGLYCL gly_c + nad_c + thf_c --> co2_c + mlthf_c + nadh_c + nh4_c 10040 and 10205 and 12898 and 15184\nGLYCLm gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 12898 or (CRv4_Au5_s12_g4121_t1 and 14894) or (12898 and 14894) or (10040 and 10205 and 12898 and 15184)\nOIVD1m 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD2m 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD3m 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 
15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nPDH coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c 13948 or (PP_0338 and PP_0339 and 10040) or (b0114 and b0115 and 10040)\nPDHcr dhlam_c + nad_c <=> h_c + lpam_c + nadh_c 10040\nPDHm coa_m + nad_m + pyr_m --> accoa_m + co2_m + nadh_m (PDHX and 10040 and 13630 and 13948 and 14126) or (Pdhx and 10040 and 13630 and 13948 and 14126) or (10040 and 13630 and 13722 and 13948 and 14126)\nyli_R0381 nad_m + yli_M03934_m --> h_m + nadh_m + yli_M03933_m 10040\nyli_R1378 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 10040\nyli_R1419 nad_c + yli_M03934_c --> h_c + nadh_c + yli_M03933_c 10040\n\nALATA_D2 ala__D_c + pydx5p_c --> pyam5p_c + pyr_c 16182 or 9222 or 9667\nALATA_L2 ala__L_c + pydx5p_c --> pyam5p_c + pyr_c 16182 or 9222 or 9667\nGHMT2r ser__L_c + thf_c <=> gly_c + h2o_c + mlthf_c 9667\nGHMT2rm ser__L_m + thf_m <=> gly_m + h2o_m + mlthf_m 9667\nGHMT3 3htmelys_c + h_c --> 4tmeabut_c + gly_c 9667\nGHMT3m 3htmelys_m + h_m --> 4tmeabut_m + gly_m 9667\nTHFAT h2o_c + methf_c --> 5fthf_c + h_c 12898 or 9667\nTHRA thr__L_c --> acald_c + gly_c 16182 or 9222 or 9667\nTHRA2 athr__L_c --> acald_c + gly_c 16182 or 9222 or 9667\nyli_R1387 gly_x + h2o_x + mlthf_x <=> ser__L_x + thf_x 9667\n\nGCC2am gly_m + h_m + lpam_m <=> alpam_m + co2_m 10040 and 10205 and 12898 and 15184\nGCC2bim alpam_m + thf_m --> dhlam_m + mlthf_m + nh4_m 10040 and 10205 and 12898 and 15184\nGCC2cm dhlam_m + nad_m <=> h_m + lpam_m + nadh_m 10040 and 10205 and 12898 and 15184\nGCC2cm_copy1 dhlam_m + nad_m <=> h_m + lpam_m + nadh_m 10040 and 10205 and 12898 and 15184\nGCCam gly_m + h_m + lpro_m <=> alpro_m + co2_m 10205 or (10040 and 10205 and 12898 and 15184)\nGCCbim alpro_m + thf_m --> dhlpro_m + mlthf_m + nh4_m 12898 or (10040 and 10205 and 12898 and 15184)\nGCCcm dhlpro_m + nad_m <=> h_m + lpro_m + nadh_m 10040 or (10040 and 10205 and 12898 and 15184)\nGLYCL gly_c + nad_c + thf_c --> co2_c + mlthf_c + nadh_c + nh4_c 10040 and 10205 and 12898 and 15184\nGLYCL_2 co2_c + mlthf_c + nadh_c + nh4_c --> gly_c + nad_c + thf_c 12898\nGLYCLm gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 12898 or (CRv4_Au5_s12_g4121_t1 and 14894) or (12898 and 14894) or (10040 and 10205 and 12898 and 15184)\nMTAM 5fthf_m + 2.0 h_m --> h2o_m + methf_m (CRv4_Au5_s12_g4121_t1 and 14894) or (12898 and 14894)\nMTAM_nh4 alpro_m + h_m + thf_m <=> dhlpro_m + mlthf_m + nh4_m (CRv4_Au5_s12_g4121_t1 and 14894) or (12898 and 14894)\nTHFAT h2o_c + methf_c --> 5fthf_c + h_c 12898 or 9667\nTHFATm h2o_m + methf_m --> 5fthf_m + h_m 12898 or (10205 and 12898 and 13630 and 13948 and 15184)\nyli_R1425 co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x 12898\n"
]
],
[
[
"10040 mito 9 cyto 8 LPD1 \nglycine cleavage complex \n12898 mito GCV1 \n10205 mito GCV2 \n15184 mito GCV3 \nalpha keto-glutarate dehydrogenase \n10007 mito KGD1 \n12116 mito KGD2 \noxoadipate dehydrogenase \n9724 nucl KGD1 probable 2-oxoglutarate dehydrogenase E1 component DHKTD1 \npyruvate dehydrogenase \n13630 mito PDA1 \n13948 cysk PDB1 \n13722 mito PDX1 \n14126 cyto LAT1 \nserine hydroxymethyltransferase \n9667 cyto SHM1, SHM2 (GHMT2r) \nL-threonine aldolase \n16182 cyto GLY1 \n9222 mito GLY1, hydroxytrimethyllysine aldolase \n\n14894 is IBA57 protein involved in incorporating iron-sulfur clusters into proteins\n\nGCC2am, GCC2bim, GCC2cm (GCC2cm_copy1, GCC2cm_copy2) is GLYCLm \nGCCam, GCCbim, GCCcm is GLYCLm\n\nMTAM is wrong (FTHFCLm is correct with ATP) \nReplace FTHFCL with FTHFCLm and change genes to '12562' mito fau1 \nMTAM_nh4 is GCCbim\n\nTHFAT/THFATm reaction is catalyzed by serine hydroxymethyltransferase, not gcv or pda/pdb \nIt is not known that yeast SHM has THFAT activity, and S. cerevisiae biocyc does not have this reaction. \nTHFATm and FTHFCLm make a futile cycle, and the role of 5fthf is not clear.\n\nGHMT2r only cyto SHMT is present in Rhodo \nGHMT3/GHMT3m is part of L-carnitine biosynthesis -> by mito GLY1 (paperblast to hydroxytrimethyllysine aldolase) \nhttps://www.ncbi.nlm.nih.gov/pubmed/19289605 \nS. cer cannot synthesize carnitine, is it important for lipid for its role in shuttle? \nGHMT3/GHMT3m has incorrect stoichiometry (proton), replace with HTMLA_m from iLB1027_lipid\n\nALATA_D2, ALATA_L2 is inactivation of the enzyme in E. coli, activity is low \nhttps://onlinelibrary.wiley.com/doi/full/10.1046/j.0014-2956.2001.02606.x\n\nChange GLYCLm genes to '10040 and 10205 and 12898 and 15184' \nRemove GCC2am, GCC2bim, GCC2cm, GCC2cm_copy1, GCC2cm_copy2, GCCam, GCCbim, GCCcm \nRemove GLYCL, GLYCL_2, GLYDHD, MTAM, MTAM_nh4, THFAT, THFATm, yli_R1377, yli_R1378, yli_R1425\n\nAdd FTHFCLm and change genes to '12562' \nRemove FTHFCL\n\nChange AKGDm genes to '10007 and 10040 and 12116' \nChange 2OXOADOXm genes to '10040 and 12116 and 9274' \nRemove AKGDH, AKGDam, AKGDbm\n\nChange PDHm genes to '10040 and 13630 and 13722 and 13948 and 14126' \nRemove PDH, PDHcr, yli_R0381, yli_R1419\n\nRemove GHMT2rm, GHMT3, GHMT3m, ALATA_D2, ALATA_L2, yli_R1387 \n\nChange THRA genes to '16182' \nChange THRA2 genes to '16182' \nAdd HTMLA_m from iLB1027_lipid, and change genes to '9222'",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('GLYCLm').gene_reaction_rule = '10040 and 10205 and 12898 and 15184'\nmodel.remove_reactions(['GCC2am','GCC2bim','GCC2cm','GCC2cm_copy1','GCC2cm_copy2','GCCam','GCCbim','GCCcm'],remove_orphans=True)\nmodel.remove_reactions(['GLYCL','GLYCL_2','GLYDHD','MTAM','MTAM_nh4','THFAT','THFATm','yli_R1377','yli_R1378','yli_R1425'],remove_orphans=True)\nr = sce.reactions.get_by_id('FTHFCLm').copy()\nr.gene_reaction_rule = '12562'\nmodel.add_reactions([r])\nmodel.remove_reactions(['FTHFCL'],remove_orphans=True)\nmodel.reactions.get_by_id('AKGDm').gene_reaction_rule = '10007 and 10040 and 12116'\nmodel.reactions.get_by_id('2OXOADOXm').gene_reaction_rule = '10040 and 12116 and 9274'\nmodel.remove_reactions(['AKGDH','AKGDam','AKGDbm'],remove_orphans=True)\nmodel.reactions.get_by_id('PDHm').gene_reaction_rule = '10040 and 13630 and 13722 and 13948 and 14126'\nmodel.remove_reactions(['PDH','PDHcr','yli_R0381','yli_R1419'],remove_orphans=True)\nmodel.remove_reactions(['GHMT2rm','GHMT3','GHMT3m','ALATA_D2','ALATA_L2','yli_R1387'],remove_orphans=True)\nmodel.reactions.get_by_id('THRA').gene_reaction_rule = '16182'\nmodel.reactions.get_by_id('THRA2').gene_reaction_rule = '16182'\nr = ptri.reactions.get_by_id('HTMLA_m').copy()\nr.gene_reaction_rule = '9222'\nmodel.add_reactions([r])",
"_____no_output_____"
],
[
"for r in sorted(model.genes.get_by_id('10040').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('10205').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('12898').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15184').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('12562').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('10007').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('12116').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9274').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('13630').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('13722').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('13948').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('14126').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9667').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('16182').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9222').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"2OXOADOXm 2oxoadp_m + coa_m + nad_m --> co2_m + glutcoa_m + nadh_m 10040 and 12116 and 9274\nAKGDm akg_m + coa_m + nad_m --> co2_m + nadh_m + succoa_m 10007 and 10040 and 12116\nGLYCLm gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 10040 and 10205 and 12898 and 15184\nOIVD1m 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD2m 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD3m 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nPDHm coa_m + nad_m + pyr_m --> accoa_m + co2_m + nadh_m 10040 and 13630 and 13722 and 13948 and 14126\n\nGLYCLm gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 10040 and 10205 and 12898 and 15184\n\nGLYCLm gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 10040 and 10205 and 12898 and 15184\n\nGLYCLm gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 10040 and 10205 and 12898 and 15184\n\nFOMETRi 5fthf_c + h_c --> h2o_c + methf_c 12562\nFTHFCLm 5fthf_m + atp_m --> adp_m + methf_m + pi_m 12562\n\nAKGDa akg_c + h_c + lpam_c <=> co2_c + sdhlam_c 10007 or 9274\nAKGDm akg_m + coa_m + nad_m --> co2_m + nadh_m + succoa_m 10007 and 10040 and 12116\nGLCOASYNT S_gtrdhdlp_c + coa_c --> dhlam_c + glutcoa_c 10007 or 9274\nyli_R0428 2oxoadp_c + coa_c + nad_c --> co2_c + glutcoa_c + nadh_c 10007 or 9274\nyli_R0788 akg_m + h_m + yli_M03933_m --> co2_m + yli_M04007_m 10007 or 9274\nyli_R1575 HC01435_m + yli_M03933_m --> thmpp_m + yli_M04007_m 10007 or 9274\nyli_R1576 akg_m + thmpp_m --> HC01435_m + co2_m 10007 or 9274\n\n2OXOADOXm 2oxoadp_m + coa_m + nad_m --> co2_m + glutcoa_m + nadh_m 10040 and 12116 and 9274\nAKGDHe2r coa_m + h_m + sdhlam_m <=> dhlam_m + succoa_m 12116\nAKGDb coa_c + sdhlam_c <=> dhlam_c + succoa_c 12116\nAKGDm akg_m + coa_m + nad_m --> co2_m + nadh_m + succoa_m 10007 and 10040 and 12116\nOXOADLR 2oxoadp_c + h_c + lpam_c --> S_gtrdhdlp_c + co2_c 12116\nyli_R0784 succoa_m + yli_M03934_m <=> coa_m + yli_M04007_m 12116\nyli_R1532 glutcoa_m + yli_M03934_m <=> S_gtrdhdlp_m + coa_m 12116\n\n2OXOADOXm 2oxoadp_m + coa_m + nad_m --> co2_m + glutcoa_m + nadh_m 10040 and 12116 and 9274\nAKGDa akg_c + h_c + lpam_c <=> co2_c + sdhlam_c 10007 or 9274\nGLCOASYNT S_gtrdhdlp_c + coa_c --> dhlam_c + glutcoa_c 10007 or 9274\nyli_R0428 2oxoadp_c + coa_c + nad_c --> co2_c + glutcoa_c + nadh_c 10007 or 9274\nyli_R0788 akg_m + h_m + yli_M03933_m --> co2_m + yli_M04007_m 10007 or 9274\nyli_R1575 HC01435_m + yli_M03933_m --> thmpp_m + yli_M04007_m 10007 or 9274\nyli_R1576 akg_m + thmpp_m --> HC01435_m + co2_m 10007 or 9274\n\nPDHam1hi h_h + pyr_h + thmpp_h --> 2ahethmpp_h + co2_h (CRv4_Au5_s3_g11028_t1 and 13630) or (15685 and 9800)\nPDHam1mi h_m + pyr_m + thmpp_m --> 2ahethmpp_m + co2_m (13630 and 13948) or (13948 and 15791) or (15685 and 9800)\nPDHam2hi 2ahethmpp_h + lpam_h --> adhlam_h + thmpp_h (CRv4_Au5_s3_g11028_t1 and 13630) or (15685 and 9800)\nPDHam2mi 2ahethmpp_m + lpam_m --> adhlam_m + thmpp_m (13630 and 13948) or (13948 and 15791) or (15685 and 9800)\nPDHm coa_m + nad_m + pyr_m --> accoa_m + co2_m + nadh_m 10040 and 13630 and 13722 and 13948 
and 14126\nyli_R0357 4.0 h_c + pyr_c + thmpp_c --> 2ahethmpp_c + co2_c 15791 or (13630 and 13948)\nyli_R0377 2ahethmpp_m + yli_M03933_m --> 3.0 h_c + thmpp_m + yli_M04008_m 13630 and 13948\nyli_R0848 4.0 h_m + pyr_m + thmpp_m --> 2ahethmpp_m + co2_m (13630 and 13948) or (15685 and 9800)\nyli_R1375 4.0 h_c + pyr_c + thmpp_c --> 2ahethmpp_c + co2_c 13630 and 13948\n\nPDHm coa_m + nad_m + pyr_m --> accoa_m + co2_m + nadh_m 10040 and 13630 and 13722 and 13948 and 14126\n\nPDHam1mi h_m + pyr_m + thmpp_m --> 2ahethmpp_m + co2_m (13630 and 13948) or (13948 and 15791) or (15685 and 9800)\nPDHam2mi 2ahethmpp_m + lpam_m --> adhlam_m + thmpp_m (13630 and 13948) or (13948 and 15791) or (15685 and 9800)\nPDHm coa_m + nad_m + pyr_m --> accoa_m + co2_m + nadh_m 10040 and 13630 and 13722 and 13948 and 14126\nyli_R0357 4.0 h_c + pyr_c + thmpp_c --> 2ahethmpp_c + co2_c 15791 or (13630 and 13948)\nyli_R0377 2ahethmpp_m + yli_M03933_m --> 3.0 h_c + thmpp_m + yli_M04008_m 13630 and 13948\nyli_R0848 4.0 h_m + pyr_m + thmpp_m --> 2ahethmpp_m + co2_m (13630 and 13948) or (15685 and 9800)\nyli_R1375 4.0 h_c + pyr_c + thmpp_c --> 2ahethmpp_c + co2_c 13630 and 13948\n\nPDHe2r adhlam_m + coa_m <=> accoa_m + dhlam_m 14126\nPDHm coa_m + nad_m + pyr_m --> accoa_m + co2_m + nadh_m 10040 and 13630 and 13722 and 13948 and 14126\nyli_R0374 accoa_m + yli_M03934_m <=> coa_m + yli_M04008_m 14126\n\nGHMT2r ser__L_c + thf_c <=> gly_c + h2o_c + mlthf_c 9667\n\n4HTHRA 4hthr_c <=> gcald_c + gly_c 16182 or 9222\nTHRA thr__L_c --> acald_c + gly_c 16182\nTHRA2 athr__L_c --> acald_c + gly_c 16182\nTHRA_1 thr__L_h <=> acald_h + gly_h 16182 or 9222\n\n4HTHRA 4hthr_c <=> gcald_c + gly_c 16182 or 9222\nHTMLA_m 3htmelys_m --> 4tmeabut_m + gly_m 9222\nTHRA_1 thr__L_h <=> acald_h + gly_h 16182 or 9222\n"
],
[
"for r in sorted(model.metabolites.get_by_id('4hthr_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('phthr_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"4HTHRA 4hthr_c <=> gcald_c + gly_c 16182 or 9222\n4HTHRK 4hthr_c + atp_c --> adp_c + h_c + phthr_c 8651\n4HTHRS h2o_c + phthr_c --> 4hthr_c + pi_c 9742\n\n4HTHRK 4hthr_c + atp_c --> adp_c + h_c + phthr_c 8651\n4HTHRS h2o_c + phthr_c --> 4hthr_c + pi_c 9742\n"
],
[
"for r in sorted(model.genes.get_by_id('8651').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9742').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"4HTHRK 4hthr_c + atp_c --> adp_c + h_c + phthr_c 8651\nHSK atp_c + hom__L_c --> adp_c + h_c + phom_c 8651\nHSK_1 atp_h + hom__L_h --> adp_h + h_h + phom_h 8651\n\n4HTHRS h2o_c + phthr_c --> 4hthr_c + pi_c 9742\nTHRS h2o_c + phom_c --> pi_c + thr__L_c 9742\nTHRS_1 h2o_h + phom_h --> pi_h + thr__L_h 9742\n"
]
],
[
[
"Remove FOMETRi, AKGDa, GLCOASYNT, yli_R0428, yli_R0788, yli_R1575, yli_R1576 \nRemove AKGDHe2r, AKGDb, OXOADLR, yli_R0784, yli_R1532 \nRemove PDHam1hi, PDHam1mi, PDHam2hi, PDHam2mi, yli_R0357, yli_R0377, yli_R0848, yli_R1375, PDHe2r, yli_R0374 \nRemove THRA_1, 4HTHRA, 4HTHRK, HSK_1, 4HTHRS, THRS_1 \n4HTHRA is by E. coli itaE, not by glyA (GLY1) and not in S. cer biocyc. Similar for 4HTHRK, 4HTHRS",
"_____no_output_____"
]
],
[
[
"model.remove_reactions(['FOMETRi','AKGDa','GLCOASYNT','yli_R0428','yli_R0788','yli_R1575','yli_R1576'],remove_orphans=True)\nmodel.remove_reactions(['AKGDHe2r','AKGDb','OXOADLR','yli_R0784','yli_R1532'],remove_orphans=True)\nmodel.remove_reactions(['PDHam1hi','PDHam1mi','PDHam2hi','PDHam2mi','yli_R0357','yli_R0377','yli_R0848','yli_R1375','PDHe2r','yli_R0374'],remove_orphans=True)\nmodel.remove_reactions(['THRA_1','4HTHRA','4HTHRK','HSK_1','4HTHRS','THRS_1'],remove_orphans=True)",
"_____no_output_____"
],
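[
"# Optional check (a sketch): remove_reactions(..., remove_orphans=True) should already have\n# dropped unused species, but listing metabolites with no remaining reactions is a cheap way\n# to confirm nothing was left dangling.\norphans = [m.id for m in model.metabolites if len(m.reactions) == 0]\nprint(len(orphans), orphans[:20])",
"_____no_output_____"
],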
[
"# Search for reactions involiving tmlys, 3htmelys \nfor r in sorted(model.genes.get_by_id('14530').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('16853').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9180').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9222').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"LYSMTF1n amet_n + peplys_n --> Nmelys_n + ahcys_n 14530\nLYSMTF2n Nmelys_n + amet_n --> Ndmelys_n + ahcys_n 14530\nLYSMTF3n Ndmelys_n + amet_n --> Ntmelys_n + ahcys_n 14530\n\nPLYSPSer Ntmelys_r + h2o_r --> pepslys_r + tmlys_r (Spcs2 and Spcs3 and 16853 and 9881) or (SEC11A and SPCS2 and 10517 and 16853 and 9881)\n\nTMLYSOX akg_c + o2_c + tmlys_c --> 3htmelys_c + co2_c + succ_c 9180\n\nHTMLA_m 3htmelys_m --> 4tmeabut_m + gly_m 9222\n"
]
],
[
[
"L-carnitine biosynthesis \n14530\tmito 22, nucl 2, cyto 2, cyto_nucl 2\tK11427: DOT1L, DOT1; histone-lysine N-methyltransferase, H3 lysine-79 specific \n9180 mito TMLHE, trimethyllysine dioxygenase -> replace TMLYSOX with TMLOX_m from iLB1027_lipid \n9222 mito GLY1, hydroxytrimethyllysine aldolase \n13426 mito blast of 4-trimethylammoniobutyraldehyde dehydrogenase ALDH9A1 hit -> TMABDH1_m \n15060 cyto 10, nucl 7, pero 7, extr 2 BBOX1, gamma-butyrobetaine dioxygenase -> BBHOX exist in BiGG, but assume mito and use GBBOX_m \nLonger RNA-extended gene model is predicted to be mitochondrial \nRemove upstream reactions (by 14530) since it is not clear where tmlys_c is coming from\n\nTMLOX_m akg_m + o2_m + tmlys_m ⇌ co2_m + succ_m + 3htmelys_m 9180 \nHTMLA_m 3htmelys_m --> 4tmeabut_m + gly_m 9222 \nTMABDH1_m h2o_m + nad_m + 4tmeabut_m ⇌ 2.0 h_m + nadh_m + gbbtn_m 13426 \nGBBOX_m akg_m + o2_m + gbbtn_m → co2_m + crn_m + succ_m 15060\n\nAdd TMLOX_m from iLB1027_lipid and change genes to '9180' \nAdd TMABDH1_m from iLB1027_lipid and change genes to '13426' \nAdd GBBOX_m from iLB1027_lipid and change genes to '15060' \nRemove TMLYSOX, LYSMTF1n, LYSMTF2n, LYSMTF3n, PLYSPSer",
"_____no_output_____"
]
],
[
[
"r = ptri.reactions.get_by_id('TMLOX_m').copy()\nr.gene_reaction_rule = '9180'\nmodel.add_reactions([r])\nr = ptri.reactions.get_by_id('TMABDH1_m').copy()\nr.gene_reaction_rule = '13426'\nmodel.add_reactions([r])\nr = ptri.reactions.get_by_id('GBBOX_m').copy()\nr.gene_reaction_rule = '15060'\nmodel.add_reactions([r])\nmodel.remove_reactions(['TMLYSOX','LYSMTF1n','LYSMTF2n','LYSMTF3n','PLYSPSer'],remove_orphans=True)",
"_____no_output_____"
],
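[
"# Connectivity check (a sketch): after adding the mitochondrial carnitine route, list the\n# reactions around each new intermediate to confirm the chain\n# tmlys_m -> 3htmelys_m -> 4tmeabut_m -> gbbtn_m -> crn_m is linked end to end.\nfor met_id in ['tmlys_m', '3htmelys_m', '4tmeabut_m', 'gbbtn_m', 'crn_m']:\n    if met_id in model.metabolites:\n        for r in sorted(model.metabolites.get_by_id(met_id).reactions, key=lambda x: x.id):\n            print(r.id, r.reaction, r.gene_reaction_rule)\n    else:\n        print(met_id, 'not in model')\n    print()",
"_____no_output_____"
],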
[
"for r in sorted(model.genes.get_by_id('12566').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15436').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('11183').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('12086').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('11188').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"MOD 3mob_m + h_m + thmpp_m --> 2mhop_m + co2_m 12566 and 15436\nMOD_2mbdhl 2mhob_m + lpam_m --> 2mbdhl_m + thmpp_m 12566 and 15436\nMOD_2mhop 2mhop_m + lpam_m --> 2mpdhl_m + thmpp_m 12566 and 15436\nMOD_3mhtpp 3mhtpp_m + lpam_m --> 3mbdhl_m + thmpp_m 12566 and 15436\nMOD_3mop 3mop_m + h_m + thmpp_m --> 2mhob_m + co2_m 12566 and 15436\nMOD_4mop 4mop_m + h_m + thmpp_m --> 3mhtpp_m + co2_m 12566 and 15436\nOIVD1m 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD1r 4mop_c + coa_c + nad_c <=> co2_c + ivcoa_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD2 3mob_c + coa_c + nad_c --> co2_c + ibcoa_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD2m 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD3 3mop_c + coa_c + nad_c --> 2mbcoa_c + co2_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD3m 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nyli_R0937 4mop_c + h_c + yli_M03933_c --> co2_c + yli_M03936_c (12086 and 15436) or (12566 and 15436)\nyli_R0938 3mob_c + h_c + yli_M03933_c --> co2_c + yli_M03938_c (12086 and 15436) or (12566 and 15436)\nyli_R0939 3mop_c + h_c + yli_M03933_c --> co2_c + yli_M03940_c (12086 and 15436) or (12566 and 15436)\nyli_R1587 4mop_c + thmpp_c <=> 3mhtpp_c + co2_c (12086 and 15436) or (12566 and 15436)\nyli_R1590 2mhop_c + yli_M03933_c --> thmpp_c + yli_M03938_c (12086 and 15436) or (12566 and 15436)\nyli_R1591 3mhtpp_c + yli_M03933_c --> thmpp_c + yli_M03936_c (12086 and 15436) or (12566 and 15436)\nyli_R1592 2mhob_c + yli_M03933_c --> thmpp_c + yli_M03940_c (12086 and 15436) or (12566 and 15436)\n\nMOD 3mob_m + h_m + thmpp_m --> 2mhop_m + co2_m 12566 and 15436\nMOD_2mbdhl 2mhob_m + lpam_m --> 2mbdhl_m + thmpp_m 12566 and 15436\nMOD_2mhop 2mhop_m + lpam_m --> 2mpdhl_m + thmpp_m 12566 and 15436\nMOD_3mhtpp 3mhtpp_m + lpam_m --> 3mbdhl_m + thmpp_m 12566 and 15436\nMOD_3mop 3mop_m + h_m + thmpp_m --> 2mhob_m + co2_m 12566 and 15436\nMOD_4mop 4mop_m + h_m + thmpp_m --> 3mhtpp_m + co2_m 12566 and 15436\nOIVD1m 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD1r 4mop_c + coa_c + nad_c <=> co2_c + ivcoa_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD2 3mob_c + coa_c + nad_c --> co2_c + ibcoa_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD2m 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD3 3mop_c + coa_c + nad_c --> 2mbcoa_c + co2_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD3m 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 
and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nyli_R0937 4mop_c + h_c + yli_M03933_c --> co2_c + yli_M03936_c (12086 and 15436) or (12566 and 15436)\nyli_R0938 3mob_c + h_c + yli_M03933_c --> co2_c + yli_M03938_c (12086 and 15436) or (12566 and 15436)\nyli_R0939 3mop_c + h_c + yli_M03933_c --> co2_c + yli_M03940_c (12086 and 15436) or (12566 and 15436)\nyli_R1587 4mop_c + thmpp_c <=> 3mhtpp_c + co2_c (12086 and 15436) or (12566 and 15436)\nyli_R1590 2mhop_c + yli_M03933_c --> thmpp_c + yli_M03938_c (12086 and 15436) or (12566 and 15436)\nyli_R1591 3mhtpp_c + yli_M03933_c --> thmpp_c + yli_M03936_c (12086 and 15436) or (12566 and 15436)\nyli_R1592 2mhob_c + yli_M03933_c --> thmpp_c + yli_M03940_c (12086 and 15436) or (12566 and 15436)\n\nDHRT_2mbcoa 2mbdhl_m + coa_m --> 2mbcoa_m + dhlam_m 11183 or 11188\nDHRT_ibcoa 2mpdhl_m + coa_m --> dhlam_m + ibcoa_m 11183 or 11188\nDHRT_ivcoa 3mbdhl_m + coa_m --> dhlam_m + ivcoa_m 11183 or 11188\nOIVD1m 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD1r 4mop_c + coa_c + nad_c <=> co2_c + ivcoa_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD2 3mob_c + coa_c + nad_c --> co2_c + ibcoa_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD2m 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD3 3mop_c + coa_c + nad_c --> 2mbcoa_c + co2_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD3m 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nyli_R0878 coa_c + yli_M03938_c --> ibcoa_c + yli_M03934_c 11183 or 11188\nyli_R0879 coa_c + yli_M03940_c --> yli_M03934_c + yli_M03941_c 11183 or 11188\nyli_R0880 coa_c + yli_M03936_c --> ivcoa_c + yli_M03934_c 11183 or 11188\n\nOIVD1m 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD2m 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD3m 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nyli_R0937 4mop_c + h_c + yli_M03933_c --> co2_c + yli_M03936_c (12086 and 15436) or (12566 and 15436)\nyli_R0938 3mob_c + h_c + yli_M03933_c --> co2_c + yli_M03938_c (12086 and 15436) or (12566 and 15436)\nyli_R0939 3mop_c + h_c + yli_M03933_c --> co2_c + yli_M03940_c (12086 and 15436) or (12566 and 15436)\nyli_R1587 4mop_c + thmpp_c <=> 3mhtpp_c + co2_c (12086 and 15436) or (12566 and 15436)\nyli_R1590 2mhop_c + yli_M03933_c --> thmpp_c + yli_M03938_c (12086 and 15436) or (12566 and 15436)\nyli_R1591 3mhtpp_c + 
yli_M03933_c --> thmpp_c + yli_M03936_c (12086 and 15436) or (12566 and 15436)\nyli_R1592 2mhob_c + yli_M03933_c --> thmpp_c + yli_M03940_c (12086 and 15436) or (12566 and 15436)\n\nDHRT_2mbcoa 2mbdhl_m + coa_m --> 2mbcoa_m + dhlam_m 11183 or 11188\nDHRT_ibcoa 2mpdhl_m + coa_m --> dhlam_m + ibcoa_m 11183 or 11188\nDHRT_ivcoa 3mbdhl_m + coa_m --> dhlam_m + ivcoa_m 11183 or 11188\nOIVD1m 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD1r 4mop_c + coa_c + nad_c <=> co2_c + ivcoa_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD2 3mob_c + coa_c + nad_c --> co2_c + ibcoa_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD2m 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nOIVD3 3mop_c + coa_c + nad_c --> 2mbcoa_c + co2_c + nadh_c (PP_4404 and 11183 and 12566 and 15436) or (PP_4404 and 11188 and 12566 and 15436)\nOIVD3m 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nyli_R0878 coa_c + yli_M03938_c --> ibcoa_c + yli_M03934_c 11183 or 11188\nyli_R0879 coa_c + yli_M03940_c --> yli_M03934_c + yli_M03941_c 11183 or 11188\nyli_R0880 coa_c + yli_M03936_c --> ivcoa_c + yli_M03934_c 11183 or 11188\n"
]
],
[
[
"branched-chain alpha-keto acid dehydrogenase (missing in S. cer) \n12566 mito bkdA1 Branched chain alpha-keto acid dehydrogenase E1, alpha subunit \n15436 mito bkdA2 Branched chain alpha-keto acid dehydrogenase E1, beta subunit \n11183 mito bkdB Dihydrolipoamide transacylase (alpha-keto acid dehydrogenase E2 subunit) \nextra cyto subunits? \n12086 cyto ortholog of E1, Short-chain acyl-CoA dehydrogenase (butyryl) has cytochrome b5-like domain -> fatty acid oxidation \n11188 cyto Dihydrolipoamide transacylase (alpha-keto acid dehydrogenase E2 subunit) -> keep as isozyme for now\n\nKeep mito OIVDs (OIVD1m, OIVD2m, OIVD3m), and change genes to '(10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12566 and 15436)' \n\nBCAA additional reactions, e.g., OBDHm in S. cer is missing genes \nAdd OBDHm from iMM904 and change genes to OIVD1m genes\n\nRemove cyto OIVDs (OIVD1r, OIVD2, OIVD3), MOD, MOD_2mbdhl, MOD_2mhop, MOD_3mhtpp, MOD_3mop, MOD_4mop \nRemove yli_R0937, yli_R0938, yli_R0939, yli_R1587, yli_R1590, yli_R1591, yli_R1592 \nRemove DHRT_2mbcoa, DHRT_ibcoa, DHRT_ivcoa, yli_R0878, yli_R0879, yli_R0880",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('OIVD1m').gene_reaction_rule = '(10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12566 and 15436)'\nmodel.reactions.get_by_id('OIVD2m').gene_reaction_rule = '(10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12566 and 15436)'\nmodel.reactions.get_by_id('OIVD3m').gene_reaction_rule = '(10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12566 and 15436)'\nr = sce.reactions.get_by_id('OBDHm').copy()\nr.gene_reaction_rule = '(10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12566 and 15436)'\nmodel.add_reactions([r])\nmodel.remove_reactions(['OIVD1r','OIVD2','OIVD3','MOD','MOD_2mbdhl','MOD_2mhop','MOD_3mhtpp','MOD_3mop','MOD_4mop'],remove_orphans=True)\nmodel.remove_reactions(['yli_R0937','yli_R0938','yli_R0939','yli_R1587','yli_R1590','yli_R1591','yli_R1592'],remove_orphans=True)\nmodel.remove_reactions(['DHRT_2mbcoa','DHRT_ibcoa','DHRT_ivcoa','yli_R0878','yli_R0879','yli_R0880'],remove_orphans=True)",
"_____no_output_____"
],
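[
"# Illustrative sanity check (not part of the original curation run; added as a sketch).\n# Confirm the mitochondrial BCKDH reactions carry the updated GPRs and that the removed\n# cytosolic OIVD/MOD/DHRT reactions are gone.\nfor rid in ['OIVD1m', 'OIVD2m', 'OIVD3m', 'OBDHm']:\n    print(rid, model.reactions.get_by_id(rid).gene_reaction_rule)\nremoved = ['OIVD1r', 'OIVD2', 'OIVD3', 'MOD', 'DHRT_2mbcoa', 'yli_R0937']\nprint([rid for rid in removed if rid in [r.id for r in model.reactions]])  # expect []",
"_____no_output_____"
],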
[
"for r in sorted(model.genes.get_by_id('15685').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9800').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"ACAS_2ahbut 2ahethmpp_h + 2obut_h --> 2ahbut_h + thmpp_h 15685 and 9800\nACHBS 2obut_c + h_c + pyr_c --> 2ahbut_c + co2_c (b3670 and 15685) or (15685 and 9800)\nACHBSm 2obut_m + h_m + pyr_m --> 2ahbut_m + co2_m 15685 and 9800\nACLS h_c + 2.0 pyr_c --> alac__S_c + co2_c (b3670 and 15685) or (15685 and 9800)\nACLSm h_m + 2.0 pyr_m --> alac__S_m + co2_m 15685 and 9800\nAPLh 2ahethmpp_h + pyr_h --> alac__S_h + thmpp_h 15685 and 9800\nAPLm 2ahethmpp_m + pyr_m --> alac__S_m + thmpp_m 15685 and 9800\nPPATDh h_h + 2.0 pyr_h --> alac__S_h + co2_h 15685 and 9800\nyli_R0861 2ahethmpp_m + pyr_m --> alac__S_m + 3.0 h_m + thmpp_m 15685 and 9800\nyli_R0862 2ahethmpp_m + 2obut_m --> 2ahbut_m + 3.0 h_m + thmpp_m 15685 and 9800\nyli_R1588 2obut_m + pyr_m --> 2ahbut_m + co2_m 15685 and 9800\n\nACAS_2ahbut 2ahethmpp_h + 2obut_h --> 2ahbut_h + thmpp_h 15685 and 9800\nACHBS 2obut_c + h_c + pyr_c --> 2ahbut_c + co2_c (b3670 and 15685) or (15685 and 9800)\nACHBSm 2obut_m + h_m + pyr_m --> 2ahbut_m + co2_m 15685 and 9800\nACLS h_c + 2.0 pyr_c --> alac__S_c + co2_c (b3670 and 15685) or (15685 and 9800)\nACLSm h_m + 2.0 pyr_m --> alac__S_m + co2_m 15685 and 9800\nAPLh 2ahethmpp_h + pyr_h --> alac__S_h + thmpp_h 15685 and 9800\nAPLm 2ahethmpp_m + pyr_m --> alac__S_m + thmpp_m 15685 and 9800\nPPATDh h_h + 2.0 pyr_h --> alac__S_h + co2_h 15685 and 9800\nyli_R0861 2ahethmpp_m + pyr_m --> alac__S_m + 3.0 h_m + thmpp_m 15685 and 9800\nyli_R0862 2ahethmpp_m + 2obut_m --> 2ahbut_m + 3.0 h_m + thmpp_m 15685 and 9800\nyli_R1588 2obut_m + pyr_m --> 2ahbut_m + co2_m 15685 and 9800\n"
]
],
[
[
"15685 mito ILV2 \n9800 mito ILV6 \nILV2 and ILV6 catalyze ACHBSm and ACLSm \nRemove ACAS_2ahbut, ACHBS, ACLS, APLh, APLm, PPATDh, yli_R0861, yli_R0862, yli_R1588",
"_____no_output_____"
]
],
[
[
"model.remove_reactions(['ACAS_2ahbut','ACHBS','ACLS','APLh','APLm','PPATDh','yli_R0861','yli_R0862','yli_R1588'],remove_orphans=True)",
"_____no_output_____"
],
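[
"# Quick re-check (illustrative sketch, not in the original notebook): after the removals,\n# ILV2 (15685) / ILV6 (9800) should remain only on the mitochondrial ACLSm and ACHBSm reactions.\nfor r in sorted(model.genes.get_by_id('15685').reactions, key=lambda x: x.id):\n    print(r.id, r.reaction, r.gene_reaction_rule)",
"_____no_output_____"
],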
[
"for r in sorted(model.metabolites.get_by_id('2ahethmpp_m').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('2ahethmpp_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"PDCm 2ahethmpp_m --> acald_m + thmpp_m 15791\n\nyli_R0364 2ahethmpp_c --> acald_c + 3.0 h_c + thmpp_c 15791\n"
],
[
"for r in sorted(model.genes.get_by_id('15791').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"3MOBDC 3mob_c + h_c --> 2mppal_c + co2_c 15791\n3MOPDC 3mop_c + h_c --> 2mbald_c + co2_c 15791\n4MOPDC 4mop_c + h_c --> 3mbald_c + co2_c 15791\nACALDCD 2.0 acald_c --> actn__R_c 15791\nINDPYRD h_c + indpyr_c <=> co2_c + id3acald_c 15791\nPDCm 2ahethmpp_m --> acald_m + thmpp_m 15791\nPPYRDC h_c + phpyr_c --> co2_c + pacald_c 15791\nPYRDC h_c + pyr_c --> acald_c + co2_c 15791\nPYRDC2 acald_c + h_c + pyr_c --> actn__R_c + co2_c 15791\nPYRDC_1 h_m + pyr_m --> acald_m + co2_m 15791\nyli_R0364 2ahethmpp_c --> acald_c + 3.0 h_c + thmpp_c 15791\n"
]
],
[
[
"15791 cyto PDC \nRemove PDCm, PYRDC_1, yli_R0364",
"_____no_output_____"
]
],
[
[
"model.remove_reactions(['PDCm','PYRDC_1','yli_R0364'],remove_orphans=True)",
"_____no_output_____"
]
],
[
[
"#### 2ippm",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('14914').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"IPPMIa 3c2hmp_c <=> 2ippm_c + h2o_c 14914 or (CRv4_Au5_s6_g12448_t1 and 14914) or (PP_1986 and 14914) or (b0071 and 14914)\nIPPMIb 2ippm_c + h2o_c <=> 3c3hmp_c 14914 or (CRv4_Au5_s6_g12448_t1 and 14914) or (PP_1986 and 14914) or (b0071 and 14914)\nyli_R1465 3c3hmp_m <=> 2ippm_m + h2o_m 14914\nyli_R1466 2ippm_m + h2o_m <=> 3c2hmp_m 14914\nyli_R7859 2ippm_m + h2o_m <=> 3c2hmp_m 14914\nyli_R8859 3c3hmp_m <=> 2ippm_m + h2o_m 14914\n"
],
[
"# 14914 is cytosolic, and contains both large (leuC) and small (leuD) subunits\nmodel.reactions.get_by_id('IPPMIa').gene_reaction_rule = '14914'\nmodel.reactions.get_by_id('IPPMIb').gene_reaction_rule = '14914'\nmodel.remove_reactions(['yli_R1465','yli_R1466','yli_R7859','yli_R8859'], remove_orphans=True)",
"_____no_output_____"
]
],
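[
[
"# Verification sketch (illustrative; not part of the original run): the cytosolic\n# isopropylmalate isomerase reactions should now be assigned to 14914 alone, and the\n# duplicated mitochondrial yli_R reactions should be gone.\nfor rid in ['IPPMIa', 'IPPMIb']:\n    print(rid, model.reactions.get_by_id(rid).gene_reaction_rule)\ngone = ['yli_R1465', 'yli_R1466', 'yli_R7859', 'yli_R8859']\nprint([rid for rid in gone if rid in [r.id for r in model.reactions]])  # expect []",
"_____no_output_____"
]
],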
[
[
"#### cer1_24",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('9664').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15314').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"CERH124_copy2 cer1_24_c + h_c + nadph_c + o2_c --> cer2_24_c + h2o_c + nadp_c 9664\nCERH126_copy2 cer1_26_c + h_c + nadph_c + o2_c --> cer2_26_c + h2o_c + nadp_c 9664\nCERS324 cer2_24_c + h_c + nadph_c + o2_c --> cer3_24_c + h2o_c + nadp_c 9664\nCERS326 cer2_26_c + h_c + nadph_c + o2_c --> cer3_26_c + h2o_c + nadp_c 9664\n\n44MZYMMO 44mzym_c + 2.0 h_c + 3.0 nadph_c + 3.0 o2_c <=> 4mzym_int1_c + 4.0 h2o_c + 3.0 nadp_c 15314\nCERH124_copy1 cer1_24_c + h_c + nadph_c + o2_c --> cer2_24_c + h2o_c + nadp_c 15314\nCERH126_copy1 cer1_26_c + h_c + nadph_c + o2_c --> cer2_26_c + h2o_c + nadp_c 15314\nPSPHS h_c + nadph_c + o2_c + sphgn_c --> h2o_c + nadp_c + psphings_c 15314\nyli_R0702 h_c + nadph_c + o2_c + yli_M05742_c --> h2o_c + nadp_c + yli_M05749_c 15314\nyli_R0703 h_c + nadph_c + o2_c + yli_M05741_c --> h2o_c + nadp_c + yli_M05748_c 15314\nyli_R0704 h_c + nadph_c + o2_c + yli_M05749_c --> h2o_c + nadp_c + yli_M05725_c 15314\nyli_R0705 h_c + nadph_c + o2_c + yli_M05748_c --> h2o_c + nadp_c + yli_M05724_c 15314\nyli_R1441 h_r + nadph_r + o2_r + sphgn_r --> h2o_r + nadp_r + psphings_r 15314\nyli_R1571 dhcrm_cho_r + h_r + nadph_r + o2_r --> h2o_r + nadp_r + phcrm_hs_r 15314\n"
]
],
[
[
"Sphingolipid biosynthesis \nhttps://onlinelibrary.wiley.com/doi/pdf/10.1111/tra.12239 \nhttps://www.sciencedirect.com/science/article/pii/S0022283616303746 \nhttps://link.springer.com/content/pdf/10.1007%2F978-3-319-43676-0_21-1.pdf \nhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC3683901/ \nhttps://pdfs.semanticscholar.org/e239/05c20c98c108bd8a7613dde68d283e7cba22.pdf \nSERPT \n10303 cyto LCB1 (SPOTS complex in ER membrane) serine palmitoyltransferase \n9425 cyto LCB2 (SPOTS complex in ER membrane) serine palmitoyltransferase \n9394 mito Small subunit of serine palmitoyltransferase-like -> TSC3 \n3DSPHR \n9979 extr TSC10 (lipid droplet) \nPSPHS \n15314 mito 9, plas 5, E.R. 4 SUR2 (uses cytochrome b5) \nCERS124/CERS124er \n11391 plas LAC1, LAG1 (ER) ceramide synthase \n15168 plas ceramide synthase \ncan't find LIP1 \nSBPP1/SBPP1er \n13412 plas LCB3, YSR3 (ER) check for isozyme \n16352 DAG/PA phosphatase LPP1? \n12927 AUR1 Phosphatidylinositol:ceramide phosphoinositol transferase \n13172 IPT1 Phosphatidylinositol:ceramide phosphoinositol transferase \n13087 DPP1 DAG/PA phosphatase \n15202 PA phosphatase 3 LPP3 in Arabidopsis \n14049 dolichyldiphosphatase \nSLCBK1/SLCBK2 \n10391 cyto LCB4 (plasma, ER, golgi), LCB5 (golgi) Sphingosine kinase \n16228 cyto Sphingosine kinase \nCERH124_copy2/CERS324 \n9664 mito 9 pero 7 cyto_mito 7 mito_nucl 7 SCS7 (membrane, ER) \nPSPHPL \n11925 DPL1 (ER) sphinganine-1-phosphate aldolase\n\n10303\tcyto 11, cyto_nucl 9.5, mito 7, nucl 6\tK00654: SPT; serine palmitoyltransferase\tGKK* \n9425\tcyto 11.5, mito 10, cyto_nucl 9, nucl 5.5\tK00654: SPT; serine palmitoyltransferase\tEHA* \n9394\tmito 12, extr 7, mito_nucl 7\tHMMPfam:Protein of unknown function (DUF3317):PF11779\tELL* \n9979\textr 16, nucl 3, mito 3, mito_nucl 3\tK04708: E1.1.1.102; 3-dehydrosphinganine reductase\tVVG* \n15314\tmito 9, plas 5, E.R. 4, extr 3, golg 3, cyto 2\tK04713: SUR2; sphinganine C4-monooxygenase\tKTE* \n11391\tplas 22, mito 2, cyto 1, pero 1, E.R. 1, mito_nucl 1, cyto_pero 1\tK04709: LAG1; Acyl-CoA-dependent ceramide synthase\tKKR* \n15168\tplas 20, mito 3, vacu 2\tK04709: LAG1; Acyl-CoA-dependent ceramide synthase\tKER* \n13412\tplas 26\tK04717: SGPP2; sphingosine-1-phosphate phosphotase 2\tSVR* \n16352\tplas 14, extr 6, E.R. 3, mito 2, vacu 2\tKOG3030: Lipid phosphate phosphatase and related enzymes of the PAP2 family\tAVV* \n12927\tplas 22, mito 2, vacu 2\tHMMPfam:PAP2 superfamily:PF01569,SMART:Acid phosphatase homologues:SM00014,SUPERFAMILY::SSF48317\tRRD* \n13172\textr 10, mito 7, plas 4, cyto_mito 4\tHMMPfam:PAP2 superfamily:PF01569\tSLA* \n13087\tplas 23, mito 2\tK18693: DPP1; diacylglycerol diphosphate phosphatase / phosphatidate phosphatase\tGYY* \n15202\tplas 27\tKOG3030: Lipid phosphate phosphatase and related enzymes of the PAP2 family\tRMV* \n14049\tmito 13, extr 12\tKOG3146: Dolichyl pyrophosphate phosphatase and related acid phosphatases\tGEL* \n10391\tcyto 13.5, cyto_nucl 12, nucl 5.5, pero 4, mito 3\tK04718: SPHK; sphingosine kinase\tGFD* \n16228\tcyto 12, nucl 8, mito 5\tHMMPfam:Diacylglycerol kinase catalytic domain:PF00781,ProSiteProfiles:DAG-kinase catalytic (DAGKc) domain profile.:PS50146,SUPERFAMILY::SSF111331\tEKE* \n9664\tmito 9, pero 7, cyto_mito 7, mito_nucl 7\tK19703: FA2H, SCS7; 4-hydroxysphinganine ceramide fatty acyl 2-hydroxylase\tAKA* \n11925\tcyto 17.5, cyto_nucl 9.5, mito 9\tK01634: SGPL1, DPL1; sphinganine-1-phosphate aldolase\tLYA*",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('10303').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9425').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9979').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15314').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('11391').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15168').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('13412').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('10391').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9664').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('11925').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"GLYATx accoa_x + gly_x <=> 2aobut_x + coa_x + h_x 10303\nSERPT h_c + pmtcoa_c + ser__L_c --> 3dsphgn_c + co2_c + coa_c 10303 or 9425 or (10303 and 9425) or (YBR058C_A and 10303 and 9425)\nyli_R1438 h_r + pmtcoa_r + ser__L_r --> 3dsphgn_r + co2_r + coa_r 10303 or 9425\n\nSERPT h_c + pmtcoa_c + ser__L_c --> 3dsphgn_c + co2_c + coa_c 10303 or 9425 or (10303 and 9425) or (YBR058C_A and 10303 and 9425)\nyli_R1438 h_r + pmtcoa_r + ser__L_r --> 3dsphgn_r + co2_r + coa_r 10303 or 9425\n\n3DSPHR 3dsphgn_c + h_c + nadph_c --> nadp_c + sphgn_c 9979\nyli_R1440 3dsphgn_r + h_r + nadph_r --> nadp_r + sphgn_r 9979\n\n44MZYMMO 44mzym_c + 2.0 h_c + 3.0 nadph_c + 3.0 o2_c <=> 4mzym_int1_c + 4.0 h2o_c + 3.0 nadp_c 15314\nCERH124_copy1 cer1_24_c + h_c + nadph_c + o2_c --> cer2_24_c + h2o_c + nadp_c 15314\nCERH126_copy1 cer1_26_c + h_c + nadph_c + o2_c --> cer2_26_c + h2o_c + nadp_c 15314\nPSPHS h_c + nadph_c + o2_c + sphgn_c --> h2o_c + nadp_c + psphings_c 15314\nyli_R0702 h_c + nadph_c + o2_c + yli_M05742_c --> h2o_c + nadp_c + yli_M05749_c 15314\nyli_R0703 h_c + nadph_c + o2_c + yli_M05741_c --> h2o_c + nadp_c + yli_M05748_c 15314\nyli_R0704 h_c + nadph_c + o2_c + yli_M05749_c --> h2o_c + nadp_c + yli_M05725_c 15314\nyli_R0705 h_c + nadph_c + o2_c + yli_M05748_c --> h2o_c + nadp_c + yli_M05724_c 15314\nyli_R1441 h_r + nadph_r + o2_r + sphgn_r --> h2o_r + nadp_r + psphings_r 15314\nyli_R1571 dhcrm_cho_r + h_r + nadph_r + o2_r --> h2o_r + nadp_r + phcrm_hs_r 15314\n\nCERS124 sphgn_c + ttccoa_c --> cer1_24_c + coa_c + h_c 11391\nCERS124er sphgn_r + ttccoa_r --> cer1_24_r + coa_r + h_r 11391\nCERS126 hexccoa_c + sphgn_c --> cer1_26_c + coa_c + h_c 11391\nCERS126er hexccoa_r + sphgn_r --> cer1_26_r + coa_r + h_r 11391\nCERS224 psphings_c + ttccoa_c --> cer2_24_c + coa_c + h_c 11391\nCERS224er psphings_r + ttccoa_r --> cer2_24_r + coa_r + h_r 11391\nCERS226 hexccoa_c + psphings_c --> cer2_26_c + coa_c + h_c 11391\nCERS226er hexccoa_r + psphings_r --> cer2_26_r + coa_r + h_r 11391\nyli_R0698 sphgn_c + yli_M04597_c --> coa_c + h_c + yli_M05742_c 11391 or 15168\nyli_R0699 sphgn_c + yli_M04095_c --> coa_c + h_c + yli_M05741_c 11391 or 15168\nyli_R0700 psphings_c + yli_M04095_c --> coa_c + h_c + yli_M05748_c 11391 or 15168\nyli_R0701 psphings_c + yli_M04597_c --> coa_c + h_c + yli_M05749_c 11391 or 15168\nyli_R1570 acoa_r + sphgn_r --> coa_r + dhcrm_cho_r 11391 or 15168\n\nyli_R0698 sphgn_c + yli_M04597_c --> coa_c + h_c + yli_M05742_c 11391 or 15168\nyli_R0699 sphgn_c + yli_M04095_c --> coa_c + h_c + yli_M05741_c 11391 or 15168\nyli_R0700 psphings_c + yli_M04095_c --> coa_c + h_c + yli_M05748_c 11391 or 15168\nyli_R0701 psphings_c + yli_M04597_c --> coa_c + h_c + yli_M05749_c 11391 or 15168\nyli_R1570 acoa_r + sphgn_r --> coa_r + dhcrm_cho_r 11391 or 15168\n\nSBPP1 h2o_c + sph1p_c --> pi_c + sphgn_c 13412\nSBPP1er h2o_r + sph1p_r --> pi_r + sphgn_r 13412\nSBPP2er h2o_r + psph1p_r --> pi_r + psphings_r 13412\n\nSLCBK1 atp_c + sphgn_c --> adp_c + h_c + sph1p_c 10391\nSLCBK2 atp_c + psphings_c --> adp_c + h_c + psph1p_c 10391\nSPHK21c atp_c + sphings_c --> adp_c + h_c + sphs1p_c 10391\nyli_R1439 atp_r + sphgn_r --> adp_r + h_r + sph1p_r 10391\nyli_R1569 atp_r + yli_M07050_r --> adp_r + sphs1p_r 10391\n\nCERH124_copy2 cer1_24_c + h_c + nadph_c + o2_c --> cer2_24_c + h2o_c + nadp_c 9664\nCERH126_copy2 cer1_26_c + h_c + nadph_c + o2_c --> cer2_26_c + h2o_c + nadp_c 9664\nCERS324 cer2_24_c + h_c + nadph_c + o2_c --> cer3_24_c + h2o_c + nadp_c 9664\nCERS326 cer2_26_c + h_c + nadph_c + o2_c --> cer3_26_c + h2o_c + nadp_c 
9664\n\nPSPHPL psph1p_c --> 2hhxdal_c + ethamp_c 11925\nSGPL11r sph1p_r --> ethamp_r + hxdcal_r 11925\nSGPL12r h2o_r + sphs1p_r --> ethamp_r + h_r + hdca_r 11925\nSGPL13 sphs1p_c --> ethamp_c + hxdceal_c 11925\nSPHPL sph1p_c --> ethamp_c + hxdcal_c 11925\nyli_R1567 sphs1p_r --> ethamp_r + hxdceal_r 11925\nyli_R1568 sph1p_r --> ethamp_r + yli_M07049_r 11925\n"
]
],
[
[
"Change yli_R1438 to SERPTer \nChange SERPTer genes to '10303 and 9394 and 9425' \nRemove GLYATx, yli_R1438 \nChange yli_R1440 to 3DSPHRer \nRemove 3DSPHR \nChange CERH124_copy1 to CERH124er, and change metabolites from c to r \nChange CERH126_copy1 to CERH126er, and change metabolites from c to r \nChange yli_R1441 to PSPHSer \nRemove PSPHS, yli_R0702, yli_R0703, yli_R0704, yli_R0705, yli_R1571 \nChange CERS124er genes to '11391 or 15168' \nChange CERS126er genes to '11391 or 15168' \nChange CERS224er genes to '11391 or 15168' \nChange CERS226er genes to '11391 or 15168' \nRemove CERS124, CERS126, CERS224, CERS226, yli_R0698, yli_R0699, yli_R0700, yli_R0701, yli_R1570 \nRemove SBPP1 \nChange SLCBK1 to SLCBK1er, and change genes to '10391 or 16228' \nChange SLCBK2 to SLCBK2er, and change genes to '10391 or 16228' \nRemove SPHK21c, SGOR, yli_R1439, yli_R1569 \nChange CERH124_copy2 to CERS2p24, change metabolites from c to r, and change cer2_24 to cer2p_24 \nChange CERH126_copy2 to CERS2p26, change metabolites from c to r, and change cer2_26 to cer2p_26 \nChange CERS324 to CERS324er, and change metabolites from c to r \nChange CERS326 to CERS326er, and change metabolites from c to r \nChange PSPHPL to PSPHPLer, and change metabolites from c to r \nChange SPHPL to SPHPLer, and change metabolites from c to r \nRemove SGPL11r, SGPL12r, SGPL13, yli_R1567, yli_R1568",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('yli_R1438').id = 'SERPTer'\nmodel.reactions.get_by_id('SERPTer').gene_reaction_rule = '10303 and 9394 and 9425'\nmodel.remove_reactions(['GLYATx','SERPT'], remove_orphans=True)\nmodel.reactions.get_by_id('yli_R1440').id = '3DSPHRer'\nmodel.remove_reactions(['3DSPHR'], remove_orphans=True)\nr = model.reactions.get_by_id('CERH124_copy1').copy()\nmodel.reactions.get_by_id('CERH124_copy1').id = 'CERH124er'\nfor m in r.metabolites:\n model.reactions.get_by_id('CERH124er').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nr = model.reactions.get_by_id('CERH126_copy1').copy()\nmodel.reactions.get_by_id('CERH126_copy1').id = 'CERH126er'\nfor m in r.metabolites:\n model.reactions.get_by_id('CERH126er').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nmodel.reactions.get_by_id('yli_R1441').id = 'PSPHSer'\nmodel.remove_reactions(['PSPHS','yli_R0702','yli_R0703','yli_R0704','yli_R0705','yli_R1571'], remove_orphans=True)\nmodel.reactions.get_by_id('CERS124er').gene_reaction_rule = '11391 or 15168'\nmodel.reactions.get_by_id('CERS126er').gene_reaction_rule = '11391 or 15168'\nmodel.reactions.get_by_id('CERS224er').gene_reaction_rule = '11391 or 15168'\nmodel.reactions.get_by_id('CERS226er').gene_reaction_rule = '11391 or 15168'\nmodel.remove_reactions(['CERS124','CERS126','CERS224','CERS226','yli_R0698','yli_R0699','yli_R0700','yli_R0701','yli_R1570','SBPP1'], remove_orphans=True)\nr = model.reactions.get_by_id('SLCBK1').copy()\nmodel.reactions.get_by_id('SLCBK1').id = 'SLCBK1er'\nmodel.reactions.get_by_id('SLCBK1er').gene_reaction_rule = '10391 or 16228'\nfor m in r.metabolites:\n model.reactions.get_by_id('SLCBK1er').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nr = model.reactions.get_by_id('SLCBK2').copy()\nmodel.reactions.get_by_id('SLCBK2').id = 'SLCBK2er'\nmodel.reactions.get_by_id('SLCBK2er').gene_reaction_rule = '10391 or 16228'\nfor m in r.metabolites:\n model.reactions.get_by_id('SLCBK2er').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nmodel.remove_reactions(['SPHK21c','SGOR','yli_R1439','yli_R1569'], remove_orphans=True)\nm1 = model.metabolites.get_by_id('cer2_24_c').copy()\nm2 = model.metabolites.get_by_id('cer2_26_c').copy()\nm1.id = 'cer2p_24_r'\nm1.compartment = 'r'\nm2.id = 'cer2p_26_r'\nm2.compartment = 'r'\nm1.name = 'Ceramide 2p (Sphinganine:n-C24:0OH)'\nm2.name = 'Ceramide 2p (Sphinganine:n-C26:0OH)'\nmodel.add_metabolites([m1,m2])\nr = model.reactions.get_by_id('CERH124_copy2').copy()\nmodel.reactions.get_by_id('CERH124_copy2').id = 'CERS2p24er'\nmodel.reactions.get_by_id('CERS2p24er').name = 'Ceramide 2p synthase 24C'\nfor m in r.metabolites:\n model.reactions.get_by_id('CERS2p24er').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nmodel.reactions.get_by_id('CERS2p24er').add_metabolites({'cer2_24_r': -1.0, 'cer2p_24_r': 1.0})\nr = model.reactions.get_by_id('CERH126_copy2').copy()\nmodel.reactions.get_by_id('CERH126_copy2').id = 'CERS2p26er'\nmodel.reactions.get_by_id('CERS2p26er').name = 'Ceramide 2p synthase 26C'\nfor m in r.metabolites:\n model.reactions.get_by_id('CERS2p26er').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nmodel.reactions.get_by_id('CERS2p26er').add_metabolites({'cer2_26_r': -1.0, 'cer2p_26_r': 1.0})\nm1 = 
model.metabolites.get_by_id('cer3_24_c').copy()\nm2 = model.metabolites.get_by_id('cer3_26_c').copy()\nm1.id = 'cer3_24_r'\nm1.compartment = 'r'\nm2.id = 'cer3_26_r'\nm2.compartment = 'r'\nmodel.add_metabolites([m1,m2])\nr = model.reactions.get_by_id('CERS324').copy()\nmodel.reactions.get_by_id('CERS324').id = 'CERS324er'\nfor m in r.metabolites:\n model.reactions.get_by_id('CERS324er').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\n\nr = model.reactions.get_by_id('CERS326').copy()\nmodel.reactions.get_by_id('CERS326').id = 'CERS326er'\nfor m in r.metabolites:\n model.reactions.get_by_id('CERS326er').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nm1 = model.metabolites.get_by_id('2hhxdal_c').copy()\nm2 = model.metabolites.get_by_id('hxdcal_c').copy()\nm1.id = '2hhxdal_r'\nm1.compartment = 'r'\nmodel.add_metabolites([m1])\nr = model.reactions.get_by_id('PSPHPL').copy()\nmodel.reactions.get_by_id('PSPHPL').id = 'PSPHPLer'\nfor m in r.metabolites:\n model.reactions.get_by_id('PSPHPLer').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nr = model.reactions.get_by_id('SPHPL').copy()\nmodel.reactions.get_by_id('SPHPL').id = 'SPHPLer'\nfor m in r.metabolites:\n model.reactions.get_by_id('SPHPLer').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_r'): r.get_coefficient(m.id)})\nmodel.remove_reactions(['SGPL11r','SGPL12r','SGPL13','yli_R1567','yli_R1568'], remove_orphans=True)",
"_____no_output_____"
]
],
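[
[
"# Verification sketch (illustrative; not part of the original run): spot-check a few of\n# the renamed ER sphingolipid reactions to confirm the c -> r compartment swap and the\n# new cer2p/cer3 ER metabolites landed on the expected reactions.\nfor rid in ['SERPTer', 'CERH124er', 'CERS2p24er', 'CERS324er', 'SLCBK1er', 'PSPHPLer']:\n    r = model.reactions.get_by_id(rid)\n    print(rid, r.reaction, r.gene_reaction_rule)",
"_____no_output_____"
]
],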
[
[
"#### FMNAT",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('11542').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9298').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"AFAT atp_c + fmn_c + 2.0 h_c --> fad_c + ppi_c 11542 or 9298\nFMNAT atp_c + fmn_c + h_c --> fad_c + ppi_c 11542\nFMNATm atp_m + fmn_m + h_m --> fad_m + ppi_m 11542\n\nAFAT atp_c + fmn_c + 2.0 h_c --> fad_c + ppi_c 11542 or 9298\nRBFK atp_c + ribflv_c --> adp_c + fmn_c + h_c 9298\nRBFKm atp_m + ribflv_m --> adp_m + fmn_m + h_m 9298\n"
]
],
[
[
"11542 cyto FAD1 \n16092 mito FAD synthetase \n9298 mito 11, cyto_mito 10.666 FMN1 Riboflavin kinase\n\nFMNAT has correct stoichiometry, AFAT incorrect \nChange FMNATm genes to '16092'\nRemove AFAT",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('FMNATm').gene_reaction_rule = '16092'\nmodel.remove_reactions(['AFAT'], remove_orphans=True)",
"_____no_output_____"
]
],
[
[
"#### ALDD2x",
"_____no_output_____"
]
],
[
[
"model.remove_reactions(['ALDD2x_copy1','GTPCI_2'], remove_orphans=True)",
"_____no_output_____"
]
],
[
[
"#### GTHOr",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('15482').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15038').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('16549').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('8790').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('9250').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"GDR gthox_c + h_c + nadh_c --> 2.0 gthrd_c + nad_c 15482\nGDR_nadp_h gthox_h + h_h + nadph_h --> 2.0 gthrd_h + nadp_h 15482\nGDRh gthox_h + h_h + nadh_h --> 2.0 gthrd_h + nad_h 15482\nGDRm gthox_m + h_m + nadh_m --> 2.0 gthrd_m + nad_m 15482\nGTHOm gthox_m + h_m + nadph_m --> 2.0 gthrd_m + nadp_m 15482 or (15482 and 9250)\nGTHOr gthox_c + h_c + nadph_c <=> 2.0 gthrd_c + nadp_c 15482 or (15038 and 15482) or (15482 and 16549) or (15482 and 8790)\nTRDR h_c + nadph_c + trdox_c --> nadp_c + trdrd_c 15339 or 15482 or 16019 or 9688 or (CRv4_Au5_s2_g8777_t1 and CRv4_Au5_s9_g15314_t1) or (CRv4_Au5_s2_g8777_t1 and 15339) or (CRv4_Au5_s2_g8777_t1 and 16019) or (CRv4_Au5_s8_g14830_t1 and CRv4_Au5_s9_g15314_t1) or (CRv4_Au5_s8_g14830_t1 and 15339) or (CRv4_Au5_s8_g14830_t1 and 16019) or (CRv4_Au5_s9_g15314_t1 and 9688) or (15339 and 9688) or (16019 and 9688)\nTRDRm h_m + nadph_m + trdox_m --> nadp_m + trdrd_m 15482 or (YCR083W and 9688) or (15339 and 9688)\nyli_R0291 gthox_c + h_c + nadph_c --> gthrd_c + nadp_c 15482\n\nDASCBR dhdascb_c + h_c + nadph_c --> ascb__L_c + nadp_c 15038\nGRXR grxox_c + 2.0 gthrd_c --> grxrd_c + gthox_c 15038 or 16549 or 9250\nGTHOr gthox_c + h_c + nadph_c <=> 2.0 gthrd_c + nadp_c 15482 or (15038 and 15482) or (15482 and 16549) or (15482 and 8790)\nGTHPi 2.0 gthrd_c + h2o2_c --> gthox_c + 2.0 h2o_c 12715 or 15038 or 16549 or 8579\nPAPSR2 grxrd_c + paps_c --> grxox_c + 2.0 h_c + pap_c + so3_c (b0849 and 11741) or (b1064 and 11741) or (11741 and 15038) or (11741 and 16549) or (11741 and 9250)\nRNDR1b adp_c + grxrd_c --> dadp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR2b gdp_c + grxrd_c --> dgdp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR3b cdp_c + grxrd_c --> dcdp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR4b grxrd_c + udp_c --> dudp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\n\nGRXR grxox_c + 2.0 gthrd_c --> grxrd_c + gthox_c 15038 or 16549 or 9250\nGTHOr gthox_c + h_c + nadph_c <=> 2.0 gthrd_c + nadp_c 15482 or (15038 and 15482) or (15482 and 16549) or (15482 and 8790)\nGTHPi 2.0 gthrd_c + h2o2_c --> gthox_c + 2.0 h2o_c 12715 or 15038 or 16549 or 8579\nPAPSR2 grxrd_c + paps_c --> grxox_c + 2.0 h_c + pap_c + so3_c (b0849 and 11741) or (b1064 and 11741) or (11741 and 15038) or (11741 and 16549) or (11741 and 9250)\nRNDR1b adp_c + grxrd_c --> dadp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR2b gdp_c + grxrd_c --> dgdp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR3b cdp_c + grxrd_c --> dcdp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR4b grxrd_c + udp_c --> dudp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or 
(b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\n\nGTHOr gthox_c + h_c + nadph_c <=> 2.0 gthrd_c + nadp_c 15482 or (15038 and 15482) or (15482 and 16549) or (15482 and 8790)\n\nGRXR grxox_c + 2.0 gthrd_c --> grxrd_c + gthox_c 15038 or 16549 or 9250\nGTHOm gthox_m + h_m + nadph_m --> 2.0 gthrd_m + nadp_m 15482 or (15482 and 9250)\nGTHPm 2.0 gthrd_m + h2o2_m <=> gthox_m + 2.0 h2o_m 12715 or 8579 or 9250\nPAPSR2 grxrd_c + paps_c --> grxox_c + 2.0 h_c + pap_c + so3_c (b0849 and 11741) or (b1064 and 11741) or (11741 and 15038) or (11741 and 16549) or (11741 and 9250)\nRNDR1b adp_c + grxrd_c --> dadp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR2b gdp_c + grxrd_c --> dgdp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR3b cdp_c + grxrd_c --> dcdp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\nRNDR4b grxrd_c + udp_c --> dudp_c + grxox_c + h2o_c (b0849 and b2675 and b2676) or (b1064 and b2675 and b2676) or (b2675 and b2676 and 15038) or (b2675 and b2676 and 16549) or (b2675 and b2676 and 9250)\n"
],
[
"for r in sorted(model.metabolites.get_by_id('dhdascb_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('paps_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"DASCBR dhdascb_c + h_c + nadph_c --> ascb__L_c + nadp_c 15038\nDHAAt1r dhdascb_e <=> dhdascb_c 15762\n\nADSK aps_c + atp_c --> adp_c + h_c + paps_c 8709\nBPNT2 h2o_c + paps_c --> aps_c + pi_c 10673\nPAPSPAPthr pap_c + paps_h <=> pap_h + paps_c 10998 or 11267\nPAPSR paps_c + trdrd_c --> 2.0 h_c + pap_c + so3_c + trdox_c 11741 or (11741 and 15339) or (11741 and 16019)\nPAPSR2 grxrd_c + paps_c --> grxox_c + 2.0 h_c + pap_c + so3_c (b0849 and 11741) or (b1064 and 11741) or (11741 and 15038) or (11741 and 16549) or (11741 and 9250)\nPAPStg paps_c <=> paps_g 10998 or 11267\n"
]
],
[
[
"Glutathione \n15482 cyto GLR1 cytosolic and mitochondrial glutathione oxidoreductase \n15038 nucl 10.5, cyto_nucl 9.333, mito 8.5, cyto_mito 8.333 GRX1 glutaredoxin \n8790 cyto_nucl 13.333, cyto 12.5, nucl 11, mito_nucl 6.999 GRX3,GRX4 glutaredoxin \n16549 mito (no sigP) GRX2 glutaredoxin \n9250 mito GRX5 glutaredoxin \n13734 mito (anchor, 416 aa) glutaredoxin \n8579 cyto 17, cyto_nucl 12.5 GPX2,HYR1\n\n12715 cyto 15.5, cyto_mito 9.5 TSA1,TSA2 -> THIORDXi\n\nChange GTHOr genes to '15482' \nChange GRXR genes to '15038 or 8790' \nChange GTHPi genes to '8579' \nChange GTHOm genes to '15482' \nAdd GRXRm, and change genes to '13734 or 16549 or 9250' \nRemove GDR, GDR_nadp_h, GDRh, GDRm, yli_R0291, GTHPm\n\nReplace DASCBR (nadph) with DHAOX_c (gthrd) from iLB1027_lipid \nPAPSR, 11741 cyto 13.5, cyto_nucl 12.333 MET16 uses thioredoxin\n\nThioredoxin reductase \n9688 mito 25.5, cyto_mito 14 (no sigP) TRR1,TRR2 -> cyto_mito \n9687 has sigP, but it seems truncated (~20% coverage), downstream of 9688, long intron could make another isoform \nThioredoxins \n15339 mito 11, cyto_mito 9.166, pero 7, cyto_nucl 5.166, cyto 5 (no sigP) TRX1,TRX2 -> pero \n16019 mito 25 thioredoxin by orthomcl (TRX3?) -> mito \n10848 cyto 17.5, cyto_nucl 12.5 thioredoxin -> cyto_nucl \n12730 cyto 11.5, cyto_nucl 8.5 thioredoxin -> cyto_nucl \n12737 cyto 16, cyto_nucl 12 thioredoxin -> cyto_nucl\n\nChange PAPSR genes to '(10848 and 11741) or (11741 and 12730) or (11741 and 12737) or (11741 and 15339)' \nRemove PAPSPAPthr, PAPSR2\n\nIn S. cer, cyto TRDR trx1, trx2, trr1 / mito TRDRm trx3, trr2\nChange TRDR genes to '(10848 and 9688) or (12730 and 9688) or (12737 and 9688)' \nChange TRDRm genes to '16019 and 9688' \nRNDRs use thioredoxins \nRemove RNDR1b, RNDR2b, RNDR3b, RNDR3b",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('GTHOr').gene_reaction_rule = '15482'\nmodel.reactions.get_by_id('GRXR').gene_reaction_rule = '15038 or 8790'\nmodel.reactions.get_by_id('GTHPi').gene_reaction_rule = '8579'\nmodel.reactions.get_by_id('GTHOm').gene_reaction_rule = '15482'\nm1 = model.metabolites.get_by_id('grxox_c').copy()\nm2 = model.metabolites.get_by_id('grxrd_c').copy()\nm1.id = 'grxox_m'\nm1.compartment = 'm'\nm2.id = 'grxrd_m'\nm2.compartment = 'm'\nmodel.add_metabolites([m1,m2])\nr = model.reactions.get_by_id('GRXR').copy()\nr.id = 'GRXRm'\nr.gene_reaction_rule = '13734 or 16549 or 9250'\nmodel.add_reactions([r])\nfor m in model.reactions.get_by_id('GRXR').metabolites:\n model.reactions.get_by_id('GRXRm').add_metabolites({m.id: -r.get_coefficient(m.id), m.id.replace('_c','_m'): r.get_coefficient(m.id)})\nmodel.remove_reactions(['GDR','GDR_nadp_h','GDRh','GDRm','yli_R0291','GTHPm'], remove_orphans=True)\nmodel.add_reactions([ptri.reactions.get_by_id('DHAOX_c').copy()])\nmodel.reactions.get_by_id('DHAOX_c').gene_reaction_rule = '15038'\nmodel.reactions.get_by_id('DASCBR').remove_from_model(remove_orphans=True)\nmodel.reactions.get_by_id('PAPSR').gene_reaction_rule = '(10848 and 11741) or (11741 and 12730) or (11741 and 12737) or (11741 and 15339)'\nmodel.reactions.get_by_id('PAPSPAPthr').remove_from_model(remove_orphans=True)\nmodel.reactions.get_by_id('PAPSR2').remove_from_model(remove_orphans=True)\nmodel.reactions.get_by_id('TRDR').gene_reaction_rule = '(10848 and 9688) or (12730 and 9688) or (12737 and 9688)'\nmodel.reactions.get_by_id('TRDRm').gene_reaction_rule = '16019 and 9688'\nmodel.remove_reactions(['RNDR1b','RNDR2b','RNDR3b','RNDR4b'], remove_orphans=True)",
"_____no_output_____"
],
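[
"# Verification sketch (illustrative; not part of the original run): check the updated\n# glutathione GPRs, the new mitochondrial glutaredoxin reaction, and the DHAOX_c\n# replacement for DASCBR.\nfor rid in ['GTHOr', 'GTHOm', 'GRXR', 'GRXRm', 'GTHPi', 'DHAOX_c', 'PAPSR']:\n    r = model.reactions.get_by_id(rid)\n    print(rid, r.reaction, r.gene_reaction_rule)",
"_____no_output_____"
],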
[
"for r in sorted(model.genes.get_by_id('9688').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15339').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"TDSRh h_h + nadph_h + trdox_h --> nadp_h + trdrd_h (CRv4_Au5_s23_g9860_t1 and CRv4_Au5_s2_g8777_t1) or (CRv4_Au5_s23_g9860_t1 and CRv4_Au5_s8_g14830_t1) or (CRv4_Au5_s23_g9860_t1 and 9688) or (CRv4_Au5_s2_g8777_t1 and CRv4_Au5_s5_g12205_t1) or (CRv4_Au5_s2_g8777_t1 and 15339) or (CRv4_Au5_s2_g8777_t1 and 16019) or (CRv4_Au5_s5_g12205_t1 and CRv4_Au5_s8_g14830_t1) or (CRv4_Au5_s5_g12205_t1 and 9688) or (CRv4_Au5_s8_g14830_t1 and 15339) or (CRv4_Au5_s8_g14830_t1 and 16019) or (15339 and 9688) or (16019 and 9688)\nTRDR h_c + nadph_c + trdox_c --> nadp_c + trdrd_c (10848 and 9688) or (12730 and 9688) or (12737 and 9688)\nTRDRm h_m + nadph_m + trdox_m --> nadp_m + trdrd_m 16019 and 9688\n\nAHAL achms_h + trdrd_h + tsul_h --> ac_h + h_h + hcys__L_h + so3_h + trdox_h (CRv4_Au5_s23_g9860_t1 and CRv4_Au5_s60_g12367_t1) or (CRv4_Au5_s5_g12205_t1 and CRv4_Au5_s60_g12367_t1) or (CRv4_Au5_s5_g12205_t2 and CRv4_Au5_s60_g12367_t1) or (CRv4_Au5_s60_g12367_t1 and 15339) or (CRv4_Au5_s60_g12367_t1 and 16019)\nCYSS_trdrd acser_h + trdrd_h + tsul_h --> ac_h + cys__L_h + h_h + so3_h + trdox_h (CRv4_Au5_s23_g9860_t1 and 12031) or (CRv4_Au5_s23_g9860_t1 and 13106) or (CRv4_Au5_s23_g9860_t1 and 15712) or (CRv4_Au5_s5_g12205_t1 and 12031) or (CRv4_Au5_s5_g12205_t1 and 13106) or (CRv4_Au5_s5_g12205_t1 and 15712) or (CRv4_Au5_s5_g12205_t2 and 12031) or (CRv4_Au5_s5_g12205_t2 and 13106) or (CRv4_Au5_s5_g12205_t2 and 15712) or (12031 and 15339) or (12031 and 16019) or (13106 and 15339) or (13106 and 16019) or (15339 and 15712) or (15712 and 16019)\nDSBDR dsbdox_c + trdrd_c --> dsbdrd_c + trdox_c (b4136 and 15339) or (b4136 and 16019)\nMETSOXR1 metsox_S__L_c + trdrd_c --> h2o_c + met__L_c + trdox_c 15469 or (b3551 and 15339) or (b3551 and 16019) or (15339 and 15902) or (15902 and 16019)\nMETSOXR2 metsox_R__L_c + trdrd_c --> h2o_c + met__L_c + trdox_c (15339 and 15469) or (15339 and 9153) or (15469 and 16019) or (16019 and 9153)\nPAPSR paps_c + trdrd_c --> 2.0 h_c + pap_c + so3_c + trdox_c (10848 and 11741) or (11741 and 12730) or (11741 and 12737) or (11741 and 15339)\nRNDR1 adp_c + trdrd_c --> dadp_c + h2o_c + trdox_c (11172 and 11290) or (11290 and 14237) or (CRv4_Au5_s9_g15314_t1 and 11172 and 11290) or (CRv4_Au5_s9_g15314_t1 and 11290 and 14237) or (11172 and 11290 and 15339) or (11172 and 11290 and 16019) or (11290 and 14237 and 15339) or (11290 and 14237 and 16019)\nRNDR1n adp_n + trdrd_n --> dadp_n + h2o_n + trdox_n (YGR180C and 11290 and 15339) or (YGR180C and 11290 and 16019)\nRNDR2 gdp_c + trdrd_c --> dgdp_c + h2o_c + trdox_c (11172 and 11290) or (11290 and 14237) or (CRv4_Au5_s9_g15314_t1 and 11172 and 11290) or (CRv4_Au5_s9_g15314_t1 and 11290 and 14237) or (11172 and 11290 and 15339) or (11172 and 11290 and 16019) or (11290 and 14237 and 15339) or (11290 and 14237 and 16019)\nRNDR2n gdp_n + trdrd_n --> dgdp_n + h2o_n + trdox_n (YGR180C and 11290 and 15339) or (YGR180C and 11290 and 16019)\nRNDR3 cdp_c + trdrd_c --> dcdp_c + h2o_c + trdox_c (11172 and 11290) or (11290 and 14237) or (CRv4_Au5_s9_g15314_t1 and 11172 and 11290) or (CRv4_Au5_s9_g15314_t1 and 11290 and 14237) or (11172 and 11290 and 15339) or (11172 and 11290 and 16019) or (11290 and 14237 and 15339) or (11290 and 14237 and 16019)\nRNDR3n cdp_n + trdrd_n --> dcdp_n + h2o_n + trdox_n (YGR180C and 11290 and 15339) or (YGR180C and 11290 and 16019)\nRNDR4 trdrd_c + udp_c --> dudp_c + h2o_c + trdox_c (11172 and 11290) or (11290 and 14237) or (CRv4_Au5_s9_g15314_t1 and 11172 and 11290) or (CRv4_Au5_s9_g15314_t1 and 11290 and 14237) or (11172 and 
11290 and 15339) or (11172 and 11290 and 16019) or (11290 and 14237 and 15339) or (11290 and 14237 and 16019)\nRNDR4n trdrd_n + udp_n --> dudp_n + h2o_n + trdox_n (YGR180C and 11290 and 15339) or (YGR180C and 11290 and 16019)\nRNTR1 atp_c + trdrd_c --> datp_c + h2o_c + trdox_c 15339 or 16019 or (CRv4_Au5_s27_g10030_t1 and CRv4_Au5_s9_g15314_t1) or (CRv4_Au5_s27_g10030_t1 and 15339) or (CRv4_Au5_s27_g10030_t1 and 16019)\nRNTR2 gtp_c + trdrd_c --> dgtp_c + h2o_c + trdox_c 15339 or 16019 or (CRv4_Au5_s27_g10030_t1 and CRv4_Au5_s9_g15314_t1) or (CRv4_Au5_s27_g10030_t1 and 15339) or (CRv4_Au5_s27_g10030_t1 and 16019)\nRNTR3 ctp_c + trdrd_c --> dctp_c + h2o_c + trdox_c 15339 or 16019 or (CRv4_Au5_s27_g10030_t1 and CRv4_Au5_s9_g15314_t1) or (CRv4_Au5_s27_g10030_t1 and 15339) or (CRv4_Au5_s27_g10030_t1 and 16019)\nRNTR4 trdrd_c + utp_c --> dutp_c + h2o_c + trdox_c 15339 or 16019 or (CRv4_Au5_s27_g10030_t1 and CRv4_Au5_s9_g15314_t1) or (CRv4_Au5_s27_g10030_t1 and 15339) or (CRv4_Au5_s27_g10030_t1 and 16019)\nTDSRh h_h + nadph_h + trdox_h --> nadp_h + trdrd_h (CRv4_Au5_s23_g9860_t1 and CRv4_Au5_s2_g8777_t1) or (CRv4_Au5_s23_g9860_t1 and CRv4_Au5_s8_g14830_t1) or (CRv4_Au5_s23_g9860_t1 and 9688) or (CRv4_Au5_s2_g8777_t1 and CRv4_Au5_s5_g12205_t1) or (CRv4_Au5_s2_g8777_t1 and 15339) or (CRv4_Au5_s2_g8777_t1 and 16019) or (CRv4_Au5_s5_g12205_t1 and CRv4_Au5_s8_g14830_t1) or (CRv4_Au5_s5_g12205_t1 and 9688) or (CRv4_Au5_s8_g14830_t1 and 15339) or (CRv4_Au5_s8_g14830_t1 and 16019) or (15339 and 9688) or (16019 and 9688)\nTHIORDXi h2o2_c + trdrd_c --> 2.0 h2o_c + trdox_c 8579 or (YDR453C and 15339) or (12715 and 15339) or (12715 and 16019) or (15037 and 15339) or (15037 and 16019)\nTHIORDXm h2o2_m + trdrd_m <=> 2.0 h2o_m + trdox_m 10200 and 15339\nTHIORDXni h2o2_n + trdrd_n --> 2.0 h2o_n + trdox_n (15037 and 15339) or (15037 and 16019)\nTHIORDXp h2o2_x + trdrd_x <=> 2.0 h2o_x + trdox_x (13262 and 15339) or (13262 and 16019)\n"
]
],
[
[
"Thioredoxin \nRemove TDSRh, CYSS_trdrd, AHAL -> check cysteine biosynthesis \nRemove DSBDR (E. coli specific) \n15902 mito 8, cysk 6, cyto_mito 6 MXR1 (cyto) methionine-S-sulfoxide reductase (peptide methionine, non-peptide) \n15469 cyto_mito (no sigP) YKL069W (cyto) methionine-R-sulfoxide reductase (non-peptide methionine) \n9153 mito MXR2 (mito) methionine-R-sulfoxide reductase (peptide methionine) \nChange METSOXR1 genes to '(10848 and 15902) or (12730 and 15902) or (12737 and 15902) or (15339 and 15902)' \nChange METSOXR2 genes to '(10848 and 15469) or (12730 and 15469) or (12737 and 15469) or (15339 and 15469)'\n\nRNDR needs small heterodimer and large homodimer \n11290 cyto 14.5, cyto_mito 12.5 RNR1,RNR3 ribonucleotide reductase, alpha subunit (large) \n14237 cyto 13.5, cyto_nucl 10.5 RNR2 ribonucleotide reductase, beta subunit (small) \n11172 cyto_nucl RNR2 ribonucleotide reductase, beta subunit (small) \nChange RNDRx genes to '11172 and 11290 and 14237 and trdrd_cyto (10848 or 12730 or 12737 or 15339)' \n'(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)' \nChange RNDRnx genes to '11172 and 11290 and 14237 and trdrd_nucl (10848 or 12730 or 12737)' \n'(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)' \nRemove RNTRx (not present in S. cer and no genes in Rhodo)\n\nThioredoxin peroxidase \nbeta-oxidation http://www.jbc.org/content/274/6/3402.long \n12715 cyto 15.5, cyto_mito 9.5 TSA1,TSA2 (cyto) -> THIORDXi \n15037 nucl 12.5, cyto_nucl 11 DOT5 (nucl) nuclear thiol peroxidase -> THIORDXni \n13262 cyto 20.5, cyto_nucl 12.5, pero 4 AHP1 (pero) -> THIORDXp \nahp1-trx2 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3366830/ \n10200 nucl 11, cyto 8, pero 5, cyto_mito 5 PRX1 (mito) -> THIORDXm \nChange THIORDXi genes to '(10848 and 12715) or (12715 and 12730) or (12715 and 12737)' \nChange THIORDXni genes to '(10848 and 15037) or (12730 and 15037) or (12737 and 15037)' \nChange THIORDXp genes to '13262 and 15339'\nChange THIORDXm genes to '10200 and 16019' ",
"_____no_output_____"
]
],
[
[
"model.remove_reactions(['TDSRh','CYSS_trdrd','AHAL','DSBDR'], remove_orphans=True)\nmodel.reactions.get_by_id('METSOXR1').gene_reaction_rule = '(10848 and 15902) or (12730 and 15902) or (12737 and 15902) or (15339 and 15902)'\nmodel.reactions.get_by_id('METSOXR2').gene_reaction_rule = '(10848 and 15469) or (12730 and 15469) or (12737 and 15469) or (15339 and 15469)'\nmodel.reactions.get_by_id('RNDR1').gene_reaction_rule = '(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)'\nmodel.reactions.get_by_id('RNDR2').gene_reaction_rule = '(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)'\nmodel.reactions.get_by_id('RNDR3').gene_reaction_rule = '(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)'\nmodel.reactions.get_by_id('RNDR4').gene_reaction_rule = '(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)'\nmodel.reactions.get_by_id('RNDR1n').gene_reaction_rule = '(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)'\nmodel.reactions.get_by_id('RNDR2n').gene_reaction_rule = '(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)'\nmodel.reactions.get_by_id('RNDR3n').gene_reaction_rule = '(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)'\nmodel.reactions.get_by_id('RNDR4n').gene_reaction_rule = '(10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)'\nmodel.remove_reactions(['RNTR1','RNTR2','RNTR3','RNTR4'], remove_orphans=True)\nmodel.reactions.get_by_id('THIORDXi').gene_reaction_rule = '(10848 and 12715) or (12715 and 12730) or (12715 and 12737)'\nmodel.reactions.get_by_id('THIORDXni').gene_reaction_rule = '(10848 and 15037) or (12730 and 15037) or (12737 and 15037)'\nmodel.reactions.get_by_id('THIORDXp').gene_reaction_rule = '13262 and 15339'\nmodel.reactions.get_by_id('THIORDXm').gene_reaction_rule = '10200 and 16019'",
"_____no_output_____"
],
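[
"# Verification sketch (illustrative; not part of the original run): the cytosolic RNDRs\n# should now require the RNR large (11290) and small (11172, 14237) subunits plus a\n# cytosolic thioredoxin, and the RNTR reactions should be gone.\nprint(model.reactions.get_by_id('RNDR1').gene_reaction_rule)\nprint([rid for rid in ['RNTR1', 'RNTR2', 'RNTR3', 'RNTR4'] if rid in [r.id for r in model.reactions]])  # expect []",
"_____no_output_____"
],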
[
"for r in sorted(model.genes.get_by_id('15679').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15496').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('8943').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"ATDGDm atp_m + dgdp_m + h_m --> adp_m + dgtp_m 15679\nATGDm atp_m + gdp_m + h_m --> adp_m + gtp_m 15679\nNDPK1 atp_c + gdp_c <=> adp_c + gtp_c 15496 or 15679 or 8943\nNDPK10 atp_c + didp_c <=> adp_c + ditp_c 15679 or 8943\nNDPK10n atp_n + didp_n <=> adp_n + ditp_n 15679\nNDPK1n atp_n + gdp_n <=> adp_n + gtp_n 15679\nNDPK2 atp_c + udp_c <=> adp_c + utp_c 15496 or 15679 or 8943\nNDPK2m atp_m + udp_m --> adp_m + utp_m 15679\nNDPK2n atp_n + udp_n <=> adp_n + utp_n 15679\nNDPK3 atp_c + cdp_c <=> adp_c + ctp_c 15496 or 15679 or 8943\nNDPK3m atp_m + cdp_m --> adp_m + ctp_m 15679\nNDPK3n atp_n + cdp_n <=> adp_n + ctp_n 15679\nNDPK4 atp_c + dtdp_c <=> adp_c + dttp_c 15496 or 15679 or 8943\nNDPK4m atp_m + dtdp_m --> adp_m + dttp_m 15679\nNDPK4n atp_n + dtdp_n <=> adp_n + dttp_n 15679\nNDPK5 atp_c + dgdp_c <=> adp_c + dgtp_c 15496 or 15679 or 8943\nNDPK5n atp_n + dgdp_n <=> adp_n + dgtp_n 15679\nNDPK6 atp_c + dudp_c <=> adp_c + dutp_c 15496 or 15679 or 8943\nNDPK6m atp_m + dudp_m --> adp_m + dutp_m 15679\nNDPK6n atp_n + dudp_n <=> adp_n + dutp_n 15679\nNDPK7 atp_c + dcdp_c <=> adp_c + dctp_c 15496 or 15679 or 8943\nNDPK7m atp_m + dcdp_m --> adp_m + dctp_m 15679\nNDPK7n atp_n + dcdp_n <=> adp_n + dctp_n 15679\nNDPK8 atp_c + dadp_c <=> adp_c + datp_c 15496 or 15679 or 8943\nNDPK8m atp_m + dadp_m --> adp_m + datp_m 15679\nNDPK8n atp_n + dadp_n <=> adp_n + datp_n 15679\nNDPK9 atp_c + idp_c <=> adp_c + itp_c 15679 or 8943\nNDPK9m atp_m + idp_m --> adp_m + itp_m 15679\nNDPK9n atp_n + idp_n <=> adp_n + itp_n 15679\n\nADK1 amp_c + atp_c <=> 2.0 adp_c 12300 or 13190 or 15496\nADK1m amp_m + atp_m <=> 2.0 adp_m 12300 or 15129 or 15496\nADK3 amp_c + gtp_c <=> adp_c + gdp_c 15496\nADK4 amp_c + itp_c <=> adp_c + idp_c 15496\nADNK1 adn_c + atp_c --> adp_c + amp_c + h_c 15496 or 8385\nDADK atp_c + damp_c <=> adp_c + dadp_c 12300 or 15129 or 15496\nNDPK1 atp_c + gdp_c <=> adp_c + gtp_c 15496 or 15679 or 8943\nNDPK2 atp_c + udp_c <=> adp_c + utp_c 15496 or 15679 or 8943\nNDPK3 atp_c + cdp_c <=> adp_c + ctp_c 15496 or 15679 or 8943\nNDPK4 atp_c + dtdp_c <=> adp_c + dttp_c 15496 or 15679 or 8943\nNDPK5 atp_c + dgdp_c <=> adp_c + dgtp_c 15496 or 15679 or 8943\nNDPK6 atp_c + dudp_c <=> adp_c + dutp_c 15496 or 15679 or 8943\nNDPK7 atp_c + dcdp_c <=> adp_c + dctp_c 15496 or 15679 or 8943\nNDPK8 atp_c + dadp_c <=> adp_c + datp_c 15496 or 15679 or 8943\n\nNDPK1 atp_c + gdp_c <=> adp_c + gtp_c 15496 or 15679 or 8943\nNDPK10 atp_c + didp_c <=> adp_c + ditp_c 15679 or 8943\nNDPK2 atp_c + udp_c <=> adp_c + utp_c 15496 or 15679 or 8943\nNDPK3 atp_c + cdp_c <=> adp_c + ctp_c 15496 or 15679 or 8943\nNDPK4 atp_c + dtdp_c <=> adp_c + dttp_c 15496 or 15679 or 8943\nNDPK5 atp_c + dgdp_c <=> adp_c + dgtp_c 15496 or 15679 or 8943\nNDPK6 atp_c + dudp_c <=> adp_c + dutp_c 15496 or 15679 or 8943\nNDPK7 atp_c + dcdp_c <=> adp_c + dctp_c 15496 or 15679 or 8943\nNDPK8 atp_c + dadp_c <=> adp_c + datp_c 15496 or 15679 or 8943\nNDPK9 atp_c + idp_c <=> adp_c + itp_c 15679 or 8943\n"
]
],
[
[
"15679 cyto YNK1 (cyto, mito intermembrane space) nucleoside diphosphate kinase \n8943 mito nucleoside diphosphate kinase \n\nRemove NPDKxm, NDPKxn \nAdd NPDKxm (reversible) from Recon1 \nChange NDPKx genes to '15679' \nChange NDPKxm genes to '8943'",
"_____no_output_____"
]
],
[
[
"model.remove_reactions([r.id for r in model.reactions if r.id.startswith('NDPK') and r.id.endswith('n')], remove_orphans=True)\nmodel.remove_reactions([r.id for r in model.reactions if r.id.startswith('NDPK') and r.id.endswith('m')], remove_orphans=True)\nmodel.add_reactions([r.copy() for r in hsa.reactions if r.id.startswith('NDPK') and r.id.endswith('m')])\nfor r in model.reactions:\n if r.id.startswith('NDPK') and r.id.endswith('m'):\n r.gene_reaction_rule = '8943'\n elif r.id.startswith('NDPK'):\n r.gene_reaction_rule = '15679'",
"_____no_output_____"
],
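[
"# Verification sketch (illustrative; not part of the original run): cytosolic NDPKs should\n# now map to YNK1 (15679) and the mitochondrial NDPKs re-added from Recon1 to 8943.\nfor r in sorted(model.reactions, key=lambda x: x.id):\n    if r.id.startswith('NDPK'):\n        print(r.id, r.reaction, r.gene_reaction_rule)",
"_____no_output_____"
],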
[
"for r in sorted(model.genes.get_by_id('15496').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('15129').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('12300').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('8385').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"ADK1 amp_c + atp_c <=> 2.0 adp_c 12300 or 13190 or 15496\nADK1m amp_m + atp_m <=> 2.0 adp_m 12300 or 15129 or 15496\nADK3 amp_c + gtp_c <=> adp_c + gdp_c 15496\nADK4 amp_c + itp_c <=> adp_c + idp_c 15496\nADNK1 adn_c + atp_c --> adp_c + amp_c + h_c 15496 or 8385\nDADK atp_c + damp_c <=> adp_c + dadp_c 12300 or 15129 or 15496\n\nADK1m amp_m + atp_m <=> 2.0 adp_m 12300 or 15129 or 15496\nADK3m amp_m + gtp_m <=> adp_m + gdp_m 15129\nADK4m amp_m + itp_m <=> adp_m + idp_m 15129\nDADK atp_c + damp_c <=> adp_c + dadp_c 12300 or 15129 or 15496\n\nADK1 amp_c + atp_c <=> 2.0 adp_c 12300 or 13190 or 15496\nADK1m amp_m + atp_m <=> 2.0 adp_m 12300 or 15129 or 15496\nATAMh amp_h + atp_h --> 2.0 adp_h 12300\nATDAMh atp_h + damp_h --> adp_h + dadp_h 12300\nATDAMm atp_m + damp_m --> adp_m + dadp_m 12300\nDADK atp_c + damp_c <=> adp_c + dadp_c 12300 or 15129 or 15496\n\nADNK1 adn_c + atp_c --> adp_c + amp_c + h_c 15496 or 8385\nADNK1m adn_m + atp_m --> adp_m + amp_m + h_m 8385\n"
]
],
[
[
"15496 cyto ADK1 adenylate kinase (also mito intermembrane space)\n15129 mito ADK2 adenylate kinase \n12300 mito FAP7 Essential NTPase required for small ribosome subunit synthesis \n8385 extr (sigP) ADO1 adenosine kinase (cyto and nucl?) \n\nS. cer ADK1 catalyze ADK1 and DADK, and ADK2 catalyze and ADK3m and ADK4m \nhttp://www.jbc.org/content/280/19/18604.full \nChange ADK1 genes to '15496' \nChange DADK genes to '15496' \nChange ADK3m genes to '15129' \nRemove ADK1m, ADK3, ADK4, ATAMh, ATDAMh, ATDAMm\n\nChange ADNK1 genes to '8385'\nRemove ADNK1m",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('ADK1').gene_reaction_rule = '15496'\nmodel.reactions.get_by_id('DADK').gene_reaction_rule = '15496'\nmodel.reactions.get_by_id('ADK3m').gene_reaction_rule = '15129'\nmodel.reactions.get_by_id('ADK4m').gene_reaction_rule = '15129'\nmodel.remove_reactions(['ADK1m','ADK3','ADK4','ATAMh','ATDAMh','ATDAMm'], remove_orphans=True)\nmodel.reactions.get_by_id('ADNK1').gene_reaction_rule = '8385'\nmodel.remove_reactions(['ADNK1m'], remove_orphans=True)",
"_____no_output_____"
],
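[
"# Illustrative sketch only, not part of the original curation: the gene_reaction_rule\n# edits in this notebook are applied one reaction at a time; a small helper like\n# 'set_gprs' (a hypothetical name) could apply a dict of reaction id -> rule in one pass.\n# The example mapping just repeats the changes already made in the cell above, so the\n# actual call is left commented out.\ndef set_gprs(m, updates):\n    for rxn_id, rule in updates.items():\n        m.reactions.get_by_id(rxn_id).gene_reaction_rule = rule\n\ngpr_updates = {'ADK1': '15496', 'DADK': '15496', 'ADK3m': '15129', 'ADK4m': '15129'}\n# set_gprs(model, gpr_updates)",
"_____no_output_____"
],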
[
"for r in sorted(model.genes.get_by_id('15252').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('13190').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"DTMPK atp_c + dtmp_c <=> adp_c + dtdp_c 15252\nNDP8 dudp_c + h2o_c --> dump_c + h_c + pi_c 15252\nURIDK2r atp_c + dump_c <=> adp_c + dudp_c 13190 or 15252\n\nCPK1 cmp_c + ctp_c <=> 2.0 cdp_c 13190\nCYTK1 atp_c + cmp_c <=> adp_c + cdp_c 13190\nCYTK10 cmp_c + dgtp_c <=> cdp_c + dgdp_c 13190\nCYTK10n cmp_n + dgtp_n <=> cdp_n + dgdp_n 13190\nCYTK11 dcmp_c + dgtp_c <=> dcdp_c + dgdp_c 13190\nCYTK11n dcmp_n + dgtp_n <=> dcdp_n + dgdp_n 13190\nCYTK12 dcmp_c + dctp_c <=> 2.0 dcdp_c 13190\nCYTK12n dcmp_n + dctp_n <=> 2.0 dcdp_n 13190\nCYTK13 datp_c + dcmp_c <=> dadp_c + dcdp_c 13190\nCYTK13n datp_n + dcmp_n <=> dadp_n + dcdp_n 13190\nCYTK14 dcmp_c + utp_c <=> dcdp_c + udp_c 13190\nCYTK14n dcmp_n + utp_n <=> dcdp_n + udp_n 13190\nCYTK1n atp_n + cmp_n <=> adp_n + cdp_n 13190\nCYTK2 atp_c + dcmp_c <=> adp_c + dcdp_c 13190\nCYTK2_1 ctp_c + dcmp_c <=> cdp_c + dcdp_c 13190\nCYTK2n atp_n + dcmp_n <=> adp_n + dcdp_n 13190\nCYTK3n ctp_n + dcmp_n <=> cdp_n + dcdp_n 13190\nCYTK4n dcmp_n + gtp_n <=> dcdp_n + gdp_n 13190\nCYTK5n cmp_n + gtp_n <=> cdp_n + gdp_n 13190\nCYTK6n cmp_n + ctp_n <=> 2.0 cdp_n 13190\nCYTK7 cmp_c + utp_c <=> cdp_c + udp_c 13190\nCYTK7n cmp_n + utp_n <=> cdp_n + udp_n 13190\nCYTK8 cmp_c + datp_c <=> cdp_c + dadp_c 13190\nCYTK8n cmp_n + datp_n <=> cdp_n + dadp_n 13190\nCYTK9 cmp_c + dctp_c <=> cdp_c + dcdp_c 13190\nCYTK9n cmp_n + dctp_n <=> cdp_n + dcdp_n 13190\nUMPK atp_c + ump_c <=> adp_c + udp_c 13190\nUMPK2 ctp_c + ump_c <=> cdp_c + udp_c 13190\nUMPK2n ctp_n + ump_n <=> cdp_n + udp_n 13190\nUMPK3 ump_c + utp_c <=> 2.0 udp_c 13190\nUMPK3n ump_n + utp_n <=> 2.0 udp_n 13190\nUMPK4 gtp_c + ump_c <=> gdp_c + udp_c 13190\nUMPK4n gtp_n + ump_n <=> gdp_n + udp_n 13190\nUMPK5 datp_c + ump_c <=> dadp_c + udp_c 13190\nUMPK5n datp_n + ump_n <=> dadp_n + udp_n 13190\nUMPK6 dctp_c + ump_c <=> dcdp_c + udp_c 13190\nUMPK6n dctp_n + ump_n <=> dcdp_n + udp_n 13190\nUMPK7 dgtp_c + ump_c <=> dgdp_c + udp_c 13190\nUMPK7n dgtp_n + ump_n <=> dgdp_n + udp_n 13190\nUMPKn atp_n + ump_n <=> adp_n + udp_n 13190\nURIDK2r atp_c + dump_c <=> adp_c + dudp_c 13190 or 15252\nURIDK2rn atp_n + dump_n <=> adp_n + dudp_n 13190\n"
]
],
[
[
"15252 mito (no sigP) CDC8 thymidylate kinase (nucl and cyto) -> DTMPK \n13190 cysk 10, cyto 8, mito 7 (no sigP) URA6 Uridylate kinase/adenylate kinase (cyto predominantly and nucl) / KEGG UMP-CMP kinase \nURA6 activity controversy -> keep these reactions for now\n\nChange URIDK2r genes to '13190'",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('URIDK2r').gene_reaction_rule = '13190'",
"_____no_output_____"
],
[
"for r in sorted(model.metabolites.get_by_id('hom__L_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('achms_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('hcys__L_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.metabolites.get_by_id('cyst__L_c').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('16618').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule) ",
"HSDxi aspsa_c + h_c + nadh_c --> hom__L_c + nad_c 12080\nHSDy hom__L_c + nadp_c <=> aspsa_c + h_c + nadph_c 12080 or 16738\nHSERTA accoa_c + hom__L_c <=> achms_c + coa_c 12513 or 15248 or (PP_5098 and 15248)\nHSK atp_c + hom__L_c --> adp_c + h_c + phom_c 8651\n\nAHSERL achms_c + ch4s_c --> ac_c + h_c + met__L_c 16618\nAHSERL2 achms_c + h2s_c --> ac_c + h_c + hcys__L_c 16618\nHSERTA accoa_c + hom__L_c <=> achms_c + coa_c 12513 or 15248 or (PP_5098 and 15248)\nMETB1 achms_c + cys__L_c --> ac_c + cyst__L_c + h_c 11463 or 16725 or 16742\nyli_R0094 achms_c + h_c + trdrd_c + tsul_c --> ac_c + hcys__L_c + so3_c + trdox_c 11463 or 16725 or 16742\n\nAHCi ahcys_c + h2o_c --> adn_c + hcys__L_c 12912\nAHSERL2 achms_c + h2s_c --> ac_c + h_c + hcys__L_c 16618\nCYSTL cyst__L_c + h2o_c --> hcys__L_c + nh4_c + pyr_c 8759\nCYSTS hcys__L_c + ser__L_c --> cyst__L_c + h2o_c 15712\nHCYSMT amet_c + hcys__L_c --> ahcys_c + h_c + met__L_c 15759\nHCYSMT2 hcys__L_c + mmet_c --> h_c + 2.0 met__L_c 15759\nMETS 5mthf_c + hcys__L_c --> h_c + met__L_c + thf_c 9825\nMHPGLUT hcys__L_c + mhpglu_c --> hpglu_c + met__L_c 9825\nSHSL2r h2s_c + suchms_c <=> h_c + hcys__L_c + succ_c 11463 or 16725 or 16742 or 9499\nyli_R0094 achms_c + h_c + trdrd_c + tsul_c --> ac_c + hcys__L_c + so3_c + trdox_c 11463 or 16725 or 16742\n\nCYSTGL cyst__L_c + h2o_c --> 2obut_c + cys__L_c + nh4_c 9499\nCYSTL cyst__L_c + h2o_c --> hcys__L_c + nh4_c + pyr_c 8759\nCYSTS hcys__L_c + ser__L_c --> cyst__L_c + h2o_c 15712\nMETB1 achms_c + cys__L_c --> ac_c + cyst__L_c + h_c 11463 or 16725 or 16742\nSHSL1 cys__L_c + suchms_c --> cyst__L_c + h_c + succ_c 11463 or 16725 or 16742 or 9499\n\nAHSERL achms_c + ch4s_c --> ac_c + h_c + met__L_c 16618\nAHSERL2 achms_c + h2s_c --> ac_c + h_c + hcys__L_c 16618\nAHSERL4 acser_c + trdrd_c + tsul_c --> ac_c + cys__L_c + h_c + so3_c + trdox_c 12031 or 13106 or 16618\n"
]
],
[
[
"Cysteine biosynthesis \n\n12080 extr HOM6 (cyto) homoserine dehydrogenase \n16738 cyto homoserine dehydrogenase \n8651 mito THR1 homoserine kinase\n\n12513 cyto_nucl MET2 L-homoserine O-acetyltransferase \n15248 nucl MET2 L-homoserine O-acetyltransferase \n16618 cyto MET17 O-acetylhomoserine sulfhydrylase \n15712 cyto CYS4 cystathionine beta-synthase \n9499 mito CYS3 cystathionine gamma-lyase \n8759 mito (no sigP) STR3 (pero) cystathionine beta-lyase (also involved in 3-mercaptohexanol) \n11463 mito (no sigP) STR2 (cyto),YLL058W,YML082W (cyto) cystathionine gamma-synthase\n16725 cyto YHR112C cystathionine beta-lyase/gamma-synthase, unknown function in S. cer \n16742 plas YHR112C cystathionine beta-lyase/gamma-synthase, unknown function in S. cer \n\nserine O-acetyltransferase is missing in Rhodo, 9734 is maltose O-acetyltransferase \n12031 mito (no sigP) MCY1 cysteine synthase A (O-acetyl-L-serine sulfhydrylase) \n13106 cyto_mito (anchor) MCY1 cysteine synthase A (O-acetyl-L-serine sulfhydrylase)\n\n12912 cyto SAH1 S-adenosyl-L-homocysteine hydrolase \n15759 extr MHT1,SAM4,YMR321C S-methylmethionine-homocysteine methyltransferase \n9825 mito (non-sigP) MET6 cobalamin-independent methionine synthase \n12876 cyto cobalamin-independent methionine synthase \n12920 cyto cobalamin-independent methionine synthase\n\nChange HSDxi genes to '12080 or 16738' \nChange HSERTA genes to '12513 or 15248' \nAHSERL, AHSERL2 is correct \nRemove AHSERL4, METB1, yli_R0094, SHSL2r (P. putida specific) \nChange METS genes to '12876 or 12920 or 9825' \nMHPGLUT is a specific 5mthf (mhpglu_c) blocked rxn \nRemove MHPGLUT \nChange CYSTL genes to '16725 or 16742 or 8759' \nRemove CYSTLp \nCHange SHSL1 genes to '11463'",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('HSDxi').gene_reaction_rule = '12080 or 16738'\nmodel.reactions.get_by_id('HSERTA').gene_reaction_rule = '12513 or 15248'\nmodel.remove_reactions(['AHSERL4','METB1','yli_R0094','SHSL2r'], remove_orphans=True)\nmodel.reactions.get_by_id('METS').gene_reaction_rule = '12876 or 12920 or 9825'\nmodel.remove_reactions(['MHPGLUT'], remove_orphans=True)\nmodel.reactions.get_by_id('CYSTL').gene_reaction_rule = '16725 or 16742 or 8759'\nmodel.remove_reactions(['CYSTLp'], remove_orphans=True)\nmodel.reactions.get_by_id('SHSL1').gene_reaction_rule = '11463'",
"_____no_output_____"
],
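[
"# Optional sanity check (a sketch, not in the original workflow): after removing the\n# P. putida-specific cysteine/homocysteine reactions above, confirm the model can still\n# produce cytosolic L-cysteine. A temporary demand reaction is added inside a 'with'\n# block so the change is reverted on exit; this assumes no DM_cys__L_c reaction exists yet.\nwith model:\n    dm = model.add_boundary(model.metabolites.get_by_id('cys__L_c'), type='demand')\n    model.objective = dm\n    print('max cys__L_c demand flux:', model.slim_optimize())",
"_____no_output_____"
],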
[
"temp = ['12080','14662','16738','8651','12513','15248','16618','15712','9499','8759','11463','16725','16742',\n '9734','12031','13106','12912','15759','9825','12876','12920']\ndisplay(Annotation.loc[temp])\nShow_Data(temp)",
"_____no_output_____"
],
[
"for x in temp:\n if x in model.genes:\n for r in sorted(model.genes.get_by_id(x).reactions, key=lambda x: x.id):\n print(r, r.gene_reaction_rule)\n else:\n print(x, 'no reactions')\n print()",
"ASPK: asp__L_c + atp_c <=> 4pasp_c + adp_c 12080 or 14662 or 16738\nASPK_1: asp__L_h + atp_h --> 4pasp_h + adp_h 12080 or 14662\nHSDH: aspsa_h + h_h + nadph_h --> hom__L_h + nadp_h 12080\nHSDxi: aspsa_c + h_c + nadh_c --> hom__L_c + nad_c 12080 or 16738\nHSDy: hom__L_c + nadp_c <=> aspsa_c + h_c + nadph_c 12080 or 16738\n\nASPK: asp__L_c + atp_c <=> 4pasp_c + adp_c 12080 or 14662 or 16738\nASPK_1: asp__L_h + atp_h --> 4pasp_h + adp_h 12080 or 14662\n\nASPK: asp__L_c + atp_c <=> 4pasp_c + adp_c 12080 or 14662 or 16738\nHSDxi: aspsa_c + h_c + nadh_c --> hom__L_c + nad_c 12080 or 16738\nHSDy: hom__L_c + nadp_c <=> aspsa_c + h_c + nadph_c 12080 or 16738\n\nHSK: atp_c + hom__L_c --> adp_c + h_c + phom_c 8651\n\nHSERTA: accoa_c + hom__L_c <=> achms_c + coa_c 12513 or 15248\n\nHSERTA: accoa_c + hom__L_c <=> achms_c + coa_c 12513 or 15248\n\nAHSERL: achms_c + ch4s_c --> ac_c + h_c + met__L_c 16618\nAHSERL2: achms_c + h2s_c --> ac_c + h_c + hcys__L_c 16618\n\nACSERL: acser_c + seln_c <=> ac_c + 2.0 h_c + selcys_c 12031 or 13106 or 15712\nACSERLh: acser_h + seln_h <=> ac_h + 2.0 h_h + selcys_h 15712\nACSERLm: acser_m + seln_m <=> ac_m + 2.0 h_m + selcys_m 15712\nACSERSULL: acser_c + tsul_c --> ac_c + h_c + scys__L_c 12031 or 13106 or 15712\nACSERSULLh: acser_h + tsul_h --> ac_h + h_h + scys__L_h 15712\nACSERSULLm: acser_m + tsul_m --> ac_m + h_m + scys__L_m 15712\nCYS: h2s_c + ser__L_c --> cys__L_c + h2o_c 15712\nCYSS: acser_c + h2s_c --> ac_c + cys__L_c + h_c 12031 or 13106 or 15712\nCYSS_1: acser_h + h2s_h --> ac_h + cys__L_h 12031 or 13106 or 15712\nCYSTS: hcys__L_c + ser__L_c --> cyst__L_c + h2o_c 15712\nSELCYSTS: selhcys_c + ser__L_c --> h2o_c + selcyst_c 15712\n\nCYSDS: cys__L_c + h2o_c --> h2s_c + nh4_c + pyr_c 8759 or 9499\nCYSTGL: cyst__L_c + h2o_c --> 2obut_c + cys__L_c + nh4_c 9499\nSELCYSTGL: h2o_c + selcyst_c --> 2obut_c + nh4_c + selcys_c 9499\nSHSL4r: h2o_c + suchms_c <=> 2obut_c + h_c + nh4_c + succ_c 11463 or 16725 or 16742 or 9499\n\nCBL: cyst__L_h + h2o_h --> hcys__L_h + nh4_h + pyr_h 8759\nCTINBL: cysi__L_h + h2o_h --> nh4_h + pyr_h + thcys_h 8759\nCYSDS: cys__L_c + h2o_c --> h2s_c + nh4_c + pyr_c 8759 or 9499\nCYSTBL: h2s_h + h_h + nh4_h + pyr_h --> cys__L_h + h2o_h 8759\nCYSTL: cyst__L_c + h2o_c --> hcys__L_c + nh4_c + pyr_c 16725 or 16742 or 8759\nSELCYSTL: h2o_c + selcyst_c --> h_c + nh4_c + pyr_c + selhcys_c 8759\nSELCYSTLh: h2o_h + selcyst_h --> h_h + nh4_h + pyr_h + selhcys_h 8759\n\nSHSL1: cys__L_c + suchms_c --> cyst__L_c + h_c + succ_c 11463\nSHSL4r: h2o_c + suchms_c <=> 2obut_c + h_c + nh4_c + succ_c 11463 or 16725 or 16742 or 9499\n\nCYSTL: cyst__L_c + h2o_c --> hcys__L_c + nh4_c + pyr_c 16725 or 16742 or 8759\nSHSL4r: h2o_c + suchms_c <=> 2obut_c + h_c + nh4_c + succ_c 11463 or 16725 or 16742 or 9499\n\nCYSTL: cyst__L_c + h2o_c --> hcys__L_c + nh4_c + pyr_c 16725 or 16742 or 8759\nSHSL4r: h2o_c + suchms_c <=> 2obut_c + h_c + nh4_c + succ_c 11463 or 16725 or 16742 or 9499\n\nGLCATr: accoa_c + glc__D_c <=> acglc__D_c + coa_c 9734\nMALTATr: accoa_c + malt_c <=> acmalt_c + coa_c 9734\n\nACSERL: acser_c + seln_c <=> ac_c + 2.0 h_c + selcys_c 12031 or 13106 or 15712\nACSERSULL: acser_c + tsul_c --> ac_c + h_c + scys__L_c 12031 or 13106 or 15712\nCHOLS_ex: chols_e <=> chols_p 12031 or 13106\nCYSS: acser_c + h2s_c --> ac_c + cys__L_c + h_c 12031 or 13106 or 15712\nCYSS_1: acser_h + h2s_h --> ac_h + cys__L_h 12031 or 13106 or 15712\nSLCYSS: acser_c + tsul_c --> ac_c + scys__L_c 12031 or 13106\n\nACSERL: acser_c + seln_c <=> ac_c + 2.0 h_c + selcys_c 12031 or 13106 or 
15712\nACSERSULL: acser_c + tsul_c --> ac_c + h_c + scys__L_c 12031 or 13106 or 15712\nCHOLS_ex: chols_e <=> chols_p 12031 or 13106\nCYSS: acser_c + h2s_c --> ac_c + cys__L_c + h_c 12031 or 13106 or 15712\nCYSS_1: acser_h + h2s_h --> ac_h + cys__L_h 12031 or 13106 or 15712\nSLCYSS: acser_c + tsul_c --> ac_c + scys__L_c 12031 or 13106\n\nADSHm: ahcys_m + h2o_m <=> adn_m + hcys__L_m 12912\nAHCi: ahcys_c + h2o_c --> adn_c + hcys__L_c 12912\nSEAHCYSHYD: h2o_c + seahcys_c --> adn_c + selhcys_c 12912\nSEAHCYSHYD_1: h2o_c + seahcys_c <=> adn_c + h_c + selhcys_c 12912\n\nHCYSMT: amet_c + hcys__L_c --> ahcys_c + h_c + met__L_c 15759\nHCYSMT2: hcys__L_c + mmet_c --> h_c + 2.0 met__L_c 15759\n\nMETS: 5mthf_c + hcys__L_c --> h_c + met__L_c + thf_c 12876 or 12920 or 9825\nMS: h_m + hcys__L_m + mhpglu_m --> hpglu_m + met__L_m 9825\n\nMETS: 5mthf_c + hcys__L_c --> h_c + met__L_c + thf_c 12876 or 12920 or 9825\n\nMETS: 5mthf_c + hcys__L_c --> h_c + met__L_c + thf_c 12876 or 12920 or 9825\n\n"
],
[
"# 14662 aspartate kinase\nmodel.reactions.get_by_id('ASPK').gene_reaction_rule = '14662'\nmodel.remove_reactions(['ASPK_1','HSDH'], remove_orphans=True)\n# 15712 CYS4 cystathionine beta-synthase (4.2.1.22)\n# L-Serine + L-Homocysteine <=> L-Cystathionine + H2O (CYSTS, correct)\n# 12031 and 13106 cysteine synthase A / O-acetyl-L-serine sulfhydrylase (2.5.1.47)\n# O-Acetyl-L-serine + Hydrogen sulfide <=> L-Cysteine + Acetate (CYSS)\nmodel.reactions.get_by_id('CYSS').gene_reaction_rule = '12031 or 13106'\n# ACSERL/SLCYSS is by selenocysteine synthase, and ACSERSULL by cysteine synthase B only\n# CHOLS_ex and CYSS_1 wrong compartment, CYS wrong, SELCYSTS not known\nmodel.remove_reactions(['ACSERL','ACSERLh','ACSERLm','SLCYSS','ACSERSULL','ACSERSULLh','ACSERSULLm',\n 'CHOLS_ex','CYSS_1','CYS','SELCYSTS'], remove_orphans=True)\n# 9499 CYS3 cystathionine gamma-lyase -> CYSTGL\nmodel.reactions.get_by_id('CYSTGL').gene_reaction_rule = '9499'\n# 8759 STR3 cystathionine beta-lyase -> CYSTL\nmodel.reactions.get_by_id('CYSTL').gene_reaction_rule = '8759'\n# 11463 STR2 cystathionine gamma-synthase -> SHSL1\n# SHSLr4 is sum of CYSTL and CYSTGL\nmodel.remove_reactions(['SELCYSTGL','SHSL4r','CBL','CTINBL','CYSTBL','SELCYSTL','SELCYSTLh'], remove_orphans=True)\n# 16725 and 16742 best hit is A. fum cysteine-S-conjugate beta-lyase -> CYSDS\nmodel.reactions.get_by_id('CYSDS').gene_reaction_rule = '16725 or 16742'\n# Remove the rest of seleno-reactions\n# MS incorrect proton, mhpglu is correct in biocyc but MS is the only reaction using this\nmodel.remove_reactions(['SEAHCYSHYD','SEAHCYSHYD_1','MS'], remove_orphans=True)",
"_____no_output_____"
],
[
"for r in sorted(model.metabolites.get_by_id('cys__L_c').reactions, key=lambda x: x.id):\n print(r, r.gene_reaction_rule)",
"AMPTASECG: cgly_c + h2o_c --> cys__L_c + gly_c 10210 or 12096\nCYSDS: cys__L_c + h2o_c --> h2s_c + nh4_c + pyr_c 16725 or 16742\nCYSS: acser_c + h2s_c --> ac_c + cys__L_c + h_c 12031 or 13106\nCYSTA: akg_c + cys__L_c --> glu__L_c + mercppyr_c 14281 or 8936\nCYSTGL: cyst__L_c + h2o_c --> 2obut_c + cys__L_c + nh4_c 9499\nCYSTRS: atp_c + cys__L_c + trnacys_c --> amp_c + cystrna_c + ppi_c 9855\nCYSt2r: cys__L_e + h_e <=> cys__L_c + h_c 14229 or 15074\nGLUCYS: atp_c + cys__L_c + glu__L_c --> adp_c + glucys_c + h_c + pi_c 12007 or 12022 or (GCLM and 12007) or (GCLM and 12022) or (Gclm and 12007) or (Gclm and 12022)\nICYSDS: cys__L_c + iscs_c --> ala__L_c + iscssh_c 13740\nPPNCL: 4ppan_c + ctp_c + cys__L_c --> 4ppcys_c + cdp_c + h_c + pi_c 8536\nPPNCL2: 4ppan_c + ctp_c + cys__L_c --> 4ppcys_c + cmp_c + h_c + ppi_c 8536 or 8878\nPPNCL3: 4ppan_c + atp_c + cys__L_c --> 4ppcys_c + amp_c + h_c + ppi_c 8878\nSHSL1: cys__L_c + suchms_c --> cyst__L_c + h_c + succ_c 11463\n"
],
[
"# 12007 ~100% coverage, but 12002 only ~50% coverage\nmodel.reactions.get_by_id('GLUCYS').gene_reaction_rule = '12007'",
"_____no_output_____"
]
],
[
[
"#### UREASE",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('9326').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"ALPHNH allphn_c + h2o_c + 3.0 h_c --> 2.0 co2_c + 2.0 nh4_c 9326\nALPHm allphn_m + h2o_m + 3.0 h_m --> 2.0 co2_m + 2.0 nh4_m 9326\nURCB atp_c + hco3_c + urea_c --> adp_c + allphn_c + 2.0 h_c + pi_c 9326\nURCBm atp_m + hco3_m + urea_m --> adp_m + allphn_m + 2.0 h_m + pi_m 9326\nUREASE atp_c + hco3_c + urea_c <=> adp_c + allphn_c + h_c + pi_c 9326\n"
]
],
[
[
"9326 cyto 13.5, cyto_mito 13 (no sigP) DUR1,2 urea amidolyase \nUREASE is correct, URCB is incorrect\nRemove ALPHm, URCB, URCBm",
"_____no_output_____"
]
],
[
[
"model.remove_reactions(['ALPHm','URCB','URCBm'], remove_orphans=True)",
"_____no_output_____"
]
],
[
[
"#### PTPATi",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('14849').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('11114').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"APPAT atp_c + 2.0 h_c + pan4p_c <=> dpcoa_c + ppi_c 14849\nDPCOAK atp_c + dpcoa_c --> adp_c + coa_c + h_c 11114 or 14849\nPTPATi atp_c + h_c + pan4p_c --> dpcoa_c + ppi_c 14849\nPTPATim atp_m + h_m + pan4p_m --> dpcoa_m + ppi_m 14849\n\nDPCOAK atp_c + dpcoa_c --> adp_c + coa_c + h_c 11114 or 14849\nDPCOAKm atp_m + dpcoa_m --> adp_m + coa_m + h_m 11114\n"
]
],
[
[
"14849 mito 13, cyto 10 (sigP) CAB4 pantetheine-phosphate adenylyltransferase \n11114 cyto (no sigP) CAB5 dephospho-CoA kinase \nPTPATi is correct, APPAT is incorrect, mito reactions are orphans \nChange DPCOAK genes to '11114' \nRemove APPAT, PTPATim, DPCOAKm",
"_____no_output_____"
]
],
[
[
"model.reactions.get_by_id('DPCOAK').gene_reaction_rule = '11114'\nmodel.remove_reactions(['APPAT','PTPATim','DPCOAKm'], remove_orphans=True)",
"_____no_output_____"
]
],
[
[
"#### ACACT1x",
"_____no_output_____"
]
],
[
[
"for r in sorted(model.genes.get_by_id('8678').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\nprint()\nfor r in sorted(model.genes.get_by_id('8885').reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)",
"ACACT10m 2maacoa_m + coa_m <=> accoa_m + ppcoa_m 13813 or 8678\nACACT1m 2.0 accoa_m --> aacoa_m + coa_m 8678 or 8885 or (HADHA and HADHB) or (Hadha and Hadhb)\nACACT1r 2.0 accoa_c <=> aacoa_c + coa_c 13813 or 8678 or 8885\nACACT1x 2.0 accoa_x <=> aacoa_x + coa_x 8678 or 8885\nACACT2 3ohcoa_x + coa_x <=> accoa_x + btcoa_x 13813 or 8678 or 8885\nACACT2r accoa_c + btcoa_c <=> 3ohcoa_c + coa_c 13813 or 8678 or 8885\nKAT1 aacoa_c + coa_c --> 2.0 accoa_c 13813 or 8678 or 8885\nyli_R1435 2.0 accoa_x --> aacoa_x + coa_x 8678 or 8885\n\nACACT1m 2.0 accoa_m --> aacoa_m + coa_m 8678 or 8885 or (HADHA and HADHB) or (Hadha and Hadhb)\nACACT1r 2.0 accoa_c <=> aacoa_c + coa_c 13813 or 8678 or 8885\nACACT1x 2.0 accoa_x <=> aacoa_x + coa_x 8678 or 8885\nACACT2 3ohcoa_x + coa_x <=> accoa_x + btcoa_x 13813 or 8678 or 8885\nACACT2r accoa_c + btcoa_c <=> 3ohcoa_c + coa_c 13813 or 8678 or 8885\nKAT1 aacoa_c + coa_c --> 2.0 accoa_c 13813 or 8678 or 8885\nyli_R1435 2.0 accoa_x --> aacoa_x + coa_x 8678 or 8885\n"
],
[
"# Fix simple things now and curate fatty acid metabolism later\nmodel.reactions.get_by_id('ACACT1m').gene_reaction_rule = '8678 or 8885'\nmodel.remove_reactions(['KAT1','yli_R1435'], remove_orphans=True)",
"_____no_output_____"
],
[
"#### Rest of duplicated",
"_____no_output_____"
],
[
"duplicated = set()\nfor i, r in enumerate(model.reactions):\n temp = 0\n for j, r2 in enumerate(model.reactions):\n if j > i and r2.id not in duplicated:\n if r.reactants == r2.reactants and r.products == r2.products:\n if temp == 0:\n temp = 1\n duplicated.add(r.id)\n print(r.id, r.reaction, r.gene_reaction_rule)\n duplicated.add(r2.id)\n print(r2.id, r2.reaction, r2.gene_reaction_rule)\n if temp == 1:\n print()",
"yli_R0034 chtn_c + h2o_c --> acgam_c 13082\nCHTNASE chtn_c + 2.0 h2o_c --> 3.0 acgam_c 13082\n\nARGSS asp__L_c + atp_c + citr__L_c --> amp_c + argsuc_c + h_c + ppi_c 16196\nARGSS_1 asp__L_c + atp_c + citr__L_c --> amp_c + argsuc_c + 2.0 h_c + ppi_c 16196\n\nPRAGSr atp_c + gly_c + pram_c <=> adp_c + gar_c + h_c + pi_c 14259\nPPRGL atp_c + gly_c + pram_c --> adp_c + gar_c + 2.0 h_c + pi_c 14259\n\nAIRCr air_c + co2_c <=> 5aizc_c + h_c 12132\nPRAIC air_c + co2_c <=> 5aizc_c + 2.0 h_c 12132\n\nyli_R0224 8.0 coa_m + 8.0 h2o_m + 8.0 nad_m + nadph_m + 7.0 o2_m + yli_M04625_m --> 9.0 accoa_m + 7.0 h2o2_m + 7.0 h_m + 8.0 nadh_m + nadp_m (12742 and 13813 and 14805) or (12742 and 14805 and 9065) or (12752 and 13813 and 14805) or (12752 and 14805 and 9065) or (13813 and 14805 and 9700) or (14805 and 9065 and 9700)\nyli_R0223 8.0 coa_m + 8.0 h2o_m + 8.0 nad_m + 2.0 nadph_m + 8.0 o2_m + yli_M04625_m --> 9.0 accoa_m + 8.0 h2o2_m + 6.0 h_m + 8.0 nadh_m + 2.0 nadp_m (12742 and 13813 and 14805) or (12742 and 14805 and 9065) or (12752 and 13813 and 14805) or (12752 and 14805 and 9065) or (13813 and 14805 and 9700) or (14805 and 9065 and 9700)\n\nC4STMO1 44mzym_c + 3.0 h_c + 3.0 nadph_c + 3.0 o2_c --> 4mzym_int1_c + 4.0 h2o_c + 3.0 nadp_c 16640\n44MZYMMO 44mzym_c + 2.0 h_c + 3.0 nadph_c + 3.0 o2_c <=> 4mzym_int1_c + 4.0 h2o_c + 3.0 nadp_c 15314\n\nFAO182p_evenodd 8.0 coa_x + 8.0 h2o_x + 8.0 nad_x + nadph_x + 7.0 o2_x + ocdycacoa_x --> 9.0 accoa_x + 7.0 h2o2_x + 7.0 h_x + 8.0 nadh_x + nadp_x (10293 and 11362 and 12742 and 13228 and 13813) or (10293 and 11362 and 12742 and 13228 and 9065) or (10293 and 11362 and 12752 and 13228 and 13813) or (10293 and 11362 and 12752 and 13228 and 9065) or (10293 and 11362 and 13228 and 13813 and 9700) or (10293 and 11362 and 13228 and 9065 and 9700)\nFAO182p_eveneven 8.0 coa_x + 8.0 h2o_x + 8.0 nad_x + 2.0 nadph_x + 8.0 o2_x + ocdycacoa_x --> 9.0 accoa_x + 8.0 h2o2_x + 6.0 h_x + 8.0 nadh_x + 2.0 nadp_x (10293 and 11362 and 12742 and 13228 and 13813) or (10293 and 11362 and 12742 and 13228 and 9065) or (10293 and 11362 and 12752 and 13228 and 13813) or (10293 and 11362 and 12752 and 13228 and 9065) or (10293 and 11362 and 13228 and 13813 and 9700) or (10293 and 11362 and 13228 and 9065 and 9700)\n\nyli_R1510 1ag3p_SC_r + acoa_r --> coa_r + pa_EC_r 10427 or 16030 or 9746\nyli_R1523 1ag3p_SC_r + acoa_r --> coa_r + pa_EC_r 16030\n\nNMNAT atp_c + h_c + nmn_c --> nad_c + ppi_c 10430\nANNAT atp_c + 2.0 h_c + nmn_c <=> nad_c + ppi_c 10430\n\nyli_R1513 h2o_r + pa_EC_r --> dag_hs_r + pi_r 12485 or 13087\nyli_R1393 h2o_r + 0.01 pa_EC_r --> 0.01 dag_hs_r + pi_r 12485\n\nNNATr atp_c + h_c + nicrnt_c <=> dnad_c + ppi_c 10430\nNNATr_copy1 atp_c + h_c + nicrnt_c --> dnad_c + ppi_c 14638\nNNATr_copy2 atp_c + h_c + nicrnt_c <=> dnad_c + ppi_c 10430\n\nyli_R1508 glyald_c + h2o_c + nad_c --> glyc__R_c + h_c + nadh_c 12042 or 13426 or 16323\nGLYALDDr glyald_c + h2o_c + nad_c <=> glyc__R_c + 2.0 h_c + nadh_c 12042 or 13426\n\nHISTD h2o_c + histd_c + 2.0 nad_c --> 3.0 h_c + his__L_c + 2.0 nadh_c 11646\nHDH h2o_c + histd_c + 2.0 nad_c --> 4.0 h_c + his__L_c + 2.0 nadh_c 11646\n\nGUAD gua_c + h2o_c + h_c --> nh4_c + xan_c 9050 or 9708\nGUAD_1 gua_c + h2o_c + 2.0 h_c --> nh4_c + xan_c 9050 or 9708\n\nPANTS ala_B_c + atp_c + pant__R_c --> amp_c + h_c + pnto__R_c + ppi_c 10475\nPBAL ala_B_c + atp_c + pant__R_c --> amp_c + 2.0 h_c + pnto__R_c + ppi_c 10475\n\nNADDP h2o_c + nad_c --> amp_c + 2.0 h_c + nmn_c 12434\nNPH h2o_c + nad_c --> amp_c + 3.0 h_c + nmn_c 15385\n\nSTARCH300DEGR2A 49.0 h2o_h + 250.0 
pi_h + starch300_h --> 50.0 Glc_aD_h + 250.0 g1p_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGRA 74.0 h2o_h + 225.0 pi_h + starch300_h --> 75.0 Glc_aD_h + 225.0 g1p_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\n\nHDC 2.0 h_c + his__L_c --> co2_c + hista_c 10722 or 9434 or 9435\nHISDC h_c + his__L_c --> co2_c + hista_c 10104\n\nSTARCH300DEGR2B 49.0 h2o_h + 250.0 pi_h + starch300_h --> 250.0 g1p_h + 50.0 glc__bD_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGRB 74.0 h2o_h + 225.0 pi_h + starch300_h --> 225.0 g1p_h + 75.0 glc__bD_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\n\nANNATn atp_n + 2.0 h_n + nmn_n <=> nad_n + ppi_n 10430\nNMNATn atp_n + h_n + nmn_n --> nad_n + ppi_n 10430\n\nCLHCOtex cl_e + 2.0 hco3_c --> cl_c + 2.0 hco3_e 14119 or 15736\nCLHCO3tex2 2.0 cl_e + hco3_c --> 2.0 cl_c + hco3_e 16202\n\nFACOAL204_copy1 arachd_c + atp_c + coa_c <=> amp_c + arachdcoa_c + ppi_c 12538 or 12555\nFACOAL204_copy2 arachd_c + atp_c + coa_c --> amp_c + arachdcoa_c + ppi_c 11167 or 12538 or 12555 or 15748\n\n"
],
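[
"# Alternative sketch (not used above): the pairwise scan over all reactions is O(n^2).\n# Grouping reactions by a key built from their reactant/product metabolite ids finds the\n# same kind of duplicates in a single pass (stoichiometric coefficients are still ignored,\n# as in the loop above).\nfrom collections import defaultdict\n\nby_key = defaultdict(list)\nfor r in model.reactions:\n    key = (frozenset(m.id for m in r.reactants), frozenset(m.id for m in r.products))\n    by_key[key].append(r)\nfor group in by_key.values():\n    if len(group) > 1:\n        for r in group:\n            print(r.id, r.reaction, r.gene_reaction_rule)\n        print()",
"_____no_output_____"
],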
[
"for x in ['13082','16196','14259','12132','10430','14638','16323','11646','9050','9708',\n '10475','12434','15385','11993','14256','10722','9434','9435','10104','14119','15736','16202']:\n for r in sorted(model.genes.get_by_id(x).reactions, key=lambda x: x.id):\n print(r.id, r.reaction, r.gene_reaction_rule)\n print()",
"CHTNASE chtn_c + 2.0 h2o_c --> 3.0 acgam_c 13082\nCHTNASEe chtn_e + 2.0 h2o_e --> 3.0 acgam_e 13082\nyli_R0034 chtn_c + h2o_c --> acgam_c 13082\n\nARGSS asp__L_c + atp_c + citr__L_c --> amp_c + argsuc_c + h_c + ppi_c 16196\nARGSS_1 asp__L_c + atp_c + citr__L_c --> amp_c + argsuc_c + 2.0 h_c + ppi_c 16196\n\nFGFTh fgam_h + 3.0 h_h + thf_h --> gar_h + h2o_h + methf_h 13595 or 14259\nFPGFTh 10fthf_h + gar_h <=> fgam_h + h_h + thf_h 13595 or 14259\nGARFT 10fthf_c + gar_c <=> fgam_c + h_c + thf_c 13595 or 14259\nPPRGL atp_c + gly_c + pram_c --> adp_c + gar_c + 2.0 h_c + pi_c 14259\nPPRGLh atp_h + gly_h + pram_h --> adp_h + gar_h + h_h + pi_h 14259\nPRAGSr atp_c + gly_c + pram_c <=> adp_c + gar_c + h_c + pi_c 14259\nPRAIS atp_c + fpram_c --> adp_c + air_c + 2.0 h_c + pi_c 14259\n\nAIRC2 air_c + atp_c + hco3_c --> 5caiz_c + adp_c + h_c + pi_c 12132\nAIRC3 5aizc_c <=> 5caiz_c 12132\nAIRCr air_c + co2_c <=> 5aizc_c + h_c 12132\nPRAIC air_c + co2_c <=> 5aizc_c + 2.0 h_c 12132\nPRAICh air_h + co2_h <=> 5aizc_h + h_h 12132\n\nANNAT atp_c + 2.0 h_c + nmn_c <=> nad_c + ppi_c 10430\nANNATn atp_n + 2.0 h_n + nmn_n <=> nad_n + ppi_n 10430\nNMNAT atp_c + h_c + nmn_c --> nad_c + ppi_c 10430\nNMNATm atp_m + h_m + nmn_m --> nad_m + ppi_m 10430\nNMNATn atp_n + h_n + nmn_n --> nad_n + ppi_n 10430\nNNATm atp_m + h_m + nicrnt_m --> dnad_m + ppi_m 10430\nNNATn atp_n + h_n + nicrnt_n --> dnad_n + ppi_n 10430\nNNATr atp_c + h_c + nicrnt_c <=> dnad_c + ppi_c 10430\nNNATr_copy2 atp_c + h_c + nicrnt_c <=> dnad_c + ppi_c 10430\n\nDNADDP dnad_c + h2o_c --> amp_c + 2.0 h_c + nicrnt_c 14638 or 15385\nFADDP fad_c + h2o_c --> amp_c + fmn_c + 2.0 h_c 14638\nNNATr_copy1 atp_c + h_c + nicrnt_c --> dnad_c + ppi_c 14638\nUDPGP h2o_c + udpg_c --> g1p_c + 2.0 h_c + ump_c 14638\n\n34DHALDD 34dhpac_c + h2o_c + nad_c --> 34dhpha_c + 2.0 h_c + nadh_c 12042 or 13426 or 16323\n34DHPLACOX_NADP 34dhpac_c + h2o_c + nadp_c <=> 34dhpha_c + 2.0 h_c + nadph_c 12042 or 13426 or 16323\n3M4HDXPAC 3mox4hpac_c + h2o_c + nad_c <=> 2.0 h_c + homoval_c + nadh_c 12042 or 13426 or 16323\n3MOX4HOXPGALDOX 3m4hpga_c + h2o_c + nad_c --> 3mox4hoxm_c + 2.0 h_c + nadh_c 12042 or 13426 or 16323\n3MOX4HOXPGALDOX_NADP 3m4hpga_c + h2o_c + nadp_c <=> 3mox4hoxm_c + 2.0 h_c + nadph_c 12042 or 13426 or 16323\n4HOXPACDOX_NADP 4hoxpacd_c + h2o_c + nadp_c <=> 4hphac_c + 2.0 h_c + nadph_c 12042 or 13426 or 16323\n5HOXINDACTOX 5hoxindact_c + h2o_c + nad_c --> 5hoxindoa_c + 2.0 h_c + nadh_c 12042 or 13426 or 15814 or 16323\nABUTD 4abutn_c + h2o_c + nad_c --> 4abut_c + 2.0 h_c + nadh_c 12042 or 13426 or 16323\nALDD19x_P h2o_c + nadp_c + pacald_c --> 2.0 h_c + nadph_c + pac_c 12042 or 13426 or 16323\nALDD19xr h2o_c + nad_c + pacald_c <=> 2.0 h_c + nadh_c + pac_c 12042 or 13426 or 16323\nALDD20x h2o_c + id3acald_c + nad_c --> 2.0 h_c + ind3ac_c + nadh_c 12042 or 13426 or 15814 or 16323\nALDD21 h2o_c + nad_c + pristanal_c --> 2.0 h_c + nadh_c + prist_c 16323\nALDD2x acald_c + h2o_c + nad_c --> ac_c + 2.0 h_c + nadh_c 12042 or 13426 or 15814 or 16323\nALDD2xm acald_m + h2o_m + nad_m --> ac_m + 2.0 h_m + nadh_m 12042 or 13426 or 16323\nALDD2y acald_c + h2o_c + nadp_c --> ac_c + 2.0 h_c + nadph_c 11650 or 12042 or 13426 or 14700 or 16323 or 8666\nBAMPPALDOX bamppald_c + h2o_c + nad_c --> ala_B_c + 2.0 h_c + nadh_c 12042 or 13426 or 15814 or 16323\nCOALDDH conialdh_c + h2o_c + nad_c --> fer_c + 2.0 h_c + nadh_c 16323\nGCALDD gcald_c + h2o_c + nad_c --> glyclt_c + 2.0 h_c + nadh_c 12042 or 13426 or 15814 or 16323\nGLACO glac_c + 2.0 h2o_c + nad_c --> glcr_c + 3.0 h_c + nadh_c 12042 
or 13426 or 16323\nIMACTD h2o_c + im4act_c + nad_c --> 2.0 h_c + im4ac_c + nadh_c 12042 or 13426 or 15814 or 16323\nLCADi h2o_c + lald__L_c + nad_c --> 2.0 h_c + lac__L_c + nadh_c 12042 or 13426 or 15814 or 16323\nLCADi_D h2o_c + lald__D_c + nad_c --> 2.0 h_c + lac__D_c + nadh_c 12042 or 13426 or 15814 or 16323\nMACOXO 3mldz_c + h2o_c + nad_c --> 3mlda_c + 2.0 h_c + nadh_c 12042 or 13426 or 16323\nNABTNO h2o_c + n4abutn_c + nad_c --> 4aabutn_c + 2.0 h_c + nadh_c 12042 or 13426 or 15814 or 16323\nPYLALDOX h2o_c + nad_c + pylald_c --> 2.0 h_c + nadh_c + peracd_c 12042 or 13426 or 15814 or 16323\nyli_R0303 glyald_c + h_c + nadh_c <=> glyc_c + h2o_c + nad_c 12042 or 13426 or 16323\nyli_R1508 glyald_c + h2o_c + nad_c --> glyc__R_c + h_c + nadh_c 12042 or 13426 or 16323\n\nHDH h2o_c + histd_c + 2.0 nad_c --> 4.0 h_c + his__L_c + 2.0 nadh_c 11646\nHDHh h2o_h + histd_h + 2.0 nad_h --> 3.0 h_h + his__L_h + 2.0 nadh_h 11646\nHISTD h2o_c + histd_c + 2.0 nad_c --> 3.0 h_c + his__L_c + 2.0 nadh_c 11646\nPRACHh h2o_h + prbamp_h --> prfp_h 11646\nPRADPh h2o_h + prbatp_h --> 4.0 h_h + ppi_h + prbamp_h 11646\nPRAMPC h2o_c + prbamp_c --> prfp_c 11646\nPRATPP h2o_c + prbatp_c --> h_c + ppi_c + prbamp_c 11646\n\nGUAD gua_c + h2o_c + h_c --> nh4_c + xan_c 9050 or 9708\nGUAD_1 gua_c + h2o_c + 2.0 h_c --> nh4_c + xan_c 9050 or 9708\n\nGUAD gua_c + h2o_c + h_c --> nh4_c + xan_c 9050 or 9708\nGUAD_1 gua_c + h2o_c + 2.0 h_c --> nh4_c + xan_c 9050 or 9708\n\nPANTS ala_B_c + atp_c + pant__R_c --> amp_c + h_c + pnto__R_c + ppi_c 10475\nPBAL ala_B_c + atp_c + pant__R_c --> amp_c + 2.0 h_c + pnto__R_c + ppi_c 10475\nPBALm ala_B_m + atp_m + pant__R_m --> amp_m + 3.0 h_m + pnto__R_m + ppi_m 10475\n\nNADDP h2o_c + nad_c --> amp_c + 2.0 h_c + nmn_c 12434\nNADDPp h2o_x + nad_x --> amp_x + 2.0 h_x + nmn_x 12434\nyli_R1538 h2o_c + nad_c --> amp_c + nmn_c 12434\nyli_R1539 dnad_c + h2o_c --> amp_c + nicrnt_c 12434\n\nDNADDP dnad_c + h2o_c --> amp_c + 2.0 h_c + nicrnt_c 14638 or 15385\nDNMPPA dhpmp_c + h2o_c --> dhnpt_c + pi_c 14615 or 14875 or 15385\nDNTPPA_1 ahdt_c + h2o_c --> dhpmp_c + ppi_c 15385\nNPH h2o_c + nad_c --> amp_c + 3.0 h_c + nmn_c 15385\n\nAAMYL 14glucan_c --> malthx_c 11993\nSTARCH300DEGR2A 49.0 h2o_h + 250.0 pi_h + starch300_h --> 50.0 Glc_aD_h + 250.0 g1p_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGR2B 49.0 h2o_h + 250.0 pi_h + starch300_h --> 250.0 g1p_h + 50.0 glc__bD_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGRA 74.0 h2o_h + 225.0 pi_h + starch300_h --> 75.0 Glc_aD_h + 225.0 g1p_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or 
(CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGRB 74.0 h2o_h + 225.0 pi_h + starch300_h --> 225.0 g1p_h + 75.0 glc__bD_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\n\nGLCP glycogen_c + pi_c --> g1p_c 14256\nGLCP2 bglycogen_c + pi_c --> g1p_c 14256\nGLPASE1 glygn2_c + 3.0 pi_c --> dxtrn_c + 3.0 g1p_c 14256\nGLPASE2 glygn3_c + 7.0 h2o_c --> Tyr_ggn_c + 7.0 glc__D_c 14256\nMLTP1 maltpt_c + pi_c <=> g1p_c + maltttr_c 14256\nMLTP2 malthx_c + pi_c <=> g1p_c + maltpt_c 14256\nMLTP3 malthp_c + pi_c <=> g1p_c + malthx_c 14256\nSTARCH300DEGR2A 49.0 h2o_h + 250.0 pi_h + starch300_h --> 50.0 Glc_aD_h + 250.0 g1p_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGR2B 49.0 h2o_h + 250.0 pi_h + starch300_h --> 250.0 g1p_h + 50.0 glc__bD_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGRA 74.0 h2o_h + 225.0 pi_h + starch300_h --> 75.0 Glc_aD_h + 225.0 g1p_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nSTARCH300DEGRB 74.0 h2o_h + 225.0 pi_h + starch300_h --> 225.0 g1p_h + 75.0 glc__bD_h (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s11_g2607_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g12805_t1 and 11993 and 14256) or (CRv4_Au5_s19_g8150_t1 and CRv4_Au5_s3_g11038_t1 and CRv4_Au5_s6_g13538_t1 and 11993 and 14256)\nyli_R0732 pi_c + starch_c --> g1p_c 14256\n\nASP1DC asp__L_c + h_c --> ala_B_c + co2_c 10722 or 9434 or 9435\nGLUDC glu__L_c + h_c --> 4abut_c + co2_c 10722 or 9434 or 9435\nHDC 2.0 h_c + his__L_c --> co2_c + hista_c 10722 or 9434 or 9435\nLCYSTCBOXL Lcyst_c + h_c --> co2_c + taur_c 10722 or 9434 or 9435\nSERDC h_c + ser__L_c --> co2_c + etha_c 10722 or 9434 or 9435\nyli_R0777 3sala_c + h_c --> co2_c + yli_M00487_c 10722 or 9434 or 9435\n\nASP1DC asp__L_c + h_c --> ala_B_c + co2_c 10722 or 9434 or 9435\nGLUDC glu__L_c + h_c --> 4abut_c + co2_c 10722 or 9434 or 9435\nHDC 2.0 h_c + his__L_c --> co2_c + hista_c 10722 or 9434 or 9435\nLCYSTCBOXL Lcyst_c + h_c --> co2_c + taur_c 10722 or 9434 
or 9435\nSERDC h_c + ser__L_c --> co2_c + etha_c 10722 or 9434 or 9435\nyli_R0777 3sala_c + h_c --> co2_c + yli_M00487_c 10722 or 9434 or 9435\n\nASP1DC asp__L_c + h_c --> ala_B_c + co2_c 10722 or 9434 or 9435\nGLUDC glu__L_c + h_c --> 4abut_c + co2_c 10722 or 9434 or 9435\nHDC 2.0 h_c + his__L_c --> co2_c + hista_c 10722 or 9434 or 9435\nLCYSTCBOXL Lcyst_c + h_c --> co2_c + taur_c 10722 or 9434 or 9435\nSERDC h_c + ser__L_c --> co2_c + etha_c 10722 or 9434 or 9435\nyli_R0777 3sala_c + h_c --> co2_c + yli_M00487_c 10722 or 9434 or 9435\n\n3HLYTCL 34dhphe_c + h_c --> co2_c + dopa_c 10104\n3HXKYNDCL hLkynr_c + h_c --> 3hxkynam_c + co2_c 10104\n5HLTDL 5htrp_c + h_c --> co2_c + srtn_c 10104\n5HXKYNDCL 5hxkyn_c + h_c --> 5hxkynam_c + co2_c 10104\nHISDC h_c + his__L_c --> co2_c + hista_c 10104\nLTDCL h_c + trp__L_c --> co2_c + trypta_c 10104\nPHYCBOXL h_c + phe__L_c --> co2_c + peamn_c 10104\nTYRCBOX h_c + tyr__L_c --> co2_c + tym_c 10104\n\nCHOLSabc atp_c + chols_p + h2o_c --> adp_c + chols_c + h_c + pi_c (PP_0076 and 14119) or (PP_0076 and 15736) or (PP_0868 and PP_0869 and PP_0870) or (PP_0868 and PP_0870 and PP_0871)\nCLHCOtex cl_e + 2.0 hco3_c --> cl_c + 2.0 hco3_e 14119 or 15736\nOXAHCOtex 2.0 hco3_c + oxa_e --> 2.0 hco3_e + oxa_c 14119 or 15736\nSO4HCOtex 2.0 hco3_c + so4_e --> 2.0 hco3_e + so4_c 14119 or 15736\nSO4t2 h_e + so4_e <=> h_c + so4_c 14119 or 15736\nSO4ti so4_e --> so4_c 14119 or 15736 or 16682 or (14119 and 16682) or (15736 and 16682)\n\nCHOLSabc atp_c + chols_p + h2o_c --> adp_c + chols_c + h_c + pi_c (PP_0076 and 14119) or (PP_0076 and 15736) or (PP_0868 and PP_0869 and PP_0870) or (PP_0868 and PP_0870 and PP_0871)\nCLHCOtex cl_e + 2.0 hco3_c --> cl_c + 2.0 hco3_e 14119 or 15736\nOXAHCOtex 2.0 hco3_c + oxa_e --> 2.0 hco3_e + oxa_c 14119 or 15736\nSO4HCOtex 2.0 hco3_c + so4_e --> 2.0 hco3_e + so4_c 14119 or 15736\nSO4t2 h_e + so4_e <=> h_c + so4_c 14119 or 15736\nSO4ti so4_e --> so4_c 14119 or 15736 or 16682 or (14119 and 16682) or (15736 and 16682)\n\nCLFORtex2 2.0 cl_e + for_c --> 2.0 cl_c + for_e 16202\nCLHCO3tex2 2.0 cl_e + hco3_c --> 2.0 cl_c + hco3_e 16202\nCLOHtex2 2.0 cl_e + oh1_c --> 2.0 cl_c + oh1_e 16202\nCLOXAtex2 2.0 cl_e + oxa_c --> 2.0 cl_c + oxa_e 16202\nSO4CLtex2 cl_c + 2.0 so4_e --> cl_e + 2.0 so4_c 16202\nSO4OXAtex2 oxa_c + 2.0 so4_e --> oxa_e + 2.0 so4_c 16202\n\n"
]
],
[
[
"13082\textr 23, golg 2\tK01183: E3.2.1.14; chitinase\tGMQ* \nARGSS_1 incorrect stoich \nPRAGSr, PRAIS correct (air_c -2 in BiGG, but -1 in Metacyc) \nAIRCr correct (AIRC2+AIRC3 bacteria, PRAIC with air_c -1) \n\nNMA1,NMA2\t10430\tcyto 13.5, cyto_mito 10.5, nucl 7, mito 6.5\tK06210: NMNAT; nicotinamide mononucleotide adenylyltransferase\tSAS* \nNPP1,NPP2\t14638\tcyto 8.5, extr 7, mito 6, cyto_nucl 5, plas 2\tKOG2645: Type I phosphodiesterase/nucleotide pyrophosphatase\tEGV* \n15385\tcyto_mito 7, mito 6.5, cyto 6.5, pero 5, nucl 4, extr 2, cysk 2\tK03574: mutT, NUDT15, MTH2; 8-oxo-dGTP diphosphatase\tLVL* \n\nNMNAT/NMNATm/NMNATn, NNATr/NNATm/NNATn correct \nChange NNATr to NNAT, and make it irreversible (deltaG -54 kcal/mol in MetaCyc) \n\nGLYALDDr is not clear -> remove for now \n\n12434\tmito 13.5, cyto_mito 10.833, cyto 7, cyto_nucl 5.333, pero 3\tK03426: E3.6.1.22, NUDT12, nudC; NAD+ diphosphatase\tSKM* \n\n14119\tplas 22, E.R. 4\tK14708: SLC26A11; solute carrier family 26 (sodium-independent sulfate anion transporter), member 11\tTKA* \n15736\tplas 25\tK14708: SLC26A11; solute carrier family 26 (sodium-independent sulfate anion transporter), member 11\tDDW* \n16682\tplas 22, E.R. 3\tK03321: TC.SULP; sulfate permease, SulP family\tASL*",
"_____no_output_____"
]
],
[
[
"temp = ['yli_R0034','ARGSS_1','FGFTh','FPGFTh','PPRGL','PPRGLh','AIRC2','AIRC3','PRAIC','PRAICh','ANNAT','ANNATn',\n 'NNATr_copy1','NNATr_copy2','GLYALDDr','yli_R0303','yli_R1508','HDH','HDHh','PRACHh','PRADPh','GUAD_1','PBAL',\n 'PBALm','NADDP','yli_R1538','yli_R1539','NPH','DNTPPA_1','STARCH300DEGR2A','STARCH300DEGR2B','STARCH300DEGRA',\n 'STARCH300DEGRB','HDC','yli_R0777','CHOLSabc']\nmodel.remove_reactions(temp, remove_orphans=True)\n\nmodel.reactions.get_by_id('NNATr').id = 'NNAT'\nmodel.reactions.get_by_id('NNAT').lower_bound = 0.0\n\nr = sce.reactions.get_by_id('DNTPPA').copy()\nr.gene_reaction_rule = '15385'\nmodel.add_reactions([r])\nr = hsa.reactions.get_by_id('3SALACBOXL').copy() # why is this not picked up? check\nr.gene_reaction_rule = '10722 or 9434 or 9435'\nmodel.add_reactions([r])\n\nmodel.reactions.get_by_id('SO4ti').gene_reaction_rule = '14119 or 15736 or 16682'",
"_____no_output_____"
]
],
[
[
"#### Fix fatty acid and sterol in the next step",
"_____no_output_____"
]
],
[
[
"print(len(model.genes))\nprint(len(model.reactions))\nprint(len(model.metabolites))\nmodel",
"1332\n3264\n3290\n"
],
[
"for x in sorted(model.genes, key=lambda x: x.id):\n if not x.reactions:\n print(x)\nprint()\nfor x in sorted(model.metabolites, key=lambda x: x.id):\n if not x.reactions:\n print(x)",
"12022\n15595\n4833_AT1\n9153\nCRv4_Au5_s6_g12448_t1\nGCLM\nGclm\nPDHX\nPHATRDRAFT_34976\nPHATRDRAFT_36641\nPHATRDRAFT_37658\nPHATRDRAFT_46880\nPP_1986\nPP_5098\nPdhx\nYCR083W\nYDR453C\nYGR180C\nb0071\nb3551\n\n2hhxdal_c\namob_c\ndann_c\nhxdcal_c\npsph1p_c\npsphings_c\nsph1p_c\n"
],
[
"cobra.manipulation.remove_genes(model, [x for x in model.genes if not x.reactions])\nmodel.remove_metabolites([x for x in model.metabolites if not x.reactions])",
"_____no_output_____"
],
[
"print(len(model.genes))\nprint(len(model.reactions))\nprint(len(model.metabolites))\nmodel",
"1312\n3264\n3283\n"
],
[
"for x in sorted(model.genes, key=lambda x: x.id):\n if not x.reactions:\n print(x)\nprint()\nfor x in sorted(model.metabolites, key=lambda x: x.id):\n if not x.reactions:\n print(x)",
"\n"
],
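[
"# Optional check before saving (sketch, not part of the original notebook): several fixes\n# above corrected proton stoichiometry, so it can be useful to count reactions that still\n# look elementally unbalanced. Boundary (exchange/demand/sink) reactions are skipped, and\n# reactions containing metabolites without formulas are skipped rather than reported.\nunbalanced = []\nfor r in model.reactions:\n    if r.boundary:\n        continue\n    try:\n        imbalance = r.check_mass_balance()\n    except Exception:\n        continue\n    if imbalance:\n        unbalanced.append((r.id, imbalance))\nprint(len(unbalanced), 'reactions with apparent mass/charge imbalance')",
"_____no_output_____"
],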
[
"cobra.io.save_json_model(model, \"IFO0880_GPR_1b.json\")",
"_____no_output_____"
],
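[
"# Optional export sketch (not in the original notebook): alongside the JSON snapshot above,\n# an SBML copy can help when sharing the model with non-COBRApy tools. The filename is\n# illustrative; uncomment to actually write the file.\n# cobra.io.write_sbml_model(model, 'IFO0880_GPR_1b.xml')",
"_____no_output_____"
],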
[
"model_old = cobra.io.load_json_model(\"IFO0880_GPR_1a.json\")\nmodel_new = cobra.io.load_json_model(\"IFO0880_GPR_1b.json\")",
"_____no_output_____"
],
[
"print('Removed reactions\\n')\nfor r in sorted(model_old.reactions, key=lambda x: x.id):\n if r not in model_new.reactions:\n print(r)",
"Removed reactions\n\n3DSPHR: 3dsphgn_c + h_c + nadph_c --> nadp_c + sphgn_c\n4HTHRA: 4hthr_c <=> gcald_c + gly_c\n4HTHRK: 4hthr_c + atp_c --> adp_c + h_c + phthr_c\n4HTHRS: h2o_c + phthr_c --> 4hthr_c + pi_c\nACAS_2ahbut: 2ahethmpp_h + 2obut_h --> 2ahbut_h + thmpp_h\nACHBS: 2obut_c + h_c + pyr_c --> 2ahbut_c + co2_c\nACLS: h_c + 2.0 pyr_c --> alac__S_c + co2_c\nACSERL: acser_c + seln_c <=> ac_c + 2.0 h_c + selcys_c\nACSERLh: acser_h + seln_h <=> ac_h + 2.0 h_h + selcys_h\nACSERLm: acser_m + seln_m <=> ac_m + 2.0 h_m + selcys_m\nACSERSULL: acser_c + tsul_c --> ac_c + h_c + scys__L_c\nACSERSULLh: acser_h + tsul_h --> ac_h + h_h + scys__L_h\nACSERSULLm: acser_m + tsul_m --> ac_m + h_m + scys__L_m\nADK1m: amp_m + atp_m <=> 2.0 adp_m\nADK3: amp_c + gtp_c <=> adp_c + gdp_c\nADK4: amp_c + itp_c <=> adp_c + idp_c\nADNK1m: adn_m + atp_m --> adp_m + amp_m + h_m\nAFAT: atp_c + fmn_c + 2.0 h_c --> fad_c + ppi_c\nAHAL: achms_h + trdrd_h + tsul_h --> ac_h + h_h + hcys__L_h + so3_h + trdox_h\nAHSERL4: acser_c + trdrd_c + tsul_c --> ac_c + cys__L_c + h_c + so3_c + trdox_c\nAIRC2: air_c + atp_c + hco3_c --> 5caiz_c + adp_c + h_c + pi_c\nAIRC3: 5aizc_c <=> 5caiz_c\nAKGDH: akg_c + coa_c + nad_c --> co2_c + nadh_c + succoa_c\nAKGDHe2r: coa_m + h_m + sdhlam_m <=> dhlam_m + succoa_m\nAKGDa: akg_c + h_c + lpam_c <=> co2_c + sdhlam_c\nAKGDam: akg_m + h_m + lpam_m <=> co2_m + sdhlam_m\nAKGDb: coa_c + sdhlam_c <=> dhlam_c + succoa_c\nAKGDbm: coa_m + sdhlam_m --> dhlam_m + succoa_m\nALATA_D2: ala__D_c + pydx5p_c --> pyam5p_c + pyr_c\nALATA_L2: ala__L_c + pydx5p_c --> pyam5p_c + pyr_c\nALDD2x_copy1: acald_c + h2o_c + nad_c --> ac_c + 2.0 h_c + nadh_c\nALPHm: allphn_m + h2o_m + 3.0 h_m --> 2.0 co2_m + 2.0 nh4_m\nAMAOTr: 8aonn_c + amet_c <=> amob_c + dann_c\nANNAT: atp_c + 2.0 h_c + nmn_c <=> nad_c + ppi_c\nANNATn: atp_n + 2.0 h_n + nmn_n <=> nad_n + ppi_n\nAOXSr2: ala__L_c + pimACP_c --> 8aonn_c + ACP_c + co2_c\nAPLh: 2ahethmpp_h + pyr_h --> alac__S_h + thmpp_h\nAPLm: 2ahethmpp_m + pyr_m --> alac__S_m + thmpp_m\nAPPAT: atp_c + 2.0 h_c + pan4p_c <=> dpcoa_c + ppi_c\nARD: dhmtp_c + o2_c --> 2kmb_c + for_c + 2.0 h_c\nARD1: dhmtp_c + o2_c --> co_c + for_c + h_c + mtpp_c\nARGSS_1: asp__L_c + atp_c + citr__L_c --> amp_c + argsuc_c + 2.0 h_c + ppi_c\nASPK_1: asp__L_h + atp_h --> 4pasp_h + adp_h\nATAMh: amp_h + atp_h --> 2.0 adp_h\nATDAMh: atp_h + damp_h --> adp_h + dadp_h\nATDAMm: atp_m + damp_m --> adp_m + dadp_m\nBTS5: 2fe2s_c + amet_c + dtbt_c --> 2fe1s_c + btn_c + dad_5_c + h_c + met__L_c\nBTSr: dtbt_c + s_c <=> btn_c + 2.0 h_c\nCBL: cyst__L_h + h2o_h --> hcys__L_h + nh4_h + pyr_h\nCERH124_copy1: cer1_24_c + h_c + nadph_c + o2_c --> cer2_24_c + h2o_c + nadp_c\nCERH124_copy2: cer1_24_c + h_c + nadph_c + o2_c --> cer2_24_c + h2o_c + nadp_c\nCERH126_copy1: cer1_26_c + h_c + nadph_c + o2_c --> cer2_26_c + h2o_c + nadp_c\nCERH126_copy2: cer1_26_c + h_c + nadph_c + o2_c --> cer2_26_c + h2o_c + nadp_c\nCERS124: sphgn_c + ttccoa_c --> cer1_24_c + coa_c + h_c\nCERS126: hexccoa_c + sphgn_c --> cer1_26_c + coa_c + h_c\nCERS224: psphings_c + ttccoa_c --> cer2_24_c + coa_c + h_c\nCERS226: hexccoa_c + psphings_c --> cer2_26_c + coa_c + h_c\nCERS324: cer2_24_c + h_c + nadph_c + o2_c --> cer3_24_c + h2o_c + nadp_c\nCERS326: cer2_26_c + h_c + nadph_c + o2_c --> cer3_26_c + h2o_c + nadp_c\nCHOLS_ex: chols_e <=> chols_p\nCHOLSabc: atp_c + chols_p + h2o_c --> adp_c + chols_c + h_c + pi_c\nCTINBL: cysi__L_h + h2o_h --> nh4_h + pyr_h + thcys_h\nCYS: h2s_c + ser__L_c --> cys__L_c + h2o_c\nCYSS_1: acser_h + h2s_h --> ac_h + 
cys__L_h\nCYSS_trdrd: acser_h + trdrd_h + tsul_h --> ac_h + cys__L_h + h_h + so3_h + trdox_h\nCYSTBL: h2s_h + h_h + nh4_h + pyr_h --> cys__L_h + h2o_h\nCYSTLp: cyst__L_x + h2o_x --> hcys__L_x + nh4_x + pyr_x\nDASCBR: dhdascb_c + h_c + nadph_c --> ascb__L_c + nadp_c\nDHRT_2mbcoa: 2mbdhl_m + coa_m --> 2mbcoa_m + dhlam_m\nDHRT_ibcoa: 2mpdhl_m + coa_m --> dhlam_m + ibcoa_m\nDHRT_ivcoa: 3mbdhl_m + coa_m --> dhlam_m + ivcoa_m\nDKMPPD2: dkmpp_c + 3.0 h2o_c --> 2kmb_c + for_c + 6.0 h_c + pi_c\nDNTPPA_1: ahdt_c + h2o_c --> dhpmp_c + ppi_c\nDPCOAKm: atp_m + dpcoa_m --> adp_m + coa_m + h_m\nDSBDR: dsbdox_c + trdrd_c --> dsbdrd_c + trdox_c\nFGFTh: fgam_h + 3.0 h_h + thf_h --> gar_h + h2o_h + methf_h\nFOMETRi: 5fthf_c + h_c --> h2o_c + methf_c\nFPGFTh: 10fthf_h + gar_h <=> fgam_h + h_h + thf_h\nFTHFCL: 5fthf_c + atp_c --> adp_c + methf_c + pi_c\nGCC2am: gly_m + h_m + lpam_m <=> alpam_m + co2_m\nGCC2bim: alpam_m + thf_m --> dhlam_m + mlthf_m + nh4_m\nGCC2cm: dhlam_m + nad_m <=> h_m + lpam_m + nadh_m\nGCC2cm_copy1: dhlam_m + nad_m <=> h_m + lpam_m + nadh_m\nGCC2cm_copy2: dhlam_m + nad_m --> h_m + lpam_m + nadh_m\nGCCam: gly_m + h_m + lpro_m <=> alpro_m + co2_m\nGCCbim: alpro_m + thf_m --> dhlpro_m + mlthf_m + nh4_m\nGCCcm: dhlpro_m + nad_m <=> h_m + lpro_m + nadh_m\nGDHm: glu__L_m + h2o_m + nad_m <=> akg_m + h_m + nadh_m + nh4_m\nGDR: gthox_c + h_c + nadh_c --> 2.0 gthrd_c + nad_c\nGDR_nadp_h: gthox_h + h_h + nadph_h --> 2.0 gthrd_h + nadp_h\nGDRh: gthox_h + h_h + nadh_h --> 2.0 gthrd_h + nad_h\nGDRm: gthox_m + h_m + nadh_m --> 2.0 gthrd_m + nad_m\nGHMT2rm: ser__L_m + thf_m <=> gly_m + h2o_m + mlthf_m\nGHMT3: 3htmelys_c + h_c --> 4tmeabut_c + gly_c\nGHMT3m: 3htmelys_m + h_m --> 4tmeabut_m + gly_m\nGLCOASYNT: S_gtrdhdlp_c + coa_c --> dhlam_c + glutcoa_c\nGLUDym: glu__L_m + h2o_m + nadp_m <=> akg_m + h_m + nadph_m + nh4_m\nGLUS: akg_h + gln__L_h + h_h + nadh_h --> 2.0 glu__L_h + nad_h\nGLUS_ferr: akg_h + 2.0 fdxrd_h + gln__L_h --> 2.0 fdxox_h + 2.0 glu__L_h + 2.0 h_h\nGLUS_nadph: akg_h + gln__L_h + h_h + nadph_h --> 2.0 glu__L_h + nadp_h\nGLUSy: akg_c + gln__L_c + h_c + nadph_c --> 2.0 glu__L_c + nadp_c\nGLYALDDr: glyald_c + h2o_c + nad_c <=> glyc__R_c + 2.0 h_c + nadh_c\nGLYATx: accoa_x + gly_x <=> 2aobut_x + coa_x + h_x\nGLYCL: gly_c + nad_c + thf_c --> co2_c + mlthf_c + nadh_c + nh4_c\nGLYCL_2: co2_c + mlthf_c + nadh_c + nh4_c --> gly_c + nad_c + thf_c\nGLYDHD: gly_m + lpro_m --> alpro_m + co2_m\nGTHPm: 2.0 gthrd_m + h2o2_m <=> gthox_m + 2.0 h2o_m\nGTPCI_2: gtp_c + h2o_c --> ahdt_c + for_c + 2.0 h_c\nGUAD_1: gua_c + h2o_c + 2.0 h_c --> nh4_c + xan_c\nHDC: 2.0 h_c + his__L_c --> co2_c + hista_c\nHDH: h2o_c + histd_c + 2.0 nad_c --> 4.0 h_c + his__L_c + 2.0 nadh_c\nHDHh: h2o_h + histd_h + 2.0 nad_h --> 3.0 h_h + his__L_h + 2.0 nadh_h\nHSDH: aspsa_h + h_h + nadph_h --> hom__L_h + nadp_h\nHSK_1: atp_h + hom__L_h --> adp_h + h_h + phom_h\nKAT1: aacoa_c + coa_c --> 2.0 accoa_c\nLYSMTF1n: amet_n + peplys_n --> Nmelys_n + ahcys_n\nLYSMTF2n: Nmelys_n + amet_n --> Ndmelys_n + ahcys_n\nLYSMTF3n: Ndmelys_n + amet_n --> Ntmelys_n + ahcys_n\nMETB1: achms_c + cys__L_c --> ac_c + cyst__L_c + h_c\nMHPGLUT: hcys__L_c + mhpglu_c --> hpglu_c + met__L_c\nMOD: 3mob_m + h_m + thmpp_m --> 2mhop_m + co2_m\nMOD_2mbdhl: 2mhob_m + lpam_m --> 2mbdhl_m + thmpp_m\nMOD_2mhop: 2mhop_m + lpam_m --> 2mpdhl_m + thmpp_m\nMOD_3mhtpp: 3mhtpp_m + lpam_m --> 3mbdhl_m + thmpp_m\nMOD_3mop: 3mop_m + h_m + thmpp_m --> 2mhob_m + co2_m\nMOD_4mop: 4mop_m + h_m + thmpp_m --> 3mhtpp_m + co2_m\nMS: h_m + hcys__L_m + mhpglu_m --> hpglu_m + 
met__L_m\nMTAM: 5fthf_m + 2.0 h_m --> h2o_m + methf_m\nMTAM_nh4: alpro_m + h_m + thf_m <=> dhlpro_m + mlthf_m + nh4_m\nMTRK: 5mtr_c + atp_c --> 5mdr1p_c + adp_c + h_c\nNADDP: h2o_c + nad_c --> amp_c + 2.0 h_c + nmn_c\nNDPK10n: atp_n + didp_n <=> adp_n + ditp_n\nNDPK1n: atp_n + gdp_n <=> adp_n + gtp_n\nNDPK2n: atp_n + udp_n <=> adp_n + utp_n\nNDPK3n: atp_n + cdp_n <=> adp_n + ctp_n\nNDPK4n: atp_n + dtdp_n <=> adp_n + dttp_n\nNDPK5n: atp_n + dgdp_n <=> adp_n + dgtp_n\nNDPK6n: atp_n + dudp_n <=> adp_n + dutp_n\nNDPK7n: atp_n + dcdp_n <=> adp_n + dctp_n\nNDPK8n: atp_n + dadp_n <=> adp_n + datp_n\nNDPK9n: atp_n + idp_n <=> adp_n + itp_n\nNNATr: atp_c + h_c + nicrnt_c <=> dnad_c + ppi_c\nNNATr_copy1: atp_c + h_c + nicrnt_c --> dnad_c + ppi_c\nNNATr_copy2: atp_c + h_c + nicrnt_c <=> dnad_c + ppi_c\nNPH: h2o_c + nad_c --> amp_c + 3.0 h_c + nmn_c\nOIVD1r: 4mop_c + coa_c + nad_c <=> co2_c + ivcoa_c + nadh_c\nOIVD2: 3mob_c + coa_c + nad_c --> co2_c + ibcoa_c + nadh_c\nOIVD3: 3mop_c + coa_c + nad_c --> 2mbcoa_c + co2_c + nadh_c\nOXOADLR: 2oxoadp_c + h_c + lpam_c --> S_gtrdhdlp_c + co2_c\nPAPSPAPthr: pap_c + paps_h <=> pap_h + paps_c\nPAPSR2: grxrd_c + paps_c --> grxox_c + 2.0 h_c + pap_c + so3_c\nPBAL: ala_B_c + atp_c + pant__R_c --> amp_c + 2.0 h_c + pnto__R_c + ppi_c\nPBALm: ala_B_m + atp_m + pant__R_m --> amp_m + 3.0 h_m + pnto__R_m + ppi_m\nPDCm: 2ahethmpp_m --> acald_m + thmpp_m\nPDH: coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c\nPDHam1hi: h_h + pyr_h + thmpp_h --> 2ahethmpp_h + co2_h\nPDHam1mi: h_m + pyr_m + thmpp_m --> 2ahethmpp_m + co2_m\nPDHam2hi: 2ahethmpp_h + lpam_h --> adhlam_h + thmpp_h\nPDHam2mi: 2ahethmpp_m + lpam_m --> adhlam_m + thmpp_m\nPDHcr: dhlam_c + nad_c <=> h_c + lpam_c + nadh_c\nPDHe2r: adhlam_m + coa_m <=> accoa_m + dhlam_m\nPLYSPSer: Ntmelys_r + h2o_r --> pepslys_r + tmlys_r\nPPATDh: h_h + 2.0 pyr_h --> alac__S_h + co2_h\nPPRGL: atp_c + gly_c + pram_c --> adp_c + gar_c + 2.0 h_c + pi_c\nPPRGLh: atp_h + gly_h + pram_h --> adp_h + gar_h + h_h + pi_h\nPRACHh: h2o_h + prbamp_h --> prfp_h\nPRADPh: h2o_h + prbatp_h --> 4.0 h_h + ppi_h + prbamp_h\nPRAIC: air_c + co2_c <=> 5aizc_c + 2.0 h_c\nPRAICh: air_h + co2_h <=> 5aizc_h + h_h\nPSPHPL: psph1p_c --> 2hhxdal_c + ethamp_c\nPSPHS: h_c + nadph_c + o2_c + sphgn_c --> h2o_c + nadp_c + psphings_c\nPTPATim: atp_m + h_m + pan4p_m --> dpcoa_m + ppi_m\nPYRDC_1: h_m + pyr_m --> acald_m + co2_m\nRNDR1b: adp_c + grxrd_c --> dadp_c + grxox_c + h2o_c\nRNDR2b: gdp_c + grxrd_c --> dgdp_c + grxox_c + h2o_c\nRNDR3b: cdp_c + grxrd_c --> dcdp_c + grxox_c + h2o_c\nRNDR4b: grxrd_c + udp_c --> dudp_c + grxox_c + h2o_c\nRNTR1: atp_c + trdrd_c --> datp_c + h2o_c + trdox_c\nRNTR2: gtp_c + trdrd_c --> dgtp_c + h2o_c + trdox_c\nRNTR3: ctp_c + trdrd_c --> dctp_c + h2o_c + trdox_c\nRNTR4: trdrd_c + utp_c --> dutp_c + h2o_c + trdox_c\nSBPP1: h2o_c + sph1p_c --> pi_c + sphgn_c\nSEAHCYSHYD: h2o_c + seahcys_c --> adn_c + selhcys_c\nSEAHCYSHYD_1: h2o_c + seahcys_c <=> adn_c + h_c + selhcys_c\nSELCYSTGL: h2o_c + selcyst_c --> 2obut_c + nh4_c + selcys_c\nSELCYSTL: h2o_c + selcyst_c --> h_c + nh4_c + pyr_c + selhcys_c\nSELCYSTLh: h2o_h + selcyst_h --> h_h + nh4_h + pyr_h + selhcys_h\nSELCYSTS: selhcys_c + ser__L_c --> h2o_c + selcyst_c\nSERPT: h_c + pmtcoa_c + ser__L_c --> 3dsphgn_c + co2_c + coa_c\nSGOR: fad_c + sphgn_c <=> fadh2_c + sphings_c\nSGPL11r: sph1p_r --> ethamp_r + hxdcal_r\nSGPL12r: h2o_r + sphs1p_r --> ethamp_r + h_r + hdca_r\nSGPL13: sphs1p_c --> ethamp_c + hxdceal_c\nSHSL2r: h2s_c + suchms_c <=> h_c + hcys__L_c + succ_c\nSHSL4r: h2o_c + suchms_c 
<=> 2obut_c + h_c + nh4_c + succ_c\nSLCBK1: atp_c + sphgn_c --> adp_c + h_c + sph1p_c\nSLCBK2: atp_c + psphings_c --> adp_c + h_c + psph1p_c\nSLCYSS: acser_c + tsul_c --> ac_c + scys__L_c\nSPHK21c: atp_c + sphings_c --> adp_c + h_c + sphs1p_c\nSPHPL: sph1p_c --> ethamp_c + hxdcal_c\nSTARCH300DEGR2A: 49.0 h2o_h + 250.0 pi_h + starch300_h --> 50.0 Glc_aD_h + 250.0 g1p_h\nSTARCH300DEGR2B: 49.0 h2o_h + 250.0 pi_h + starch300_h --> 250.0 g1p_h + 50.0 glc__bD_h\nSTARCH300DEGRA: 74.0 h2o_h + 225.0 pi_h + starch300_h --> 75.0 Glc_aD_h + 225.0 g1p_h\nSTARCH300DEGRB: 74.0 h2o_h + 225.0 pi_h + starch300_h --> 225.0 g1p_h + 75.0 glc__bD_h\nTDSRh: h_h + nadph_h + trdox_h --> nadp_h + trdrd_h\nTHFAT: h2o_c + methf_c --> 5fthf_c + h_c\nTHFATm: h2o_m + methf_m --> 5fthf_m + h_m\nTHRA_1: thr__L_h <=> acald_h + gly_h\nTHRS_1: h2o_h + phom_h --> pi_h + thr__L_h\nTMLYSOX: akg_c + o2_c + tmlys_c --> 3htmelys_c + co2_c + succ_c\nURCB: atp_c + hco3_c + urea_c --> adp_c + allphn_c + 2.0 h_c + pi_c\nURCBm: atp_m + hco3_m + urea_m --> adp_m + allphn_m + 2.0 h_m + pi_m\nyli_R0002: akg_c + gln__L_c + h_c + nadph_c --> glu__L_c + nadp_c\nyli_R0034: chtn_c + h2o_c --> acgam_c\nyli_R0094: achms_c + h_c + trdrd_c + tsul_c --> ac_c + hcys__L_c + so3_c + trdox_c\nyli_R0291: gthox_c + h_c + nadph_c --> gthrd_c + nadp_c\nyli_R0303: glyald_c + h_c + nadh_c <=> glyc_c + h2o_c + nad_c\nyli_R0357: 4.0 h_c + pyr_c + thmpp_c --> 2ahethmpp_c + co2_c\nyli_R0364: 2ahethmpp_c --> acald_c + 3.0 h_c + thmpp_c\nyli_R0374: accoa_m + yli_M03934_m <=> coa_m + yli_M04008_m\nyli_R0377: 2ahethmpp_m + yli_M03933_m --> 3.0 h_c + thmpp_m + yli_M04008_m\nyli_R0381: nad_m + yli_M03934_m --> h_m + nadh_m + yli_M03933_m\nyli_R0428: 2oxoadp_c + coa_c + nad_c --> co2_c + glutcoa_c + nadh_c\nyli_R0698: sphgn_c + yli_M04597_c --> coa_c + h_c + yli_M05742_c\nyli_R0699: sphgn_c + yli_M04095_c --> coa_c + h_c + yli_M05741_c\nyli_R0700: psphings_c + yli_M04095_c --> coa_c + h_c + yli_M05748_c\nyli_R0701: psphings_c + yli_M04597_c --> coa_c + h_c + yli_M05749_c\nyli_R0702: h_c + nadph_c + o2_c + yli_M05742_c --> h2o_c + nadp_c + yli_M05749_c\nyli_R0703: h_c + nadph_c + o2_c + yli_M05741_c --> h2o_c + nadp_c + yli_M05748_c\nyli_R0704: h_c + nadph_c + o2_c + yli_M05749_c --> h2o_c + nadp_c + yli_M05725_c\nyli_R0705: h_c + nadph_c + o2_c + yli_M05748_c --> h2o_c + nadp_c + yli_M05724_c\nyli_R0777: 3sala_c + h_c --> co2_c + yli_M00487_c\nyli_R0784: succoa_m + yli_M03934_m <=> coa_m + yli_M04007_m\nyli_R0788: akg_m + h_m + yli_M03933_m --> co2_m + yli_M04007_m\nyli_R0848: 4.0 h_m + pyr_m + thmpp_m --> 2ahethmpp_m + co2_m\nyli_R0861: 2ahethmpp_m + pyr_m --> alac__S_m + 3.0 h_m + thmpp_m\nyli_R0862: 2ahethmpp_m + 2obut_m --> 2ahbut_m + 3.0 h_m + thmpp_m\nyli_R0878: coa_c + yli_M03938_c --> ibcoa_c + yli_M03934_c\nyli_R0879: coa_c + yli_M03940_c --> yli_M03934_c + yli_M03941_c\nyli_R0880: coa_c + yli_M03936_c --> ivcoa_c + yli_M03934_c\nyli_R0937: 4mop_c + h_c + yli_M03933_c --> co2_c + yli_M03936_c\nyli_R0938: 3mob_c + h_c + yli_M03933_c --> co2_c + yli_M03938_c\nyli_R0939: 3mop_c + h_c + yli_M03933_c --> co2_c + yli_M03940_c\nyli_R1375: 4.0 h_c + pyr_c + thmpp_c --> 2ahethmpp_c + co2_c\nyli_R1377: co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x\nyli_R1378: co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x\nyli_R1387: gly_x + h2o_x + mlthf_x <=> ser__L_x + thf_x\nyli_R1419: nad_c + yli_M03934_c --> h_c + nadh_c + yli_M03933_c\nyli_R1425: co2_x + mlthf_x + nadh_x + nh4_x --> gly_x + nad_x + thf_x\nyli_R1435: 2.0 accoa_x --> aacoa_x + 
coa_x\nyli_R1438: h_r + pmtcoa_r + ser__L_r --> 3dsphgn_r + co2_r + coa_r\nyli_R1439: atp_r + sphgn_r --> adp_r + h_r + sph1p_r\nyli_R1440: 3dsphgn_r + h_r + nadph_r --> nadp_r + sphgn_r\nyli_R1441: h_r + nadph_r + o2_r + sphgn_r --> h2o_r + nadp_r + psphings_r\nyli_R1465: 3c3hmp_m <=> 2ippm_m + h2o_m\nyli_R1466: 2ippm_m + h2o_m <=> 3c2hmp_m\nyli_R1487: atp_c + btn_c --> btamp_c + ppi_c\nyli_R1488: atp_c + btn_c --> btamp_c + ppi_c\nyli_R1489: atp_c + btn_c --> btamp_c + ppi_c\nyli_R1490: atp_c + btn_c --> btamp_c + ppi_c\nyli_R1495: dhmtp_c + o2_c --> co_c + for_c + mtpp_c\nyli_R1508: glyald_c + h2o_c + nad_c --> glyc__R_c + h_c + nadh_c\nyli_R1532: glutcoa_m + yli_M03934_m <=> S_gtrdhdlp_m + coa_m\nyli_R1538: h2o_c + nad_c --> amp_c + nmn_c\nyli_R1539: dnad_c + h2o_c --> amp_c + nicrnt_c\nyli_R1567: sphs1p_r --> ethamp_r + hxdceal_r\nyli_R1568: sph1p_r --> ethamp_r + yli_M07049_r\nyli_R1569: atp_r + yli_M07050_r --> adp_r + sphs1p_r\nyli_R1570: acoa_r + sphgn_r --> coa_r + dhcrm_cho_r\nyli_R1571: dhcrm_cho_r + h_r + nadph_r + o2_r --> h2o_r + nadp_r + phcrm_hs_r\nyli_R1575: HC01435_m + yli_M03933_m --> thmpp_m + yli_M04007_m\nyli_R1576: akg_m + thmpp_m --> HC01435_m + co2_m\nyli_R1587: 4mop_c + thmpp_c <=> 3mhtpp_c + co2_c\nyli_R1588: 2obut_m + pyr_m --> 2ahbut_m + co2_m\nyli_R1590: 2mhop_c + yli_M03933_c --> thmpp_c + yli_M03938_c\nyli_R1591: 3mhtpp_c + yli_M03933_c --> thmpp_c + yli_M03936_c\nyli_R1592: 2mhob_c + yli_M03933_c --> thmpp_c + yli_M03940_c\nyli_R7859: 2ippm_m + h2o_m <=> 3c2hmp_m\nyli_R8859: 3c3hmp_m <=> 2ippm_m + h2o_m\n"
],
[
"print('Updated reactions\\n')\nfor r in sorted(model_old.reactions, key=lambda x: x.id):\n if r in model_new.reactions:\n r2 = model_new.reactions.get_by_id(r.id)\n if (r.name == r2.name and r.reaction == r2.reaction and r.gene_reaction_rule == r2.gene_reaction_rule and\n r.lower_bound == r2.lower_bound and r.upper_bound == r2.upper_bound):\n pass\n else:\n print('Old', r, r.gene_reaction_rule)\n print('New', r2, r2.gene_reaction_rule)\n print()",
"Updated reactions\n\nOld 2OXOADOXm: 2oxoadp_m + coa_m + nad_m --> co2_m + glutcoa_m + nadh_m (PDHX and 10007 and 10040 and 12116) or (PDHX and 10040 and 12116 and 9274) or (Pdhx and 10007 and 10040 and 12116) or (Pdhx and 10040 and 12116 and 9274)\nNew 2OXOADOXm: 2oxoadp_m + coa_m + nad_m --> co2_m + glutcoa_m + nadh_m 10040 and 12116 and 9274\n\nOld ACACT1m: 2.0 accoa_m --> aacoa_m + coa_m 8678 or 8885 or (HADHA and HADHB) or (Hadha and Hadhb)\nNew ACACT1m: 2.0 accoa_m --> aacoa_m + coa_m 8678 or 8885\n\nOld ADK1: amp_c + atp_c <=> 2.0 adp_c 12300 or 13190 or 15496\nNew ADK1: amp_c + atp_c <=> 2.0 adp_c 15496\n\nOld ADNK1: adn_c + atp_c --> adp_c + amp_c + h_c 15496 or 8385\nNew ADNK1: adn_c + atp_c --> adp_c + amp_c + h_c 8385\n\nOld AKGDm: akg_m + coa_m + nad_m --> co2_m + nadh_m + succoa_m (PDHX and 10007 and 10040 and 12116) or (PDHX and 10040 and 12116 and 9274) or (Pdhx and 10007 and 10040 and 12116) or (Pdhx and 10040 and 12116 and 9274)\nNew AKGDm: akg_m + coa_m + nad_m --> co2_m + nadh_m + succoa_m 10007 and 10040 and 12116\n\nOld ASPK: asp__L_c + atp_c <=> 4pasp_c + adp_c 12080 or 14662 or 16738\nNew ASPK: asp__L_c + atp_c <=> 4pasp_c + adp_c 14662\n\nOld CERS124er: sphgn_r + ttccoa_r --> cer1_24_r + coa_r + h_r 11391\nNew CERS124er: sphgn_r + ttccoa_r --> cer1_24_r + coa_r + h_r 11391 or 15168\n\nOld CERS126er: hexccoa_r + sphgn_r --> cer1_26_r + coa_r + h_r 11391\nNew CERS126er: hexccoa_r + sphgn_r --> cer1_26_r + coa_r + h_r 11391 or 15168\n\nOld CERS224er: psphings_r + ttccoa_r --> cer2_24_r + coa_r + h_r 11391\nNew CERS224er: psphings_r + ttccoa_r --> cer2_24_r + coa_r + h_r 11391 or 15168\n\nOld CERS226er: hexccoa_r + psphings_r --> cer2_26_r + coa_r + h_r 11391\nNew CERS226er: hexccoa_r + psphings_r --> cer2_26_r + coa_r + h_r 11391 or 15168\n\nOld CYSDS: cys__L_c + h2o_c --> h2s_c + nh4_c + pyr_c 8759 or 9499\nNew CYSDS: cys__L_c + h2o_c --> h2s_c + nh4_c + pyr_c 16725 or 16742\n\nOld CYSS: acser_c + h2s_c --> ac_c + cys__L_c + h_c 12031 or 13106 or 15712\nNew CYSS: acser_c + h2s_c --> ac_c + cys__L_c + h_c 12031 or 13106\n\nOld DADK: atp_c + damp_c <=> adp_c + dadp_c 12300 or 15129 or 15496\nNew DADK: atp_c + damp_c <=> adp_c + dadp_c 15496\n\nOld DPCOAK: atp_c + dpcoa_c --> adp_c + coa_c + h_c 11114 or 14849\nNew DPCOAK: atp_c + dpcoa_c --> adp_c + coa_c + h_c 11114\n\nOld FMNATm: atp_m + fmn_m + h_m --> fad_m + ppi_m 11542\nNew FMNATm: atp_m + fmn_m + h_m --> fad_m + ppi_m 16092\n\nOld GLUCYS: atp_c + cys__L_c + glu__L_c --> adp_c + glucys_c + h_c + pi_c 12007 or 12022 or (GCLM and 12007) or (GCLM and 12022) or (Gclm and 12007) or (Gclm and 12022)\nNew GLUCYS: atp_c + cys__L_c + glu__L_c --> adp_c + glucys_c + h_c + pi_c 12007\n\nOld GLUDy: glu__L_c + h2o_c + nadp_c <=> akg_c + h_c + nadph_c + nh4_c 12248 or (b3213 and 15713)\nNew GLUDy: glu__L_c + h2o_c + nadp_c <=> akg_c + h_c + nadph_c + nh4_c 12248\n\nOld GLYCLm: gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 12898 or (CRv4_Au5_s12_g4121_t1 and 14894) or (12898 and 14894) or (10040 and 10205 and 12898 and 15184)\nNew GLYCLm: gly_m + nad_m + thf_m --> co2_m + mlthf_m + nadh_m + nh4_m 10040 and 10205 and 12898 and 15184\n\nOld GRXR: grxox_c + 2.0 gthrd_c --> grxrd_c + gthox_c 15038 or 16549 or 9250\nNew GRXR: grxox_c + 2.0 gthrd_c --> grxrd_c + gthox_c 15038 or 8790\n\nOld GTHOm: gthox_m + h_m + nadph_m --> 2.0 gthrd_m + nadp_m 15482 or (15482 and 9250)\nNew GTHOm: gthox_m + h_m + nadph_m --> 2.0 gthrd_m + nadp_m 15482\n\nOld GTHOr: gthox_c + h_c + nadph_c <=> 2.0 gthrd_c + nadp_c 15482 or (15038 and 
15482) or (15482 and 16549) or (15482 and 8790)\nNew GTHOr: gthox_c + h_c + nadph_c <=> 2.0 gthrd_c + nadp_c 15482\n\nOld GTHPi: 2.0 gthrd_c + h2o2_c --> gthox_c + 2.0 h2o_c 12715 or 15038 or 16549 or 8579\nNew GTHPi: 2.0 gthrd_c + h2o2_c --> gthox_c + 2.0 h2o_c 8579\n\nOld HSDxi: aspsa_c + h_c + nadh_c --> hom__L_c + nad_c 12080\nNew HSDxi: aspsa_c + h_c + nadh_c --> hom__L_c + nad_c 12080 or 16738\n\nOld HSERTA: accoa_c + hom__L_c <=> achms_c + coa_c 12513 or 15248 or (PP_5098 and 15248)\nNew HSERTA: accoa_c + hom__L_c <=> achms_c + coa_c 12513 or 15248\n\nOld IPPMIa: 3c2hmp_c <=> 2ippm_c + h2o_c 14914 or (CRv4_Au5_s6_g12448_t1 and 14914) or (PP_1986 and 14914) or (b0071 and 14914)\nNew IPPMIa: 3c2hmp_c <=> 2ippm_c + h2o_c 14914\n\nOld IPPMIb: 2ippm_c + h2o_c <=> 3c3hmp_c 14914 or (CRv4_Au5_s6_g12448_t1 and 14914) or (PP_1986 and 14914) or (b0071 and 14914)\nNew IPPMIb: 2ippm_c + h2o_c <=> 3c3hmp_c 14914\n\nOld METS: 5mthf_c + hcys__L_c --> h_c + met__L_c + thf_c 9825\nNew METS: 5mthf_c + hcys__L_c --> h_c + met__L_c + thf_c 12876 or 12920 or 9825\n\nOld METSOXR1: metsox_S__L_c + trdrd_c --> h2o_c + met__L_c + trdox_c 15469 or (b3551 and 15339) or (b3551 and 16019) or (15339 and 15902) or (15902 and 16019)\nNew METSOXR1: metsox_S__L_c + trdrd_c --> h2o_c + met__L_c + trdox_c (10848 and 15902) or (12730 and 15902) or (12737 and 15902) or (15339 and 15902)\n\nOld METSOXR2: metsox_R__L_c + trdrd_c --> h2o_c + met__L_c + trdox_c (15339 and 15469) or (15339 and 9153) or (15469 and 16019) or (16019 and 9153)\nNew METSOXR2: metsox_R__L_c + trdrd_c --> h2o_c + met__L_c + trdox_c (10848 and 15469) or (12730 and 15469) or (12737 and 15469) or (15339 and 15469)\n\nOld MTAP: 5mta_c + pi_c --> 5mdr1p_c + ade_c 14521 or 8372\nNew MTAP: 5mta_c + pi_c --> 5mdr1p_c + ade_c 8372\n\nOld MTRI: 5mdr1p_c <=> 5mdru1p_c 13385 or 15595\nNew MTRI: 5mdr1p_c <=> 5mdru1p_c 13385\n\nOld NDPK1: atp_c + gdp_c <=> adp_c + gtp_c 15496 or 15679 or 8943\nNew NDPK1: atp_c + gdp_c <=> adp_c + gtp_c 15679\n\nOld NDPK10: atp_c + didp_c <=> adp_c + ditp_c 15679 or 8943\nNew NDPK10: atp_c + didp_c <=> adp_c + ditp_c 15679\n\nOld NDPK2: atp_c + udp_c <=> adp_c + utp_c 15496 or 15679 or 8943\nNew NDPK2: atp_c + udp_c <=> adp_c + utp_c 15679\n\nOld NDPK2m: atp_m + udp_m --> adp_m + utp_m 15679\nNew NDPK2m: atp_m + udp_m <=> adp_m + utp_m 8943\n\nOld NDPK3: atp_c + cdp_c <=> adp_c + ctp_c 15496 or 15679 or 8943\nNew NDPK3: atp_c + cdp_c <=> adp_c + ctp_c 15679\n\nOld NDPK3m: atp_m + cdp_m --> adp_m + ctp_m 15679\nNew NDPK3m: atp_m + cdp_m <=> adp_m + ctp_m 8943\n\nOld NDPK4: atp_c + dtdp_c <=> adp_c + dttp_c 15496 or 15679 or 8943\nNew NDPK4: atp_c + dtdp_c <=> adp_c + dttp_c 15679\n\nOld NDPK4m: atp_m + dtdp_m --> adp_m + dttp_m 15679\nNew NDPK4m: atp_m + dtdp_m <=> adp_m + dttp_m 8943\n\nOld NDPK5: atp_c + dgdp_c <=> adp_c + dgtp_c 15496 or 15679 or 8943\nNew NDPK5: atp_c + dgdp_c <=> adp_c + dgtp_c 15679\n\nOld NDPK6: atp_c + dudp_c <=> adp_c + dutp_c 15496 or 15679 or 8943\nNew NDPK6: atp_c + dudp_c <=> adp_c + dutp_c 15679\n\nOld NDPK6m: atp_m + dudp_m --> adp_m + dutp_m 15679\nNew NDPK6m: atp_m + dudp_m <=> adp_m + dutp_m 8943\n\nOld NDPK7: atp_c + dcdp_c <=> adp_c + dctp_c 15496 or 15679 or 8943\nNew NDPK7: atp_c + dcdp_c <=> adp_c + dctp_c 15679\n\nOld NDPK7m: atp_m + dcdp_m --> adp_m + dctp_m 15679\nNew NDPK7m: atp_m + dcdp_m <=> adp_m + dctp_m 8943\n\nOld NDPK8: atp_c + dadp_c <=> adp_c + datp_c 15496 or 15679 or 8943\nNew NDPK8: atp_c + dadp_c <=> adp_c + datp_c 15679\n\nOld NDPK8m: atp_m + dadp_m --> adp_m + datp_m 
15679\nNew NDPK8m: atp_m + dadp_m <=> adp_m + datp_m 8943\n\nOld NDPK9: atp_c + idp_c <=> adp_c + itp_c 15679 or 8943\nNew NDPK9: atp_c + idp_c <=> adp_c + itp_c 15679\n\nOld NDPK9m: atp_m + idp_m --> adp_m + itp_m 15679\nNew NDPK9m: atp_m + idp_m <=> adp_m + itp_m 8943\n\nOld OIVD1m: 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nNew OIVD1m: 4mop_m + coa_m + nad_m --> co2_m + ivcoa_m + nadh_m (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12566 and 15436)\n\nOld OIVD2m: 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nNew OIVD2m: 3mob_m + coa_m + nad_m --> co2_m + ibcoa_m + nadh_m (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12566 and 15436)\n\nOld OIVD3m: 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 and 12086 and 15436) or (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12086 and 15436) or (10040 and 11188 and 12566 and 15436)\nNew OIVD3m: 3mop_m + coa_m + nad_m --> 2mbcoa_m + co2_m + nadh_m (10040 and 11183 and 12566 and 15436) or (10040 and 11188 and 12566 and 15436)\n\nOld PAPSR: paps_c + trdrd_c --> 2.0 h_c + pap_c + so3_c + trdox_c 11741 or (11741 and 15339) or (11741 and 16019)\nNew PAPSR: paps_c + trdrd_c --> 2.0 h_c + pap_c + so3_c + trdox_c (10848 and 11741) or (11741 and 12730) or (11741 and 12737) or (11741 and 15339)\n\nOld PDHm: coa_m + nad_m + pyr_m --> accoa_m + co2_m + nadh_m (PDHX and 10040 and 13630 and 13948 and 14126) or (Pdhx and 10040 and 13630 and 13948 and 14126) or (10040 and 13630 and 13722 and 13948 and 14126)\nNew PDHm: coa_m + nad_m + pyr_m --> accoa_m + co2_m + nadh_m 10040 and 13630 and 13722 and 13948 and 14126\n\nOld RNDR1: adp_c + trdrd_c --> dadp_c + h2o_c + trdox_c (11172 and 11290) or (11290 and 14237) or (CRv4_Au5_s9_g15314_t1 and 11172 and 11290) or (CRv4_Au5_s9_g15314_t1 and 11290 and 14237) or (11172 and 11290 and 15339) or (11172 and 11290 and 16019) or (11290 and 14237 and 15339) or (11290 and 14237 and 16019)\nNew RNDR1: adp_c + trdrd_c --> dadp_c + h2o_c + trdox_c (10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)\n\nOld RNDR1n: adp_n + trdrd_n --> dadp_n + h2o_n + trdox_n (YGR180C and 11290 and 15339) or (YGR180C and 11290 and 16019)\nNew RNDR1n: adp_n + trdrd_n --> dadp_n + h2o_n + trdox_n (10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)\n\nOld RNDR2: gdp_c + trdrd_c --> dgdp_c + h2o_c + trdox_c (11172 and 11290) or (11290 and 14237) or (CRv4_Au5_s9_g15314_t1 and 11172 and 11290) or (CRv4_Au5_s9_g15314_t1 and 11290 and 14237) or (11172 and 11290 and 15339) or (11172 and 11290 and 16019) or (11290 and 14237 and 15339) or (11290 and 14237 and 16019)\nNew RNDR2: gdp_c + trdrd_c --> dgdp_c + h2o_c + trdox_c (10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)\n\nOld RNDR2n: gdp_n + trdrd_n --> dgdp_n + h2o_n + trdox_n (YGR180C and 11290 and 15339) or (YGR180C and 11290 and 16019)\nNew RNDR2n: gdp_n + trdrd_n --> dgdp_n + h2o_n + trdox_n (10848 and 11172 and 11290 and 
14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)\n\nOld RNDR3: cdp_c + trdrd_c --> dcdp_c + h2o_c + trdox_c (11172 and 11290) or (11290 and 14237) or (CRv4_Au5_s9_g15314_t1 and 11172 and 11290) or (CRv4_Au5_s9_g15314_t1 and 11290 and 14237) or (11172 and 11290 and 15339) or (11172 and 11290 and 16019) or (11290 and 14237 and 15339) or (11290 and 14237 and 16019)\nNew RNDR3: cdp_c + trdrd_c --> dcdp_c + h2o_c + trdox_c (10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)\n\nOld RNDR3n: cdp_n + trdrd_n --> dcdp_n + h2o_n + trdox_n (YGR180C and 11290 and 15339) or (YGR180C and 11290 and 16019)\nNew RNDR3n: cdp_n + trdrd_n --> dcdp_n + h2o_n + trdox_n (10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)\n\nOld RNDR4: trdrd_c + udp_c --> dudp_c + h2o_c + trdox_c (11172 and 11290) or (11290 and 14237) or (CRv4_Au5_s9_g15314_t1 and 11172 and 11290) or (CRv4_Au5_s9_g15314_t1 and 11290 and 14237) or (11172 and 11290 and 15339) or (11172 and 11290 and 16019) or (11290 and 14237 and 15339) or (11290 and 14237 and 16019)\nNew RNDR4: trdrd_c + udp_c --> dudp_c + h2o_c + trdox_c (10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237) or (11172 and 11290 and 14237 and 15339)\n\nOld RNDR4n: trdrd_n + udp_n --> dudp_n + h2o_n + trdox_n (YGR180C and 11290 and 15339) or (YGR180C and 11290 and 16019)\nNew RNDR4n: trdrd_n + udp_n --> dudp_n + h2o_n + trdox_n (10848 and 11172 and 11290 and 14237) or (11172 and 11290 and 12730 and 14237) or (11172 and 11290 and 12737 and 14237)\n\nOld SHSL1: cys__L_c + suchms_c --> cyst__L_c + h_c + succ_c 11463 or 16725 or 16742 or 9499\nNew SHSL1: cys__L_c + suchms_c --> cyst__L_c + h_c + succ_c 11463\n\nOld SO4ti: so4_e --> so4_c 14119 or 15736 or 16682 or (14119 and 16682) or (15736 and 16682)\nNew SO4ti: so4_e --> so4_c 14119 or 15736 or 16682\n\nOld THIORDXi: h2o2_c + trdrd_c --> 2.0 h2o_c + trdox_c 8579 or (YDR453C and 15339) or (12715 and 15339) or (12715 and 16019) or (15037 and 15339) or (15037 and 16019)\nNew THIORDXi: h2o2_c + trdrd_c --> 2.0 h2o_c + trdox_c (10848 and 12715) or (12715 and 12730) or (12715 and 12737)\n\nOld THIORDXm: h2o2_m + trdrd_m <=> 2.0 h2o_m + trdox_m 10200 and 15339\nNew THIORDXm: h2o2_m + trdrd_m <=> 2.0 h2o_m + trdox_m 10200 and 16019\n\nOld THIORDXni: h2o2_n + trdrd_n --> 2.0 h2o_n + trdox_n (15037 and 15339) or (15037 and 16019)\nNew THIORDXni: h2o2_n + trdrd_n --> 2.0 h2o_n + trdox_n (10848 and 15037) or (12730 and 15037) or (12737 and 15037)\n\nOld THIORDXp: h2o2_x + trdrd_x <=> 2.0 h2o_x + trdox_x (13262 and 15339) or (13262 and 16019)\nNew THIORDXp: h2o2_x + trdrd_x <=> 2.0 h2o_x + trdox_x 13262 and 15339\n\nOld THRA: thr__L_c --> acald_c + gly_c 16182 or 9222 or 9667\nNew THRA: thr__L_c --> acald_c + gly_c 16182\n\nOld THRA2: athr__L_c --> acald_c + gly_c 16182 or 9222 or 9667\nNew THRA2: athr__L_c --> acald_c + gly_c 16182\n\nOld TRDR: h_c + nadph_c + trdox_c --> nadp_c + trdrd_c 15339 or 15482 or 16019 or 9688 or (CRv4_Au5_s2_g8777_t1 and CRv4_Au5_s9_g15314_t1) or (CRv4_Au5_s2_g8777_t1 and 15339) or (CRv4_Au5_s2_g8777_t1 and 16019) or (CRv4_Au5_s8_g14830_t1 and CRv4_Au5_s9_g15314_t1) or (CRv4_Au5_s8_g14830_t1 and 15339) or (CRv4_Au5_s8_g14830_t1 and 16019) or (CRv4_Au5_s9_g15314_t1 and 9688) or (15339 and 9688) or (16019 and 9688)\nNew TRDR: h_c + nadph_c + 
trdox_c --> nadp_c + trdrd_c (10848 and 9688) or (12730 and 9688) or (12737 and 9688)\n\nOld TRDRm: h_m + nadph_m + trdox_m --> nadp_m + trdrd_m 15482 or (YCR083W and 9688) or (15339 and 9688)\nNew TRDRm: h_m + nadph_m + trdox_m --> nadp_m + trdrd_m 16019 and 9688\n\nOld URIDK2r: atp_c + dump_c <=> adp_c + dudp_c 13190 or 15252\nNew URIDK2r: atp_c + dump_c <=> adp_c + dudp_c 13190\n\n"
],
[
"print('Added reactions\\n')\nfor r in sorted(model_new.reactions, key=lambda x: x.id):\n if r not in model_old.reactions:\n print(r)",
"Added reactions\n\n2OH3K5MPPISO: h2o_c + hkmpp_c --> dhmtp_c + pi_c\n3DSPHRer: 3dsphgn_r + h_r + nadph_r --> nadp_r + sphgn_r\n3SALACBOXL: 3sala_c + h_c --> co2_c + hyptaur_c\nACRS: dkmpp_c --> h_c + hkmpp_c\nAMAOTrm: 8aonn_m + amet_m <=> amob_m + dann_m\nAOXSp: ala__L_x + h_x + pimcoa_x --> 8aonn_x + co2_x + coa_x\nBTS5m: 2fe2s_m + amet_m + dtbt_m --> 2fe1s_m + btn_m + dad_5_m + h_m + met__L_m\nCERH124er: cer1_24_r + h_r + nadph_r + o2_r --> cer2_24_r + h2o_r + nadp_r\nCERH126er: cer1_26_r + h_r + nadph_r + o2_r --> cer2_26_r + h2o_r + nadp_r\nCERS2p24er: cer1_24_r + h_r + nadph_r + o2_r --> cer2p_24_r + h2o_r + nadp_r\nCERS2p26er: cer1_26_r + h_r + nadph_r + o2_r --> cer2p_26_r + h2o_r + nadp_r\nCERS324er: cer2_24_r + h_r + nadph_r + o2_r --> cer3_24_r + h2o_r + nadp_r\nCERS326er: cer2_26_r + h_r + nadph_r + o2_r --> cer3_26_r + h2o_r + nadp_r\nDBTSm: atp_m + co2_m + dann_m <=> adp_m + dtbt_m + 3.0 h_m + pi_m\nDHAOX_c: dhdascb_c + 2.0 gthrd_c --> ascb__L_c + gthox_c + h_c\nDNTPPA: ahdt_c + h2o_c --> dhpmp_c + h_c + ppi_c\nFTHFCLm: 5fthf_m + atp_m --> adp_m + methf_m + pi_m\nGBBOX_m: akg_m + gbbtn_m + o2_m --> co2_m + crn_m + succ_m\nGRXRm: grxox_m + 2.0 gthrd_m --> grxrd_m + gthox_m\nHTMLA_m: 3htmelys_m --> 4tmeabut_m + gly_m\nNDPK10m: atp_m + didp_m <=> adp_m + ditp_m\nNDPK1m: atp_m + gdp_m <=> adp_m + gtp_m\nNDPK5m: atp_m + dgdp_m <=> adp_m + dgtp_m\nNNAT: atp_c + h_c + nicrnt_c --> dnad_c + ppi_c\nOBDHm: 2obut_m + coa_m + nad_m --> co2_m + nadh_m + ppcoa_m\nPSPHPLer: psph1p_r --> 2hhxdal_r + ethamp_r\nPSPHSer: h_r + nadph_r + o2_r + sphgn_r --> h2o_r + nadp_r + psphings_r\nSERPTer: h_r + pmtcoa_r + ser__L_r --> 3dsphgn_r + co2_r + coa_r\nSLCBK1er: atp_r + sphgn_r --> adp_r + h_r + sph1p_r\nSLCBK2er: atp_r + psphings_r --> adp_r + h_r + psph1p_r\nSPHPLer: sph1p_r --> ethamp_r + hxdcal_r\nTMABDH1_m: 4tmeabut_m + h2o_m + nad_m --> gbbtn_m + 2.0 h_m + nadh_m\nTMLOX_m: akg_m + o2_m + tmlys_m --> 3htmelys_m + co2_m + succ_m\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecc804cdbefe5cf5bc781cc4c75bd2ba1228b6ae | 9,248 | ipynb | Jupyter Notebook | 3-DML.ipynb | subratcall/cx-oracle-notebooks | 59578373954daf3dc85b34d8f4ee3dc82e3aed4d | [
"Apache-2.0"
] | 10 | 2022-02-28T22:50:02.000Z | 2022-03-13T19:55:15.000Z | 3-DML.ipynb | cjbj/cx-oracle-notebooks | f9dbe330eaadd2fd23791b18dec56baf9d2a681e | [
"Apache-2.0"
] | null | null | null | 3-DML.ipynb | cjbj/cx-oracle-notebooks | f9dbe330eaadd2fd23791b18dec56baf9d2a681e | [
"Apache-2.0"
] | 2 | 2022-02-28T22:50:08.000Z | 2022-03-11T04:29:21.000Z | 27.939577 | 156 | 0.50811 | [
[
[
"",
"_____no_output_____"
],
[
"# DML - INSERT, UPDATE, DELETE, and MERGE Statements",
"_____no_output_____"
]
],
[
[
"import cx_Oracle\nimport os\nimport platform\n\nif platform.system() == 'Darwin':\n cx_Oracle.init_oracle_client(lib_dir=os.environ.get(\"HOME\")+\"/instantclient_19_8\")\nelif platform.system() == 'Windows':\n cx_Oracle.init_oracle_client(lib_dir=r\"C:\\oracle\\instantclient_19_14\")",
"_____no_output_____"
],
[
"un = \"pythondemo\"\npw = \"welcome\"\ncs = \"localhost/orclpdb1\"\n\nconnection = cx_Oracle.connect(user=un, password=pw, dsn=cs)",
"_____no_output_____"
],
[
"cursor = connection.cursor()\ncursor.execute(\"drop table mytab\")",
"_____no_output_____"
],
[
"cursor.execute(\"create table mytab (id number, data varchar2(1000))\")",
"_____no_output_____"
]
],
[
[
"# Binding for Insertion\n\nDocumentation reference link: [Using Bind Variables](https://cx-oracle.readthedocs.io/en/latest/user_guide/bind.html)\n\nBinding is very, very important. It:\n- eliminates escaping special characters and helps prevent SQL injection attacks\n- is important for performance and scalability",
"_____no_output_____"
]
],
[
[
"with connection.cursor() as cursor:\n cursor.execute(\"truncate table mytab\")\n\n sql = \"insert into mytab (id, data) values (:idVal, :dataVal)\"\n\n # bind by position using a sequence (list or tuple)\n cursor.execute(sql, [1, \"String 1\"])\n cursor.execute(sql, (2, \"String 2\"))\n\n # bind by name using a dictionary\n cursor = connection.cursor()\n cursor.execute(sql, {\"idVal\": 3, \"dataVal\": \"String 3\"})\n\n # bind by name using keyword arguments\n cursor.execute(sql, idVal=4, dataVal=\"String 4\")\n\n print(\"Done\")",
"_____no_output_____"
]
],
[
[
"# Batch execution - Inserting multiple rows with executemany()\n\nDocumentation reference link: [Batch Statement Execution and Bulk Loading](https://cx-oracle.readthedocs.io/en/latest/user_guide/batch_statement.html)",
"_____no_output_____"
]
],
[
[
"with connection.cursor() as cursor:\n cursor.execute(\"truncate table mytab\")\n\n rows = [ (1, \"First\" ),\n (2, \"Second\" ),\n (3, \"Third\" ),\n (4, \"Fourth\" ),\n (5, \"Fifth\" ),\n (6, \"Sixth\" ),\n (7, \"Seventh\" ) ]\n\n # Using setinputsizes helps avoid memory reallocations.\n # The parameters correspond to the insert columns. \n # The value None says use cx_Oracle's default size for a NUMBER column. \n # The second value is the maximum input data (or column) width for the VARCHAR2 column\n cursor.setinputsizes(None, 7)\n\n cursor.executemany(\"insert into mytab(id, data) values (:1, :2)\", rows)\n\n # Now query the results back\n\n for row in cursor.execute('select * from mytab'):\n print(row)\n\n connection.rollback()",
"_____no_output_____"
]
],
[
[
"### Benchmark - executemany() vs execute()",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport time\n\ncursor = connection.cursor()\ncursor.execute(\"truncate table mytab\")\n\n# Row counts to test inserting\nnumrows = (1, 5, 10, 100, 1000)\n\nlongstring = \"x\" * 1000\n\ndef create_data(n):\n d = []\n for i in range(n):\n d.append((i, longstring))\n return d\n\nex = [] # seconds for execute() loop\nem = [] # seconds for executemany()\n\nfor n in numrows:\n \n rows = create_data(n)\n \n ###############\n \n start=time.time()\n \n #\n # Loop over each row\n #\n for r in rows:\n cursor.execute(\"insert into mytab(id, data) values (:1, :2)\", r) # <==== Loop over execute()\n \n elapsed = time.time() - start\n ex.append(elapsed)\n \n r, = cursor.execute(\"select count(*) from mytab\").fetchone()\n print(\"execute() loop {:6d} rows in {:06.4f} seconds\".format(r, elapsed)) \n connection.rollback()\n \n ################\n \n start = time.time()\n \n #\n # Insert all rows in one call\n #\n cursor.executemany(\"insert into mytab(id, data) values (:1, :2)\", rows) # <==== One executemany()\n \n elapsed = time.time() - start\n em.append(elapsed)\n \n r, = cursor.execute(\"select count(*) from mytab\").fetchone()\n print(\"executemany() {:6d} rows in {:06.4f} seconds\".format(r, elapsed)) \n connection.rollback()\n\n\nprint(\"Plot is:\")\nplt.xticks(numrows)\nplt.plot(numrows, ex, \"o\", label=\"execute() loop\")\nplt.plot(numrows, em, \"o\", label=\"executemany()\")\nplt.xscale(\"log\")\nplt.xlabel('Number of rows')\nplt.ylabel('Seconds')\nplt.legend(loc=\"upper left\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Noisy Data - Batch Errors\n\nDealing with bad data is easy with the `batcherrors` parameter.",
"_____no_output_____"
]
],
[
[
"# Initial data\n\nwith connection.cursor() as cursor:\n\n for row in cursor.execute(\"select * from ParentTable order by ParentId\"):\n print(row)\n\n for row in cursor.execute(\"select * from ChildTable order by ChildId\"):\n print(row)",
"_____no_output_____"
],
[
"dataToInsert = [\n (1016, 10, 'Child Red'),\n (1018, 20, 'Child Blue'),\n (1018, 30, 'Child Green'), # duplicate key\n (1022, 40, 'Child Yellow'),\n (1021, 75, 'Child Orange') # parent does not exist\n]\n\nwith connection.cursor() as cursor:\n \n cursor.executemany(\"insert into ChildTable values (:1, :2, :3)\", dataToInsert, batcherrors=True)\n\n print(\"\\nErrors:\")\n for error in cursor.getbatcherrors():\n print(\"Error\", error.message, \"at row offset\", error.offset)\n \n print(\"\\nTable rows:\")\n for row in cursor.execute(\"select * from ChildTable order by ChildId\"):\n print(row)",
"_____no_output_____"
]
],
[
[
"Now you can choose whether or not to fix failed records and reinsert them.\nYou can then rollback or commit.\n\nThis is true even if you had enabled autocommit mode - no commit will occur if there are batch errors.",
"_____no_output_____"
]
],
[
[
"connection.rollback()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecc808346f011623b1b48e07b57575929350b0c4 | 44,943 | ipynb | Jupyter Notebook | nbs/00_settings.ipynb | seerbio/alphapept | 1941f7ae8a37a80172df3e427d372dca01595ff9 | [
"Apache-2.0"
] | null | null | null | nbs/00_settings.ipynb | seerbio/alphapept | 1941f7ae8a37a80172df3e427d372dca01595ff9 | [
"Apache-2.0"
] | null | null | null | nbs/00_settings.ipynb | seerbio/alphapept | 1941f7ae8a37a80172df3e427d372dca01595ff9 | [
"Apache-2.0"
] | null | null | null | 32.829072 | 396 | 0.535389 | [
[
[
"# default_exp settings",
"_____no_output_____"
]
],
[
[
"# Settings\n\n> A template for settings",
"_____no_output_____"
],
[
"AlphaPept stores all settings in `*.yaml`-files. This notebook contains functions to load, save, and print settings. Additionally, a settings template is defined. Here we define parameters, default values, and a range and what kind of parameter this is (e.g., float value, list, etc.). The idea here is to have definitions to automatically create graphical user interfaces for the settings.",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
]
],
[
[
"## Settings\n\n### Saving and Loading\n\nThe default scheme for saving settings are `*.yaml`-files. These files can be easily modified when opening with a text editor.",
"_____no_output_____"
]
],
[
[
"#export\nimport yaml\n\ndef print_settings(settings: dict):\n \"\"\"Print a yaml settings file\n\n Args:\n settings (dict): A yaml dictionary.\n \"\"\"\n print(yaml.dump(settings, default_flow_style=False))\n\n\ndef load_settings(path: str):\n \"\"\"Load a yaml settings file.\n\n Args:\n path (str): Path to the settings file.\n \"\"\"\n with open(path, \"r\") as settings_file:\n SETTINGS_LOADED = yaml.load(settings_file, Loader=yaml.FullLoader)\n return SETTINGS_LOADED\n \n \ndef load_settings_as_template(path: str):\n \"\"\"Loads settings but removes fields that contain summary information.\n\n Args:\n path (str): Path to the settings file.\n \"\"\"\n settings = load_settings(path)\n \n for _ in ['summary','failed']:\n if _ in settings:\n settings.pop(_)\n\n _ = 'prec_tol_calibrated'\n if 'search' in settings:\n if _ in settings['search']:\n settings['search'].pop(_)\n\n return settings\n \n\ndef save_settings(settings: dict, path: str):\n \"\"\"Save settings file to path.\n\n Args:\n settings (dict): A yaml dictionary.\n path (str): Path to the settings file.\n \"\"\"\n with open(path, \"w\") as file:\n yaml.dump(settings, file, sort_keys=False)",
"_____no_output_____"
],
[
"settings = {'field1': 0,'summary':123}\ndummy_path = 'to_delete.yaml'\n\nprint('--- print_settings ---')\nprint_settings(settings)\n\nsave_settings(settings, dummy_path)\n\nprint('--- load_settings ---')\n\nprint_settings(load_settings(dummy_path))\n\nprint('--- load_settings_as_template ---')\n\nprint_settings(load_settings_as_template(dummy_path))",
"--- print_settings ---\nfield1: 0\nsummary: 123\n\n--- load_settings ---\nfield1: 0\nsummary: 123\n\n--- load_settings_as_template ---\nfield1: 0\n\n"
],
[
"#hide\ndef test_settings_utils():\n settings = {'field1': 0,'summary':123}\n dummy_path = 'to_delete.yaml'\n\n save_settings(settings, dummy_path)\n\n s = load_settings(dummy_path)\n\n assert s==settings\n\n s_ = load_settings_as_template(dummy_path)\n\n assert 'summary' not in s_\n assert 'failed' not in s_\n \ntest_settings_utils()",
"_____no_output_____"
]
],
[
[
"## Settings Template\n\nThe settings template defines individual settings. The idea is to provide a template so that a graphical user interface can be automatically generated. The list below represents what each item would be when using `streamlit`. This could be adapted for any kind of GUI library.\n\nEach entry has a type, default values, and a description.\n\n* spinbox -> `st.range`, range with minimum and maximum values (int)\n* doublespinbox -> `st.range`, range with minimum and maximum values (float)\n* path -> `st.button`, clickable button to select a path to save / load files.\n* combobox -> `st.selectbox`, dropdown menu with values to choose from\n* checkbox -> `st.checkbox`, checkbox that can be selected\n* checkgroup -> `st.multiselect`, creates a list of options that can be selected\n* string -> `st.text_input`, generic string input\n* list -> Creates a list that is displayed\n* placeholder -> This just prints the parameter and cannot be changed\n",
"_____no_output_____"
],
[
"### Worfklow settings\n\nWorkflow settings regarding the workflow - which algorithmic steps should be performed.",
"_____no_output_____"
]
],
[
[
"#hide\nimport pandas as pd\nfrom alphapept.constants import protease_dict\n\nSETTINGS_TEMPLATE = {}\n\n# Workflow\nworkflow = {}\n\nworkflow[\"continue_runs\"] = {'type':'checkbox', 'default':False, 'description':\"Flag to continue previously computated runs. If False existing ms_data will be deleted.\"}\nworkflow[\"create_database\"] = {'type':'checkbox', 'default':True, 'description':\"Flag to create a database.\"}\nworkflow[\"import_raw_data\"] = {'type':'checkbox', 'default':True, 'description':\"Flag to import the raw data.\"}\nworkflow[\"find_features\"] = {'type':'checkbox', 'default':True, 'description':\"Flag to perform feature finding.\"}\nworkflow[\"search_data\"] = {'type':'checkbox', 'default':True, 'description':\"Flag to perform search.\"}\nworkflow[\"recalibrate_data\"] = {'type':'checkbox', 'default':True, 'description':\"Flag to perform recalibration.\"}\nworkflow[\"align\"] = {'type':'checkbox', 'default':False, 'description':\"Flag to align the data.\"}\nworkflow[\"match\"] = {'type':'checkbox', 'default':False, 'description':\"Flag to perform match-between runs.\"}\nworkflow[\"lfq_quantification\"] = {'type':'checkbox', 'default':True, 'description':\"Flag to perfrom lfq normalization.\"}\n\nSETTINGS_TEMPLATE[\"workflow\"] = workflow",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['workflow']))",
"align:\n default: false\n description: Flag to align the data.\n type: checkbox\ncontinue_runs:\n default: false\n description: Flag to continue previously computated runs. If False existing ms_data\n will be deleted.\n type: checkbox\ncreate_database:\n default: true\n description: Flag to create a database.\n type: checkbox\nfind_features:\n default: true\n description: Flag to perform feature finding.\n type: checkbox\nimport_raw_data:\n default: true\n description: Flag to import the raw data.\n type: checkbox\nlfq_quantification:\n default: true\n description: Flag to perfrom lfq normalization.\n type: checkbox\nmatch:\n default: false\n description: Flag to perform match-between runs.\n type: checkbox\nrecalibrate_data:\n default: true\n description: Flag to perform recalibration.\n type: checkbox\nsearch_data:\n default: true\n description: Flag to perform search.\n type: checkbox\n\n"
],
[
"general = {}\n\ngeneral['n_processes'] = {'type':'spinbox', 'min':1, 'max':60, 'default':60, 'description':\"Maximum number of processes for multiprocessing. If larger than number of processors it will be capped.\"}\n\nSETTINGS_TEMPLATE[\"general\"] = general",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['general']))",
"n_processes:\n default: 60\n description: Maximum number of processes for multiprocessing. If larger than number\n of processors it will be capped.\n max: 60\n min: 1\n type: spinbox\n\n"
]
],
[
[
"### Experimental Settings\n\nCore defintions of the experiment, regarding the filepaths..",
"_____no_output_____"
]
],
[
[
"#hide\nexperiment = {}\n\nexperiment[\"results_path\"] = {'type':'path','default': None, 'filetype':['hdf'], 'folder':False, 'description':\"Path where the results should be stored.\"}\nexperiment[\"shortnames\"] = {'type':'list','default':[], 'description':\"List of shortnames for the raw files.\"}\nexperiment[\"file_paths\"] = {'type':'list','default':[], 'description':\"Filepaths of the experiments.\"}\nexperiment[\"sample_group\"] = {'type':'list','default':[], 'description':\"Sample group, for raw files that should be quanted together.\"}\nexperiment[\"matching_group\"] = {'type':'list','default':[], 'description':\"List of macthing groups for the raw files. This only allows match-between-runs of files within the same groups.\"}\nexperiment[\"fraction\"] = {'type':'list','default':[], 'description':\"List of fraction numbers for fractionated samples.\"}\nexperiment[\"database_path\"] = {'type':'path','default':None, 'filetype':['hdf'], 'folder':False, 'description':\"Path to library file (.hdf).\"}\nexperiment[\"fasta_paths\"] = {'type':'list','default':[], 'description':\"List of paths for FASTA files.\"}\n\nSETTINGS_TEMPLATE[\"experiment\"] = experiment",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['experiment']))",
"database_path:\n default: null\n description: Path to library file (.hdf).\n filetype:\n - hdf\n folder: false\n type: path\nfasta_paths:\n default: []\n description: List of paths for FASTA files.\n type: list\nfile_paths:\n default: []\n description: Filepaths of the experiments.\n type: list\nfraction:\n default: []\n description: List of fraction numbers for fractionated samples.\n type: list\nmatching_group:\n default: []\n description: List of macthing groups for the raw files. This only allows match-between-runs\n of files within the same groups.\n type: list\nresults_path:\n default: null\n description: Path where the results should be stored.\n filetype:\n - hdf\n folder: false\n type: path\nsample_group:\n default: []\n description: Sample group, for raw files that should be quanted together.\n type: list\nshortnames:\n default: []\n description: List of shortnames for the raw files.\n type: list\n\n"
]
],
[
[
"### Raw file handling\n",
"_____no_output_____"
]
],
[
[
"#hide\nraw = {}\n\nraw[\"n_most_abundant\"] = {'type':'spinbox', 'min':1, 'max':1000, 'default':400, 'description':\"Number of most abundant peaks to be isolated from raw spectra.\"}\nraw[\"use_profile_ms1\"] = {'type':'checkbox', 'default':False, 'description':\"Use profile data for MS1 and perform own centroiding.\"}\n\nSETTINGS_TEMPLATE[\"raw\"] = raw",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['raw']))",
"n_most_abundant:\n default: 400\n description: Number of most abundant peaks to be isolated from raw spectra.\n max: 1000\n min: 1\n type: spinbox\nuse_profile_ms1:\n default: false\n description: Use profile data for MS1 and perform own centroiding.\n type: checkbox\n\n"
]
],
[
[
"### FASTA settings",
"_____no_output_____"
]
],
[
[
"#hide\nfasta = {}\n\n## Read modifications from modifications file\nmod_db = pd.read_csv('../modifications.tsv', sep='\\t')\n\nmods = {}\nmods_terminal = {}\nmods_protein = {}\n\nfor i in range(len(mod_db)):\n mod = mod_db.iloc[i]\n if 'terminus' in mod['Type']:\n if 'peptide' in mod['Type']:\n mods_terminal[mod['Identifier']] = mod['Description']\n elif 'protein' in mod['Type']:\n mods_protein[mod['Identifier']] = mod['Description']\n else:\n print('Not understood')\n print(mod['Type'])\n else:\n mods[mod['Identifier']] = mod['Description']\n\nfasta[\"mods_fixed\"] = {'type':'checkgroup', 'value':mods.copy(), 'default':['cC'],'description':\"Fixed modifications.\"}\nfasta[\"mods_fixed_terminal\"] = {'type':'checkgroup', 'value':mods_terminal.copy(), 'default':[],'description':\"Fixed terminal modifications.\"}\nfasta[\"mods_variable\"] = {'type':'checkgroup', 'value':mods.copy(), 'default':['oxM'],'description':\"Variable modifications.\"}\nfasta[\"mods_variable_terminal\"] = {'type':'checkgroup', 'value':mods_terminal.copy(), 'default':[], 'description':\"Varibale terminal modifications.\"}\n\nfasta[\"mods_fixed_terminal_prot\"] = {'type':'checkgroup', 'value':mods_protein.copy(), 'default':[],'description':\"Fixed terminal modifications on proteins.\"}\nfasta[\"mods_variable_terminal_prot\"] = {'type':'checkgroup', 'value':mods_protein.copy(), 'default':['a<^'], 'description':\"Varibale terminal modifications on proteins.\"}\n\nfasta[\"n_missed_cleavages\"] = {'type':'spinbox', 'min':0, 'max':99, 'default':2, 'description':\"Number of missed cleavages.\"}\nfasta[\"pep_length_min\"] = {'type':'spinbox', 'min':7, 'max':99, 'default':7, 'description':\"Minimum peptide length.\"}\nfasta[\"pep_length_max\"] = {'type':'spinbox', 'min':7, 'max':99, 'default':27, 'description':\"Maximum peptide length.\"}\nfasta[\"isoforms_max\"] = {'type':'spinbox', 'min':1, 'max':4096, 'default':1024, 'description':\"Maximum number of isoforms per peptide.\"}\nfasta[\"n_modifications_max\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':3, 'description':\"Limit the number of modifications per peptide.\"}\n\nfasta[\"pseudo_reverse\"] = {'type':'checkbox', 'default':True, 'description':\"Use pseudo-reverse strategy instead of reverse.\"}\nfasta[\"AL_swap\"] = {'type':'checkbox', 'default':False, 'description':\"Swap A and L for decoy generation.\"}\nfasta[\"KR_swap\"] = {'type':'checkbox', 'default':False, 'description':\"Swap K and R (only if terminal) for decoy generation.\"}\n\nproteases = [_ for _ in protease_dict.keys()]\nfasta[\"protease\"] = {'type':'combobox', 'value':proteases, 'default':'trypsin', 'description':\"Protease for digestions.\"}\n\nfasta[\"spectra_block\"] = {'type':'spinbox', 'min':1000, 'max':1000000, 'default':100000, 'description':\"Maximum number of sequences to be collected before theoretical spectra are generated.\"}\nfasta[\"fasta_block\"] = {'type':'spinbox', 'min':100, 'max':10000, 'default':1000, 'description':\"Number of fasta entries to be processed in one block.\"}\nfasta[\"save_db\"] = {'type':'checkbox', 'default':True, 'description':\"Save DB or create on the fly.\"}\nfasta[\"fasta_size_max\"] = {'type':'spinbox', 'min':1, 'max':1000000, 'default':100, 'description':\"Maximum size of FASTA (MB) when switching on-the-fly.\"}\n\nSETTINGS_TEMPLATE[\"fasta\"] = fasta",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['fasta']))",
"AL_swap:\n default: false\n description: Swap A and L for decoy generation.\n type: checkbox\nKR_swap:\n default: false\n description: Swap K and R (only if terminal) for decoy generation.\n type: checkbox\nfasta_block:\n default: 1000\n description: Number of fasta entries to be processed in one block.\n max: 10000\n min: 100\n type: spinbox\nfasta_size_max:\n default: 100\n description: Maximum size of FASTA (MB) when switching on-the-fly.\n max: 1000000\n min: 1\n type: spinbox\nisoforms_max:\n default: 1024\n description: Maximum number of isoforms per peptide.\n max: 4096\n min: 1\n type: spinbox\nmods_fixed:\n default:\n - cC\n description: Fixed modifications.\n type: checkgroup\n value:\n aK: acetylation of lysine\n cC: carbamidomethylation of C\n deamN: deamidation of N\n deamQ: deamidation of Q\n eK: EASItag 6-plex on K\n itraq4K: iTRAQ 4-plex on K\n itraq4Y: iTRAQ 4-plex on Y\n itraq8K: iTRAQ 8-plex on K\n itraq8Y: iTRAQ 8-plex on Y\n oxM: oxidation of M\n pS: phosphorylation of S\n pT: phosphorylation of T\n pY: phosphorylation of Y\n tmt0K: TMT zero on K\n tmt0Y: TMT zero on Y\n tmt2K: TMT duplex on K\n tmt2Y: TMT duplex on Y\n tmt6K: TMT sixplex/tenplex on K\n tmt6Y: TMT sixplex/tenplex on Y\nmods_fixed_terminal:\n default: []\n description: Fixed terminal modifications.\n type: checkgroup\n value:\n arg10>R: Arg 10 on peptide C-terminus\n arg6>R: Arg 6 on peptide C-terminus\n cm<C: pyro-cmC\n e<^: EASItag 6-plex on peptide N-terminus\n itraq4K<^: iTRAQ 4-plex on peptide N-terminus\n itraq8K<^: iTRAQ 8-plex on peptide N-terminus\n lys8>K: Lys 8 on peptide C-terminus\n pg<E: pyro-E\n pg<Q: pyro-Q\n tmt0<^: TMT zero on peptide N-terminus\n tmt2<^: TMT duplex on peptide N-terminus\n tmt6<^: TMT sixplex/tenplex on peptide N-terminus\nmods_fixed_terminal_prot:\n default: []\n description: Fixed terminal modifications on proteins.\n type: checkgroup\n value:\n a<^: acetylation of protein N-terminus\n am>^: amidation of protein C-terminus\nmods_variable:\n default:\n - oxM\n description: Variable modifications.\n type: checkgroup\n value:\n aK: acetylation of lysine\n cC: carbamidomethylation of C\n deamN: deamidation of N\n deamQ: deamidation of Q\n eK: EASItag 6-plex on K\n itraq4K: iTRAQ 4-plex on K\n itraq4Y: iTRAQ 4-plex on Y\n itraq8K: iTRAQ 8-plex on K\n itraq8Y: iTRAQ 8-plex on Y\n oxM: oxidation of M\n pS: phosphorylation of S\n pT: phosphorylation of T\n pY: phosphorylation of Y\n tmt0K: TMT zero on K\n tmt0Y: TMT zero on Y\n tmt2K: TMT duplex on K\n tmt2Y: TMT duplex on Y\n tmt6K: TMT sixplex/tenplex on K\n tmt6Y: TMT sixplex/tenplex on Y\nmods_variable_terminal:\n default: []\n description: Varibale terminal modifications.\n type: checkgroup\n value:\n arg10>R: Arg 10 on peptide C-terminus\n arg6>R: Arg 6 on peptide C-terminus\n cm<C: pyro-cmC\n e<^: EASItag 6-plex on peptide N-terminus\n itraq4K<^: iTRAQ 4-plex on peptide N-terminus\n itraq8K<^: iTRAQ 8-plex on peptide N-terminus\n lys8>K: Lys 8 on peptide C-terminus\n pg<E: pyro-E\n pg<Q: pyro-Q\n tmt0<^: TMT zero on peptide N-terminus\n tmt2<^: TMT duplex on peptide N-terminus\n tmt6<^: TMT sixplex/tenplex on peptide N-terminus\nmods_variable_terminal_prot:\n default:\n - a<^\n description: Varibale terminal modifications on proteins.\n type: checkgroup\n value:\n a<^: acetylation of protein N-terminus\n am>^: amidation of protein C-terminus\nn_missed_cleavages:\n default: 2\n description: Number of missed cleavages.\n max: 99\n min: 0\n type: spinbox\nn_modifications_max:\n default: 3\n description: Limit the number 
of modifications per peptide.\n max: 10\n min: 1\n type: spinbox\npep_length_max:\n default: 27\n description: Maximum peptide length.\n max: 99\n min: 7\n type: spinbox\npep_length_min:\n default: 7\n description: Minimum peptide length.\n max: 99\n min: 7\n type: spinbox\nprotease:\n default: trypsin\n description: Protease for digestions.\n type: combobox\n value:\n - arg-c\n - asp-n\n - bnps-skatole\n - caspase 1\n - caspase 2\n - caspase 3\n - caspase 4\n - caspase 5\n - caspase 6\n - caspase 7\n - caspase 8\n - caspase 9\n - caspase 10\n - chymotrypsin high specificity\n - chymotrypsin low specificity\n - clostripain\n - cnbr\n - enterokinase\n - factor xa\n - formic acid\n - glutamyl endopeptidase\n - granzyme b\n - hydroxylamine\n - iodosobenzoic acid\n - lysc\n - ntcb\n - pepsin ph1.3\n - pepsin ph2.0\n - proline endopeptidase\n - proteinase k\n - staphylococcal peptidase i\n - thermolysin\n - thrombin\n - trypsin_full\n - trypsin_exception\n - non-specific\n - trypsin\npseudo_reverse:\n default: true\n description: Use pseudo-reverse strategy instead of reverse.\n type: checkbox\nsave_db:\n default: true\n description: Save DB or create on the fly.\n type: checkbox\nspectra_block:\n default: 100000\n description: Maximum number of sequences to be collected before theoretical spectra\n are generated.\n max: 1000000\n min: 1000\n type: spinbox\n\n"
]
],
[
[
"### Feature Finding",
"_____no_output_____"
]
],
[
[
"#hide\n\nfeatures = {}\n# Thermo FF settings\n\nfeatures[\"max_gap\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':2}\nfeatures[\"centroid_tol\"] = {'type':'spinbox', 'min':1, 'max':25, 'default':8}\nfeatures[\"hill_length_min\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':3}\nfeatures[\"hill_split_level\"] = {'type':'doublespinbox', 'min':0.1, 'max':10.0, 'default':1.3}\nfeatures[\"iso_split_level\"] = {'type':'doublespinbox', 'min':0.1, 'max':10.0, 'default':1.3}\n\nfeatures[\"hill_smoothing\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':1}\nfeatures[\"hill_check_large\"] = {'type':'spinbox', 'min':1, 'max':100, 'default':40}\n\nfeatures[\"iso_charge_min\"] = {'type':'spinbox', 'min':1, 'max':6, 'default':1}\nfeatures[\"iso_charge_max\"] = {'type':'spinbox', 'min':1, 'max':6, 'default':6}\nfeatures[\"iso_n_seeds\"] = {'type':'spinbox', 'min':1, 'max':500, 'default':100}\n\nfeatures[\"hill_nboot_max\"] = {'type':'spinbox', 'min':1, 'max':500, 'default':300}\nfeatures[\"hill_nboot\"] = {'type':'spinbox', 'min':1, 'max':500, 'default':150}\n\nfeatures[\"iso_mass_range\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':5}\nfeatures[\"iso_corr_min\"] = {'type':'doublespinbox', 'min':0.1, 'max':1, 'default':0.6}\n\nfeatures[\"map_mz_range\"] = {'type':'doublespinbox', 'min':0.1, 'max':2, 'default':1.5}\nfeatures[\"map_rt_range\"] = {'type':'doublespinbox', 'min':0.1, 'max':1, 'default':0.5}\nfeatures[\"map_mob_range\"] = {'type':'doublespinbox', 'min':0.1, 'max':1, 'default':0.3}\nfeatures[\"map_n_neighbors\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':5}\n\nfeatures[\"search_unidentified\"] = {'type':'checkbox', 'default':False, 'description':\"Search MSMS w/o feature.\"}\n\nSETTINGS_TEMPLATE[\"features\"] = features",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['features']))",
"centroid_tol:\n default: 8\n max: 25\n min: 1\n type: spinbox\nhill_check_large:\n default: 40\n max: 100\n min: 1\n type: spinbox\nhill_length_min:\n default: 3\n max: 10\n min: 1\n type: spinbox\nhill_nboot:\n default: 150\n max: 500\n min: 1\n type: spinbox\nhill_nboot_max:\n default: 300\n max: 500\n min: 1\n type: spinbox\nhill_smoothing:\n default: 1\n max: 10\n min: 1\n type: spinbox\nhill_split_level:\n default: 1.3\n max: 10.0\n min: 0.1\n type: doublespinbox\niso_charge_max:\n default: 6\n max: 6\n min: 1\n type: spinbox\niso_charge_min:\n default: 1\n max: 6\n min: 1\n type: spinbox\niso_corr_min:\n default: 0.6\n max: 1\n min: 0.1\n type: doublespinbox\niso_mass_range:\n default: 5\n max: 10\n min: 1\n type: spinbox\niso_n_seeds:\n default: 100\n max: 500\n min: 1\n type: spinbox\niso_split_level:\n default: 1.3\n max: 10.0\n min: 0.1\n type: doublespinbox\nmap_mob_range:\n default: 0.3\n max: 1\n min: 0.1\n type: doublespinbox\nmap_mz_range:\n default: 1.5\n max: 2\n min: 0.1\n type: doublespinbox\nmap_n_neighbors:\n default: 5\n max: 10\n min: 1\n type: spinbox\nmap_rt_range:\n default: 0.5\n max: 1\n min: 0.1\n type: doublespinbox\nmax_gap:\n default: 2\n max: 10\n min: 1\n type: spinbox\nsearch_unidentified:\n default: false\n description: Search MSMS w/o feature.\n type: checkbox\n\n"
]
],
[
[
"### Search",
"_____no_output_____"
]
],
[
[
"#hide\n# Search Settings\nsearch = {}\n\nsearch[\"prec_tol\"] = {'type':'spinbox', 'min':1, 'max':500, 'default':30, 'description':\"Maximum allowed precursor mass offset.\"}\nsearch[\"frag_tol\"] = {'type':'spinbox', 'min':1, 'max':500, 'default':30, 'description':\"Maximum fragment mass tolerance.\"}\nsearch[\"min_frag_hits\"] = {'type':'spinbox', 'min':1, 'max':99, 'default':7, 'description':\"Minimum number of fragment hits.\"}\nsearch[\"ppm\"] = {'type':'checkbox', 'default':True, 'description':\"Use ppm instead of Dalton.\"}\nsearch[\"calibrate\"] = {'type':'checkbox', 'default':True, 'description':\"Recalibrate masses.\"}\nsearch[\"calibration_std_prec\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':5, 'description':\"Std range for precursor tolerance after calibration.\"}\nsearch[\"calibration_std_frag\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':5, 'description':\"Std range for fragment tolerance after calibration.\"}\nsearch[\"parallel\"] = {'type':'checkbox', 'default':True, 'description':\"Use parallel processing.\"}\nsearch[\"peptide_fdr\"] = {'type':'doublespinbox', 'min':0.0, 'max':1.0, 'default':0.01, 'description':\"FDR level for peptides.\"}\nsearch[\"protein_fdr\"] = {'type':'doublespinbox', 'min':0.0, 'max':1.0, 'default':0.01, 'description':\"FDR level for proteins.\"}\nsearch['recalibration_min'] = {'type':'spinbox', 'min':100, 'max':10000, 'default':100, 'description':\"Minimum number of datapoints to perform calibration.\"}\n\nSETTINGS_TEMPLATE[\"search\"] = search",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['search']))",
"calibrate:\n default: true\n description: Recalibrate masses.\n type: checkbox\ncalibration_std_frag:\n default: 5\n description: Std range for fragment tolerance after calibration.\n max: 10\n min: 1\n type: spinbox\ncalibration_std_prec:\n default: 5\n description: Std range for precursor tolerance after calibration.\n max: 10\n min: 1\n type: spinbox\nfrag_tol:\n default: 30\n description: Maximum fragment mass tolerance.\n max: 500\n min: 1\n type: spinbox\nmin_frag_hits:\n default: 7\n description: Minimum number of fragment hits.\n max: 99\n min: 1\n type: spinbox\nparallel:\n default: true\n description: Use parallel processing.\n type: checkbox\npeptide_fdr:\n default: 0.01\n description: FDR level for peptides.\n max: 1.0\n min: 0.0\n type: doublespinbox\nppm:\n default: true\n description: Use ppm instead of Dalton.\n type: checkbox\nprec_tol:\n default: 30\n description: Maximum allowed precursor mass offset.\n max: 500\n min: 1\n type: spinbox\nprotein_fdr:\n default: 0.01\n description: FDR level for proteins.\n max: 1.0\n min: 0.0\n type: doublespinbox\nrecalibration_min:\n default: 100\n description: Minimum number of datapoints to perform calibration.\n max: 10000\n min: 100\n type: spinbox\n\n"
]
],
[
[
"### Score",
"_____no_output_____"
]
],
[
[
"#hide\n# Score\nscore = {}\n\nscore[\"method\"] = {'type':'combobox', 'value':['x_tandem','random_forest'], 'default':'random_forest', 'description':\"Scoring method.\"}\nSETTINGS_TEMPLATE[\"score\"] = score",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['score']))",
"method:\n default: random_forest\n description: Scoring method.\n type: combobox\n value:\n - x_tandem\n - random_forest\n\n"
]
],
[
[
"### Calibration",
"_____no_output_____"
]
],
[
[
"#hide\n# Calibration\ncalibration = {}\n\ncalibration[\"outlier_std\"] = {'type':'spinbox', 'min':1, 'max':5, 'default':3, 'description':\"Number of std. deviations to filter outliers in psms.\"}\ncalibration[\"calib_n_neighbors\"] = {'type':'spinbox', 'min':1, 'max':1000, 'default':100, 'description':\"Number of neighbors that are used for offset interpolation.\"}\ncalibration[\"calib_mz_range\"] = {'type':'spinbox', 'min':1, 'max':1000, 'default':20, 'description':\"Scaling factor for mz axis.\"}\ncalibration[\"calib_rt_range\"] = {'type':'doublespinbox', 'min':0.0, 'max':10, 'default':0.5, 'description':\"Scaling factor for rt axis.\"}\ncalibration[\"calib_mob_range\"] = {'type':'doublespinbox', 'min':0.0, 'max':1.0, 'default':0.3, 'description':\"Scaling factor for mobility axis.\"}\n\nSETTINGS_TEMPLATE[\"calibration\"] = calibration",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['calibration']))",
"calib_mob_range:\n default: 0.3\n description: Scaling factor for mobility axis.\n max: 1.0\n min: 0.0\n type: doublespinbox\ncalib_mz_range:\n default: 20\n description: Scaling factor for mz axis.\n max: 1000\n min: 1\n type: spinbox\ncalib_n_neighbors:\n default: 100\n description: Number of neighbors that are used for offset interpolation.\n max: 1000\n min: 1\n type: spinbox\ncalib_rt_range:\n default: 0.5\n description: Scaling factor for rt axis.\n max: 10\n min: 0.0\n type: doublespinbox\noutlier_std:\n default: 3\n description: Number of std. deviations to filter outliers in psms.\n max: 5\n min: 1\n type: spinbox\n\n"
]
],
[
[
"### Matching",
"_____no_output_____"
]
],
[
[
"#hide\n# Matching\n\nmatching = {}\n\nmatching[\"match_p_min\"] = {'type':'doublespinbox', 'min':0.001, 'max':1.0, 'default':0.05, 'description':\"Minimum probability cutoff for matching.\"}\nmatching[\"match_d_min\"] = {'type':'doublespinbox', 'min':0.001, 'max':10.0, 'default':3, 'description': \"Minimum distance cutoff for matching.\"}\nmatching[\"match_group_tol\"] = {'type':'spinbox', 'min':0, 'max':100, 'default':0, 'description': \"When having matching groups, match neighboring groups.\"}\n\n\nSETTINGS_TEMPLATE[\"matching\"] = matching",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['matching']))",
"match_d_min:\n default: 3\n description: Minimum distance cutoff for matching.\n max: 10.0\n min: 0.001\n type: doublespinbox\nmatch_group_tol:\n default: 0\n description: When having matching groups, match neighboring groups.\n max: 100\n min: 0\n type: spinbox\nmatch_p_min:\n default: 0.05\n description: Minimum probability cutoff for matching.\n max: 1.0\n min: 0.001\n type: doublespinbox\n\n"
]
],
[
[
"### Isobaric Labeling ",
"_____no_output_____"
]
],
[
[
"isobaric_label = {}\n\nisobaric_label[\"label\"] = {'type':'combobox', 'value':['None','TMT10plex'], 'default':'None', 'description':\"Type of isobaric label present.\"}\nisobaric_label[\"reporter_frag_tolerance\"] = {'type':'spinbox', 'min':1, 'max':500, 'default':15, 'description':\"Maximum fragment mass tolerance for a reporter.\"}\nisobaric_label[\"reporter_frag_tolerance_ppm\"] = {'type':'checkbox', 'default':True, 'description':\"Use ppm instead of Dalton.\"}\n\nSETTINGS_TEMPLATE[\"isobaric_label\"] = isobaric_label",
"_____no_output_____"
]
],
[
[
"### Quantification ",
"_____no_output_____"
]
],
[
[
"#hide\n# Quantification\n\nquantification = {}\nquantification[\"max_lfq\"] = {'type':'checkbox', 'default':True, 'description':\"Perform max lfq type quantification.\"}\nquantification[\"lfq_ratio_min\"] = {'type':'spinbox', 'min':1, 'max':10, 'default':1, 'description':\"Minimum number of ratios for LFQ.\"}\nquantification[\"mode\"] = {'type':'combobox', 'value':['ms1_int_sum'], 'default':'ms1_int_sum', 'description':\"Column to perform quantification on.\"}\n\nSETTINGS_TEMPLATE[\"quantification\"] = quantification",
"_____no_output_____"
],
[
"print(yaml.dump(SETTINGS_TEMPLATE['quantification']))",
"lfq_ratio_min:\n default: 1\n description: Minimum number of ratios for LFQ.\n max: 10\n min: 1\n type: spinbox\nmax_lfq:\n default: true\n description: Perform max lfq type quantification.\n type: checkbox\nmode:\n default: ms1_int_sum\n description: Column to perform quantification on.\n type: combobox\n value:\n - ms1_int_sum\n\n"
],
[
"#hide\nsettings = {}\n\nfor category in SETTINGS_TEMPLATE.keys():\n \n temp_settings = {}\n \n for key in SETTINGS_TEMPLATE[category].keys():\n temp_settings[key] = SETTINGS_TEMPLATE[category][key]['default']\n \n settings[category] = temp_settings\n \nsave_settings(settings, \"../alphapept/default_settings.yaml\")\n\nsave_settings(SETTINGS_TEMPLATE, \"../alphapept/settings_template.yaml\")",
"_____no_output_____"
],
[
"#hide\nfrom nbdev.export import *\nnotebook2script()",
"Converted 00_settings.ipynb.\nConverted 01_chem.ipynb.\nConverted 02_io.ipynb.\nConverted 03_fasta.ipynb.\nConverted 04_feature_finding.ipynb.\nConverted 05_search.ipynb.\nConverted 06_score.ipynb.\nConverted 07_recalibration.ipynb.\nConverted 08_quantification.ipynb.\nConverted 09_matching.ipynb.\nConverted 10_constants.ipynb.\nConverted 11_interface.ipynb.\nConverted 12_performance.ipynb.\nConverted 13_export.ipynb.\nConverted 14_display.ipynb.\nConverted 15_label.ipynb.\nConverted additional_code.ipynb.\nConverted contributing.ipynb.\nConverted file_formats.ipynb.\nConverted index.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
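The settings cells in the record above assemble per-category defaults into SETTINGS_TEMPLATE and write them out with save_settings to ../alphapept/default_settings.yaml. A minimal sketch of reading those defaults back and overriding one value — assuming save_settings emits standard YAML that yaml.safe_load can parse, and reusing the path and keys shown above:

```python
import yaml

# Load the defaults written by the settings notebook (path as used above).
with open("../alphapept/default_settings.yaml") as f:
    settings = yaml.safe_load(f)

# Override a single default before running a workflow (illustrative value only).
settings["matching"]["match_p_min"] = 0.01
print(settings["matching"])
```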
ecc8272f2b1b8415ddbc5cf4787442c4c08b46e9 | 125,911 | ipynb | Jupyter Notebook | src/CMplot.ipynb | j-carson/ml-nids | 40064f2427fe665b8e4ac1d5a66fb965fea423e2 | [
"MIT"
] | null | null | null | src/CMplot.ipynb | j-carson/ml-nids | 40064f2427fe665b8e4ac1d5a66fb965fea423e2 | [
"MIT"
] | null | null | null | src/CMplot.ipynb | j-carson/ml-nids | 40064f2427fe665b8e4ac1d5a66fb965fea423e2 | [
"MIT"
] | null | null | null | 39.933714 | 198 | 0.433886 | [
[
[
"# Confusion Matrix\n\nThis is just prettying up the default confusion matrix plot for my slides",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport pickle",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'",
"_____no_output_____"
],
[
"with open('cmdf.pkl', 'rb') as fp:\n cmdf = pickle.load(fp)",
"_____no_output_____"
],
[
"cmdf",
"_____no_output_____"
],
[
"names = ['Normal', 'Generic', 'Exploit', 'Fuzzer',\n 'Deny Serv', 'Reconn', 'Analysis', 'Backdoor',\n 'Shellcode', 'Worm']\ncmdf.index = names\ncmdf.columns = names\ncmdf",
"_____no_output_____"
],
[
"percents = cmdf.copy()\nsums = cmdf.sum(axis=1)",
"_____no_output_____"
],
[
"for row in percents.index:\n for col in percents.index:\n percents.loc[row,col] /= sums[row]\npercents *= 100\nraw_percents = percents\npercents = percents.round(decimals=0).astype(int)\npercents",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(6,6))\nsns.heatmap(percents, square=True, annot=True, fmt='',\n ax=ax, cmap='Blues', cbar=False)\n\nplt.tick_params(axis='x', which='both', bottom=False, top=False)\nplt.tick_params(axis='y', which='both', left=False, right=False)\n\nlocs,labels = plt.xticks()\nlocs = [ l - .35 for l in locs ]\nplt.xticks(locs, rotation=40)\n\nplt.tight_layout()\nplt.savefig('multiclass.svg')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
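The CMplot record above converts the confusion-matrix counts to row percentages with an explicit nested loop. A sketch of the equivalent vectorized form, assuming the same cmdf DataFrame pickled in cmdf.pkl:

```python
import pickle

with open("cmdf.pkl", "rb") as fp:  # same pickle the notebook loads
    cmdf = pickle.load(fp)

# Divide each row by its row sum, scale to percent, and round — matches the loop above.
percents = (cmdf.div(cmdf.sum(axis=1), axis=0) * 100).round(0).astype(int)
print(percents)
```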
ecc82defef9955bebb3e9529d6f30d0287d9010b | 89,017 | ipynb | Jupyter Notebook | c165f21MLP.ipynb | everestso/Fall21Spring22 | 0a1039f59f43086a96168211d7bdc7cae93cf3bd | [
"Apache-2.0"
] | null | null | null | c165f21MLP.ipynb | everestso/Fall21Spring22 | 0a1039f59f43086a96168211d7bdc7cae93cf3bd | [
"Apache-2.0"
] | null | null | null | c165f21MLP.ipynb | everestso/Fall21Spring22 | 0a1039f59f43086a96168211d7bdc7cae93cf3bd | [
"Apache-2.0"
] | 1 | 2021-02-09T20:46:41.000Z | 2021-02-09T20:46:41.000Z | 129.951825 | 13,350 | 0.814125 | [
[
[
"<a href=\"https://colab.research.google.com/github/everestso/Fall2021/blob/main/c165f21MLP.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"#data = [((1, 5.1, 3.5), 0), ((1, 4.9, 3.0), 0), ((1, 4.7, 3.2), 0), ((1, 4.6, 3.1), 0), ((1, 5.0, 3.6), 0), ((1, 5.4, 3.9), 0), ((1, 4.6, 3.4), 0), ((1, 5.0, 3.4), 0), ((1, 4.4, 2.9), 0), ((1, 4.9, 3.1), 0), ((1, 5.4, 3.7), 0), ((1, 4.8, 3.4), 0), ((1, 4.8, 3.0), 0), ((1, 4.3, 3.0), 0), ((1, 5.8, 4.0), 0), ((1, 5.7, 4.4), 0), ((1, 5.4, 3.9), 0), ((1, 5.1, 3.5), 0), ((1, 5.7, 3.8), 0), ((1, 5.1, 3.8), 0), ((1, 5.4, 3.4), 0), ((1, 5.1, 3.7), 0), ((1, 4.6, 3.6), 0), ((1, 5.1, 3.3), 0), ((1, 4.8, 3.4), 0), ((1, 5.0, 3.0), 0), ((1, 5.0, 3.4), 0), ((1, 5.2, 3.5), 0), ((1, 5.2, 3.4), 0), ((1, 4.7, 3.2), 0), ((1, 4.8, 3.1), 0), ((1, 5.4, 3.4), 0), ((1, 5.2, 4.1), 0), ((1, 5.5, 4.2), 0), ((1, 4.9, 3.1), 0), ((1, 5.0, 3.2), 0), ((1, 5.5, 3.5), 0), ((1, 4.9, 3.6), 0), ((1, 4.4, 3.0), 0), ((1, 5.1, 3.4), 0), ((1, 5.0, 3.5), 0), ((1, 4.5, 2.3), 0), ((1, 4.4, 3.2), 0), ((1, 5.0, 3.5), 0), ((1, 5.1, 3.8), 0), ((1, 4.8, 3.0), 0), ((1, 5.1, 3.8), 0), ((1, 4.6, 3.2), 0), ((1, 5.3, 3.7), 0), ((1, 5.0, 3.3), 0), ((1, 7.0, 3.2), 1), ((1, 6.4, 3.2), 1), ((1, 6.9, 3.1), 1), ((1, 5.5, 2.3), 1), ((1, 6.5, 2.8), 1), ((1, 5.7, 2.8), 1), ((1, 6.3, 3.3), 1), ((1, 4.9, 2.4), 1), ((1, 6.6, 2.9), 1), ((1, 5.2, 2.7), 1), ((1, 5.0, 2.0), 1), ((1, 5.9, 3.0), 1), ((1, 6.0, 2.2), 1), ((1, 6.1, 2.9), 1), ((1, 5.6, 2.9), 1), ((1, 6.7, 3.1), 1), ((1, 5.6, 3.0), 1), ((1, 5.8, 2.7), 1), ((1, 6.2, 2.2), 1), ((1, 5.6, 2.5), 1), ((1, 5.9, 3.2), 1), ((1, 6.1, 2.8), 1), ((1, 6.3, 2.5), 1), ((1, 6.1, 2.8), 1), ((1, 6.4, 2.9), 1), ((1, 6.6, 3.0), 1), ((1, 6.8, 2.8), 1), ((1, 6.7, 3.0), 1), ((1, 6.0, 2.9), 1), ((1, 5.7, 2.6), 1), ((1, 5.5, 2.4), 1), ((1, 5.5, 2.4), 1), ((1, 5.8, 2.7), 1), ((1, 6.0, 2.7), 1), ((1, 5.4, 3.0), 1), ((1, 6.0, 3.4), 1), ((1, 6.7, 3.1), 1), ((1, 6.3, 2.3), 1), ((1, 5.6, 3.0), 1), ((1, 5.5, 2.5), 1), ((1, 5.5, 2.6), 1), ((1, 6.1, 3.0), 1), ((1, 5.8, 2.6), 1), ((1, 5.0, 2.3), 1), ((1, 5.6, 2.7), 1), ((1, 5.7, 3.0), 1), ((1, 5.7, 2.9), 1), ((1, 6.2, 2.9), 1), ((1, 5.1, 2.5), 1), ((1, 5.7, 2.8), 1), ((1, 6.3, 3.3), 2), ((1, 5.8, 2.7), 2), ((1, 7.1, 3.0), 2), ((1, 6.3, 2.9), 2), ((1, 6.5, 3.0), 2), ((1, 7.6, 3.0), 2), ((1, 4.9, 2.5), 2), ((1, 7.3, 2.9), 2), ((1, 6.7, 2.5), 2), ((1, 7.2, 3.6), 2), ((1, 6.5, 3.2), 2), ((1, 6.4, 2.7), 2), ((1, 6.8, 3.0), 2), ((1, 5.7, 2.5), 2), ((1, 5.8, 2.8), 2), ((1, 6.4, 3.2), 2), ((1, 6.5, 3.0), 2), ((1, 7.7, 3.8), 2), ((1, 7.7, 2.6), 2), ((1, 6.0, 2.2), 2), ((1, 6.9, 3.2), 2), ((1, 5.6, 2.8), 2), ((1, 7.7, 2.8), 2), ((1, 6.3, 2.7), 2), ((1, 6.7, 3.3), 2), ((1, 7.2, 3.2), 2), ((1, 6.2, 2.8), 2), ((1, 6.1, 3.0), 2), ((1, 6.4, 2.8), 2), ((1, 7.2, 3.0), 2), ((1, 7.4, 2.8), 2), ((1, 7.9, 3.8), 2), ((1, 6.4, 2.8), 2), ((1, 6.3, 2.8), 2), ((1, 6.1, 2.6), 2), ((1, 7.7, 3.0), 2), ((1, 6.3, 3.4), 2), ((1, 6.4, 3.1), 2), ((1, 6.0, 3.0), 2), ((1, 6.9, 3.1), 2), ((1, 6.7, 3.1), 2), ((1, 6.9, 3.1), 2), ((1, 5.8, 2.7), 2), ((1, 6.8, 3.2), 2), ((1, 6.7, 3.3), 2), ((1, 6.7, 3.0), 2), ((1, 6.3, 2.5), 2), ((1, 6.5, 3.0), 2), ((1, 6.2, 3.4), 2), ((1, 5.9, 3.0), 2)]\ndata = [((0,0),0), ((0,1),1), ((1,0),1), ((1,1),0)]",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfrom sklearn.neural_network import MLPClassifier, MLPRegressor\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.linear_model import LinearRegression\nimport numpy as np",
"_____no_output_____"
],
[
"###clf = MLPClassifier(random_state=1, activation=\"logistic\", solver=\"sgd\", hidden_layer_sizes=(3,), max_iter=100000)\n\nclf = MLPClassifier(\n activation='logistic',\n max_iter=10000,\n hidden_layer_sizes=(2,),\n solver='lbfgs')\nclf1 = LogisticRegression(max_iter=10000)",
"_____no_output_____"
],
[
"X= [x for x,y in data]\nY= [y for x,y in data]\nprint (X)\nprint (Y)",
"[(0, 0), (0, 1), (1, 0), (1, 1)]\n[0, 1, 1, 0]\n"
],
[
"clf.fit(X,Y)\nclf1.fit(X,Y)",
"_____no_output_____"
],
[
"Predict = clf.predict(X)\nprint(Predict)\nPredict1 = clf1.predict(X)\nprint(Predict1)",
"[0 1 0 1]\n[0 0 0 0]\n"
],
[
"Out = [1 if x==y else 0 for x,y in zip(Predict, Y)]\nprint (Out)\nprint( sum(Out)/len(Out) )\nprint (Predict)",
"[1, 1, 0, 0]\n0.5\n[0 1 0 1]\n"
],
[
"print (clf.coefs_)\nprint(clf.intercepts_)",
"[array([[ 9.93278091, 9.97879746],\n [-4.74659947, 4.47953684]]), array([[-8.71647388],\n [ 8.39004239]])]\n[array([ 3.19304864, -1.59451474]), array([0.32620883])]\n"
],
[
"\nfor i in range(len(clf.coefs_)):\n number_neurons_in_layer = clf.coefs_[i].shape[1]\n for j in range(number_neurons_in_layer):\n weights = clf.coefs_[i][:,j]\n print(i, j, weights, end=\", \")\n print()\n print()\nprint(\"Bias values for first hidden layer:\")\nprint(clf.intercepts_[0])\nprint(\"\\nBias values for second hidden layer:\")\nprint(clf.intercepts_[1])",
"0 0 [ 9.93278091 -4.74659947], \n0 1 [9.97879746 4.47953684], \n\n1 0 [-8.71647388 8.39004239], \n\nBias values for first hidden layer:\n[ 3.19304864 -1.59451474]\n\nBias values for second hidden layer:\n[0.32620883]\n"
],
[
"from sklearn.datasets import load_iris",
"_____no_output_____"
],
[
"X, y = load_iris(return_X_y=True)\nclf = LogisticRegression(max_iter=10000)\nclf.fit(X, y)\nPredict = clf.predict(X)\nOut = [1 if x==y else 0 for x,y in zip(Predict, y)]\nprint (Out)\nprint( sum(Out)/len(Out) )\nprint (Predict)\n",
"[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n0.9733333333333334\n[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1\n 1 1 1 2 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 1 2 2 2 2\n 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2\n 2 2]\n"
],
[
"clf1 = MLPClassifier(\n activation='logistic',\n max_iter=100000,\n hidden_layer_sizes=(3,),\n solver='lbfgs')\nclf1.fit(X, y)\nPredict = clf1.predict(X)\nOut = [1 if x==y else 0 for x,y in zip(Predict, y)]\nprint (Out)\nprint( sum(Out)/len(Out) )\nprint (Predict)",
"[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n0.9866666666666667\n[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2\n 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2\n 2 2]\n"
],
[
"X = [(x, x+5) for x in range(-10,11)]\nprint (X)\nl1 = lambda x: 2*x[0] + 4\nl2 = lambda x: 3*x[0]**2 + 2*x[0] + 4\n\nY1 = [l1(x) for x in X]\nY2 = [l2(x) for x in X]\n\nX1 = [x for x,y in X]\n\nplt.plot(X1, Y1, \"r-\")\n",
"[(-10, -5), (-9, -4), (-8, -3), (-7, -2), (-6, -1), (-5, 0), (-4, 1), (-3, 2), (-2, 3), (-1, 4), (0, 5), (1, 6), (2, 7), (3, 8), (4, 9), (5, 10), (6, 11), (7, 12), (8, 13), (9, 14), (10, 15)]\n"
],
[
"reg = MLPRegressor(\n max_iter=100000,\n hidden_layer_sizes=(2,),\n solver='lbfgs')\nreg.fit(X, Y1)\nYPredict = reg.predict(X)\nplt.plot(X1,Y1)\nplt.plot(X1, YPredict, \"b+\")",
"_____no_output_____"
],
[
"\nplt.plot(X1, Y2, \"r-\")\n",
"_____no_output_____"
],
[
"reg2 = MLPRegressor(\n max_iter=1000000,\n hidden_layer_sizes=(2,),\n solver='lbfgs',\n activation='logistic')\nlr2 = LinearRegression()\nlr2.fit(X,Y2)\nreg2.fit(X,Y2)\nYPredict = reg2.predict(X)\nY22Predict= lr2.predict(X)\nplt.plot(X1,Y2, \"go\")\nplt.plot(X1, YPredict, \"b+\")\n#plt.plot(X, Y22Predict, \"rx\")\n",
"_____no_output_____"
],
[
"YPredict = reg2.predict(X)\nplt.plot(X1,Y2, \"go\")\nplt.plot(X1, YPredict, \"b+\")\n",
"_____no_output_____"
],
[
"print (reg.coefs_)\nprint(reg.intercepts_)",
"[array([[-0.95441149, 1.29009731],\n [-0.93135564, 0.77409955]]), array([[-1.06057211],\n [ 0.96890402]])]\n[array([1.10039566, 0.0223508 ]), array([0.22815591])]\n"
],
[
"print(reg2.coefs_)\nprint(reg2.intercepts_)",
"[array([[ 0.76725124, -0.16023894],\n [-0.46162588, 0.46183291]]), array([[ 714.08159036],\n [-714.2580552 ]])]\n[array([-0.61288476, 0.78643018]), array([652.38100287])]\n"
],
[
"Xcheck = [x/10.0 for x in range(-80,81)]\nXcheck2 = [(x, x+5) for x in Xcheck]\nprint(Xcheck)\nYPredict = reg2.predict(Xcheck2)\nplt.plot(X1,Y2, \"go\")\nplt.plot(Xcheck, YPredict, \"b+\")",
"[-8.0, -7.9, -7.8, -7.7, -7.6, -7.5, -7.4, -7.3, -7.2, -7.1, -7.0, -6.9, -6.8, -6.7, -6.6, -6.5, -6.4, -6.3, -6.2, -6.1, -6.0, -5.9, -5.8, -5.7, -5.6, -5.5, -5.4, -5.3, -5.2, -5.1, -5.0, -4.9, -4.8, -4.7, -4.6, -4.5, -4.4, -4.3, -4.2, -4.1, -4.0, -3.9, -3.8, -3.7, -3.6, -3.5, -3.4, -3.3, -3.2, -3.1, -3.0, -2.9, -2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1, -2.0, -1.9, -1.8, -1.7, -1.6, -1.5, -1.4, -1.3, -1.2, -1.1, -1.0, -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0]\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
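The XOR cells in the record above print clf.coefs_ and clf.intercepts_ for an MLPClassifier with a single logistic hidden layer. As a sketch of how those arrays turn inputs into predictions — the helper names and the manual forward pass below are illustrative, not scikit-learn API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(X, coefs, intercepts):
    # One logistic hidden layer, logistic output unit (binary classification).
    hidden = sigmoid(X @ coefs[0] + intercepts[0])
    out = sigmoid(hidden @ coefs[1] + intercepts[1])
    return (out.ravel() >= 0.5).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
# With a fitted classifier as above: preds = mlp_forward(X, clf.coefs_, clf.intercepts_)
```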
ecc83293e6f76a4824c6d7f321d31e765036b153 | 16,777 | ipynb | Jupyter Notebook | Wto3l/MVA/Pre-Selector-Alt.ipynb | Nik-Menendez/PyCudaAnalyzer | 4b43d2915caac04da9ba688c2743e9c76eacdd5b | [
"MIT"
] | null | null | null | Wto3l/MVA/Pre-Selector-Alt.ipynb | Nik-Menendez/PyCudaAnalyzer | 4b43d2915caac04da9ba688c2743e9c76eacdd5b | [
"MIT"
] | null | null | null | Wto3l/MVA/Pre-Selector-Alt.ipynb | Nik-Menendez/PyCudaAnalyzer | 4b43d2915caac04da9ba688c2743e9c76eacdd5b | [
"MIT"
] | null | null | null | 40.426506 | 4,318 | 0.552125 | [
[
[
"import sys\nsys.path = ['', '/home/nikmenendez/.local/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/share/overrides/python', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/cms/cmssw/CMSSW_9_4_4/python', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/cms/cmssw/CMSSW_9_4_4/lib/slc6_amd64_gcc630', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/cms/coral/CORAL_2_3_21-fmblme4/slc6_amd64_gcc630/python', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/cms/coral/CORAL_2_3_21-fmblme4/slc6_amd64_gcc630/lib', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/professor2/2.2.1-fmblme5/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/pyqt/4.11.4-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/sherpa/2.2.4-fmblme2/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/rivet/2.5.4/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/python-ldap/2.4.10-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-matplotlib/1.5.2-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/sip/4.17-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/llvm/4.0.1/lib64/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-sqlalchemy/1.1.4-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-lint/0.25.1-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-dxr/1.0-fmblme3/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-cx-oracle/5.2.1-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-numpy/1.12.1-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/frontier_client/2.8.20-fmblme/python/lib', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/cms/das_client/v03.01.00-fmblme/bin', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/xrootd/4.6.1-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/yoda/1.6.7/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/lcg/root/6.10.08/lib', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/pyminuit2/0.0.1-fmblme3/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-PyYAML/3.11-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-pygithub/1.23.0-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-pip/9.0.1-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-dablooms/0.9.1-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-pippkgs_depscipy/3.0-fmblme4/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-pippkgs/6.0-fmblme/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/professor/1.4.0-fmblme3/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/cms/dbs-client/DBS_2_1_9-fmblme/lib', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/cms/dbs-client/DBS_2_1_9-fmblme/lib/DBSAPI', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/cvs2git/5419-fmblme/lib', '/blue/avery/nikmenendez/Wto3l/Analyzer2/UF-PyNTupleRunner/Wto3l', '/blue/avery/nikmenendez/Wto3l/Analyzer2/UF-PyNTupleRunner', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-sqlalchemy/1.1.4-fmblme/lib/python2.7/site-packages/SQLAlchemy-1.1.4-py2.7-linux-x86_64.egg', 
'/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/py2-numpy/1.12.1-fmblme/lib/python2.7/site-packages/numpy-1.12.1-py2.7-linux-x86_64.egg', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/python/2.7.11-fmblme/lib/python27.zip', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/python/2.7.11-fmblme/lib/python2.7', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/python/2.7.11-fmblme/lib/python2.7/plat-linux2', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/python/2.7.11-fmblme/lib/python2.7/lib-tk', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/python/2.7.11-fmblme/lib/python2.7/lib-old', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/python/2.7.11-fmblme/lib/python2.7/lib-dynload', '/home/nikmenendez/.local/lib/python2.7/site-packages', '/cvmfs/cms.cern.ch/slc6_amd64_gcc630/external/python/2.7.11-fmblme/lib/python2.7/site-packages']",
"_____no_output_____"
],
[
"from ROOT import TFile, TTree\nfrom ROOT import gROOT, AddressOf, TLorentzVector\nimport uproot\nfrom array import array\nfrom tqdm import tqdm\nimport numpy as np\nfrom Utils.DeltaR import deltaR\nfrom __future__ import division",
"Welcome to JupyROOT 6.10/09\n"
],
[
"Lep1_pt = array('f', [0.])\nLep1_eta = array('f', [0.])\nLep1_phi = array('f', [0.])\nLep1_iso = array('f', [0.])\nLep1_id = array('i', [0])\n\nLep2_pt = array('f', [0.])\nLep2_eta = array('f', [0.])\nLep2_phi = array('f', [0.])\nLep2_iso = array('f', [0.])\nLep2_id = array('i', [0])\n\ndR = array('f', [0.])\n\nmet = array('f', [0.])\nmet_phi = array('f', [0.])\n\nfromZp = array('i', [0])",
"_____no_output_____"
],
[
"#sample = 'WmTo3l_ZpM15'\n#sample = 'DYJetsToLL_M-10to50_TuneCP5_13TeV-madgraphMLM-pythia8'\n#sample = 'DYJetsToLL_M-50_TuneCP5_13TeV-madgraphMLM-pythia8'\n#sample = 'TTJets_DiLept_TuneCP5_13TeV-madgraphMLM-pythia8'\nsample = 'WZTo3LNu_TuneCP5_13TeV-amcatnloFXFX-pythia8'\n\n# in_path = \"/cmsuf/data/store/user/t2/users/nikmenendez/skimmed/signal/\"+ sample + \".root\"\n# out_path = \"/cmsuf/data/store/user/t2/users/nikmenendez/skimmed/signal/MVA/\" + sample + \"_alt_PS.root\"\n\nin_path = \"/cmsuf/data/store/user/t2/users/nikmenendez/skimmed/2017/\" + sample + \".root\"\nout_path = \"/cmsuf/data/store/user/t2/users/nikmenendez/skimmed/2017/MVA/\" + sample + \"_alt_PS.root\"\n\nf = TFile( out_path, 'RECREATE' )\ntree = TTree('passedEvents', 'Passed Events')\n\ntree.Branch('Lep1_pt', Lep1_pt, 'Lep1_pt/F')\ntree.Branch('Lep1_eta', Lep1_eta, 'Lep1_eta/F')\ntree.Branch('Lep1_phi', Lep1_phi, 'Lep1_phi/F')\ntree.Branch('Lep1_iso', Lep1_iso, 'Lep1_iso/F')\ntree.Branch('Lep1_id', Lep1_id, 'Lep1_id/I')\n\ntree.Branch('Lep2_pt', Lep2_pt, 'Lep2_pt/F')\ntree.Branch('Lep2_eta', Lep2_eta, 'Lep2_eta/F')\ntree.Branch('Lep2_phi', Lep2_phi, 'Lep2_phi/F')\ntree.Branch('Lep2_iso', Lep2_iso, 'Lep2_iso/F')\ntree.Branch('Lep2_id', Lep2_id, 'Lep2_id/I')\n\ntree.Branch('dR', dR, 'dR/F')\n\ntree.Branch('met', met, 'met/F')\ntree.Branch('met_phi', met_phi, 'met_phi/F')\n\ntree.Branch('fromZp', fromZp, 'fromZp/I')",
"_____no_output_____"
],
[
"File = TFile(in_path,\"READ\")\nevent = File.Get(\"passedEvents\")\n\nnEntries = event.GetEntries()\nprint nEntries\n\nzid = 23 #999888",
"298934\n"
],
[
"masses = [45,20,30,60]\nmass_up = [0,0,0,0]\nmass_dn = [0,0,0,0]\n\nfor i in range(len(masses)):\n mass_up[i] = masses[i]+2\n mass_dn[i] = masses[i]-2",
"_____no_output_____"
],
[
"for i in tqdm(range(0, nEntries)):\n event.GetEntry(i)\n \n Lep1, Lep2, Lep3 = TLorentzVector(), TLorentzVector(), TLorentzVector(),\n CorrMass = 0\n \n if event.pTL3 > event.pTL1:\n \n Lep1pt = event.pTL3#/event.massL3\n Lep1eta = event.etaL3\n Lep1phi = event.phiL3\n Lep1iso = event.IsoL3\n Lep1id = event.idL3\n Lep1mom = event.MomIdL3\n Lep1.SetPtEtaPhiM(event.pTL3,event.etaL3,event.phiL3,event.massL3)\n \n Lep2pt = event.pTL1#/event.massL1\n Lep2eta = event.etaL1\n Lep2phi = event.phiL1\n Lep2iso = event.IsoL1\n Lep2id = event.idL1\n Lep2mom = event.MomIdL1\n Lep2.SetPtEtaPhiM(event.pTL1,event.etaL1,event.phiL1,event.massL1)\n\n Lep3pt = event.pTL2#/event.massL2\n Lep3eta = event.etaL2\n Lep3phi = event.phiL2\n Lep3iso = event.IsoL2 \n Lep3id = event.idL2\n Lep3mom = event.MomIdL2\n Lep3.SetPtEtaPhiM(event.pTL2,event.etaL2,event.phiL2,event.massL2)\n \n elif event.pTL3 > event.pTL2 and event.pTL1 > event.pTL3:\n\n Lep1pt = event.pTL1#/event.massL1\n Lep1eta = event.etaL1\n Lep1phi = event.phiL1\n Lep1iso = event.IsoL1\n Lep1id = event.idL1\n Lep1mom = event.MomIdL1\n Lep1.SetPtEtaPhiM(event.pTL1,event.etaL1,event.phiL1,event.massL1)\n \n Lep2pt = event.pTL3#/event.massL3\n Lep2eta = event.etaL3\n Lep2phi = event.phiL3\n Lep2iso = event.IsoL3\n Lep2id = event.idL3\n Lep2mom = event.MomIdL3\n Lep2.SetPtEtaPhiM(event.pTL3,event.etaL3,event.phiL3,event.massL3)\n\n Lep3pt = event.pTL2#/event.massL2\n Lep3eta = event.etaL2\n Lep3phi = event.phiL2\n Lep3iso = event.IsoL2 \n Lep3id = event.idL2\n Lep3mom = event.MomIdL2\n Lep3.SetPtEtaPhiM(event.pTL2,event.etaL2,event.phiL2,event.massL2)\n \n elif event.pTL3 < event.pTL2:\n\n Lep1pt = event.pTL1#/event.massL1\n Lep1eta = event.etaL1\n Lep1phi = event.phiL1\n Lep1iso = event.IsoL1\n Lep1id = event.idL1\n Lep1mom = event.MomIdL1\n Lep1.SetPtEtaPhiM(event.pTL1,event.etaL1,event.phiL1,event.massL1)\n \n Lep2pt = event.pTL2#/event.massL2\n Lep2eta = event.etaL2\n Lep2phi = event.phiL2\n Lep2iso = event.IsoL2\n Lep2id = event.idL2\n Lep2mom = event.MomIdL2\n Lep2.SetPtEtaPhiM(event.pTL2,event.etaL2,event.phiL2,event.massL2)\n\n Lep3pt = event.pTL3#/event.massL3\n Lep3eta = event.etaL3\n Lep3phi = event.phiL3\n Lep3iso = event.IsoL3\n Lep3id = event.idL3\n Lep3mom = event.MomIdL3\n Lep3.SetPtEtaPhiM(event.pTL3,event.etaL3,event.phiL3,event.massL3)\n \n met[0] = event.met\n met_phi[0] = event.met_phi\n \n dR12 = deltaR(Lep1eta,Lep1phi,Lep2eta,Lep2phi)\n dR13 = deltaR(Lep1eta,Lep1phi,Lep3eta,Lep3phi)\n dR23 = deltaR(Lep2eta,Lep2phi,Lep3eta,Lep3phi)\n \n if (Lep1id + Lep2id == 0):\n M = (Lep1 + Lep2).M()\n \n Lep1_pt[0] = Lep1pt/M\n Lep1_eta[0] = Lep1eta\n Lep1_phi[0] = Lep1phi\n Lep1_iso[0] = Lep1iso\n Lep1_id[0] = Lep1id\n Lep1_mom = Lep1mom\n \n Lep2_pt[0] = Lep2pt/M\n Lep2_eta[0] = Lep2eta\n Lep2_phi[0] = Lep2phi\n Lep2_iso[0] = Lep2iso\n Lep2_id[0] = Lep2id\n Lep2_mom = Lep2mom\n \n dR[0] = dR12\n \n if abs(Lep1_mom) == zid and abs(Lep2_mom) == zid:\n fromZp[0] = 1\n else:\n fromZp[0] = 0\n \n if (M > 0 and M < 65):\n tree.Fill()\n if (Lep1id + Lep3id == 0):\n M = (Lep1 + Lep2).M()\n \n Lep1_pt[0] = Lep1pt/M\n Lep1_eta[0] = Lep1eta\n Lep1_phi[0] = Lep1phi\n Lep1_iso[0] = Lep1iso\n Lep1_id[0] = Lep1id\n Lep1_mom = Lep1mom\n \n Lep2_pt[0] = Lep3pt/M\n Lep2_eta[0] = Lep3eta\n Lep2_phi[0] = Lep3phi\n Lep2_iso[0] = Lep3iso\n Lep2_id[0] = Lep3id\n Lep2_mom = Lep3mom\n \n dR[0] = dR13\n \n if abs(Lep1_mom) == zid and abs(Lep2_mom) == zid:\n fromZp[0] = 1\n else:\n fromZp[0] = 0\n \n if (M > 0 and M < 65):\n tree.Fill()\n if (Lep2id + Lep3id == 0):\n M = 
(Lep1 + Lep2).M()\n \n Lep1_pt[0] = Lep2pt/M\n Lep1_eta[0] = Lep2eta\n Lep1_phi[0] = Lep2phi\n Lep1_iso[0] = Lep2iso\n Lep1_id[0] = Lep2id\n Lep1_mom = Lep2mom\n \n Lep2_pt[0] = Lep3pt/M\n Lep2_eta[0] = Lep3eta\n Lep2_phi[0] = Lep3phi\n Lep2_iso[0] = Lep3iso\n Lep2_id[0] = Lep3id\n Lep2_mom = Lep3mom\n \n dR[0] = dR23\n \n if abs(Lep1_mom) == zid and abs(Lep2_mom) == zid:\n fromZp[0] = 1\n else:\n fromZp[0] = 0\n \n if (M > 0 and M < 65):\n tree.Fill()\n \n# P1 = Lep1 + Lep2\n# P2 = Lep1 + Lep3\n# P3 = Lep2 + Lep3\n \n# M1 = P1.M()\n# M2 = P2.M()\n# M3 = P3.M()\n \n# #print \"M1 = %f GeV, M2 = %f GeV, M3 = %f GeV\" % (M1, M2, M3)\n \n# for m in range(len(masses)):\n# if (mass_dn[m] < M1 < mass_up[m]): save_event = True\n# elif (mass_dn[m] < M2 < mass_up[m]): save_event = True\n# elif (mass_dn[m] < M3 < mass_up[m]): save_event = True\n \n# if save_event:\n# #print \"Saving event\"\n# tree.Fill()",
"100%|██████████| 298934/298934 [01:03<00:00, 4694.16it/s]\n"
],
[
"f.Write()\nf.Close()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
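The pre-selector record above imports deltaR from Utils.DeltaR, which is not included in the record. A standard definition of the angular separation it computes — an assumption about that helper, not the repository's actual code:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Wrap the azimuthal difference into [-pi, pi] before combining with delta-eta.
    dphi = phi1 - phi2
    while dphi > math.pi:
        dphi -= 2.0 * math.pi
    while dphi < -math.pi:
        dphi += 2.0 * math.pi
    deta = eta1 - eta2
    return math.sqrt(deta * deta + dphi * dphi)
```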
ecc845e265020c30e80537bed60fbfccf0cd34dc | 2,809 | ipynb | Jupyter Notebook | moocSemana5/Ejercicio5.4/Ejercicio.ipynb | jcastir/pythonMoocProblems | 54797e9cdc9127284b3261851065a09fbccd7d92 | [
"MIT"
] | 6 | 2020-06-11T21:59:39.000Z | 2021-11-22T04:37:02.000Z | moocSemana5/Ejercicio5.4/Ejercicio.ipynb | jcastir/pythonMoocProblems | 54797e9cdc9127284b3261851065a09fbccd7d92 | [
"MIT"
] | 1 | 2020-06-25T06:02:16.000Z | 2020-06-25T07:01:02.000Z | moocSemana5/Ejercicio5.4/Ejercicio.ipynb | jcastir/pythonMoocProblems | 54797e9cdc9127284b3261851065a09fbccd7d92 | [
"MIT"
] | 12 | 2020-06-23T21:38:59.000Z | 2022-02-01T08:58:06.000Z | 23.805085 | 265 | 0.571378 | [
[
[
"Antes de empezar, asegúrate de que todo va segun lo esperado. Primero, **reinicia el kernel** (en la barra de menu, selecciona Kernel$\\rightarrow$Restart) y entonces **ejecuta todas las celdas** (en la barra de menu, selecciona Cell$\\rightarrow$Run All).\n\nAsegurate de rellenar cualquier lugar donde aparezca `YOUR CODE HERE` o `YOUR ANSWER HERE`.",
"_____no_output_____"
]
],
[
[
"import rassertions\nANONYMOUS_ID=\"student\" #introduce aquí tu identificador anónimo",
"_____no_output_____"
]
],
[
[
"# Enunciado de problema:\n\nDefine un módulo **constantes.py** en la misma carpeta que este ejercicio, en ese módulo deberás definir las constantes:\n\n- PI = 3.1416\n- G = 9.8\n- E = 2.7183\n- GR = 1.618033\n- F = 4.669201\n\nEn el código a continuación importa el módulo que acabas de crear y muestra con el comando print los valores que has definido.",
"_____no_output_____"
]
],
[
[
"#vuestro código tendrá que ir en el espacio reservado en esta celda\n\n# YOUR CODE HERE",
"_____no_output_____"
]
],
[
[
"Ejecuta la celda siguiente cuando hayas finalizado el ejercicio para obtener tu clave de respuesta",
"_____no_output_____"
]
],
[
[
"rassertions.assert_control(globals(),locals())",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
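The exercise record above asks for a constantes.py module holding five named constants. One possible solution sketch, with the module contents taken directly from the problem statement (the rassertions grading hook is not reproduced here):

```python
# constantes.py — module requested by the exercise
PI = 3.1416
G = 9.8
E = 2.7183
GR = 1.618033
F = 4.669201
```

```python
# Notebook cell marked "YOUR CODE HERE": import the module and print its values.
import constantes

print(constantes.PI, constantes.G, constantes.E, constantes.GR, constantes.F)
```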
ecc85ffaed49195ad03b50335e5fc61981fca896 | 34,066 | ipynb | Jupyter Notebook | visualization_notebooks/.ipynb_checkpoints/test-train-graphic-checkpoint.ipynb | buds-lab/building-prediction-benchmarking | 12658f75fc90d3454de3e0d2c210d9aa8b8cbef3 | [
"MIT"
] | 26 | 2019-03-30T11:03:51.000Z | 2022-03-10T11:50:42.000Z | visualization_notebooks/test-train-graphic.ipynb | buds-lab/building-prediction-benchmarking | 12658f75fc90d3454de3e0d2c210d9aa8b8cbef3 | [
"MIT"
] | 1 | 2019-03-30T04:17:39.000Z | 2019-03-30T04:17:39.000Z | visualization_notebooks/test-train-graphic.ipynb | buds-lab/building-prediction-benchmarking | 12658f75fc90d3454de3e0d2c210d9aa8b8cbef3 | [
"MIT"
] | 6 | 2019-10-18T16:19:14.000Z | 2021-08-16T15:21:04.000Z | 77.072398 | 11,932 | 0.741561 | [
[
[
"import matplotlib.pyplot as plt\nimport pandas\nimport seaborn as sns\nfrom matplotlib.colors import LinearSegmentedColormap\n\n\n%matplotlib inline",
"_____no_output_____"
],
[
"index = range(1,13)",
"_____no_output_____"
],
[
"df = pd.DataFrame({\"Scenario 1\":[1,1,1,2,2,2,0,0,0,0,0,0],\n \"Scenario 2\":[1,1,1,1,1,1,2,2,2,0,0,0],\n \"Scenario 3\":[1,1,1,1,1,1,1,1,1,2,2,2],\n \"Scenario 4\":[1,1,1,2,1,1,1,2,1,1,1,2]})",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.index = index",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"ax = sns.heatmap(df.T, linewidths=.5)",
"_____no_output_____"
]
],
[
[
"https://stackoverflow.com/questions/38836154/discrete-legend-in-seaborn-heatmap-plot",
"_____no_output_____"
]
],
[
[
"myColors = ((0.8, 0.0, 0.0, 1.0), (0.0, 0.8, 0.0, 1.0), (0.0, 0.0, 0.8, 1.0))\ncmap = LinearSegmentedColormap.from_list('Custom', myColors, len(myColors))",
"_____no_output_____"
],
[
"ax = sns.heatmap(df.T, cmap=cmap, linewidths=.5, linecolor='lightgray')\n\n# Manually specify colorbar labelling after it's been generated\ncolorbar = ax.collections[0].colorbar\ncolorbar.set_ticks([0.33, 1, 1.66])\ncolorbar.set_ticklabels(['Not Used','Train', 'Test'])\n\n# X - Y axis labels\nax.set_ylabel('Training/Testing Scenarios')\nax.set_xlabel('Months')\n\n# Only y-axis labels need their rotation set, x-axis labels already have a rotation of 0\n_, labels = plt.yticks()\nplt.setp(labels, rotation=0)\nplt.tight_layout()\nplt.savefig(\"traing-testing.pdf\")",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecc875ea69c49a9eda75a0d2503afd5926bdd2fe | 65,716 | ipynb | Jupyter Notebook | Oanda v1 REST-oandapy/07.00 Transaction History.ipynb | anthonyng2/FX-Trading-with-Python-and-Oanda | a898ae443c942e64a08e1d79d5972ca0d22fd166 | [
"MIT"
] | 60 | 2017-02-27T16:07:35.000Z | 2021-09-19T14:12:35.000Z | Oanda v1 REST-oandapy/07.00 Transaction History.ipynb | TianyiDataAnalyst/FX-Trading-with-Python-and-Oanda | a898ae443c942e64a08e1d79d5972ca0d22fd166 | [
"MIT"
] | null | null | null | Oanda v1 REST-oandapy/07.00 Transaction History.ipynb | TianyiDataAnalyst/FX-Trading-with-Python-and-Oanda | a898ae443c942e64a08e1d79d5972ca0d22fd166 | [
"MIT"
] | 55 | 2017-04-05T19:39:15.000Z | 2022-03-28T05:36:35.000Z | 47.007153 | 10,401 | 0.395307 | [
[
[
"<!--NAVIGATION-->\n< [Position Management](06.00 Position Management.ipynb) | [Contents](Index.ipynb) | [Streaming Prices](08.00 Streaming Prices.ipynb) >",
"_____no_output_____"
],
[
"# Transaction History",
"_____no_output_____"
],
[
"# Obtain Transaction History\n`get_transaction_history(self, account_id, **params)`",
"_____no_output_____"
]
],
[
[
"from datetime import datetime, timedelta\nimport pandas as pd\nimport oandapy\nimport configparser\n\nconfig = configparser.ConfigParser()\nconfig.read('../config/config_v1.ini')\naccount_id = config['oanda']['account_id']\napi_key = config['oanda']['api_key']\n\noanda = oandapy.API(environment=\"practice\", \n access_token=api_key)",
"_____no_output_____"
],
[
"response = oanda.get_transaction_history(account_id)\nprint(response)",
"{'transactions': [{'instrument': 'USD_JPY', 'interest': 0, 'time': '2017-01-27T13:55:07.000000Z', 'side': 'sell', 'price': 115.19, 'accountId': 7173488, 'accountBalance': 99987.8214, 'type': 'TRADE_CLOSE', 'tradeId': 10618882484, 'id': 10618882724, 'units': 1000, 'pl': -0.1736}, {'instrument': 'USD_JPY', 'interest': 0, 'time': '2017-01-27T13:55:06.000000Z', 'tradeOpened': {'units': 1000, 'id': 10618882484}, 'side': 'buy', 'price': 115.204, 'accountId': 7173488, 'accountBalance': 99987.995, 'type': 'MARKET_ORDER_CREATE', 'id': 10618882484, 'units': 1000, 'pl': 0}, {'instrument': 'NZD_USD', 'interest': 0, 'time': '2017-01-27T13:55:06.000000Z', 'tradeOpened': {'units': 1000, 'id': 10618882479}, 'side': 'buy', 'price': 0.72637, 'accountId': 7173488, 'accountBalance': 99987.995, 'type': 'MARKET_ORDER_CREATE', 'id': 10618882479, 'units': 1000, 'pl': 0}, {'instrument': 'AUD_USD', 'interest': 0, 'time': '2017-01-27T13:55:06.000000Z', 'tradeOpened': {'units': 1000, 'id': 10618882472}, 'side': 'buy', 'price': 0.75516, 'accountId': 7173488, 'accountBalance': 99987.995, 'type': 'MARKET_ORDER_CREATE', 'id': 10618882472, 'units': 1000, 'pl': 0}, {'instrument': 'GBP_USD', 'interest': 0, 'time': '2017-01-27T13:54:40.000000Z', 'side': 'sell', 'price': 1.2542, 'accountId': 7173488, 'accountBalance': 99987.995, 'type': 'TRADE_CLOSE', 'tradeId': 10618881939, 'id': 10618881962, 'units': 1000, 'pl': -0.2572}, {'accountId': 7173488, 'instrument': 'GBP_USD', 'stopLossPrice': 1.15, 'id': 10618881951, 'time': '2017-01-27T13:54:40.000000Z', 'type': 'TRADE_UPDATE', 'units': 1000, 'tradeId': 10618881939}, {'instrument': 'GBP_USD', 'interest': 0, 'time': '2017-01-27T13:54:39.000000Z', 'tradeOpened': {'units': 1000, 'id': 10618881939}, 'side': 'buy', 'price': 1.25438, 'accountId': 7173488, 'accountBalance': 99988.2522, 'type': 'MARKET_ORDER_CREATE', 'id': 10618881939, 'units': 1000, 'pl': 0}, {'instrument': 'USD_CHF', 'interest': 0, 'time': '2017-01-27T13:54:39.000000Z', 'tradeOpened': {'units': 1000, 'id': 10618881930}, 'side': 'buy', 'price': 0.99977, 'accountId': 7173488, 'accountBalance': 99988.2522, 'type': 'MARKET_ORDER_CREATE', 'id': 10618881930, 'units': 1000, 'pl': 0}, {'instrument': 'EUR_USD', 'interest': 0, 'time': '2017-01-27T13:54:38.000000Z', 'tradeOpened': {'units': 1000, 'id': 10618881925}, 'side': 'buy', 'price': 1.06951, 'accountId': 7173488, 'accountBalance': 99988.2522, 'type': 'MARKET_ORDER_CREATE', 'id': 10618881925, 'units': 1000, 'pl': 0}, {'accountId': 7173488, 'reason': 'CLIENT_REQUEST', 'orderId': 10618881403, 'id': 10618881452, 'time': '2017-01-27T13:54:20.000000Z', 'type': 'ORDER_CANCEL'}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'reason': 'REPLACES_ORDER', 'orderId': 10618881403, 'id': 10618881427, 'time': '2017-01-27T13:54:20.000000Z', 'expiry': '2017-01-28T21:54:17.000000Z', 'type': 'ORDER_UPDATE', 'units': 1000, 'price': 0.704}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'side': 'buy', 'reason': 'CLIENT_REQUEST', 'id': 10618881403, 'time': '2017-01-27T13:54:18.000000Z', 'expiry': '2017-01-28T21:54:17.000000Z', 'type': 'LIMIT_ORDER_CREATE', 'units': 1000, 'price': 0.742}, {'accountId': 7173488, 'instrument': 'USD_CHF', 'interest': 0.0588, 'id': 10618045173, 'time': '2017-01-26T21:00:00.000000Z', 'accountBalance': 99988.2522, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'NZD_USD', 'interest': 0.061, 'id': 10618045172, 'time': '2017-01-26T21:00:00.000000Z', 'accountBalance': 99988.1934, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 
'GBP_USD', 'interest': -0.0281, 'id': 10618045171, 'time': '2017-01-26T21:00:00.000000Z', 'accountBalance': 99988.1324, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'EUR_USD', 'interest': -0.1619, 'id': 10618045170, 'time': '2017-01-26T21:00:00.000000Z', 'accountBalance': 99988.1605, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'interest': 0.0724, 'id': 10618045169, 'time': '2017-01-26T21:00:00.000000Z', 'accountBalance': 99988.3224, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'USD_CHF', 'interest': 0.0582, 'id': 10616745950, 'time': '2017-01-25T21:00:00.000000Z', 'accountBalance': 99988.25, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'NZD_USD', 'interest': 0.0611, 'id': 10616745949, 'time': '2017-01-25T21:00:00.000000Z', 'accountBalance': 99988.1918, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'GBP_USD', 'interest': -0.0278, 'id': 10616745948, 'time': '2017-01-25T21:00:00.000000Z', 'accountBalance': 99988.1307, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'EUR_USD', 'interest': -0.1606, 'id': 10616745947, 'time': '2017-01-25T21:00:00.000000Z', 'accountBalance': 99988.1585, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'interest': 0.073, 'id': 10616745946, 'time': '2017-01-25T21:00:00.000000Z', 'accountBalance': 99988.3191, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'USD_CHF', 'interest': 0.0582, 'id': 10615478582, 'time': '2017-01-24T21:00:00.000000Z', 'accountBalance': 99988.2461, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'NZD_USD', 'interest': 0.0609, 'id': 10615478581, 'time': '2017-01-24T21:00:00.000000Z', 'accountBalance': 99988.1879, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'GBP_USD', 'interest': -0.028, 'id': 10615478580, 'time': '2017-01-24T21:00:00.000000Z', 'accountBalance': 99988.127, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'EUR_USD', 'interest': -0.1616, 'id': 10615478579, 'time': '2017-01-24T21:00:00.000000Z', 'accountBalance': 99988.155, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'interest': 0.0738, 'id': 10615478578, 'time': '2017-01-24T21:00:00.000000Z', 'accountBalance': 99988.3166, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'USD_CHF', 'interest': 0.0582, 'id': 10614205734, 'time': '2017-01-23T21:00:00.000000Z', 'accountBalance': 99988.2428, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'NZD_USD', 'interest': 0.0603, 'id': 10614205733, 'time': '2017-01-23T21:00:00.000000Z', 'accountBalance': 99988.1846, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'GBP_USD', 'interest': -0.028, 'id': 10614205732, 'time': '2017-01-23T21:00:00.000000Z', 'accountBalance': 99988.1243, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'EUR_USD', 'interest': -0.1612, 'id': 10614205731, 'time': '2017-01-23T21:00:00.000000Z', 'accountBalance': 99988.1523, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'interest': 0.0731, 'id': 10614205730, 'time': '2017-01-23T21:00:00.000000Z', 'accountBalance': 99988.3135, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'USD_CHF', 'interest': 0.0588, 'id': 10612893759, 'time': '2017-01-22T21:00:00.000000Z', 'accountBalance': 99988.2404, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'NZD_USD', 'interest': 0.0592, 'id': 10612893758, 'time': 
'2017-01-22T21:00:00.000000Z', 'accountBalance': 99988.1816, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'GBP_USD', 'interest': -0.0282, 'id': 10612893757, 'time': '2017-01-22T21:00:00.000000Z', 'accountBalance': 99988.1224, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'EUR_USD', 'interest': -0.1622, 'id': 10612893756, 'time': '2017-01-22T21:00:00.000000Z', 'accountBalance': 99988.1506, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'interest': 0.073, 'id': 10612893755, 'time': '2017-01-22T21:00:00.000000Z', 'accountBalance': 99988.3128, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'USD_CHF', 'interest': 0.0588, 'id': 10612516793, 'time': '2017-01-21T21:00:00.000000Z', 'accountBalance': 99988.2398, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'NZD_USD', 'interest': 0.0592, 'id': 10612516792, 'time': '2017-01-21T21:00:00.000000Z', 'accountBalance': 99988.181, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'GBP_USD', 'interest': -0.0282, 'id': 10612516791, 'time': '2017-01-21T21:00:00.000000Z', 'accountBalance': 99988.1218, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'EUR_USD', 'interest': -0.1622, 'id': 10612516790, 'time': '2017-01-21T21:00:00.000000Z', 'accountBalance': 99988.15, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'interest': 0.073, 'id': 10612516789, 'time': '2017-01-21T21:00:00.000000Z', 'accountBalance': 99988.3122, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'USD_CHF', 'interest': 0.0565, 'id': 10612124336, 'time': '2017-01-20T21:00:00.000000Z', 'accountBalance': 99988.2392, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'NZD_USD', 'interest': 0.0567, 'id': 10612124335, 'time': '2017-01-20T21:00:00.000000Z', 'accountBalance': 99988.1827, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'GBP_USD', 'interest': -0.0281, 'id': 10612124334, 'time': '2017-01-20T21:00:00.000000Z', 'accountBalance': 99988.126, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'EUR_USD', 'interest': -0.1556, 'id': 10612124333, 'time': '2017-01-20T21:00:00.000000Z', 'accountBalance': 99988.1541, 'type': 'DAILY_INTEREST'}, {'accountId': 7173488, 'instrument': 'AUD_USD', 'interest': 0.0707, 'id': 10612124332, 'time': '2017-01-20T21:00:00.000000Z', 'accountBalance': 99988.3097, 'type': 'DAILY_INTEREST'}, {'instrument': 'USD_JPY', 'interest': 0, 'time': '2017-01-20T02:34:19.000000Z', 'side': 'sell', 'price': 114.739, 'accountId': 7173488, 'accountBalance': 99988.239, 'type': 'TRADE_CLOSE', 'tradeId': 10611191619, 'id': 10611191636, 'units': 1000, 'pl': -0.1739}, {'instrument': 'USD_JPY', 'interest': 0, 'time': '2017-01-20T02:34:17.000000Z', 'tradeOpened': {'units': 1000, 'id': 10611191619}, 'side': 'buy', 'price': 114.753, 'accountId': 7173488, 'accountBalance': 99988.4129, 'type': 'MARKET_ORDER_CREATE', 'id': 10611191619, 'units': 1000, 'pl': 0}, {'instrument': 'NZD_USD', 'interest': 0, 'time': '2017-01-20T02:34:16.000000Z', 'tradeOpened': {'units': 1000, 'id': 10611191617}, 'side': 'buy', 'price': 0.72067, 'accountId': 7173488, 'accountBalance': 99988.4129, 'type': 'MARKET_ORDER_CREATE', 'id': 10611191617, 'units': 1000, 'pl': 0}]}\n"
],
[
"pd.DataFrame(response['transactions'])",
"_____no_output_____"
]
],
[
[
"# Get Specific Transaction Information\n`get_transaction(self, account_id, transaction_id)`",
"_____no_output_____"
]
],
[
[
"response = oanda.get_transaction(account_id, \n transaction_id=10605643945)",
"_____no_output_____"
],
[
"print(response)",
"{'instrument': 'AUD_USD', 'interest': 0, 'time': '2017-01-16T03:35:00.000000Z', 'tradeOpened': {'units': 1000, 'id': 10605643945}, 'side': 'buy', 'price': 0.7478, 'accountId': 7173488, 'accountBalance': 99991.7798, 'type': 'MARKET_ORDER_CREATE', 'id': 10605643945, 'units': 1000, 'pl': 0}\n"
]
],
[
[
"<!--NAVIGATION-->\n< [Position Management](06.00 Position Management.ipynb) | [Contents](Index.ipynb) | [Streaming Prices](08.00 Streaming Prices.ipynb) >",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
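The transaction-history record above already flattens the oandapy v1 response into a DataFrame. A small sketch of filtering that frame down to trade activity, assuming the response shape shown in the outputs above (type, time, instrument, side, units, price, pl fields):

```python
import pandas as pd

df = pd.DataFrame(response["transactions"])  # response from get_transaction_history above
df["time"] = pd.to_datetime(df["time"])

trades = df[df["type"].isin(["MARKET_ORDER_CREATE", "TRADE_CLOSE"])]
print(trades[["time", "instrument", "side", "units", "price", "pl"]])
```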