Dataset columns (dtype and value range per column):

| Column | Dtype | Values |
| --- | --- | --- |
| hexsha | stringlengths | 40 - 40 |
| size | int64 | 6 - 14.9M |
| ext | stringclasses | 1 value |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 6 - 260 |
| max_stars_repo_name | stringlengths | 6 - 119 |
| max_stars_repo_head_hexsha | stringlengths | 40 - 41 |
| max_stars_repo_licenses | sequence | |
| max_stars_count | int64 | 1 - 191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 - 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 - 24 |
| max_issues_repo_path | stringlengths | 6 - 260 |
| max_issues_repo_name | stringlengths | 6 - 119 |
| max_issues_repo_head_hexsha | stringlengths | 40 - 41 |
| max_issues_repo_licenses | sequence | |
| max_issues_count | int64 | 1 - 67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 - 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 - 24 |
| max_forks_repo_path | stringlengths | 6 - 260 |
| max_forks_repo_name | stringlengths | 6 - 119 |
| max_forks_repo_head_hexsha | stringlengths | 40 - 41 |
| max_forks_repo_licenses | sequence | |
| max_forks_count | int64 | 1 - 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 - 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 - 24 |
| avg_line_length | float64 | 2 - 1.04M |
| max_line_length | int64 | 2 - 11.2M |
| alphanum_fraction | float64 | 0 - 1 |
| cells | sequence | |
| cell_types | sequence | |
| cell_type_groups | sequence | |
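The columns above describe one notebook per row. As a minimal sketch of working with such rows (assuming they are stored as JSON Lines with exactly these field names; the file name below is hypothetical and the filter thresholds are purely illustrative), they could be loaded and filtered with pandas:

```python
import pandas as pd

# Hypothetical file name; assumes one JSON object per line with the fields listed above.
rows = pd.read_json("github_jupyter_sample.jsonl", lines=True)

# Keep reasonably sized, text-heavy notebooks (thresholds here are illustrative only).
mask = (rows["size"] < 1_000_000) & (rows["alphanum_fraction"] > 0.3)
sample = rows.loc[mask, ["max_stars_repo_name", "max_stars_repo_path",
                         "max_stars_count", "avg_line_length"]]

print(sample.head())
```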
hexsha: e7603c32052dc5b862dec04627952ab4b17db4ba
size: 44,399
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Interactives/Radioactivity.ipynb
max_stars_repo_name: greywormz/Astrophysics
max_stars_repo_head_hexsha: 60e9bc118bc26a129cf8c1a88e1c8b99ac0a574e
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 19
max_stars_repo_stars_event_min_datetime: 2018-06-04T05:50:25.000Z
max_stars_repo_stars_event_max_datetime: 2022-02-22T08:28:50.000Z
max_issues_repo_path: Interactives/Radioactivity.ipynb
max_issues_repo_name: greywormz/Astrophysics
max_issues_repo_head_hexsha: 60e9bc118bc26a129cf8c1a88e1c8b99ac0a574e
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: 22
max_issues_repo_issues_event_min_datetime: 2018-06-03T17:49:24.000Z
max_issues_repo_issues_event_max_datetime: 2019-04-04T15:26:05.000Z
max_forks_repo_path: Interactives/Radioactivity.ipynb
max_forks_repo_name: greywormz/Astrophysics
max_forks_repo_head_hexsha: 60e9bc118bc26a129cf8c1a88e1c8b99ac0a574e
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 6
max_forks_repo_forks_event_min_datetime: 2018-06-20T14:03:31.000Z
max_forks_repo_forks_event_max_datetime: 2021-01-04T10:27:37.000Z
avg_line_length: 48.79011
max_line_length: 510
alphanum_fraction: 0.570531
[ [ [ "# Radioactive Decay Interactives\n\n## Interactive Figure 1: Model of Radioactive Decay\n\nThis first figure takes a population of 900 atoms and models their radioactive decay.", "_____no_output_____" ], [ "This first interactive is designed to allow you to explore what happens during radioactive decay of some isotope. An *isotope* refers to a particular version of an element. For example, the non-radioactive Carbon-12 isotope has 6 protons and 6 neutrons in its nucleus but the radioactive Carbon-14 isotope has 6 protons and 8 neutrons in its nucleus. Different isotopes of the same element behave identically in terms of chemistry and bonding to other atoms, but their nuclear properties can differ.\n\nDuring radioactive decay, a *parent isotope* is said to decay into a *daughter isotope*. So, for example, the parent isotope of Carbon-14 decays into Nitrogen-14. There is no way to predict when any one nucleus of a *parent isotope* will decay into a *daughter isotope*. That said, if you look at a large number of nuclei of a parent isotope, they exhibit a very simple property:\n\n> **The same *fraction* of a radioactive parent isotope will decay over the same amount of time.**\n \nThe time it takes for one-half of a population of parent isotopes to decay (on average) into daughter isotopes is called the *half-life* of that isotope. Different parent isotopes can have very different half-lifes. **NOTE:** This interactive shows a simulation of the decay of only 900 atoms. The radioactive decay is still modelled as occurring randomly for any one atom, so the simulation will show slightly different results on different runs!!", "_____no_output_____" ], [ "Some questions to consider \n1. After 1 half-life, about 50% of a parent isotope should have decayed and become daughter isotope. Use the interactive graphic below and adjust the elapsed time to figure out how long one half-life is for each isotope. Explain your approach.\n2. How much of a parent isotope is left after 2 half-lifes? 3 half-lifes? 4 half-lifes? 5 half-lifes? Explain how you figured this out.\n3. You should have found about 25% of the original amount of parent isotope is left after 2 half-lifes. This may seem surprising since in the first half-life 50% of the original amount of parent isotope decayed. Explain why 'less' of it decayed during the second half-life. 
**HINT** Consider the very simple property of radioactive decay we highlighted above.", "_____no_output_____" ] ], [ [ "from IPython.display import display\nimport numpy as np\nimport bqplot as bq\nimport ipywidgets as widgets\nimport random as random\nimport pandas as pd\nimport number_formatting as nf\nfrom math import ceil, floor, log10", "_____no_output_____" ], [ "## Originally developed June 2018 by Samuel Holen\n##\n## Edits by Juan Cabanela October 2018 to allow changes in the GUI.\n## - Made the display of the precise number/faction of parent/daught atoms instructor\n## configurable.\n## - Fixed a problem with the display of data, forced data to only be displayed for first\n## 10 half-lifes regardless of actual generated decay times (this allows hard-coding of)\n## num_ticks and tick values.\n\n## Pre-construct model of radioactive decay of a population\n## of parent and daughter atoms.\n## \n\n# GUI Configuration Parameters\nshow_counts = True\nmax_half_lifes = 10 # maximum half-lives to graph\ntime_ticks = 6 # number of ticks for the time axis\n# half-lives to place ticks on horizontal axis\nhalf_life_ticks = np.linspace(0, max_half_lifes, time_ticks)\n\n# Constants Related to decay of the parent species to the daughter species\nN_parent = 900 # initial number of parent atoms (should be a perfect square)\nN_daughter = 0 # initial number of daughter atoms\ntau = 1 # placeholder for the half-life of the parent species \nh = 0.025 # time step (in half lives)\nmu = np.log(2.) / tau # constant for decay time distribution \nPlot_all_times = True # Plot all times (otherwise, selects only before time slider value)\n\n# Initialize tracking of number of atoms\nParent_counts = [] # list of number of parent atoms \nDauther_counts = [] # list of number of daughter atoms\n\n# Generate a uniform random distribution of N_parent numbers from 0 to 1\nz = np.random.rand(N_parent)\n\n# Function to convert uniform distribution of random numbers to\n# a distribution weighted to model radiactive decay. The times are in number of \n# half-lives. 
\n#\n# The unsorted data representing the number of half-lifes until the individual\n# decay of each atom.\ndecay_times = -np.log(1 - z) / mu\ndecay_times_sorted = np.sort( decay_times )\n\n# Genereate array of numbers of atoms left\n# Adjusted so that each count contains 0 and N_parent\nParent_counts = np.arange(N_parent,-1, -1, dtype='int') # Number of parent atoms\nDaughter_counts = np.ones_like(Parent_counts)\nDaughter_counts = N_parent - Parent_counts # Number of daughter atoms\n\n#\n# Construct Pandas data frames\n#\n\n# Time column adjusted to include t=0\ndecay_data = pd.DataFrame()\ndecay_data['time'] = np.concatenate((np.zeros(1), decay_times_sorted))\ndecay_data['Parent'] = Parent_counts\ndecay_data['Daughter'] = Daughter_counts\n\n# Data array for species\nspecies = pd.DataFrame()\nspecies['parent_long'] = ['Generic', 'Carbon', 'Thallium','Uranium','Rubidium']\nspecies['daughter_long'] = ['Generic', 'Nitrogen','Lead','Thorium','Strontium']\nspecies['parent_short'] = ['Parent', 'C-14','Tl-208','U-235','Rb-87']\nspecies['daughter_short'] = ['Daughter', 'N-14','Pb-208','Th-231','Sr-87']\nspecies['half-lives'] = [tau, 5730, 3.053 * 60, 703.8, 48.8]\nspecies['step-size'] = [h, 125, 0.05 * 60, 15, 1]\nspecies['timeunits'] = ['half-lives', 'years', 'seconds', 'million years', 'billion years']", "_____no_output_____" ], [ "##\n## Define functions to respond to controls on population plots.\n##\n\ndef UpdateSpecies(change=None):\n ##\n ## Deal with possible changes of species\n ##\n \n # Generate a new uniform random distribution of N_parent numbers from 0 to 1 and\n # then convert uniform distribution to one weighted to model radioactive decay.\n z = np.random.rand(N_parent)\n decay_times = -np.log(1 - z) / mu\n decay_times_sorted = np.sort( decay_times )\n decay_data['time'] = np.concatenate((np.zeros(1), decay_times_sorted))\n \n # Reset the time to zero\n Time_slide.value = 0\n \n # Adjust half-life data and limits on plots appropriately \n new_index = species.loc[species.parent_long == pick_Species.value].index[0]\n \n # Set the half-life based the selected species\n hf = species['half-lives'][new_index]\n \n # Set the limit on the time sider to be 10 half-lifes\n Time_slide.max = max_half_lifes*hf\n \n # Set the time step on slider\n Time_slide.step = species['step-size'][new_index]\n\n # Updates the time slider label/units\n unit_label.value = species['timeunits'][new_index]\n \n # Updates the time axes on the plot\n x_time.max = Time_slide.max\n ax_x_time.scale = x_time\n ax_x_time.label = unit_label.value\n \n # Update tick values\n ax_x_time.tick_values = list(half_life_ticks*hf)\n\n # Update the legend\n parent_label_new = species['parent_short'][new_index]\n daughter_label_new = species['daughter_short'][new_index]\n line_parent.labels = [parent_label_new]\n line_daughter.labels = [daughter_label_new]\n \n # Update the species in the box that shows how many are present \n parent_label.value = parent_label_new + ' produced'\n daughter_label.value = daughter_label_new + ' remaining'\n\n # Now call the function for updating the plot in the event of a time change\n UpdateTimes()\n\n \ndef UpdateFraction(change=None):\n ##\n ## Deal with whether fraction view or number view selected to set scales for plot,\n ## y values for plot, and label values\n ##\n if frac_or_num.value == False:\n # Number count mode enabled\n\n # Update axes and scales\n fig_counts.axes = [ax_x_time, ax_y_number]\n line_parent.scales={'x': x_time, 'y': y_number}\n line_daughter.scales={'x': x_time, 'y': 
y_number}\n pts_parent.scales={'x': x_time, 'y': y_number}\n pts_daughter.scales={'x': x_time, 'y': y_number}\n line_time.scales={'x': x_time, 'y': y_number}\n\n # Update 'time line' limits\n line_time.y = [0, N_parent] \n else:\n # Fraction mode enabled\n\n # Updated axes and scales\n fig_counts.axes = [ax_x_time, ax_y_fraction]\n line_parent.scales={'x': x_time, 'y': y_fraction}\n line_daughter.scales={'x': x_time, 'y': y_fraction}\n pts_parent.scales={'x': x_time, 'y': y_fraction}\n pts_daughter.scales={'x': x_time, 'y': y_fraction}\n line_time.scales={'x': x_time, 'y': y_fraction}\n\n # Update 'time line' limits\n line_time.y = [0, 1]\n\n UpdateTimes()\n\n \ndef UpdateTimes(change=None):\n ##\n ## Deal with changes in the time slider\n ##\n\n # Recall half-life of this species\n species_idx = species.loc[species.parent_long == pick_Species.value].index[0]\n hf = species['half-lives'][species_idx]\n\n # Update time label with 3 significant figures\n Time_label.value = str(nf.SigFig(Time_slide.value, 3))\n \n # Get the array of times\n time_arr = hf * decay_data['time']\n \n # Set the times to be x values\n line_daughter.x = time_arr\n line_parent.x = line_daughter.x\n pts_daughter.x = line_daughter.x\n pts_parent.x = line_daughter.x\n line_time.x = [Time_slide.value, Time_slide.value]\n \n # Changes the color of population to reflect the decays\n for i in range(N_parent):\n if Time_slide.value >= decay_times[i]*hf:\n Colors[i] = 'blue'\n else:\n Colors[i] = 'red'\n population_scat.colors = Colors\n \n # Identify where we are in the data set\n i = 0\n while i < N_parent + 1 and hf*decay_data['time'][i] < Time_slide.value: \n i += 1\n if i > 0:\n i -= 1\n \n # Update selection of the parent and daughter data to be plotted\n if Plot_all_times:\n daughter_decay = decay_data['Daughter']\n parent_decay = decay_data['Parent']\n else:\n daughter_decay = decay_data['Daughter'][0:i+1]\n parent_decay = decay_data['Parent'][0:i+1] \n \n line_parent.y = parent_decay\n line_daughter.y = daughter_decay\n \n ##\n ## Deal with whether fraction view or number view selected to set scales for plot,\n ## y values for plot, and label values\n ##\n if frac_or_num.value == False:\n # Number count mode enabled\n\n # Update number of parent and daughter in labels\n parent_present.value = str(decay_data['Parent'][i])\n daughter_present.value = str(decay_data['Daughter'][i])\n else:\n # Fraction mode enabled\n \n # Update the x and y arrays for the parent and daughter lines\n line_parent.y = (1/N_parent)*parent_decay\n line_daughter.y = (1/N_parent)*daughter_decay\n \n # Update number of parent and daughter in labels\n parent_present.value = '{:.3f}'.format((1/N_parent)*decay_data['Parent'][i])\n daughter_present.value = '{:.3f}'.format((1/N_parent)*decay_data['Daughter'][i])\n \n # Update parent and daughter lines\n pts_parent.y = line_parent.y\n pts_daughter.y = line_daughter.y\n \n # Update the tooltip labels and formats (Bug? 
Doesn't appear to be working)\n #pts_parent.tooltip.labels[1] = parent_label_new\n #pts_daughter.tooltip.labels[1] = daughter_label_new\n #if frac_or_num.value == False:\n # pts_parent.tooltip.formats[1] = '3.0f'\n # pts_daughter.tooltip.formats[1] = '3.0f'\n #else:\n # pts_parent.tooltip.formats[1] = '0.3f'\n # pts_daughter.tooltip.formats[1] = '0.3f'\n \n", "_____no_output_____" ], [ "##\n## Set up counts versus time plot\n##\n\n# Set up Species to be Generic\ninit_species_ind = 0 \n\n# Set up initial time\ninit_time = 0\ninit_time_idx = 0\n\n# Set up axes\nx_time = bq.LinearScale(min = 0, max=max_half_lifes)\ny_number = bq.LinearScale(min = 0, max=N_parent)\ny_fraction = bq.LinearScale(min = 0, max=1)\n\n# Labels and scales for Axes\nax_x_time = bq.Axis(label=species['timeunits'][init_species_ind], scale=x_time, num_ticks = time_ticks,\n tick_values = half_life_ticks )\nax_y_number = bq.Axis(label='Number of atoms', scale=y_number, orientation='vertical')\nax_y_fraction = bq.Axis(label='Fraction of atoms', scale=y_fraction, orientation='vertical')\n\n# Define tooltip (Bug: doesn't allow relabeling Tooltips, also needs to apply to Scatter, not Lines)\n#def_tt_parent = bq.Tooltip(fields=['x', 'y'], formats=['.2f', '3.0f'], labels=['time', species['parent_short'][init_species_ind]])\n#def_tt_daughter = bq.Tooltip(fields=['x', 'y'], formats=['.2f', '3.0f'], labels=['time', species['daughter_short'][init_species_ind]])\ndef_tt_parent = bq.Tooltip(fields=['x', 'y'], formats=['.2f', '.3f'], labels=['time', 'amount of parent isotope'])\ndef_tt_daughter = bq.Tooltip(fields=['x', 'y'], formats=['.2f', '.3f'], labels=['time', 'amount of daughter isotope'])\n\n# Define the Lines and Scatter plots\n# NOTE: Scatter only necessary to allow tooltips to function.\ninit_x = decay_data['time']*species['half-lives'][init_species_ind]\nif Plot_all_times:\n init_parent = decay_data['Parent']\n init_daughter = decay_data['Daughter']\nelse:\n init_parent = decay_data['Parent'][0:init_time_idx]\n init_daughter = decay_data['Daughter'][0:init_time_idx]\n \npts_parent = bq.Scatter(x=init_x, y=init_parent,\n scales={'x': x_time, 'y': y_number}, marker='circle', default_size=2,\n display_legend=False, colors=['red'], labels=[species['parent_short'][init_species_ind]], \n tooltip=def_tt_parent)\npts_daughter = bq.Scatter(x=init_x, y=init_daughter,\n scales={'x': x_time, 'y': y_number}, marker='circle', default_size=2,\n display_legend=False, colors=['blue'], labels=[species['daughter_short'][init_species_ind]],\n tooltip=def_tt_daughter)\nline_parent = bq.Lines(x=init_x, y=init_parent, \n scales={'x': x_time, 'y': y_number}, display_legend=True, colors=['red'], \n labels=[species['parent_short'][init_species_ind]], )\nline_daughter = bq.Lines(x=init_x, y=init_daughter, \n scales={'x': x_time, 'y': y_number}, display_legend=True, colors=['blue'], \n labels=[species['daughter_short'][init_species_ind]] )\n\n# Set up a vertical line on this plot to indicate the current time\ntimes_x = [init_time, init_time]\ntimes_y = [0, N_parent]\nline_time = bq.Lines(x=times_x, y=times_y,\n scales={'x': x_time, 'y': y_number}, \n colors=['greenyellow'])\n\n# Creates figure for plot\nfig_counts = bq.Figure(axes=[ax_x_time, ax_y_number], marks=[line_parent, line_daughter, line_time, pts_parent, pts_daughter], \n legend_location='right', legend_style={'fill': 'white'}, \n title='Counts versus Time', background_style={'fill': 'black'}, \n layout={'width': '500px', 'min_height': '400px'},\n animation=1000)", "_____no_output_____" ], [ "# 
Slider widget to control the amount of time that has passed\n# Set for generic half-life situation initially (0 to 10 half-lifes)\nTime_slide = widgets.FloatSlider(\n value=0.,\n description='Time',\n min=0.,\n max=max_half_lifes,\n step=h,\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=False,\n readout_format='.1f',\n layout=widgets.Layout(overflow_x='visible',\n overflow_y='visible',\n width='400px',\n max_width='500px',\n min_width='250px')\n)\n\n# Widget to display the number of parent atoms present\nparent_present = widgets.Text(\n value = str(N_parent),\n style = {'description_width': 'initial'},\n #description = species['parent_short'][0]+' remaining',\n disabled = True,\n layout=widgets.Layout(overflow_x='visible',\n overflow_y='visible',\n width='175px',\n max_width='300px',\n min_width='125px')\n)\n\n# Widget to display the number of daughter atoms present\ndaughter_present = widgets.Text(\n value = str(0),\n style = {'description_width': 'initial'},\n #description = species['daughter_short'][0]+' produced',\n disabled = True,\n layout=widgets.Layout(overflow_x='visible',\n overflow_y='visible',\n width='175px',\n max_width='300px',\n min_width='125px')\n)\n\n# Widgets to label the time slider with units\nTime_label = widgets.Label(value=str(Time_slide.value))\nunit_label = widgets.Label(value=str(species['timeunits'][0]))\nComposite_Time_Label = widgets.HBox([Time_label, unit_label],\n layout=widgets.Layout(width='200px'))\n\n# Labels for the parent/daughter present displays\nparent_label = widgets.Label(value=species['parent_short'][0]+' remaining',\n layout=widgets.Layout(width='200px', \n overflow_x='visible',\n overflow_y='visible') )\ndaughter_label = widgets.Label(value=species['daughter_short'][0]+' produced',\n layout=widgets.Layout(width='200px', \n overflow_x='visible',\n overflow_y='visible') )\n\n# Checkbox to choose whether to display the number of each species\n# or the fraction of each\nfrac_or_num = widgets.Checkbox(value=False, description='Display as fraction of atoms')\n\n# Widget to allow one to choose which species to work with\npick_Species = widgets.RadioButtons(options=species['parent_long'][:],\n value='Generic', description='Species:', disabled=False,\n layout=widgets.Layout(width='250px'))\n", "_____no_output_____" ], [ "# Scale for population figure\nx_sc = bq.LinearScale(min=1, max=np.sqrt(N_parent))\ny_sc = bq.LinearScale(min=1, max=np.sqrt(N_parent))\n\n# Axes for population figure\nax_x = bq.Axis(scale=x_sc, num_ticks=0)\nax_y = bq.Axis(scale=y_sc, orientation='vertical', num_ticks=0)\n\n# Creates an array of x values: [1,2,...,30,1,2,...30,.....,1,2,...,30]\nx_ls = []\nfor i in range(1,int(np.sqrt(N_parent))+1):\n x_ls.append(float(i))\nx_ls = x_ls * int(np.sqrt(N_parent))\nx_arr = np.array(x_ls)\n\n# Creates an array of y values: [1,1,...,1,2,2,...2,......,30,30,...,30]\ny_ls = []\nfor i in range(1,int(np.sqrt(N_parent))+1):\n y_ls += [float(i)] * int(np.sqrt(N_parent))\ny_arr = np.array(y_ls) \n\n# Creates a color array with the same number of entries as the number of atoms in\n# the sample\nColors = ['red'] * N_parent\n\n# Plot the population model\npopulation_scat = bq.Scatter(x=x_arr, y=y_arr, scales={'x': x_sc, 'y': y_sc}, colors =['red'])", "_____no_output_____" ], [ "# Picking a new species resets everything\npick_Species.observe(UpdateSpecies, names=['value'])\n\n# Update view from fraction to/from number\nfrac_or_num.observe(UpdateFraction, names=['value'])\n\n# Update 
times\nTime_slide.observe(UpdateTimes, names=['value'])\nTime_label.observe(UpdateTimes, names=['value'])\n\n# Figure for the population\nfig_population = bq.Figure(title='Population of Atoms', marks=[population_scat], axes=[ax_x, ax_y], \n background_style={'fill' : 'black'},padding_x = 0.025,\n min_aspect_ratio=1, max_aspect_ratio=1)\n\n# Boxes to organize display\nparent_box = widgets.HBox([parent_label, parent_present],\n layout=widgets.Layout(overflow_x='visible', overflow_y='visible'))\ndaughter_box = widgets.HBox([daughter_label, daughter_present],\n layout=widgets.Layout(overflow_x='visible', overflow_y='visible'))\n# Set visibility of the exact counts/fractions\nif (show_counts == False):\n parent_box.layout.visibility = 'hidden'\n daughter_box.layout.visibility = 'hidden'\n\nvalue_box = widgets.VBox([parent_box, daughter_box])\nspecies_box = widgets.HBox([value_box, pick_Species])\n\nslide_box = widgets.HBox([Time_slide, Composite_Time_Label])\nslide_check_box = widgets.VBox([slide_box, frac_or_num])\n\ntop_box = widgets.HBox([fig_counts, fig_population],\n layout=widgets.Layout(width='900px'))\ntop_box.children[0].layout.width = '450px'\ntop_box.children[1].layout.width = '450px'\n\nbottom_box = widgets.HBox([species_box, slide_check_box],\n layout=widgets.Layout(width='900px'))\nbottom_box.children[0].layout.width = '450px'\nbottom_box.children[1].layout.width = '450px'\n\n# Final display\nFinal = widgets.VBox([top_box, bottom_box])\nFinal.layout.overflow = 'hidden'\ndisplay(Final)", "_____no_output_____" ] ], [ [ "## Interactive Figure 2: Geochron Plot", "_____no_output_____" ], [ "Assuming a non-radiogenic isotope (that is, an isotope that is not the result of radioactive decay) that also will not decay, its amount should be constant. This means that for different mineral samples we can measure the ratio of parent isotope versus the non-radiogenic isotope ($P/D_i$) and daughter isotope ($D$) versus the non-radiogenic isotope ($D/D_i$) to build an geochron plot. For example, using the following isotopes\n\n- $D_i$ (non-radiogenic isotope of daughter element)\n- $D$ (Daughter Isoptope)\n- $P$ (Parent isotope)\n\nan geochron plot could plot $D/D_i$ versus $P/D_i$. \n\nWhat sets the *geochron method* (also known as the *isochron method*) apart from the just measuring parent and daughter abundances is the use of the non-radiogenic isotope of the daughter element. This avoids the assumption of no initial daughter isotope before the rock solidified (radioactive decay can occur while rock is molten).\n\nSome minerals in the rock incorporate the parent better than daughter which is why the initial amount of parent \nisotope versus daughter isotope can vary. We expect daughter versus non-radiogenic isotope ratio to be constant\nif we pick the non-radiogenic isotope to be the same element as the daughter isotope.\n\nWith all this said, it is actually often not this simple as many daughter isotopes are themselves radioactive and decay, leading to a chain of reactions, so comparing abundances of parent to daughter isotopes is not simple.\n\n*Note:* The idea for the geochron dating interactive came from a Isochron Diagram Java app at *ScienceCourseware.org*. However that app had some issues in that it didn't divide by a non-radiogenic isotope (or at least didn't mention it). 
In fact, they used $D_i$ for the initial amount of daughter isotope instead of the non-radiogenic isotope of the same element as the daughter isotope.", "_____no_output_____" ], [ "Consider the following questions:\n\n1. Notice that the geochron shown here is using the parent and daugther isotope ratios for 4 mineral samples in a rock. Consider the sample that starts off with the most parent isotope initially (the point initially farthest to the right). Adjust the geochron to show the samples after 1 half-life. How much of that parent isotope was there initially? How much is there after 1 half-life? Does this make sense? \n2. Consider the sample that starts off with the least parent isotope initially (the point initially farthest to the left). Adjust the geochron to show the samples after 1 half-life. How much of that parent isotope was there initially? How much is there after 1 half-life? Does this make sense? \n3. What fraction of the parent isotope should have decayed into daughter isotope after one half-life? Does it matter if the mineral sample we look at started with more parent isotope than another mineral sample in the same rock? Why or why not? \n4. Why is it as time elapses that the points representing the 4 mineral samples all move toward the upper left? Can you explain why their motions are parallel to one another?\n4. Imagine you are looking at the parent isotope Rubidium-87 which decays into Strontium-87. You examine 4 mineral samples in an meteorite and they line up in a line with a slope of 0.0666. Determine the age of that meteorite (or rather, the time since it solidified). Also explain how we can determine how much of the daughter isotope was in the meteorite initially.\n", "_____no_output_____" ] ], [ [ "##\n## Define the various isotopes we want to consider\n##\n\nisotope_info = pd.DataFrame(columns=['Name', 'PName', 'PAbbrev', 'DName', 'DAbbrev', 'DiName', 'DiAbbrev', 'HalfLife', 'HLUnits'])\nisotope_info['index'] = ['generic', 'Rb87']\nisotope_info['Name'] = ['Generic', 'Rb-87->Sr-87']\nisotope_info['PName'] = ['Parent', 'Rubidium-87']\nisotope_info['PAbbrev'] = ['P', 'Rb-87']\nisotope_info['DName'] = ['Daughter', 'Strontium-87']\nisotope_info['DAbbrev'] = ['D', 'Sr-87']\nisotope_info['DiName'] = ['Non-Radiogenic Isotope of Daughter Element', 'Strontium-86']\nisotope_info['DiAbbrev'] = ['D_i', 'Sr-86']\nisotope_info['HalfLife'] = [ 1, 48.8 ]\nisotope_info['HLUnits'] = [ 'half-lives', 'Billion years']\nisotope_info = isotope_info.set_index('index')\n\n# Set initial isotope to plot\ninit_isotope = 'generic'", "_____no_output_____" ], [ "##\n## Define the initial amounts of parent and daughter in the sample.\n##\n## In principle, I would change this depending on the isotopes we plot. 
But I am only plotting\n## Rb87 --> Sr-87, since that is the most classical use of this Geochron approach.\n##\n\n# Range of P to D_i fractions and initial amounts of D to D_i to consider\nP2Di_min = 0.05\nP2Di_max = 0.40\nD2Di0_min = 0.05\nD2Di0_max = 0.75\n\n# Generate three mineral samples in different thirds of the entire range\nrange_P2Di = (P2Di_max-P2Di_min)\n\n# Create sample amounts\nn_samples = 4\nnums = np.array(list(range(1, n_samples+1)))\ninitial_samples = pd.DataFrame(index=nums)\ninitial_D2Di0 = D2Di0_min + (D2Di0_max - D2Di0_min) * np.random.random()\ninitial_samples['P2Di'] = P2Di_min + (range_P2Di/n_samples) * (nums - np.random.random(n_samples))\ninitial_samples['D2Di'] = initial_D2Di0*np.ones_like(nums)\n", "_____no_output_____" ], [ "##\n## Define functions to call when building interactive plot\n##\n\ndef amt_left(sample_in, taus):\n # Generate a sample DataFrame after tau half-lifes given an initial DataFrame\n sample = sample_in.copy(deep = True)\n sample['P2Di'] = sample_in['P2Di']*((1/2)**(taus))\n sample['D2Di'] = sample_in['D2Di'] + sample_in['P2Di']*(1 - (1/2)**(taus))\n return sample\n\ndef line_points(sample):\n global x_min, x_max, y_min, y_max, initial_D2Di0\n \n # Determine the end points of a line going through the sample points.\n x_range = x_max - x_min\n y_range = y_max - y_min\n \n # Slope (extrapolate from first two points - could be done by a fit to the points)\n slope = (sample['D2Di'][2]-sample['D2Di'][1])/(sample['P2Di'][2]-sample['P2Di'][1])\n y_final = initial_D2Di0 + slope*x_range\n x_points = (x_min, x_max)\n y_points = (initial_D2Di0, y_final)\n return x_points, y_points, slope\n\ndef init2current(samples0, samples):\n # Compute the lines connecting initital and final points for plotting\n n_pts = len(samples0)\n\n xlist = []\n ylist = []\n for pt in range(1, n_pts+1):\n x = np.array([ samples0['P2Di'][pt], samples['P2Di'][pt] ])\n y = np.array([ samples0['D2Di'][pt], samples['D2Di'][pt] ])\n xlist.append(x)\n ylist.append(y)\n \n return(xlist, ylist)\n \ndef HL_changed(change):\n global isotope, sample, initial_samples, dots_current, line_current, connectors, slope_label\n \n # Determine half-life of this isotope\n idx = (isotope_info.Name == isotope.value)\n HL = float(isotope_info[idx].HalfLife.tolist()[0])\n \n # How many half-lives have passed? 
Use this to get new sample and line info\n this_tau = HL_slider.value / HL\n sample = amt_left(initial_samples, this_tau)\n x_sample, y_sample, slope = line_points(sample)\n \n # Update plot\n dots_current.x = sample['P2Di']\n dots_current.y = sample['D2Di']\n line_current.x = x_sample\n line_current.y = y_sample\n slope_label.value = 'Slope: {0:0.4f}'.format(slope)\n xlist, ylist = init2current(initial_samples, sample)\n connectors.x = xlist\n connectors.y = ylist\n \n \ndef isotope_changed(change):\n global ax_x_P2Di, ax_y_D2Di, HL_slider, HLlabel, UnitsText, Max_half_lives\n\n # Extract the necessary isotope descriptors from the Pandas DataFrame\n idx = (isotope_info.Name == change.new)\n HL = float(isotope_info[idx].HalfLife.tolist()[0])\n HLUnits = isotope_info[idx].HLUnits.tolist()[0]\n PAbbrev = isotope_info[idx].PAbbrev.tolist()[0]\n DAbbrev = isotope_info[idx].DAbbrev.tolist()[0]\n DiAbbrev = isotope_info[idx].DiAbbrev.tolist()[0]\n\n # Get old half-life\n idx_old = (isotope_info.Name == change.old)\n HL_old = float(isotope_info[idx_old].HalfLife.tolist()[0])\n\n # Determine current age reading from slider and adjust to new units\n init_age = HL_slider.value \n \n # Hard code generic versus others\n if (change.new != isotope_info.loc['generic'].Name):\n HL_slider.description = \"Time\"\n else: \n HL_slider.description = \"Half-lives\" \n\n # Adjust time scales\n if (HL_old < HL):\n # Adjust maximum limits first before adjusting values (since new HL > old HL)\n HL_slider.max = Max_half_lives*HL\n HLlabel.max = HL_slider.max \n HL_slider.value = HL*(init_age/HL_old)\n HLlabel.value = HL_slider.value \n else:\n # Adjust maximum limits after adjusting values (since new HL < old HL)\n HL_slider.value = HL*(init_age/HL_old)\n HLlabel.value = HL_slider.value\n HL_slider.max = Max_half_lives*HL\n HLlabel.max = HL_slider.max \n \n # Set the axes and other labels to display\n UnitsText.value = HLUnits\n ax_x_P2Di.label = '{0} / {1}'.format(PAbbrev, DiAbbrev)\n ax_y_D2Di.label = '{0} / {1}'.format(DAbbrev, DiAbbrev)\n\n", "_____no_output_____" ], [ "##\n## Set up isochron plot\n##\n\n# Largest possible fraction of decay (only go out to 5 half-lives)\nMax_half_lives = 5\nMax_decay_fraction = 1 - (1/2)**(Max_half_lives)\n\n# detemine maximum and minimum values of X and Y axes\nx_step = 0.05\nx_min = 0\nx_max = x_step * ceil(initial_samples['P2Di'][n_samples] / x_step)\ny_step = 0.04\ny_min = y_step * floor(initial_D2Di0 / y_step)\ny_max = y_step * ceil((initial_D2Di0 + initial_samples['P2Di'][n_samples] * Max_decay_fraction) / y_step)\n\n# Labels and scales for Axes\nx_P2Di = bq.LinearScale(min = x_min, max = x_max)\ny_D2Di = bq.LinearScale(min = y_min, max = y_max)\nax_x_P2Di = bq.Axis(label='P / D_i', scale=x_P2Di)\nax_y_D2Di = bq.Axis(label='D / D_i', scale=y_D2Di, orientation='vertical')\n\n# Set up initial conditions\ntaus = 0 # zero half lives past\nsample = amt_left(initial_samples, taus)\n\n##\n## Define the lines\n##\n\n# Initial amount of daughter line (with dots for initial amounts of parent)\nx_init, y_init, slope_init = line_points(initial_samples)\nline_initial = bq.Lines(x=x_init, y=y_init, scales={'x': x_P2Di, 'y': y_D2Di}, \n line_style='dashed', colors=['red'], labels=['Initial Sample'])\ndots_initial = bq.Scatter(x=initial_samples['P2Di'], y=initial_samples['D2Di'], scales={'x': x_P2Di, 'y': y_D2Di}, \n colors=['white'], stroke='red', fill= True, labels=['Initial Isochron'])\n\n# Current quantities on isochron line\nx_sample, y_sample, slope = 
line_points(sample)\nline_current = bq.Lines(x=x_sample, y=y_sample, scales={'x': x_P2Di, 'y': y_D2Di}, \n line_style='solid', colors=['red'], labels=['Current Isochron'])\ndots_current = bq.Scatter(x=sample['P2Di'], y=sample['D2Di'], scales={'x': x_P2Di, 'y': y_D2Di}, \n colors=['red'], stroke='red', fill= True, labels=['Current Isochron'])\n\n# Connect Initial and Current quantities on isochron line\nxlist, ylist = init2current(initial_samples, sample)\nconnectors = bq.Lines(x=xlist, y=ylist, scales={'x': x_P2Di, 'y': y_D2Di}, \n line_style='dotted', colors=['black'])\n\n##\n## Construct plot\n##\nisochron = bq.Figure(axes=[ax_x_P2Di, ax_y_D2Di], \n marks=[connectors, line_initial, dots_initial, line_current, dots_current],\n title='Geochron Diagram', \n layout={'width': '700px', 'height': '500px', \n 'max_width': '700px', 'max_height': '500px',\n 'min_width': '600px', 'min_height': '400px'})\n\n##\n## Construct controls\n##\n\n# Select Generic or Specific Isotopes\nisotope = widgets.RadioButtons(options=list(isotope_info.Name), \n value=isotope_info.loc[init_isotope].Name, description='Isotope:', \n disabled=False, \n layout=widgets.Layout(height='75px', max_height='100px', min_height='50px', \n width='200px', max_width='300px', min_width='100px'))\nisotope.observe(isotope_changed, 'value')\n\n# Slider and text field controling age\nidx = (isotope_info.Name == isotope.value)\nHL = float(isotope_info[idx].HalfLife.tolist()[0])\nHLUnits = isotope_info[idx].HLUnits.tolist()[0]\n\nHL_slider = widgets.FloatSlider(value=0, min=0, max=Max_half_lives*HL, step=0.02,\n description='Half-lives', disabled=False,\n continuous_update=False, orientation='horizontal',\n readout=False, readout_format='.2f',\n layout=widgets.Layout(height='75px', max_height='100px', min_height='50px', \n width='200px', max_width='300px', min_width='100px'))\nHL_slider.observe(HL_changed, 'value')\n\n# Get units value and units for age label, then apply them\nHLlabel = widgets.BoundedFloatText(value = HL_slider.value, min = HL_slider.min, max = HL_slider.max, \n step = HL_slider.step,\n layout={'width': '75px', 'height': '50px', \n 'max_width': '75px', 'max_height': '75px',\n 'min_width': '50px', 'min_height': '50px'})\nUnitsText = widgets.Label(value=HLUnits)\nage_label = widgets.HBox([HLlabel, UnitsText])\n# Link HL slider with this text\nwidgets.jslink((HL_slider, 'value'), (HLlabel, 'value'))\n\n# Describe slope\nslope_label = widgets.Label(value = 'Slope: {0:0.4f}'.format(slope),\n layout={'align_items':'center','align_content':'center', \n 'justify_content':'center', \n 'width': '100px', 'height': '50px', \n 'max_width': '100px', 'max_height': '75px',\n 'min_width': '50px', 'min_height': '50px'})\n\n\ncontrols = widgets.VBox( [isotope, HL_slider, age_label, slope_label], \n layout=widgets.Layout(align_content='center', align_items='center', \n justify_content='center', \n width='300px', height='500px', \n max_width='300px', max_height='500px',\n min_width='100px', min_height='400px',\n overflow_x='hidden', overflow_y='hidden') )\n\ndisplay(widgets.HBox( [isochron, controls] ) )", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
hexsha: e7605e355310bd48e85f781a0ac3ac36d4067478
size: 13,179
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: notebooks/BertModel.ipynb
max_stars_repo_name: dwszai/news-summarizer
max_stars_repo_head_hexsha: 0a5583b979d8751a2af6af235b03d0ca2fa770ad
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2021-03-08T06:59:41.000Z
max_stars_repo_stars_event_max_datetime: 2021-03-08T06:59:41.000Z
max_issues_repo_path: notebooks/BertModel.ipynb
max_issues_repo_name: dwszai/news-summarizer
max_issues_repo_head_hexsha: 0a5583b979d8751a2af6af235b03d0ca2fa770ad
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: notebooks/BertModel.ipynb
max_forks_repo_name: dwszai/news-summarizer
max_forks_repo_head_hexsha: 0a5583b979d8751a2af6af235b03d0ca2fa770ad
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 33.966495
max_line_length: 2,306
alphanum_fraction: 0.57971
[ [ [ "from src.modelling.bert_model import BertModel\nimport os", "_____no_output_____" ], [ "DATA_ORIGINAL = 'news'\nDATA_SUMMARIZED = 'summary'\nTEST_INDEX = 1", "_____no_output_____" ], [ "bert_model = BertModel()", "_____no_output_____" ], [ "data_path = os.path.join(os.getcwd(), r\"data\\bbc_news.csv\")\ndata = bert_model.load_data(data_path)\n\ninput_text = data[DATA_ORIGINAL][TEST_INDEX]", "_____no_output_____" ] ], [ [ "#### Load model", "_____no_output_____" ] ], [ [ "bert_model.train()", "_____no_output_____" ] ], [ [ "#### Original text", "_____no_output_____" ] ], [ [ "print(input_text)", "b'Dollar gains on Greenspan speech\\n\\nThe dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.\\n\\nAnd Alan Greenspan highlighted the US government\\'s willingness to curb spending and rising household savings as factors which may help to reduce it. In late trading in New York, the dollar reached $1.2871 against the euro, from $1.2974 on Thursday. Market concerns about the deficit has hit the greenback in recent months. On Friday, Federal Reserve chairman Mr Greenspan\\'s speech in London ahead of the meeting of G7 finance ministers sent the dollar higher after it had earlier tumbled on the back of worse-than-expected US jobs data. \"I think the chairman\\'s taking a much more sanguine view on the current account deficit than he\\'s taken for some time,\" said Robert Sinche, head of currency strategy at Bank of America in New York. \"He\\'s taking a longer-term view, laying out a set of conditions under which the current account deficit can improve this year and next.\"\\n\\nWorries about the deficit concerns about China do, however, remain. China\\'s currency remains pegged to the dollar and the US currency\\'s sharp falls in recent months have therefore made Chinese export prices highly competitive. But calls for a shift in Beijing\\'s policy have fallen on deaf ears, despite recent comments in a major Chinese newspaper that the \"time is ripe\" for a loosening of the peg. The G7 meeting is thought unlikely to produce any meaningful movement in Chinese policy. In the meantime, the US Federal Reserve\\'s decision on 2 February to boost interest rates by a quarter of a point - the sixth such move in as many months - has opened up a differential with European rates. The half-point window, some believe, could be enough to keep US assets looking more attractive, and could help prop up the dollar. The recent falls have partly been the result of big budget deficits, as well as the US\\'s yawning current account gap, both of which need to be funded by the buying of US bonds and assets by foreign firms and governments. 
The White House will announce its budget on Monday, and many commentators believe the deficit will remain at close to half a trillion dollars.\\n'\n" ] ], [ [ "#### Actual summarized text from dataset", "_____no_output_____" ] ], [ [ "actual_summary = data[DATA_SUMMARIZED][TEST_INDEX]\nprint(actual_summary)", "b'The dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.China\\'s currency remains pegged to the dollar and the US currency\\'s sharp falls in recent months have therefore made Chinese export prices highly competitive.Market concerns about the deficit has hit the greenback in recent months.\"I think the chairman\\'s taking a much more sanguine view on the current account deficit than he\\'s taken for some time,\" said Robert Sinche, head of currency strategy at Bank of America in New York.The recent falls have partly been the result of big budget deficits, as well as the US\\'s yawning current account gap, both of which need to be funded by the buying of US bonds and assets by foreign firms and governments.\"He\\'s taking a longer-term view, laying out a set of conditions under which the current account deficit can improve this year and next.\"'\n" ] ], [ [ "#### Summarized text", "_____no_output_____" ] ], [ [ "summary = bert_model.predict(input_text)\nprint(summary)", "b'Dollar gains on Greenspan speech\\n\\nThe dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.\\n\\nAnd Alan Greenspan highlighted the US government\\'s willingness to curb spending and rising household savings as factors which may help to reduce it. I think the chairman\\'s taking a much more sanguine view on the current account deficit than he\\'s taken for some time,\" said Robert Sinche, head of currency strategy at Bank of America in New York. \" But calls for a shift in Beijing\\'s policy have fallen on deaf ears, despite recent comments in a major Chinese newspaper that the \"time is ripe\" for a loosening of the peg.\n" ] ], [ [ "#### Evaluation", "_____no_output_____" ] ], [ [ "reference_data = data[DATA_ORIGINAL].sample(n=10, random_state=42)\nreference_data", "_____no_output_____" ], [ "# candidate_data = reference_data.apply(lambda x: bert_model.predict(x))\ncandidate_data = reference_data.map(bert_model.predict)\ncandidate_data", "_____no_output_____" ] ], [ [ "#### Score for 10 samples", "_____no_output_____" ] ], [ [ "precision, recall, f1 = bert_model.evaluation(preds=candidate_data, refs=reference_data)\nprint(f'Precision: {precision}')\nprint(f'Recall: {recall}')\nprint(f'F1: {f1}')", "calculating scores...\ncomputing bert embedding.\n" ] ], [ [ "#### Score for 100 samples", "_____no_output_____" ] ], [ [ "precision, recall, f1 = bert_model.evaluation(preds=preds, refs=refs)\nprint(f'Precision: {precision}')\nprint(f'Recall: {recall}')\nprint(f'F1: {f1}')", "calculating scores...\ncomputing bert embedding.\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
hexsha: e76090ab796021f9ca91a5b0161437b66037c7a6
size: 95,486
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: binder/04-execution.ipynb
max_stars_repo_name: lgray/AwkwardQL
max_stars_repo_head_hexsha: 53802afa248740893512d01b8b7365a2e3a34367
max_stars_repo_licenses: [ "BSD-3-Clause" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2019-12-28T21:11:22.000Z
max_stars_repo_stars_event_max_datetime: 2019-12-28T21:11:22.000Z
max_issues_repo_path: binder/04-execution.ipynb
max_issues_repo_name: lgray/AwkwardQL
max_issues_repo_head_hexsha: 53802afa248740893512d01b8b7365a2e3a34367
max_issues_repo_licenses: [ "BSD-3-Clause" ]
max_issues_count: 3
max_issues_repo_issues_event_min_datetime: 2020-01-08T07:01:57.000Z
max_issues_repo_issues_event_max_datetime: 2020-01-08T15:54:48.000Z
max_forks_repo_path: 04-execution.ipynb
max_forks_repo_name: jpivarski/PartiQL
max_forks_repo_head_hexsha: 897be27dad192fb2b8479f673ebe07ae2fa83eb3
max_forks_repo_licenses: [ "BSD-3-Clause" ]
max_forks_count: 2
max_forks_repo_forks_event_min_datetime: 2019-12-26T18:28:32.000Z
max_forks_repo_forks_event_max_datetime: 2021-05-07T14:11:07.000Z
avg_line_length: 36.668971
max_line_length: 4,688
alphanum_fraction: 0.45326
[ [ [ "# Execution\n\nThis notebook explains how PartiQL executes, with some discussion of its implementation.", "_____no_output_____" ], [ "## Data model\n\nAs discussed in [02-data-model.ipynb](02-data-model.ipynb), the only allowed types are PLURP (PLUR in this demo). In a rowwise, dynamically typed interpreter, this means there are only values, records, and lists. Since PartiQL is list order-independent and merges items with the same key, the lists are really sets, though the word \"list\" appears in the implementation and error messages.\n\nEvery value entering or exiting the PartiQL interpreter is a `data.Instance`, even simple numbers.", "_____no_output_____" ] ], [ [ "import data\n\narrays = data.RecordArray({\n \"x\": data.PrimitiveArray([0.1, 0.2, 0.3]),\n \"y\": data.PrimitiveArray([1000, 2000, 3000]),\n \"table1\": data.ListArray([0, 3, 3], [3, 3, 5], data.RecordArray({\n \"a\": data.PrimitiveArray([1.1, 2.2, 3.3, 4.4, 5.5]),\n \"b\": data.PrimitiveArray([100, 200, 300, 400, 500])\n })\n ),\n \"table2\": data.ListArray([0, 2, 3], [2, 3, 8], data.RecordArray({\n \"b\": data.PrimitiveArray([True, False, True, False, False, True, True, True]),\n \"c\": data.PrimitiveArray([10, 20, 100, 1, 2, 3, 4, 5])\n })\n ),\n \"stuff\": data.ListArray([0, 3, 3], [3, 3, 5], data.PrimitiveArray([1, 2, 3, 4, 5]))\n})\n\narrays.tolist()", "_____no_output_____" ], [ "arrays.setindex()\n\ninstances = data.instantiate(arrays)\ninstances", "_____no_output_____" ] ], [ [ "## Inputs and outputs\n\nNext, we'll run a few simple examples to show what the runtime engine requires and produces.\n\nIf you have not already done so, install the [Lark parser](https://github.com/lark-parser/lark#readme) and Matplotlib.", "_____no_output_____" ] ], [ [ "!pip install lark-parser matplotlib", "Requirement already satisfied: lark-parser in /home/jpivarski/miniconda3/lib/python3.7/site-packages (0.7.1)\nRequirement already satisfied: matplotlib in /home/jpivarski/miniconda3/lib/python3.7/site-packages (3.1.1)\nRequirement already satisfied: cycler>=0.10 in /home/jpivarski/miniconda3/lib/python3.7/site-packages (from matplotlib) (0.10.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /home/jpivarski/miniconda3/lib/python3.7/site-packages (from matplotlib) (2.4.0)\nRequirement already satisfied: python-dateutil>=2.1 in /home/jpivarski/miniconda3/lib/python3.7/site-packages (from matplotlib) (2.8.0)\nRequirement already satisfied: numpy>=1.11 in /home/jpivarski/miniconda3/lib/python3.7/site-packages (from matplotlib) (1.16.4)\nRequirement already satisfied: kiwisolver>=1.0.1 in /home/jpivarski/miniconda3/lib/python3.7/site-packages (from matplotlib) (1.1.0)\nRequirement already satisfied: six in /home/jpivarski/miniconda3/lib/python3.7/site-packages (from cycler>=0.10->matplotlib) (1.12.0)\nRequirement already satisfied: setuptools in /home/jpivarski/miniconda3/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib) (41.0.1)\n" ], [ "import interpreter", "_____no_output_____" ] ], [ [ "The interpreter takes a source string and a `data.ListInstance` of `data.RecordInstances` as input.\n\nIt returns any newly assigned variables (as a `data.ListInstance` of `data.RecordInstances`) and the hierarchy of counters, which may be a single counter (total number of events) or a directory of histograms.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nz = x + 1\n\nhist z by regular(100, 1, 1.5) named \"h\"\n\n\"\"\", instances)", "_____no_output_____" ], [ "output", 
"_____no_output_____" ], [ "counter.allkeys()", "_____no_output_____" ], [ "counter[\"h\"].mpl()", "_____no_output_____" ] ], [ [ "Variables assigned in nested blocks will *not* be returned, but histograms defined in these blocks will.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\ntable1 with {\n z = a + 1\n hist z by regular(100, 0, 10) named \"h\"\n}\n\n\"\"\", instances)", "_____no_output_____" ], [ "output", "_____no_output_____" ], [ "counter[\"h\"].mpl()", "_____no_output_____" ] ], [ [ "To get nested quantities as output, be sure to assign them to a top-level variable.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\ntop = table1 with {\n z = a + 1\n hist z by regular(100, 0, 10) named \"h\"\n}\n\n\"\"\", instances)", "_____no_output_____" ], [ "output", "_____no_output_____" ] ], [ [ "Variables assigned at the top-level of the source are a functional return value, but histograms and counters are a side-effect.\n\nHistograms nested in a `cut` or `vary` are nested in their counters.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\ncut x > 0.1 named \"c\" {\n table2 with {\n hist c by regular(100, 0, 10) named \"h\"\n }\n}\n\"\"\", instances)", "_____no_output_____" ], [ "counter.allkeys()", "_____no_output_____" ], [ "counter[\"c\"]", "_____no_output_____" ], [ "counter[\"c/h\"].mpl()", "_____no_output_____" ] ], [ [ "## Scalar expressions", "_____no_output_____" ], [ "Trivial assignment is one way to pass on values from intput to output.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nx = x\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "Missing values are handled as unions of records with a field with records without that field. They're only passed through an expression if safe navigation operators (`?`) are used (or explicit checks for `has`).", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\na = if x > 0.1 then 100\n\nb = if x > 0.1 then 100 else 200\n\nc = ?a\n\nd = if has a then 100 else 200\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "Temporary variables don't appear if they're in a curly-brackets scope.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\na = {\n tmp1 = x + 1\n tmp2 = x * 100\n tmp2**2\n}\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "## Set operations", "_____no_output_____" ], [ "The `as` operator turns any set into a set of records nested within a single field.\n\nIn the example below, `out1` does not have a record structure, but `out2` does—the same data is packed in a field named `nested`.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout1 = stuff\nout2 = stuff as nested\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "It is also a special case of sampling without replacement.\n\nBelow, we see that each record has an `x` and a `y` field, and these are unique pairs of the original data, per-event: `(1, 2)`, `(1, 3)`, `(2, 3)` in the first event and only `(4, 5)` in the third event. (The second event is empty.)", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = stuff as (x, y)\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "The natural alternative to sampling without replacement is sampling with replacement, which can be built from `as` and `cross` (cross-join).\n\nThe cross-join computes a Cartesian product, just as it does in SQL. 
Promoting each set of integers into sets of records makes them mergable (`x` and `y` go into the same output records).\n\nNow the pairs are `(1, 1)`, `(1, 2)`, `(1, 3)`, `(2, 1)`, `(2, 2)`, `(2, 3)`, `(3, 1)`, `(3, 2)`, `(3, 3)` in the first event and `(4, 4)`, `(4, 5)`, `(5, 4)`, `(5, 5)` in the third event. (The second event is empty.)", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = stuff as x cross stuff as y\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "For added flair, we can `group by` either `x` or `y` to build sets of sets. The Cartesian grid pattern should be more clear.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = stuff as x cross stuff as y group by x\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "The cross-join/Cartesian product is one kind of a join: two sets of records are combined to make one set of records. The output records have fields from both input record types and the set is derived from a combinatorial rule, in this case: \"for all in the left set, merge with each of the right set.\"\n\nIf we didn't give the records in the left and right sets distinct field names with `as`, then they could overshadow each other. That is, if the left and right both had a field named `x`, only the left's `x` would show up in the results (by convention, left wins over right). Perhaps it would be better if a situation like this raises an error...\n\nThe output records don't necessarily have distinct values for all fields. The entities, defined by hidden surrogate keys, are guaranteed distinct, but the field values might not be.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = stuff as x cross stuff as x\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "In the above, it looks like these sets contain multiple values of `{\"x\": 4}` and `{\"x\": 5}`, but their index keys are different. (They are different ordered tuples of elements from the left and right sets.)", "_____no_output_____" ] ], [ [ "[[y.row for y in x[\"out\"]] for x in output]", "_____no_output_____" ] ], [ [ "Cross-join is, in a sense, the simplest join because it creates a new index: `CrossRef(<Ref X>, <Ref Y>)` is distinct from `<Ref X>` and `<Ref Y>`, and the only way to get another one is to `cross` sets with indexes `<Ref X>` and `<Ref Y>` again.\n\nThe next-simplest is `join`, which requires the left and right sets to have the same index and returns a set with the same index. In SQL terms, PartiQL's `join` is an `INNER JOIN`, taking an element from the left and right sets only if they have the same index key. Again in SQL terms, this `INNER JOIN` is implicitly `ON` the hidden surrogate index, not any other columns. With that restriction, `join` effectively becomes set intersection.\n\nI had considered calling it `intersect`, rather than `join` because it is set intersection, but the way fields from records in the left and right sets are mixed is more reminiscent of an SQL join.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = stuff as x join stuff as y\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "To show what this is doing, let's filter the left and right sets with `where`. 
Even and odd values have no overlap.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nevens = stuff as x where x % 2 == 0\nodds = stuff as x where x % 2 == 1\n\nout = evens join odds\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "But the overlap of evens and `2 <= x and x <= 4` is `2` and `4`.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nevens = stuff as x where x % 2 == 0\nmiddle = stuff as x where 2 <= x and x <= 4\n\nout = evens join middle\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "To see how this is like an SQL `INNER JOIN`, consider records with different fields: the left records have an `x` and the right records have a `y`. When they have an overlapping index, the merged records have `x` and `y`.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nevens = stuff as x where x % 2 == 0\nmiddle = stuff as y where 2 <= y and y <= 4\n\nout = evens join middle\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "The opposite of PartiQL's `join` (like SQL's `INNER JOIN` on the hidden surrogate key) is PartiQL's `union` (like SQL's `FULL OUTER JOIN` on the hidden surrogate key).\n\nJust as `join` acts as set intersection, `union` acts as set union. I chose to use the word \"`union`\" in this case because field names of the records in the left and right sets are often different, so this behaves intuitively as a user would expect set union to behave.\n\nHere is an example where the field name is the same, but the left and right sets are non-overlapping (so the union is just their concatenation).", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nevens = stuff as x where x % 2 == 0\nodds = stuff as x where x % 2 == 1\n\nout = evens union odds\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "Here's the same example, but with fields named `x` and `y`. If the output has a field named `x`, you know it came from the left, and if it has a field named `y`, you know it came from the right.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nevens = stuff as x where x % 2 == 0\nodds = stuff as y where y % 2 == 1\n\nout = evens union odds\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "Often, that would be used to combine particle lists, like `leptons = electrons union muons`. In the example below, we'll take a union of `table1` and `table2`.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = table1 union table2\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "The `table1` and `table2` sets both have a `b` field, so `b` in the output records might have come from `table1` (in which case, it's an integer) or might have come from `table2` (in which case, it's boolean). To make the provenance clear, we can use `as` to label them.\n\nBelow, the output records either have a `left` field or they have a `right` field.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = table1 as left union table2 as right\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "Also notice that the index is a union of the two input indexes. 
Keys either come from `<Ref 1>` or `<Ref 2>`.", "_____no_output_____" ] ], [ [ "[[y.row for y in x[\"out\"]] for x in output]", "_____no_output_____" ] ], [ [ "So `cross` creates a new index reference (`CrossRef(<Ref X>, <Ref Y>)`), `join` returns the input index references (finding the ones that are in both left and right), and `union` creates a union of keys out of the left and right index references.\n\nThere's one more set operation: `except`. This is a set difference, removing elements from the left set of records whose keys can be found in the right set of records.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nevens = stuff as x where x % 2 == 0\n\nout = stuff as x except evens\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "I could have implemented a symmetric set difference or an equivalent of SQL's `LEFT OUTER JOIN` and `RIGHT OUTER JOIN`, but they're straightforward extensions of what I've implemented here (and it's not clear they would be needed for a physics analysis).", "_____no_output_____" ], [ "### Equality and set membership\n\nIt should be emphasized that the set operations defined above use a definition of equality that only requires two objects' index keys to be the same. They can have different field names and different values of their fields and be considered mergable by `join` and they can have the same field names and the same field values and be considered different elements by `union`.\n\nThis is useful because a user might want to enrich a subset of particles with new computed values and then mix in particles from the original set. The \"entity identity\" that must be maintained is the *particles,* not any attached values. To illustrate this, let's do some work on a subset of `table2`, different work on another subset of `table2`, and then bring them together for the final result.\n\nThis is an example of a DAG, described in the abstract in [02-data-model.ipynb#Joins-and-index-compatibility](02-data-model.ipynb#Joins-and-index-compatibility).", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\nout = {\n subset1 = table2 where c % 2 == 1 with {\n z = 2*c\n }\n subset2 = table2 where not b with {\n z = 10*c\n }\n subset1 union subset2\n}\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "Note that this is not how simple equality—tested by the `==` operator—should work. Simple equality should return `True` if the left and right are the same value or have the same field names and field values, irrespective of whether they have the same indexes. The same applies to `!=`, `in`, and `not in`, which are all scalar expressions.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = 2 in stuff\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "## Modifying sets of records\n\nSQL's `SELECT` statement lets you create, replace, and remove columns from a table without changing its index (except inasmuch as a visible or natural index is a column that can be changed like any other). The equivalents in PartiQL are:\n\n * `with { ... }` to create or replace fields to a set of records (copying over any fields not specified),\n * `to { ... 
}` to replace all fields in a set of records (only copying a field if `x = x` is specified),\n * `to ...` to replace the set of records with the result of an expression, which might not even be a record.\n\nHere are a few examples.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = table1 with {\n c = 10*a # add c as a new field\n}\n\n\"\"\", instances); output", "_____no_output_____" ], [ "output, counter = interpreter.run(r\"\"\"\n\nout = table1 to {\n c = 10*a # c is the only field in a new set of records\n}\n\n\"\"\", instances); output", "_____no_output_____" ], [ "output, counter = interpreter.run(r\"\"\"\n\nout = table1 to 10*a # the results are simple values, not records (in this case, numbers)\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "## Group-by\n\nThe `GROUP BY` operator is a major part of SQL: it changes a `SELECT` operation from a filtered transformation of a table into an aggregation over subtables, each defined by a distinct value of the expression following `GROUP BY`. It changes the nature of the whole query—expressions after `SELECT` must be reducers (converting lists/sets into scalars).\n\nIn PartiQL, `group by` acts as a simple function that converts sets of records into sets of sets of those records, grouped by distinct values of the expression following `group by`. If you decide to aggregate those inner sets, that's your choice.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\ngrouped = table2 group by b\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "Below, we aggregate over the inner sets by modifying the set of records as we would modify any other set of records—by defining new fields in a `to` block. The aggregation functions (`count`, `sum`, `min`, `max`, `any`, `all`) operate on sets of values.\n\nIf we wanted to aggregate `b` (because we grouped by `b`; it would have the same value in every element) without aggregating `c`, we could do that. The data model is capable of managing sets of records whose fields are a scalar and a set.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\naggregated = table2 group by b as grouped to {\n b = any(grouped.b)\n c = sum(grouped.c)\n}\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "## Min by and max by\n\nWhereas `group by` restructures a set of records into a set of sets of records, `min by` and `max by` restructure a set of records into a single record. If the set is empty, this returns missing data.", "_____no_output_____" ] ], [ [ "output, counter = interpreter.run(r\"\"\"\n\nout = table1 max by a\n\n\"\"\", instances); output", "_____no_output_____" ], [ "output, counter = interpreter.run(r\"\"\"\n\nout = (table1 max by a).b\n\n\"\"\", instances); output", "_____no_output_____" ] ], [ [ "# Concluding remarks\n\nSQL has broad applicability because it is a complete system for transforming sets without the complexity of a generic programming language. Unlike functional programming, which is an implementation of Alonzo Church's Lambda Calculus (Turing complete), SQL is an implementation of Edgar Codd's [Relational Algebra](https://en.wikipedia.org/wiki/Relational_algebra), a more restricted but still useful system. In principle, a Turing complete language must be executed to find out what it does, but set operations can be analyzed analytically.\n\nSQL's claims to \"completeness\" rest on its implementation of *almost* every operation in the Relational Algebra. 
PartiQL has the same features:\n\n * Cartesian product: `cross`,\n * projection: `to` (without curly brackets),\n * selection: `with` or `to` (with curly brackets),\n * rename: `to` (with curly brackets),\n * inner and outer join: `join` and `union`, which double as intersection and union, given our choice of surrogate index,\n * θ-join: `join` with `where`,\n * semijoin: (not implemented; this is the `LEFT OUTER JOIN` and `RIGHT OUTER JOIN` of SQL, discussed above),\n * antijoin: `except`,\n * division: (not implemented, but not implemented in SQL, either),\n\nFurthermore, PartiQL uses true sets as containers, not bags (sets with duplicates). Unlike SQL, PartiQL has prescribed indexes and doesn't let the user choose new indexes for `JOIN` operations, but this is because entity identity is defined by reconstruction and does not have to be discovered from natural keys in the data.\n\nLike SQL, PartiQL has aggregation (reducer functions) and `group by`, both of which are extensions beyond Relational Algebra.\n\nSQL's only container type is a table, which is a bag of relations, but PartiQL handles sets of values, relations (\"records\" in PartiQL), as well as sets. This can be modeled in SQL using multiple tables and foreign keys, but PartiQL hides the keys and presents conventional data structures.\n\nAdditionally, PartiQL is capable of all the `JaggedArray` operations that have been needed in awkward-array, except for those dealing with order and forcing arrays to have a fixed size (e.g. `JaggedArray.pad`).\n\nThe pattern matching of my May 2019 language is expressible in PartiQL, though without the nicety of a pattern-match syntax. There could be an elegant way to merge the two, though PartiQL's paradigm of naming samples with and without replacement before building any structures is more powerful than the pattern-matching syntax. (There are long-distance constraints that can't be expressed in pattern-matching, but can with PartiQL.)\n\nDifficult combinatorics, such as [Benchmark 8](https://github.com/iris-hep/adl-benchmarks-index#functionality-benchmarks), appear to be trivial and quite readable in this language.\n\nThus, I conclude that a language built out of\n\n 1. SQL-style surrogate keys (hidden),\n 2. SQL-style joins (on the hidden keys),\n 3. with a deep structure-friendly data model and a more composable syntax than SQL\n\nwould be a useful language for doing particle physics, would be different enough in capabilities to provide more \"value added,\" and would be more tightly controlled (more declarative, more optimizable) than a general programming language, including functional languages like Spark and LINQ.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e76094aa6dbe82cfab0e7abe4492bf002bbcb691
26,001
ipynb
Jupyter Notebook
naive_bayes/Naive_Bayes_Basic.ipynb
MayukhSobo/ML
31dd3f4295af44e0424aa47dba63996a80e9a8d2
[ "MIT" ]
1
2017-12-20T08:39:04.000Z
2017-12-20T08:39:04.000Z
naive_bayes/Naive_Bayes_Basic.ipynb
MayukhSobo/ML
31dd3f4295af44e0424aa47dba63996a80e9a8d2
[ "MIT" ]
null
null
null
naive_bayes/Naive_Bayes_Basic.ipynb
MayukhSobo/ML
31dd3f4295af44e0424aa47dba63996a80e9a8d2
[ "MIT" ]
null
null
null
28.354417
859
0.398677
[ [ [ "# Load the Dataset", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport random\nurl = \"https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data\"\niris = pd.read_csv(url, \n header=None,\n names = ['sepal_length', \n 'sepal_width', \n 'petal_length', \n 'petal_width', \n 'species'])", "_____no_output_____" ], [ "## Definitions of classifying classes\nclasses = list(pd.unique(iris.species))\nnumClasses = len(classes)", "_____no_output_____" ] ], [ [ "## Extracting the Feature Matrix", "_____no_output_____" ] ], [ [ "X = np.matrix(iris.iloc[:, 0:4])\nX = X.astype(np.float)\nm, n = X.shape", "_____no_output_____" ] ], [ [ "## Extracting the response", "_____no_output_____" ] ], [ [ "y = np.asarray(iris.species)", "_____no_output_____" ] ], [ [ "## Extracting features for different classes", "_____no_output_____" ] ], [ [ "CLS = []\nfor each in classes:\n CLS.append(np.matrix(iris[iris.species == each].iloc[:, 0:4]))", "_____no_output_____" ], [ "len(CLS)", "_____no_output_____" ] ], [ [ "## The real meat", "_____no_output_____" ], [ "#### Calculating the mean and variance of each features for each class", "_____no_output_____" ] ], [ [ "pArray = []\ndef calculate_mean_and_variance(CLS, n, numClasses):\n for i in range(numClasses):\n pArray.append([])\n for x in range(n):\n mean = np.mean(CLS[i][:, x])\n var = np.var(CLS[i][:, x])\n pArray[i].append([mean, var])", "_____no_output_____" ], [ "calculate_mean_and_variance(CLS, n, numClasses)\nfor each in pArray:\n print(each, end='\\n\\n')", "[[5.0060000000000002, 0.12176400000000002], [3.4180000000000001, 0.14227600000000001], [1.464, 0.029504000000000002], [0.24399999999999999, 0.011264000000000003]]\n\n[[5.9359999999999999, 0.261104], [2.7700000000000005, 0.096500000000000016], [4.2599999999999998, 0.21640000000000004], [1.3259999999999998, 0.038323999999999997]]\n\n[[6.5879999999999983, 0.39625600000000011], [2.9740000000000002, 0.10192399999999999], [5.5520000000000005, 0.29849600000000004], [2.0260000000000002, 0.07392399999999999]]\n\n" ] ], [ [ "#### Choosing training dataset (Random Choosing)", "_____no_output_____" ] ], [ [ "# Choosing 70% of the dataset for training Randomly\nrandom_index = random.sample(range(m), int(m * 0.7))", "_____no_output_____" ], [ "def probability(mean, stdev, x):\n ", "_____no_output_____" ] ], [ [ "#### Creating the actual Baysean Classifier", "_____no_output_____" ] ], [ [ "def classify_baysean():\n correct_predictions = 0\n for index in random_index:\n result = []\n x = X[index, :]\n for eachClass in range(numClasses):\n result.append([])\n prior = 1 / numClasses\n \n # For sepal_length\n prosterior_feature_1 = probability(pArray[index][0][0], \n pArray[index][0][1], \n x[0])\n # For sepal_width\n prosterior_feature_2 = probability(pArray[index][1][0],\n pArray[index][1][1],\n x[1])\n # For petal_length\n prosterior_feature_3 = probability(pArray[index][2][0],\n pArray[index][2][1],\n x[2])\n # For petal_width\n prosterior_feature_4 = probability(pArray[index][3][0],\n pArray[index][3][1],\n x[3])\n joint = prosterior_feature_1 * prosterior_feature_2 * \\\n prosterior_feature_3 * prosterior_feature_4 * prior\n result[index].append(joint)\n print(result[index])", "_____no_output_____" ], [ "classify_baysean()", "_____no_output_____" ], [ "x = X[49,:][:, 0]", "_____no_output_____" ], [ "x", "_____no_output_____" ], [ "import math", "_____no_output_____" ], [ "mean = pArray[0][0][0]\nstdev = pArray[0][0][1]\nexponent = 
math.exp(-(math.pow(x-mean,2)/(2*stdev)))", "_____no_output_____" ], [ "exponent", "_____no_output_____" ], [ "a = [[1, 2, 3, 4], [5, 6, 7, 8]]", "_____no_output_____" ], [ "for attribute in zip(*a):\n print(attribute)", "(1, 5)\n(2, 6)\n(3, 7)\n(4, 8)\n" ], [ "df = pd.DataFrame(np.random.randn(100, 2))", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "msk = np.random.rand(len(df)) < 0.8", "_____no_output_____" ], [ "msk", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7609cf77dc9f413f894ea722e77c104ab4a0bab
384,807
ipynb
Jupyter Notebook
Improving Deep Neural Networks/Optimization methods.ipynb
LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng-
0ab9869324a8f8a32d5ecfe0bfae71fc6fe0dde0
[ "MIT" ]
null
null
null
Improving Deep Neural Networks/Optimization methods.ipynb
LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng-
0ab9869324a8f8a32d5ecfe0bfae71fc6fe0dde0
[ "MIT" ]
null
null
null
Improving Deep Neural Networks/Optimization methods.ipynb
LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng-
0ab9869324a8f8a32d5ecfe0bfae71fc6fe0dde0
[ "MIT" ]
null
null
null
232.091074
62,492
0.879864
[ [ [ "# Optimization Methods\n\nUntil now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. \n\nGradient descent goes \"downhill\" on a cost function $J$. Think of it as trying to do this: \n<img src=\"images/cost.jpg\" style=\"width:650px;height:300px;\">\n<caption><center> <u> **Figure 1** </u>: **Minimizing the cost is like finding the lowest point in a hilly landscape**<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption>\n\n**Notations**: As usual, $\\frac{\\partial J}{\\partial a } = $ `da` for any variable `a`.\n\nTo get started, run the following code to import the libraries you will need.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io\nimport math\nimport sklearn\nimport sklearn.datasets\n\nfrom opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation\nfrom opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset\nfrom testCases import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'", "C:\\Users\\Theochem\\Desktop\\DeepLearning\\deep-learning-coursera-master\\Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization\\opt_utils.py:76: SyntaxWarning: assertion is always true, perhaps remove parentheses?\n assert(parameters['W' + str(l)].shape == layer_dims[l], layer_dims[l-1])\nC:\\Users\\Theochem\\Desktop\\DeepLearning\\deep-learning-coursera-master\\Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization\\opt_utils.py:77: SyntaxWarning: assertion is always true, perhaps remove parentheses?\n assert(parameters['W' + str(l)].shape == layer_dims[l], 1)\nC:\\Users\\Theochem\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n" ] ], [ [ "## 1 - Gradient Descent\n\nA simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. \n\n**Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$: \n$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{1}$$\n$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{2}$$\n\nwhere L is the number of layers and $\\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. 
You need to shift `l` to `l+1` when coding.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: update_parameters_with_gd\n\ndef update_parameters_with_gd(parameters, grads, learning_rate):\n \"\"\"\n Update parameters using one step of gradient descent\n \n Arguments:\n parameters -- python dictionary containing your parameters to be updated:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients to update each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n learning_rate -- the learning rate, scalar.\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n\n L = len(parameters) // 2 # number of layers in the neural networks\n\n # Update rule for each parameter\n for l in range(L):\n ### START CODE HERE ### (approx. 2 lines)\n parameters[\"W\" + str(l + 1)] = parameters[\"W\" + str(l + 1)] - learning_rate * grads[\"dW\" + str(l + 1)]\n parameters[\"b\" + str(l + 1)] = parameters[\"b\" + str(l + 1)] - learning_rate * grads[\"db\" + str(l + 1)]\n ### END CODE HERE ###\n \n return parameters", "_____no_output_____" ], [ "parameters, grads, learning_rate = update_parameters_with_gd_test_case()\n\nparameters = update_parameters_with_gd(parameters, grads, learning_rate)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "W1 = [[ 1.63535156 -0.62320365 -0.53718766]\n [-1.07799357 0.85639907 -2.29470142]]\nb1 = [[ 1.74604067]\n [-0.75184921]]\nW2 = [[ 0.32171798 -0.25467393 1.46902454]\n [-2.05617317 -0.31554548 -0.3756023 ]\n [ 1.1404819 -1.09976462 -0.1612551 ]]\nb2 = [[-0.88020257]\n [ 0.02561572]\n [ 0.57539477]]\n" ] ], [ [ "**Expected Output**:\n\n<table> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.63535156 -0.62320365 -0.53718766]\n [-1.07799357 0.85639907 -2.29470142]] </td> \n </tr> \n \n <tr>\n <td > **b1** </td> \n <td > [[ 1.74604067]\n [-0.75184921]] </td> \n </tr> \n \n <tr>\n <td > **W2** </td> \n <td > [[ 0.32171798 -0.25467393 1.46902454]\n [-2.05617317 -0.31554548 -0.3756023 ]\n [ 1.1404819 -1.09976462 -0.1612551 ]] </td> \n </tr> \n \n <tr>\n <td > **b2** </td> \n <td > [[-0.88020257]\n [ 0.02561572]\n [ 0.57539477]] </td> \n </tr> \n</table>\n", "_____no_output_____" ], [ "A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. 
\n\n- **(Batch) Gradient Descent**:\n\n``` python\nX = data_input\nY = labels\nparameters = initialize_parameters(layers_dims)\nfor i in range(0, num_iterations):\n    # Forward propagation\n    a, caches = forward_propagation(X, parameters)\n    # Compute cost.\n    cost = compute_cost(a, Y)\n    # Backward propagation.\n    grads = backward_propagation(a, caches, parameters)\n    # Update parameters.\n    parameters = update_parameters(parameters, grads)\n        \n```\n\n- **Stochastic Gradient Descent**:\n\n```python\nX = data_input\nY = labels\nparameters = initialize_parameters(layers_dims)\nfor i in range(0, num_iterations):\n    for j in range(0, m):\n        # Forward propagation\n        a, caches = forward_propagation(X[:,j], parameters)\n        # Compute cost\n        cost = compute_cost(a, Y[:,j])\n        # Backward propagation\n        grads = backward_propagation(a, caches, parameters)\n        # Update parameters.\n        parameters = update_parameters(parameters, grads)\n```\n", "_____no_output_____" ], [ "In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will \"oscillate\" toward the minimum rather than converge smoothly. Here is an illustration of this: \n\n<img src=\"images/kiank_sgd.png\" style=\"width:750px;height:250px;\">\n<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'>  : **SGD vs GD**<br> \"+\" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>\n\n**Note** also that implementing SGD requires 3 for-loops in total:\n1. Over the number of iterations\n2. Over the $m$ training examples\n3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)\n\nIn practice, you'll often get faster results if you use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.\n\n<img src=\"images/kiank_minibatch.png\" style=\"width:750px;height:250px;\">\n<caption><center> <u> <font color='purple'> **Figure 2** </u>: <font color='purple'>  **SGD vs Mini-Batch GD**<br> \"+\" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>\n\n<font color='blue'>\n**What you should remember**:\n- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.\n- You have to tune a learning rate hyperparameter $\\alpha$.\n- With a well-tuned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).", "_____no_output_____" ], [ "## 2 - Mini-Batch Gradient descent\n\nLet's learn how to build mini-batches from the training set (X, Y).\n\nThere are two steps:\n- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. 
The shuffling step ensures that examples will be split randomly into different mini-batches. \n\n<img src=\"images/kiank_shuffle.png\" style=\"width:550px;height:300px;\">\n\n- **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this: \n\n<img src=\"images/kiank_partition.png\" style=\"width:550px;height:300px;\">\n\n**Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:\n```python\nfirst_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]\nsecond_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]\n...\n```\n\nNote that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\\lfloor s \\rfloor$ represents $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\\lfloor \\frac{m}{mini\\_batch\\_size}\\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m-mini_\\_batch_\\_size \\times \\lfloor \\frac{m}{mini\\_batch\\_size}\\rfloor$). ", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: random_mini_batches\n\ndef random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):\n \"\"\"\n Creates a list of random minibatches from (X, Y)\n \n Arguments:\n X -- input data, of shape (input size, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)\n mini_batch_size -- size of the mini-batches, integer\n \n Returns:\n mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)\n \"\"\"\n \n np.random.seed(seed) # To make your \"random\" minibatches the same as ours\n m = X.shape[1] # number of training examples\n mini_batches = []\n \n # Step 1: Shuffle (X, Y)\n permutation = list(np.random.permutation(m))\n shuffled_X = X[:, permutation]\n shuffled_Y = Y[:, permutation].reshape((1,m))\n\n # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.\n num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning\n for k in range(0, num_complete_minibatches):\n ### START CODE HERE ### (approx. 2 lines)\n mini_batch_X = shuffled_X[:,k * mini_batch_size:(k + 1) * mini_batch_size]\n mini_batch_Y = shuffled_Y[:,k * mini_batch_size:(k + 1) * mini_batch_size]\n ### END CODE HERE ###\n mini_batch = (mini_batch_X, mini_batch_Y)\n mini_batches.append(mini_batch)\n \n # Handling the end case (last mini-batch < mini_batch_size)\n if m % mini_batch_size != 0:\n ### START CODE HERE ### (approx. 
2 lines)\n end = m - mini_batch_size * math.floor(m / mini_batch_size)\n mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size:]\n mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size:]\n ### END CODE HERE ###\n mini_batch = (mini_batch_X, mini_batch_Y)\n mini_batches.append(mini_batch)\n \n return mini_batches", "_____no_output_____" ], [ "X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()\nmini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)\n\nprint(\"shape of the 1st mini_batch_X: \" + str(mini_batches[0][0].shape))\nprint(\"shape of the 2nd mini_batch_X: \" + str(mini_batches[1][0].shape))\nprint(\"shape of the 3rd mini_batch_X: \" + str(mini_batches[2][0].shape))\nprint(\"shape of the 1st mini_batch_Y: \" + str(mini_batches[0][1].shape))\nprint(\"shape of the 2nd mini_batch_Y: \" + str(mini_batches[1][1].shape)) \nprint(\"shape of the 3rd mini_batch_Y: \" + str(mini_batches[2][1].shape))\nprint(\"mini batch sanity check: \" + str(mini_batches[0][0][0][0:3]))", "shape of the 1st mini_batch_X: (12288, 64)\nshape of the 2nd mini_batch_X: (12288, 64)\nshape of the 3rd mini_batch_X: (12288, 20)\nshape of the 1st mini_batch_Y: (1, 64)\nshape of the 2nd mini_batch_Y: (1, 64)\nshape of the 3rd mini_batch_Y: (1, 20)\nmini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]\n" ] ], [ [ "**Expected Output**:\n\n<table style=\"width:50%\"> \n <tr>\n <td > **shape of the 1st mini_batch_X** </td> \n <td > (12288, 64) </td> \n </tr> \n \n <tr>\n <td > **shape of the 2nd mini_batch_X** </td> \n <td > (12288, 64) </td> \n </tr> \n \n <tr>\n <td > **shape of the 3rd mini_batch_X** </td> \n <td > (12288, 20) </td> \n </tr>\n <tr>\n <td > **shape of the 1st mini_batch_Y** </td> \n <td > (1, 64) </td> \n </tr> \n <tr>\n <td > **shape of the 2nd mini_batch_Y** </td> \n <td > (1, 64) </td> \n </tr> \n <tr>\n <td > **shape of the 3rd mini_batch_Y** </td> \n <td > (1, 20) </td> \n </tr> \n <tr>\n <td > **mini batch sanity check** </td> \n <td > [ 0.90085595 -0.7612069 0.2344157 ] </td> \n </tr>\n \n</table>", "_____no_output_____" ], [ "<font color='blue'>\n**What you should remember**:\n- Shuffling and Partitioning are the two steps required to build mini-batches\n- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.", "_____no_output_____" ], [ "## 3 - Momentum\n\nBecause mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will \"oscillate\" toward convergence. Using momentum can reduce these oscillations. \n\nMomentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the \"velocity\" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill. \n\n<img src=\"images/opt_momentum.png\" style=\"width:400px;height:250px;\">\n<caption><center> <u><font color='purple'>**Figure 3**</u><font color='purple'>: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. 
Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>\n\n\n**Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is:\nfor $l =1,...,L$:\n```python\nv[\"dW\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\nv[\"db\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\n```\n**Note** that the iterator l starts at 0 in the for loop while the first parameters are v[\"dW1\"] and v[\"db1\"] (that's a \"one\" on the superscript). This is why we are shifting l to l+1 in the `for` loop.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: initialize_velocity\n\ndef initialize_velocity(parameters):\n \"\"\"\n Initializes the velocity as a python dictionary with:\n - keys: \"dW1\", \"db1\", ..., \"dWL\", \"dbL\" \n - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.\n Arguments:\n parameters -- python dictionary containing your parameters.\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n \n Returns:\n v -- python dictionary containing the current velocity.\n v['dW' + str(l)] = velocity of dWl\n v['db' + str(l)] = velocity of dbl\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v = {}\n \n # Initialize velocity\n for l in range(L):\n ### START CODE HERE ### (approx. 2 lines)\n v[\"dW\" + str(l + 1)] = np.zeros_like(parameters[\"W\" + str(l+1)])\n v[\"db\" + str(l + 1)] = np.zeros_like(parameters[\"b\" + str(l+1)])\n ### END CODE HERE ###\n \n return v", "_____no_output_____" ], [ "parameters = initialize_velocity_test_case()\n\nv = initialize_velocity(parameters)\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))", "v[\"dW1\"] = [[ 0. 0. 0.]\n [ 0. 0. 0.]]\nv[\"db1\"] = [[ 0.]\n [ 0.]]\nv[\"dW2\"] = [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]]\nv[\"db2\"] = [[ 0.]\n [ 0.]\n [ 0.]]\n" ] ], [ [ "**Expected Output**:\n\n<table style=\"width:40%\"> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n \n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n \n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n \n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr> \n</table>\n", "_____no_output_____" ], [ "**Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: \n\n$$ \\begin{cases}\nv_{dW^{[l]}} = \\beta v_{dW^{[l]}} + (1 - \\beta) dW^{[l]} \\\\\nW^{[l]} = W^{[l]} - \\alpha v_{dW^{[l]}}\n\\end{cases}\\tag{3}$$\n\n$$\\begin{cases}\nv_{db^{[l]}} = \\beta v_{db^{[l]}} + (1 - \\beta) db^{[l]} \\\\\nb^{[l]} = b^{[l]} - \\alpha v_{db^{[l]}} \n\\end{cases}\\tag{4}$$\n\nwhere L is the number of layers, $\\beta$ is the momentum and $\\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a \"one\" on the superscript). 
So you will need to shift `l` to `l+1` when coding.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: update_parameters_with_momentum\n\ndef update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):\n \"\"\"\n Update parameters using Momentum\n \n Arguments:\n parameters -- python dictionary containing your parameters:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients for each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n v -- python dictionary containing the current velocity:\n v['dW' + str(l)] = ...\n v['db' + str(l)] = ...\n beta -- the momentum hyperparameter, scalar\n learning_rate -- the learning rate, scalar\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n v -- python dictionary containing your updated velocities\n \"\"\"\n\n L = len(parameters) // 2 # number of layers in the neural networks\n \n # Momentum update for each parameter\n for l in range(L):\n \n ### START CODE HERE ### (approx. 4 lines)\n # compute velocities\n v[\"dW\" + str(l + 1)] = beta * v[\"dW\" + str(l + 1)] + (1 - beta) * grads['dW' + str(l + 1)]\n v[\"db\" + str(l + 1)] = beta * v[\"db\" + str(l + 1)] + (1 - beta) * grads['db' + str(l + 1)]\n # update parameters\n parameters[\"W\" + str(l + 1)] = parameters[\"W\" + str(l + 1)] - learning_rate * v[\"dW\" + str(l + 1)]\n parameters[\"b\" + str(l + 1)] = parameters[\"b\" + str(l + 1)] - learning_rate * v[\"db\" + str(l + 1)]\n ### END CODE HERE ###\n \n return parameters, v", "_____no_output_____" ], [ "parameters, grads, v = update_parameters_with_momentum_test_case()\n\nparameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))", "W1 = [[ 1.62544598 -0.61290114 -0.52907334]\n [-1.07347112 0.86450677 -2.30085497]]\nb1 = [[ 1.74493465]\n [-0.76027113]]\nW2 = [[ 0.31930698 -0.24990073 1.4627996 ]\n [-2.05974396 -0.32173003 -0.38320915]\n [ 1.13444069 -1.0998786 -0.1713109 ]]\nb2 = [[-0.87809283]\n [ 0.04055394]\n [ 0.58207317]]\nv[\"dW1\"] = [[-0.11006192 0.11447237 0.09015907]\n [ 0.05024943 0.09008559 -0.06837279]]\nv[\"db1\"] = [[-0.01228902]\n [-0.09357694]]\nv[\"dW2\"] = [[-0.02678881 0.05303555 -0.06916608]\n [-0.03967535 -0.06871727 -0.08452056]\n [-0.06712461 -0.00126646 -0.11173103]]\nv[\"db2\"] = [[ 0.02344157]\n [ 0.16598022]\n [ 0.07420442]]\n" ] ], [ [ "**Expected Output**:\n\n<table style=\"width:90%\"> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.62544598 -0.61290114 -0.52907334]\n [-1.07347112 0.86450677 -2.30085497]] </td> \n </tr> \n \n <tr>\n <td > **b1** </td> \n <td > [[ 1.74493465]\n [-0.76027113]] </td> \n </tr> \n \n <tr>\n <td > **W2** </td> \n <td > [[ 0.31930698 -0.24990073 1.4627996 ]\n [-2.05974396 -0.32173003 -0.38320915]\n [ 1.13444069 -1.0998786 -0.1713109 ]] </td> \n </tr> \n \n <tr>\n <td > **b2** </td> \n <td > [[-0.87809283]\n [ 0.04055394]\n [ 0.58207317]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[-0.11006192 0.11447237 0.09015907]\n [ 0.05024943 0.09008559 -0.06837279]] </td> \n </tr> \n \n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[-0.01228902]\n 
[-0.09357694]] </td> \n </tr> \n \n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[-0.02678881 0.05303555 -0.06916608]\n [-0.03967535 -0.06871727 -0.08452056]\n [-0.06712461 -0.00126646 -0.11173103]] </td> \n </tr> \n \n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.02344157]\n [ 0.16598022]\n [ 0.07420442]]</td> \n </tr> \n</table>\n\n", "_____no_output_____" ], [ "**Note** that:\n- The velocity is initialized with zeros. So the algorithm will take a few iterations to \"build up\" velocity and start to take bigger steps.\n- If $\\beta = 0$, then this just becomes standard gradient descent without momentum. \n\n**How do you choose $\\beta$?**\n\n- The larger the momentum $\\beta$ is, the smoother the update because the more we take the past gradients into account. But if $\\beta$ is too big, it could also smooth out the updates too much. \n- Common values for $\\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\\beta = 0.9$ is often a reasonable default. \n- Tuning the optimal $\\beta$ for your model might need trying several values to see what works best in term of reducing the value of the cost function $J$. ", "_____no_output_____" ], [ "<font color='blue'>\n**What you should remember**:\n- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.\n- You have to tune a momentum hyperparameter $\\beta$ and a learning rate $\\alpha$.", "_____no_output_____" ], [ "## 4 - Adam\n\nAdam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum. \n\n**How does Adam work?**\n1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction). \n2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction). \n3. It updates parameters in a direction based on combining information from \"1\" and \"2\".\n\nThe update rule is, for $l = 1, ..., L$: \n\n$$\\begin{cases}\nv_{dW^{[l]}} = \\beta_1 v_{dW^{[l]}} + (1 - \\beta_1) \\frac{\\partial \\mathcal{J} }{ \\partial W^{[l]} } \\\\\nv^{corrected}_{dW^{[l]}} = \\frac{v_{dW^{[l]}}}{1 - (\\beta_1)^t} \\\\\ns_{dW^{[l]}} = \\beta_2 s_{dW^{[l]}} + (1 - \\beta_2) (\\frac{\\partial \\mathcal{J} }{\\partial W^{[l]} })^2 \\\\\ns^{corrected}_{dW^{[l]}} = \\frac{s_{dW^{[l]}}}{1 - (\\beta_1)^t} \\\\\nW^{[l]} = W^{[l]} - \\alpha \\frac{v^{corrected}_{dW^{[l]}}}{\\sqrt{s^{corrected}_{dW^{[l]}}} + \\varepsilon}\n\\end{cases}$$\nwhere:\n- t counts the number of steps taken of Adam \n- L is the number of layers\n- $\\beta_1$ and $\\beta_2$ are hyperparameters that control the two exponentially weighted averages. \n- $\\alpha$ is the learning rate\n- $\\varepsilon$ is a very small number to avoid dividing by zero\n\nAs usual, we will store all parameters in the `parameters` dictionary ", "_____no_output_____" ], [ "**Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.\n\n**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is:\nfor $l = 1, ..., L$:\n```python\nv[\"dW\" + str(l+1)] = ... 
#(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\nv[\"db\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\ns[\"dW\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\ns[\"db\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\n\n```", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: initialize_adam\n\ndef initialize_adam(parameters) :\n \"\"\"\n Initializes v and s as two python dictionaries with:\n - keys: \"dW1\", \"db1\", ..., \"dWL\", \"dbL\" \n - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.\n \n Arguments:\n parameters -- python dictionary containing your parameters.\n parameters[\"W\" + str(l)] = Wl\n parameters[\"b\" + str(l)] = bl\n \n Returns: \n v -- python dictionary that will contain the exponentially weighted average of the gradient.\n v[\"dW\" + str(l)] = ...\n v[\"db\" + str(l)] = ...\n s -- python dictionary that will contain the exponentially weighted average of the squared gradient.\n s[\"dW\" + str(l)] = ...\n s[\"db\" + str(l)] = ...\n\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v = {}\n s = {}\n \n # Initialize v, s. Input: \"parameters\". Outputs: \"v, s\".\n for l in range(L):\n ### START CODE HERE ### (approx. 4 lines)\n v[\"dW\" + str(l + 1)] = np.zeros_like(parameters[\"W\" + str(l + 1)])\n v[\"db\" + str(l + 1)] = np.zeros_like(parameters[\"b\" + str(l + 1)])\n\n s[\"dW\" + str(l+1)] = np.zeros_like(parameters[\"W\" + str(l + 1)])\n s[\"db\" + str(l+1)] = np.zeros_like(parameters[\"b\" + str(l + 1)])\n ### END CODE HERE ###\n \n return v, s", "_____no_output_____" ], [ "parameters = initialize_adam_test_case()\n\nv, s = initialize_adam(parameters)\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))\nprint(\"s[\\\"dW1\\\"] = \" + str(s[\"dW1\"]))\nprint(\"s[\\\"db1\\\"] = \" + str(s[\"db1\"]))\nprint(\"s[\\\"dW2\\\"] = \" + str(s[\"dW2\"]))\nprint(\"s[\\\"db2\\\"] = \" + str(s[\"db2\"]))", "v[\"dW1\"] = [[ 0. 0. 0.]\n [ 0. 0. 0.]]\nv[\"db1\"] = [[ 0.]\n [ 0.]]\nv[\"dW2\"] = [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]]\nv[\"db2\"] = [[ 0.]\n [ 0.]\n [ 0.]]\ns[\"dW1\"] = [[ 0. 0. 0.]\n [ 0. 0. 0.]]\ns[\"db1\"] = [[ 0.]\n [ 0.]]\ns[\"dW2\"] = [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]]\ns[\"db2\"] = [[ 0.]\n [ 0.]\n [ 0.]]\n" ] ], [ [ "**Expected Output**:\n\n<table style=\"width:40%\"> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n \n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n \n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n \n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr> \n <tr>\n <td > **s[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n \n <tr>\n <td > **s[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n \n <tr>\n <td > **s[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n \n <tr>\n <td > **s[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr>\n\n</table>\n", "_____no_output_____" ], [ "**Exercise**: Now, implement the parameters update with Adam. 
Recall the general update rule is, for $l = 1, ..., L$: \n\n$$\\begin{cases}\nv_{W^{[l]}} = \\beta_1 v_{W^{[l]}} + (1 - \\beta_1) \\frac{\\partial J }{ \\partial W^{[l]} } \\\\\nv^{corrected}_{W^{[l]}} = \\frac{v_{W^{[l]}}}{1 - (\\beta_1)^t} \\\\\ns_{W^{[l]}} = \\beta_2 s_{W^{[l]}} + (1 - \\beta_2) (\\frac{\\partial J }{\\partial W^{[l]} })^2 \\\\\ns^{corrected}_{W^{[l]}} = \\frac{s_{W^{[l]}}}{1 - (\\beta_2)^t} \\\\\nW^{[l]} = W^{[l]} - \\alpha \\frac{v^{corrected}_{W^{[l]}}}{\\sqrt{s^{corrected}_{W^{[l]}}}+\\varepsilon}\n\\end{cases}$$\n\n\n**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: update_parameters_with_adam\n\ndef update_parameters_with_adam(parameters, grads, v, s, t, learning_rate=0.01,\n beta1=0.9, beta2=0.999, epsilon=1e-8):\n \"\"\"\n Update parameters using Adam\n \n Arguments:\n parameters -- python dictionary containing your parameters:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients for each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n v -- Adam variable, moving average of the first gradient, python dictionary\n s -- Adam variable, moving average of the squared gradient, python dictionary\n learning_rate -- the learning rate, scalar.\n beta1 -- Exponential decay hyperparameter for the first moment estimates \n beta2 -- Exponential decay hyperparameter for the second moment estimates \n epsilon -- hyperparameter preventing division by zero in Adam updates\n\n Returns:\n parameters -- python dictionary containing your updated parameters \n v -- Adam variable, moving average of the first gradient, python dictionary\n s -- Adam variable, moving average of the squared gradient, python dictionary\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v_corrected = {} # Initializing first moment estimate, python dictionary\n s_corrected = {} # Initializing second moment estimate, python dictionary\n \n # Perform Adam update on all parameters\n for l in range(L):\n # Moving average of the gradients. Inputs: \"v, grads, beta1\". Output: \"v\".\n ### START CODE HERE ### (approx. 2 lines)\n v[\"dW\" + str(l + 1)] = beta1 * v[\"dW\" + str(l + 1)] + (1 - beta1) * grads['dW' + str(l + 1)]\n v[\"db\" + str(l + 1)] = beta1 * v[\"db\" + str(l + 1)] + (1 - beta1) * grads['db' + str(l + 1)]\n ### END CODE HERE ###\n\n # Compute bias-corrected first moment estimate. Inputs: \"v, beta1, t\". Output: \"v_corrected\".\n ### START CODE HERE ### (approx. 2 lines)\n v_corrected[\"dW\" + str(l + 1)] = v[\"dW\" + str(l + 1)] / (1 - np.power(beta1, t))\n v_corrected[\"db\" + str(l + 1)] = v[\"db\" + str(l + 1)] / (1 - np.power(beta1, t))\n ### END CODE HERE ###\n\n # Moving average of the squared gradients. Inputs: \"s, grads, beta2\". Output: \"s\".\n ### START CODE HERE ### (approx. 2 lines)\n s[\"dW\" + str(l + 1)] = beta2 * s[\"dW\" + str(l + 1)] + (1 - beta2) * np.power(grads['dW' + str(l + 1)], 2)\n s[\"db\" + str(l + 1)] = beta2 * s[\"db\" + str(l + 1)] + (1 - beta2) * np.power(grads['db' + str(l + 1)], 2)\n ### END CODE HERE ###\n\n # Compute bias-corrected second raw moment estimate. Inputs: \"s, beta2, t\". Output: \"s_corrected\".\n ### START CODE HERE ### (approx. 
2 lines)\n s_corrected[\"dW\" + str(l + 1)] = s[\"dW\" + str(l + 1)] / (1 - np.power(beta2, t))\n s_corrected[\"db\" + str(l + 1)] = s[\"db\" + str(l + 1)] / (1 - np.power(beta2, t))\n ### END CODE HERE ###\n\n # Update parameters. Inputs: \"parameters, learning_rate, v_corrected, s_corrected, epsilon\". Output: \"parameters\".\n ### START CODE HERE ### (approx. 2 lines)\n parameters[\"W\" + str(l + 1)] = parameters[\"W\" + str(l + 1)] - learning_rate * v_corrected[\"dW\" + str(l + 1)] / np.sqrt(s_corrected[\"dW\" + str(l + 1)] + epsilon)\n parameters[\"b\" + str(l + 1)] = parameters[\"b\" + str(l + 1)] - learning_rate * v_corrected[\"db\" + str(l + 1)] / np.sqrt(s_corrected[\"db\" + str(l + 1)] + epsilon)\n ### END CODE HERE ###\n\n return parameters, v, s", "_____no_output_____" ], [ "parameters, grads, v, s = update_parameters_with_adam_test_case()\nparameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)\n\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))\nprint(\"s[\\\"dW1\\\"] = \" + str(s[\"dW1\"]))\nprint(\"s[\\\"db1\\\"] = \" + str(s[\"db1\"]))\nprint(\"s[\\\"dW2\\\"] = \" + str(s[\"dW2\"]))\nprint(\"s[\\\"db2\\\"] = \" + str(s[\"db2\"]))", "W1 = [[ 1.79078034 -0.77819144 -0.69460639]\n [-1.23940099 0.69897299 -2.13510481]]\nb1 = [[ 1.91119235]\n [-0.59477218]]\nW2 = [[ 0.48546317 -0.41580308 1.62854186]\n [-1.89371033 -0.1559833 -0.21761985]\n [ 1.30020326 -0.93841334 -0.00599321]]\nb2 = [[-1.04427894]\n [-0.12422162]\n [ 0.41638106]]\nv[\"dW1\"] = [[-0.11006192 0.11447237 0.09015907]\n [ 0.05024943 0.09008559 -0.06837279]]\nv[\"db1\"] = [[-0.01228902]\n [-0.09357694]]\nv[\"dW2\"] = [[-0.02678881 0.05303555 -0.06916608]\n [-0.03967535 -0.06871727 -0.08452056]\n [-0.06712461 -0.00126646 -0.11173103]]\nv[\"db2\"] = [[ 0.02344157]\n [ 0.16598022]\n [ 0.07420442]]\ns[\"dW1\"] = [[ 0.00121136 0.00131039 0.00081287]\n [ 0.0002525 0.00081154 0.00046748]]\ns[\"db1\"] = [[ 1.51020075e-05]\n [ 8.75664434e-04]]\ns[\"dW2\"] = [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]\n [ 1.57413361e-04 4.72206320e-04 7.14372576e-04]\n [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]\ns[\"db2\"] = [[ 5.49507194e-05]\n [ 2.75494327e-03]\n [ 5.50629536e-04]]\n" ] ], [ [ "**Expected Output**:\n\n<table> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.63178673 -0.61919778 -0.53561312]\n [-1.08040999 0.85796626 -2.29409733]] </td> \n </tr> \n \n <tr>\n <td > **b1** </td> \n <td > [[ 1.75225313]\n [-0.75376553]] </td> \n </tr> \n \n <tr>\n <td > **W2** </td> \n <td > [[ 0.32648046 -0.25681174 1.46954931]\n [-2.05269934 -0.31497584 -0.37661299]\n [ 1.14121081 -1.09245036 -0.16498684]] </td> \n </tr> \n \n <tr>\n <td > **b2** </td> \n <td > [[-0.88529978]\n [ 0.03477238]\n [ 0.57537385]] </td> \n </tr> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[-0.11006192 0.11447237 0.09015907]\n [ 0.05024943 0.09008559 -0.06837279]] </td> \n </tr> \n \n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[-0.01228902]\n [-0.09357694]] </td> \n </tr> \n \n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[-0.02678881 0.05303555 -0.06916608]\n [-0.03967535 -0.06871727 -0.08452056]\n [-0.06712461 -0.00126646 -0.11173103]] </td> \n </tr> \n \n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.02344157]\n [ 
0.16598022]\n [ 0.07420442]] </td> \n </tr> \n <tr>\n <td > **s[\"dW1\"]** </td> \n <td > [[ 0.00121136 0.00131039 0.00081287]\n [ 0.0002525 0.00081154 0.00046748]] </td> \n </tr> \n \n <tr>\n <td > **s[\"db1\"]** </td> \n <td > [[ 1.51020075e-05]\n [ 8.75664434e-04]] </td> \n </tr> \n \n <tr>\n <td > **s[\"dW2\"]** </td> \n <td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]\n [ 1.57413361e-04 4.72206320e-04 7.14372576e-04]\n [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td> \n </tr> \n \n <tr>\n <td > **s[\"db2\"]** </td> \n <td > [[ 5.49507194e-05]\n [ 2.75494327e-03]\n [ 5.50629536e-04]] </td> \n </tr>\n</table>\n", "_____no_output_____" ], [ "You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.", "_____no_output_____" ], [ "## 5 - Model with different optimization algorithms\n\nLets use the following \"moons\" dataset to test the different optimization methods. (The dataset is named \"moons\" because the data from each of the two classes looks a bit like a crescent-shaped moon.) ", "_____no_output_____" ] ], [ [ "train_X, train_Y = load_dataset()", "_____no_output_____" ] ], [ [ "We have already implemented a 3-layer neural network. You will train it with: \n- Mini-batch **Gradient Descent**: it will call your function:\n - `update_parameters_with_gd()`\n- Mini-batch **Momentum**: it will call your functions:\n - `initialize_velocity()` and `update_parameters_with_momentum()`\n- Mini-batch **Adam**: it will call your functions:\n - `initialize_adam()` and `update_parameters_with_adam()`", "_____no_output_____" ] ], [ [ "def model(X, Y, layers_dims, optimizer, learning_rate=0.0007, mini_batch_size=64, beta=0.9,\n beta1=0.9, beta2=0.999, epsilon=1e-8, num_epochs=10000, print_cost=True):\n \"\"\"\n 3-layer neural network model which can be run in different optimizer modes.\n \n Arguments:\n X -- input data, of shape (2, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)\n layers_dims -- python list, containing the size of each layer\n learning_rate -- the learning rate, scalar.\n mini_batch_size -- the size of a mini batch\n beta -- Momentum hyperparameter\n beta1 -- Exponential decay hyperparameter for the past gradients estimates \n beta2 -- Exponential decay hyperparameter for the past squared gradients estimates \n epsilon -- hyperparameter preventing division by zero in Adam updates\n num_epochs -- number of epochs\n print_cost -- True to print the cost every 1000 epochs\n\n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n\n L = len(layers_dims) # number of layers in the neural networks\n costs = [] # to keep track of the cost\n t = 0 # initializing the counter required for Adam update\n seed = 10 # For grading purposes, so that your \"random\" minibatches are the same as ours\n \n # Initialize parameters\n parameters = initialize_parameters(layers_dims)\n\n # Initialize the optimizer\n if optimizer == \"gd\":\n pass # no initialization required for gradient descent\n elif optimizer == \"momentum\":\n v = initialize_velocity(parameters)\n elif optimizer == \"adam\":\n v, s = initialize_adam(parameters)\n \n # Optimization loop\n for i in range(num_epochs):\n \n # Define the random minibatches. 
We increment the seed to reshuffle differently the dataset after each epoch\n seed = seed + 1\n minibatches = random_mini_batches(X, Y, mini_batch_size, seed)\n\n for minibatch in minibatches:\n\n # Select a minibatch\n (minibatch_X, minibatch_Y) = minibatch\n\n # Forward propagation\n a3, caches = forward_propagation(minibatch_X, parameters)\n\n # Compute cost\n cost = compute_cost(a3, minibatch_Y)\n\n # Backward propagation\n grads = backward_propagation(minibatch_X, minibatch_Y, caches)\n\n # Update parameters\n if optimizer == \"gd\":\n parameters = update_parameters_with_gd(parameters, grads, learning_rate)\n elif optimizer == \"momentum\":\n parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)\n elif optimizer == \"adam\":\n t = t + 1 # Adam counter\n parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,\n t, learning_rate, beta1, beta2, epsilon)\n \n # Print the cost every 1000 epoch\n if print_cost and i % 1000 == 0:\n print(\"Cost after epoch %i: %f\" % (i, cost))\n if print_cost and i % 100 == 0:\n costs.append(cost)\n \n # plot the cost\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('epochs (per 100)')\n plt.title(\"Learning rate = \" + str(learning_rate))\n plt.show()\n\n return parameters", "_____no_output_____" ] ], [ [ "You will now run this 3 layer neural network with each of the 3 optimization methods.\n\n### 5.1 - Mini-batch Gradient descent\n\nRun the following code to see how the model does with mini-batch gradient descent.", "_____no_output_____" ] ], [ [ "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, optimizer=\"gd\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Gradient Descent optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5, 2.5])\naxes.set_ylim([-1, 1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "Cost after epoch 0: 0.690736\nCost after epoch 1000: 0.685273\nCost after epoch 2000: 0.647072\nCost after epoch 3000: 0.619525\nCost after epoch 4000: 0.576584\nCost after epoch 5000: 0.607243\nCost after epoch 6000: 0.529403\nCost after epoch 7000: 0.460768\nCost after epoch 8000: 0.465586\nCost after epoch 9000: 0.464518\n" ] ], [ [ "### 5.2 - Mini-batch gradient descent with momentum\n\nRun the following code to see how the model does with momentum. 
Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.", "_____no_output_____" ] ], [ [ "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, beta=0.9, optimizer=\"momentum\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Momentum optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5, 2.5])\naxes.set_ylim([-1, 1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "Cost after epoch 0: 0.690741\nCost after epoch 1000: 0.685341\nCost after epoch 2000: 0.647145\nCost after epoch 3000: 0.619594\nCost after epoch 4000: 0.576665\nCost after epoch 5000: 0.607324\nCost after epoch 6000: 0.529476\nCost after epoch 7000: 0.460936\nCost after epoch 8000: 0.465780\nCost after epoch 9000: 0.464740\n" ] ], [ [ "### 5.3 - Mini-batch with Adam mode\n\nRun the following code to see how the model does with Adam.", "_____no_output_____" ] ], [ [ "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, optimizer=\"adam\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Adam optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5, 2.5])\naxes.set_ylim([-1, 1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "Cost after epoch 0: 0.687550\nCost after epoch 1000: 0.173593\nCost after epoch 2000: 0.150145\nCost after epoch 3000: 0.072939\nCost after epoch 4000: 0.125896\nCost after epoch 5000: 0.104185\nCost after epoch 6000: 0.116069\nCost after epoch 7000: 0.031774\nCost after epoch 8000: 0.112908\nCost after epoch 9000: 0.197732\n" ] ], [ [ "### 5.4 - Summary\n\n<table> \n    <tr>\n        <td>\n        **optimization method**\n        </td>\n        <td>\n        **accuracy**\n        </td>\n        <td>\n        **cost shape**\n        </td>\n\n    </tr>\n        <td>\n        Gradient descent\n        </td>\n        <td>\n        79.7%\n        </td>\n        <td>\n        oscillations\n        </td>\n    <tr>\n        <td>\n        Momentum\n        </td>\n        <td>\n        79.7%\n        </td>\n        <td>\n        oscillations\n        </td>\n    </tr>\n    <tr>\n        <td>\n        Adam\n        </td>\n        <td>\n        94%\n        </td>\n        <td>\n        smoother\n        </td>\n    </tr>\n</table> \n\nMomentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult than others for the optimization algorithm.\n\nAdam on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.\n\nSome advantages of Adam include:\n- Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum) \n- Usually works well even with little tuning of hyperparameters (except $\\alpha$)", "_____no_output_____" ], [ "**References**:\n\n- Adam paper: https://arxiv.org/pdf/1412.6980.pdf", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e760ace10e5b24ecb62b26d01647e862ab2dc3da
105,689
ipynb
Jupyter Notebook
toy-problems/toy-problem-005.ipynb
debracupitt/toolkitten
94179a448b58f04dae4eb3e54c9a0ec740a4950d
[ "MIT" ]
719
2018-06-17T17:40:16.000Z
2022-03-28T00:21:48.000Z
toy-problems/toy-problem-005.ipynb
debracupitt/toolkitten
94179a448b58f04dae4eb3e54c9a0ec740a4950d
[ "MIT" ]
92
2018-06-26T13:06:21.000Z
2020-03-17T19:25:35.000Z
toy-problems/toy-problem-005.ipynb
debracupitt/toolkitten
94179a448b58f04dae4eb3e54c9a0ec740a4950d
[ "MIT" ]
784
2018-06-18T08:05:30.000Z
2022-02-20T13:31:25.000Z
1,056.89
103,764
0.966875
[ [ [ "# Python Challenge 3\n\nchallenge URL: http://www.pythonchallenge.com/pc/def/equality.html", "_____no_output_____" ] ], [ [ "import urllib.request\nimport re\n\nhtml = urllib.request.urlopen(\"http://www.pythonchallenge.com/pc/def/equality.html\").read().decode()\ndata = re.findall(\"<!--(.*?)-->\", html, re.DOTALL)[-1]\ndata", "_____no_output_____" ], [ "# ***AAAbCCC***\n# [A-Z] capital letters\n# [a-z] small caps\n# [^] not\n# + multiple\n# {3} 3 of something\n\n\"\".join(re.findall( \"[^A-Z]+[A-Z]{3}([a-z])[A-Z]{3}[^A-Z]+\", data))", "_____no_output_____" ] ], [ [ "# Advanced\n\nhttps://www.geeksforgeeks.org/inplace-rotate-square-matrix-by-90-degrees/", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e760ae64787a6b7e34e5391795fa9d6f81b4f752
68,007
ipynb
Jupyter Notebook
SageMaker Project.ipynb
KatherineKing/sagemaker-deployment
2c2c59de0b1e0eeece3137353b0595c81cf2eb2b
[ "MIT" ]
null
null
null
SageMaker Project.ipynb
KatherineKing/sagemaker-deployment
2c2c59de0b1e0eeece3137353b0595c81cf2eb2b
[ "MIT" ]
null
null
null
SageMaker Project.ipynb
KatherineKing/sagemaker-deployment
2c2c59de0b1e0eeece3137353b0595c81cf2eb2b
[ "MIT" ]
null
null
null
50.189668
1,841
0.639331
[ [ [ "# Creating a Sentiment Analysis Web App\n## Using PyTorch and SageMaker\n\n_Deep Learning Nanodegree Program | Deployment_\n\n---\n\nNow that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.\n\n## Instructions\n\nSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!\n\nIn addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.\n\n> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.\n\n## General Outline\n\nRecall the general outline for SageMaker projects using a notebook instance.\n\n1. Download or otherwise retrieve the data.\n2. Process / Prepare the data.\n3. Upload the processed data to S3.\n4. Train a chosen model.\n5. Test the trained model (typically using a batch transform job).\n6. Deploy the trained model.\n7. Use the deployed model.\n\nFor this project, you will be following the steps in the general outline with some modifications. \n\nFirst, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.\n\nIn addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. 
In addition, your newly deployed model will be used in the sentiment analysis web app.", "_____no_output_____" ] ], [ [ "# Make sure that we use SageMaker 1.x\n!pip install sagemaker==1.72.0", "Requirement already satisfied: sagemaker==1.72.0 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (1.72.0)\nRequirement already satisfied: importlib-metadata>=1.4.0 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from sagemaker==1.72.0) (3.10.0)\nRequirement already satisfied: numpy>=1.9.0 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from sagemaker==1.72.0) (1.20.1)\nRequirement already satisfied: scipy>=0.19.0 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from sagemaker==1.72.0) (1.6.2)\nRequirement already satisfied: protobuf3-to-dict>=0.1.5 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from sagemaker==1.72.0) (0.1.5)\nRequirement already satisfied: boto3>=1.14.12 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from sagemaker==1.72.0) (1.17.97)\nRequirement already satisfied: smdebug-rulesconfig==0.1.4 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from sagemaker==1.72.0) (0.1.4)\nRequirement already satisfied: packaging>=20.0 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from sagemaker==1.72.0) (20.9)\nRequirement already satisfied: protobuf>=3.1 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from sagemaker==1.72.0) (3.17.3)\nRequirement already satisfied: s3transfer<0.5.0,>=0.4.0 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.4.2)\nRequirement already satisfied: botocore<1.21.0,>=1.20.97 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.97)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from botocore<1.21.0,>=1.20.97->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)\nRequirement already satisfied: urllib3<1.27,>=1.25.4 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from botocore<1.21.0,>=1.20.97->boto3>=1.14.12->sagemaker==1.72.0) (1.26.4)\nRequirement already satisfied: zipp>=0.5 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.1)\nRequirement already satisfied: pyparsing>=2.0.2 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)\nRequirement already satisfied: six>=1.9 in c:\\users\\katherine\\anaconda3\\lib\\site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)\n" ] ], [ [ "## Step 1: Downloading the data\n\nAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)\n\n> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. 
Association for Computational Linguistics, 2011.", "_____no_output_____" ] ], [ [ "#%mkdir .\\data\n#jupyter notebook --notebook-dir=/Users/Katherine/UdacityMLE/sagemaker-deployment-master/data\n%cd C:/Users/Katherine/UdacityMLE/sagemaker-deployment-master/data", "C:\\Users\\Katherine\\UdacityMLE\\sagemaker-deployment-master\\data\n" ], [ "#!wget -O ./data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\n!tar -zxf ./data/aclImdb_v1.tar.gz -C ./data", "_____no_output_____" ] ], [ [ "## Step 2: Preparing and Processing the data\n\nAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.", "_____no_output_____" ] ], [ [ "import os\nimport glob\n\ndef read_imdb_data(data_dir='./aclImdb'):\n data = {}\n labels = {}\n \n for data_type in ['train', 'test']:\n data[data_type] = {}\n labels[data_type] = {}\n \n for sentiment in ['pos', 'neg']:\n data[data_type][sentiment] = []\n labels[data_type][sentiment] = []\n \n path = os.path.join(data_dir, data_type, sentiment, '*.txt')\n files = glob.glob(path)\n \n for f in files:\n with open(f) as review:\n data[data_type][sentiment].append(review.read())\n # Here we represent a positive review by '1' and a negative review by '0'\n labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)\n \n assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \\\n \"{}/{} data size does not match labels size\".format(data_type, sentiment)\n \n return data, labels", "_____no_output_____" ], [ "data, labels = read_imdb_data()\nprint(\"IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg\".format(\n len(data['train']['pos']), len(data['train']['neg']),\n len(data['test']['pos']), len(data['test']['neg'])))", "_____no_output_____" ] ], [ [ "Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.", "_____no_output_____" ] ], [ [ "from sklearn.utils import shuffle\n\ndef prepare_imdb_data(data, labels):\n \"\"\"Prepare training and test sets from IMDb movie reviews.\"\"\"\n \n #Combine positive and negative reviews and labels\n data_train = data['train']['pos'] + data['train']['neg']\n data_test = data['test']['pos'] + data['test']['neg']\n labels_train = labels['train']['pos'] + labels['train']['neg']\n labels_test = labels['test']['pos'] + labels['test']['neg']\n \n #Shuffle reviews and corresponding labels within training and test sets\n data_train, labels_train = shuffle(data_train, labels_train)\n data_test, labels_test = shuffle(data_test, labels_test)\n \n # Return a unified training data, test data, training labels, test labets\n return data_train, data_test, labels_train, labels_test", "_____no_output_____" ], [ "train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)\nprint(\"IMDb reviews (combined): train = {}, test = {}\".format(len(train_X), len(test_X)))", "_____no_output_____" ] ], [ [ "Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. 
This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.", "_____no_output_____" ] ], [ [ "print(train_X[100])\nprint(train_y[100])", "_____no_output_____" ] ], [ [ "The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.", "_____no_output_____" ] ], [ [ "import nltk\nfrom nltk.corpus import stopwords\nfrom nltk.stem.porter import *\n\nimport re\nfrom bs4 import BeautifulSoup\n\ndef review_to_words(review):\n nltk.download(\"stopwords\", quiet=True)\n stemmer = PorterStemmer()\n \n text = BeautifulSoup(review, \"html.parser\").get_text() # Remove HTML tags\n text = re.sub(r\"[^a-zA-Z0-9]\", \" \", text.lower()) # Convert to lower case\n words = text.split() # Split string into words\n words = [w for w in words if w not in stopwords.words(\"english\")] # Remove stopwords\n words = [PorterStemmer().stem(w) for w in words] # stem\n \n return words", "_____no_output_____" ] ], [ [ "The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.", "_____no_output_____" ] ], [ [ "# TODO: Apply review_to_words to a review (train_X[100] or any other review)\nreview_to_words(train_X[100])", "_____no_output_____" ] ], [ [ "**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?", "_____no_output_____" ], [ "**Answer:**", "_____no_output_____" ], [ "The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. 
This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.", "_____no_output_____" ] ], [ [ "import pickle\n\ncache_dir = os.path.join(\"../cache\", \"sentiment_analysis\") # where to store cache files\nos.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists\n\ndef preprocess_data(data_train, data_test, labels_train, labels_test,\n cache_dir=cache_dir, cache_file=\"preprocessed_data.pkl\"):\n \"\"\"Convert each review to words; read from cache if available.\"\"\"\n\n # If cache_file is not None, try to read from it first\n cache_data = None\n if cache_file is not None:\n try:\n with open(os.path.join(cache_dir, cache_file), \"rb\") as f:\n cache_data = pickle.load(f)\n print(\"Read preprocessed data from cache file:\", cache_file)\n except:\n pass # unable to read from cache, but that's okay\n \n # If cache is missing, then do the heavy lifting\n if cache_data is None:\n # Preprocess training and test data to obtain words for each review\n #words_train = list(map(review_to_words, data_train))\n #words_test = list(map(review_to_words, data_test))\n words_train = [review_to_words(review) for review in data_train]\n words_test = [review_to_words(review) for review in data_test]\n \n # Write to cache file for future runs\n if cache_file is not None:\n cache_data = dict(words_train=words_train, words_test=words_test,\n labels_train=labels_train, labels_test=labels_test)\n with open(os.path.join(cache_dir, cache_file), \"wb\") as f:\n pickle.dump(cache_data, f)\n print(\"Wrote preprocessed data to cache file:\", cache_file)\n else:\n # Unpack data loaded from cache file\n words_train, words_test, labels_train, labels_test = (cache_data['words_train'],\n cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])\n \n return words_train, words_test, labels_train, labels_test", "_____no_output_____" ], [ "# Preprocess data\ntrain_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)", "_____no_output_____" ] ], [ [ "## Transform the data\n\nIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.\n\nSince we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.", "_____no_output_____" ], [ "### (TODO) Create a word dictionary\n\nTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.\n\n> **TODO:** Complete the implementation for the `build_dict()` method below. 
Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndef build_dict(data, vocab_size = 5000):\n \"\"\"Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer.\"\"\"\n \n # TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a\n # sentence is a list of words.\n \n word_count = {} # A dict storing the words that appear in the reviews along with how often they occur\n \n # TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and\n # sorted_words[-1] is the least frequently appearing word.\n \n sorted_words = None\n \n word_dict = {} # This is what we are building, a dictionary that translates words into integers\n for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'\n word_dict[word] = idx + 2 # 'infrequent' labels\n \n return word_dict", "_____no_output_____" ], [ "word_dict = build_dict(train_X)", "_____no_output_____" ] ], [ [ "**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it makes sense that these words appear frequently in the training set?", "_____no_output_____" ], [ "**Answer:**", "_____no_output_____" ] ], [ [ "# TODO: Use this space to determine the five most frequently appearing words in the training set.", "_____no_output_____" ] ], [ [ "### Save `word_dict`\n\nLater on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. 
As such, we will save it to a file now for future use.", "_____no_output_____" ] ], [ [ "data_dir = '../data/pytorch' # The folder we will use for storing data\nif not os.path.exists(data_dir): # Make sure that the folder exists\n os.makedirs(data_dir)", "_____no_output_____" ], [ "with open(os.path.join(data_dir, 'word_dict.pkl'), \"wb\") as f:\n pickle.dump(word_dict, f)", "_____no_output_____" ] ], [ [ "### Transform the reviews\n\nNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.", "_____no_output_____" ] ], [ [ "def convert_and_pad(word_dict, sentence, pad=500):\n NOWORD = 0 # We will use 0 to represent the 'no word' category\n INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict\n \n working_sentence = [NOWORD] * pad\n \n for word_index, word in enumerate(sentence[:pad]):\n if word in word_dict:\n working_sentence[word_index] = word_dict[word]\n else:\n working_sentence[word_index] = INFREQ\n \n return working_sentence, min(len(sentence), pad)\n\ndef convert_and_pad_data(word_dict, data, pad=500):\n result = []\n lengths = []\n \n for sentence in data:\n converted, leng = convert_and_pad(word_dict, sentence, pad)\n result.append(converted)\n lengths.append(leng)\n \n return np.array(result), np.array(lengths)", "_____no_output_____" ], [ "train_X, train_X_len = convert_and_pad_data(word_dict, train_X)\ntest_X, test_X_len = convert_and_pad_data(word_dict, test_X)", "_____no_output_____" ] ], [ [ "As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processeed. Does this look reasonable? What is the length of a review in the training set?", "_____no_output_____" ] ], [ [ "# Use this cell to examine one of the processed reviews to make sure everything is working as intended.", "_____no_output_____" ] ], [ [ "**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem?", "_____no_output_____" ], [ "**Answer:**", "_____no_output_____" ], [ "## Step 3: Upload the data to S3\n\nAs in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.\n\n### Save the processed training dataset locally\n\nIt is important to note the format of the data that we are saving as we will need to know it when we write the training code. 
In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.", "_____no_output_____" ] ], [ [ "import pandas as pd\n \npd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \\\n .to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)", "_____no_output_____" ] ], [ [ "### Uploading the training data\n\n\nNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.", "_____no_output_____" ] ], [ [ "import sagemaker\n\nsagemaker_session = sagemaker.Session()\n\nbucket = sagemaker_session.default_bucket()\nprefix = 'sagemaker/sentiment_rnn'\n\nrole = sagemaker.get_execution_role()", "_____no_output_____" ], [ "input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)", "_____no_output_____" ] ], [ [ "**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.", "_____no_output_____" ], [ "## Step 4: Build and Train the PyTorch Model\n\nIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects\n\n - Model Artifacts,\n - Training Code, and\n - Inference Code,\n \neach of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.\n\nWe will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.", "_____no_output_____" ] ], [ [ "!pygmentize train/model.py", "_____no_output_____" ] ], [ [ "The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.\n\nFirst we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. 
However, we can work on a small bit of the data to get a feel for how our training script is behaving.", "_____no_output_____" ] ], [ [ "import torch\nimport torch.utils.data\n\n# Read in only the first 250 rows\ntrain_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)\n\n# Turn the input pandas dataframe into tensors\ntrain_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()\ntrain_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()\n\n# Build the dataset\ntrain_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)\n# Build the dataloader\ntrain_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)", "_____no_output_____" ] ], [ [ "### (TODO) Writing the training method\n\nNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.", "_____no_output_____" ] ], [ [ "def train(model, train_loader, epochs, optimizer, loss_fn, device):\n for epoch in range(1, epochs + 1):\n model.train()\n total_loss = 0\n for batch in train_loader: \n batch_X, batch_y = batch\n \n batch_X = batch_X.to(device)\n batch_y = batch_y.to(device)\n \n # TODO: Complete this train method to train the model provided.\n \n total_loss += loss.data.item()\n print(\"Epoch: {}, BCELoss: {}\".format(epoch, total_loss / len(train_loader)))", "_____no_output_____" ] ], [ [ "Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.", "_____no_output_____" ] ], [ [ "import torch.optim as optim\nfrom train.model import LSTMClassifier\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel = LSTMClassifier(32, 100, 5000).to(device)\noptimizer = optim.Adam(model.parameters())\nloss_fn = torch.nn.BCELoss()\n\ntrain(model, train_sample_dl, 5, optimizer, loss_fn, device)", "_____no_output_____" ] ], [ [ "In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.", "_____no_output_____" ], [ "### (TODO) Training the model\n\nWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.\n\n**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.\n\nThe way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. 
To see how this is done take a look at the provided `train/train.py` file.", "_____no_output_____" ] ], [ [ "from sagemaker.pytorch import PyTorch\n\nestimator = PyTorch(entry_point=\"train.py\",\n source_dir=\"train\",\n role=role,\n framework_version='0.4.0',\n train_instance_count=1,\n train_instance_type='ml.p2.xlarge',\n hyperparameters={\n 'epochs': 10,\n 'hidden_dim': 200,\n })", "_____no_output_____" ], [ "estimator.fit({'training': input_data})", "_____no_output_____" ] ], [ [ "## Step 5: Testing the model\n\nAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.\n\n## Step 6: Deploy the model for testing\n\nNow that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.\n\nThere is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.\n\n**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )\n\nSince we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.\n\n**NOTE:** When deploying a model you are asking SageMaker to launch an compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.\n\nIn other words **If you are no longer using a deployed endpoint, shut it down!**\n\n**TODO:** Deploy the trained model.", "_____no_output_____" ] ], [ [ "# TODO: Deploy the trained model", "_____no_output_____" ] ], [ [ "## Step 7 - Use the model for testing\n\nOnce deployed, we can read in the test data and send it off to our deployed model to get some results. 
Once we collect all of the results we can determine how accurate our model is.", "_____no_output_____" ] ], [ [ "test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)", "_____no_output_____" ], [ "# We split the data into chunks and send each chunk separately, accumulating the results.\n\ndef predict(data, rows=512):\n    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))\n    predictions = np.array([])\n    for array in split_array:\n        predictions = np.append(predictions, predictor.predict(array))\n    \n    return predictions", "_____no_output_____" ], [ "predictions = predict(test_X.values)\npredictions = [round(num) for num in predictions]", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score\naccuracy_score(test_y, predictions)", "_____no_output_____" ] ], [ [ "**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?", "_____no_output_____" ], [ "**Answer:**", "_____no_output_____" ], [ "### (TODO) More testing\n\nWe now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.", "_____no_output_____" ] ], [ [ "test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'", "_____no_output_____" ] ], [ [ "The question we now need to answer is, how do we send this review to our model?\n\nRecall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.\n - Removed any html tags and stemmed the input\n - Encoded the review as a sequence of integers using `word_dict`\n \nIn order to process the review we will need to repeat these two steps.\n\n**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.", "_____no_output_____" ] ], [ [ "# TODO: Convert test_review into a form usable by the model and save the results in test_data\ntest_data = None", "_____no_output_____" ] ], [ [ "Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.", "_____no_output_____" ] ], [ [ "predictor.predict(test_data)", "_____no_output_____" ] ], [ [ "Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive.", "_____no_output_____" ], [ "### Delete the endpoint\n\nOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. 
Since we are done using our endpoint for now, we can delete it.", "_____no_output_____" ] ], [ [ "estimator.delete_endpoint()", "_____no_output_____" ] ], [ [ "## Step 6 (again) - Deploy the model for the web app\n\nNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.\n\nAs we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.\n\nWe will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.\n\nWhen deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.\n - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.\n - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.\n - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.\n - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.\n\nFor the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.\n\n### (TODO) Writing inference code\n\nBefore writing our custom inference code, we will begin by taking a look at the code which has been provided.", "_____no_output_____" ] ], [ [ "!pygmentize serve/predict.py", "_____no_output_____" ] ], [ [ "As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.\n\n**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.", "_____no_output_____" ], [ "### Deploying the model\n\nNow that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. 
Then we can call the deploy method to launch the deployment container.\n\n**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accomodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to sent image data.", "_____no_output_____" ] ], [ [ "from sagemaker.predictor import RealTimePredictor\nfrom sagemaker.pytorch import PyTorchModel\n\nclass StringPredictor(RealTimePredictor):\n def __init__(self, endpoint_name, sagemaker_session):\n super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')\n\nmodel = PyTorchModel(model_data=estimator.model_data,\n role = role,\n framework_version='0.4.0',\n entry_point='predict.py',\n source_dir='serve',\n predictor_cls=StringPredictor)\npredictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')", "_____no_output_____" ] ], [ [ "### Testing the model\n\nNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long and so testing the entire data set would be prohibitive.", "_____no_output_____" ] ], [ [ "import glob\n\ndef test_reviews(data_dir='../data/aclImdb', stop=250):\n \n results = []\n ground = []\n \n # We make sure to test both positive and negative reviews \n for sentiment in ['pos', 'neg']:\n \n path = os.path.join(data_dir, 'test', sentiment, '*.txt')\n files = glob.glob(path)\n \n files_read = 0\n \n print('Starting ', sentiment, ' files')\n \n # Iterate through the files and send them to the predictor\n for f in files:\n with open(f) as review:\n # First, we store the ground truth (was the review positive or negative)\n if sentiment == 'pos':\n ground.append(1)\n else:\n ground.append(0)\n # Read in the review and convert to 'utf-8' for transmission via HTTP\n review_input = review.read().encode('utf-8')\n # Send the review to the predictor and store the results\n results.append(float(predictor.predict(review_input)))\n \n # Sending reviews to our endpoint one at a time takes a while so we\n # only send a small number of reviews\n files_read += 1\n if files_read == stop:\n break\n \n return ground, results", "_____no_output_____" ], [ "ground, results = test_reviews()", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score\naccuracy_score(ground, results)", "_____no_output_____" ] ], [ [ "As an additional test, we can try sending the `test_review` that we looked at earlier.", "_____no_output_____" ] ], [ [ "predictor.predict(test_review)", "_____no_output_____" ] ], [ [ "Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. 
You can deploy it again when you come back.", "_____no_output_____" ], [ "## Step 7 (again): Use the model for the web app\n\n> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.\n\nSo far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.\n\n<img src=\"Web App Diagram.svg\">\n\nThe diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.\n\nIn the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and recieve data from a SageMaker endpoint.\n\nLastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.\n\n### Setting up a Lambda function\n\nThe first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.\n\n#### Part A: Create an IAM Role for the Lambda function\n\nSince we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.\n\nUsing the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.\n\nIn the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.\n\nLastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.\n\n#### Part B: Create a Lambda function\n\nNow it is time to actually create the Lambda function.\n\nUsing the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. 
Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.\n\nOn the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below. \n\n```python\n# We need to use the low-level library to interact with SageMaker since the SageMaker API\n# is not available natively through Lambda.\nimport boto3\n\ndef lambda_handler(event, context):\n\n # The SageMaker runtime is what allows us to invoke the endpoint that we've created.\n runtime = boto3.Session().client('sagemaker-runtime')\n\n # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given\n response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created\n ContentType = 'text/plain', # The data format that is expected\n Body = event['body']) # The actual review\n\n # The response is an HTTP response whose body contains the result of our inference\n result = response['Body'].read().decode('utf-8')\n\n return {\n 'statusCode' : 200,\n 'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },\n 'body' : result\n }\n```\n\nOnce you have copy and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.", "_____no_output_____" ] ], [ [ "predictor.endpoint", "_____no_output_____" ] ], [ [ "Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.\n\n### Setting up API Gateway\n\nNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.\n\nUsing AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.\n\nOn the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.\n\nNow we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.\n\nSelect the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.\n\nFor the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.\n\nType the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.\n\nThe last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. 
You will need to create a new Deployment stage and name it anything you like, for example `prod`.\n\nYou have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.", "_____no_output_____" ], [ "## Step 4: Deploying our web app\n\nNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.\n\nIn the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\\*\\*REPLACE WITH PUBLIC API URL\\*\\***. Replace this string with the url that you wrote down in the last step and then save the file.\n\nNow, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.\n\nIf you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!\n\n> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.\n\n**TODO:** Make sure that you include the edited `index.html` file in your project submission.", "_____no_output_____" ], [ "Now that your web app is working, trying playing around with it and see how well it works.\n\n**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?", "_____no_output_____" ], [ "**Answer:**", "_____no_output_____" ], [ "### Delete the endpoint\n\nRemember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.", "_____no_output_____" ] ], [ [ "predictor.delete_endpoint()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
e760b2fda79a841b0fb75419a1e70dd8a3e93aa1
1,598
ipynb
Jupyter Notebook
udacity_data_science_notes/statistics/lesson_03/lesson_03.ipynb
anshbansal/anshbansal.github.io
9ce6be13b81053e7640ef5b5952e7ed1ee6f2e4f
[ "MIT" ]
null
null
null
udacity_data_science_notes/statistics/lesson_03/lesson_03.ipynb
anshbansal/anshbansal.github.io
9ce6be13b81053e7640ef5b5952e7ed1ee6f2e4f
[ "MIT" ]
null
null
null
udacity_data_science_notes/statistics/lesson_03/lesson_03.ipynb
anshbansal/anshbansal.github.io
9ce6be13b81053e7640ef5b5952e7ed1ee6f2e4f
[ "MIT" ]
null
null
null
29.054545
202
0.572591
[ [ [ "# Lesson 03 - Central Tendency\n\nWe have distributions and we need some number to describe that data.\n- Mode\n    - For numbers it is a single number\n    - For distributions it is a range of numbers\n    - Some distributions don't have a mode, e.g. the uniform distribution\n    - Some distributions have multiple modes ![](bimodal.png)\n    - it is not affected by outliers\n- Median\n    - median is the [actual average of people's intuition](http://www.slate.com/articles/life/weddings/2013/06/average_wedding_cost_published_numbers_on_the_price_of_a_wedding_are_totally.html)\n    - median is more robust to outliers\n- Mean\n    - $\\bar{x} = (\\sum x) / n$\n    - $\\mu = (\\sum x) / N$\n    - All scores affect the mean\n    - samples from the same population tend to have similar means\n    - it is affected by outliers\n    \n![](measure_of_center.png)", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
e760bcf81244a01c3b69fd865fc01cc626e54b7d
207,574
ipynb
Jupyter Notebook
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
7daeec279b05ddc8e9db35af7396f1c4f6874a72
[ "Apache-2.0" ]
1
2021-04-24T20:08:18.000Z
2021-04-24T20:08:18.000Z
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
7daeec279b05ddc8e9db35af7396f1c4f6874a72
[ "Apache-2.0" ]
null
null
null
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
7daeec279b05ddc8e9db35af7396f1c4f6874a72
[ "Apache-2.0" ]
1
2021-04-24T20:08:15.000Z
2021-04-24T20:08:15.000Z
198.635407
65,834
0.876001
[ [ [ "# Probability and statistics\n\n\nIn some form or another, machine learning is all about making predictions. \nWe might want to predict the *probability* of a patient suffering a heart attack in the next year,\ngiven their clinical history.\nIn anomaly detection, we might want to assess how *likely* a set of readings from an airplane's jet engine would be,\nwere it operating normally. \nIn reinforcement learning, we want an agent to act intelligently in an environment. \nThis means we need to think about the probability of getting a high reward under each of the available action. \nAnd when we build recommender systems we also need to think about probability. \nFor example, if we *hypothetically* worked for a large online bookseller,\nwe might want to estimate the probability that a particular user would buy a particular book, if prompted. \nFor this we need to use the language of probability and statistics. \nEntire courses, majors, theses, careers, and even departments, are devoted to probability.\nSo our goal here isn't to teach the whole subject. \nInstead we hope to get you off the ground,\nto teach you just enough that you know everything necessary to start building your first machine learning models \nand to have enough of a flavor for the subject that you can begin to explore it on your own if you wish.\n\n\nWe've talked a lot about probabilities so far without articulating what precisely they are or giving a concrete example. Let's get more serious by considering the problem of distinguishing cats and dogs based on photographs. This might sound simpler but it's actually a formidable challenge. To start with, the difficulty of the problem may depend on the resolution of the image.\n\n| 20px | 40px | 80px | 160px | 320px |\n|:----:|:----:|:----:|:-----:|:-----:|\n|![](../img/whitecat20.jpg)|![](../img/whitecat40.jpg)|![](../img/whitecat80.jpg)|![](../img/whitecat160.jpg)|![](../img/whitecat320.jpg)|\n|![](../img/whitedog20.jpg)|![](../img/whitedog40.jpg)|![](../img/whitedog80.jpg)|![](../img/whitedog160.jpg)|![](../img/whitedog320.jpg)|\n\nWhile it's easy for humans to recognize cats and dogs at 320 pixel resolution, \nit becomes challenging at 40 pixels \nand next to impossible at 20 pixels. \nIn other words, our ability to tell cats and dogs apart at a large distance (and thus low resolution) \nmight approach uninformed guessing. \nProbability gives us a formal way of reasoning about our level of certainty.\nIf we are completely sure that the image depicts a cat, \nwe say that the *probability* that the corresponding label $l$ is $\\mathrm{cat}$, \ndenoted $P(l=\\mathrm{cat})$ equals 1.0.\nIf we had no evidence to suggest that $l =\\mathrm{cat}$ or that $l = \\mathrm{dog}$,\nthen we might say that the two possibilities were equally $likely$\nexpressing this as $P(l=\\mathrm{cat}) = 0.5$.\nIf we were reasonably confident, but not sure that the image depicted a cat,\nwe might assign a probability $.5 < P(l=\\mathrm{cat}) < 1.0$.\n\nNow consider a second case:\ngiven some weather monitoring data,\nwe want to predict the probability that it will rain in Taipei tomorrow.\nIf it's summertime, the rain might come with probability $.5$\nIn both cases, we have some value of interest.\nAnd in both cases we are uncertain about the outcome.\nBut There's a key difference between the two cases. \nIn this first case, the image is in fact either a dog or a cat, \nwe just don't know which. 
\nIn the second case, the outcome may actually be a random event,\nif you believe in such things (and most physicists do).\nSo probability is a flexible language for reasoning about our level of certainty,\nand it can be applied effectively in a broad set of contexts.", "_____no_output_____" ], [ "## Basic probability theory\n\nSay that we cast a die and want to know \nwhat the chance is of seeing a $1$ \nrather than another digit. \nIf the die is fair, all six outcomes $\\mathcal{X} = \\{1, \\ldots, 6\\}$ \nare equally likely to occur, \nhence we would see a $1$ in $1$ out of $6$ cases. \nFormally we state that $1$ occurs with probability $\\frac{1}{6}$. \n\nFor a real die that we receive from a factory,\nwe might not know those proportions \nand we would need to check whether it is tainted. \nThe only way to investigate the die is by casting it many times \nand recording the outcomes. \nFor each cast of the die, \nwe'll observe a value $\\{1, 2, \\ldots, 6\\}$. \nGiven these outcomes, we want to investigate the probability of observing each outcome.\n\nOne natural approach for each value is to take the individual count for that value\nand to divide it by the total number of tosses.\nThis gives us an *estimate* of the probability of a given event. \nThe law of large numbers tell us that as the number of tosses grows this estimate will draw closer and closer to the true underlying probability.\nBefore going into the details of what's going here, let's try it out.\n\nTo start, let's import the necessary packages:", "_____no_output_____" ] ], [ [ "import mxnet as mx\nfrom mxnet import nd", "_____no_output_____" ] ], [ [ "Next, we'll want to be able to cast the die.\nIn statistics we call this process of drawing examples from probability distributions *sampling*.\nThe distribution which assigns probabilities to a number of discrete choices is called \nthe *multinomial* distribution. \nWe'll give a more formal definition of *distribution* later,\nbut at a high level, think of it as just an assignment of probabilities to events.\nIn MXNet, we can sample from the multinomial distribution via the aptly named `nd.sample_multinomial` function.\nThe function can be called in many ways, but we'll focus on the simplest.\nTo draw a single sample, we simply give pass in a vector of probabilities.", "_____no_output_____" ] ], [ [ "probabilities = nd.ones(6) / 6\nnd.sample_multinomial(probabilities)", "_____no_output_____" ] ], [ [ "If you run this line (`nd.sample_multinomial(probabilities)`) a bunch of times, \nyou'll find that you get out random values each time. 
\nAs with estimating the fairness of a die, \nwe often want to generate many samples from the same distribution.\nIt would be really slow to do this with a Python `for` loop,\nso `sample_multinomial` supports drawing multiple samples at once,\nreturning an array of independent samples in any shape we might desire.", "_____no_output_____" ] ], [ [ "print(nd.sample_multinomial(probabilities, shape=(10)))\nprint(nd.sample_multinomial(probabilities, shape=(5,10)))", "\n[3 4 5 3 5 3 5 2 3 3]\n<NDArray 10 @cpu(0)>\n\n[[2 2 1 5 0 5 1 2 2 4]\n [4 3 2 3 2 5 5 0 2 0]\n [3 0 2 4 5 4 0 5 5 5]\n [2 4 4 2 3 4 4 0 4 3]\n [3 0 3 5 4 3 0 2 2 1]]\n<NDArray 5x10 @cpu(0)>\n" ] ], [ [ "Now that we know how to sample rolls of a die,\nwe can simulate 1000 rolls.", "_____no_output_____" ] ], [ [ "rolls = nd.sample_multinomial(probabilities, shape=(1000))", "_____no_output_____" ] ], [ [ "We can then go through and count, after each of the 1000 rolls,\nhow many times each number was rolled.", "_____no_output_____" ] ], [ [ "counts = nd.zeros((6,1000))\ntotals = nd.zeros(6)\nfor i, roll in enumerate(rolls):\n totals[int(roll.asscalar())] += 1\n counts[:, i] = totals", "_____no_output_____" ] ], [ [ "To start, we can inspect the final tally at the end of $1000$ rolls.", "_____no_output_____" ] ], [ [ "totals / 1000", "_____no_output_____" ] ], [ [ "As you can see, the lowest estimated probability for any of the numbers is about $.15$ \nand the highest estimated probability is $0.188$. \nBecause we generated the data from a fair die,\nwe know that each number actually has probability of $1/6$, roughly $.167$,\nso these estimates are pretty good. \nWe can also visualize how these probabilities converge over time\ntowards reasonable estimates.\n\nTo start let's take a look at the `counts`\narray which has shape `(6, 1000)`.\nFor each time step (out of 1000),\ncounts, says how many times each of the numbers has shown up.\nSo we can normalize each $j$-th column of the counts vector by the number of tosses\nto give the `current` estimated probabilities at that time.\nThe counts object looks like this:", "_____no_output_____" ] ], [ [ "counts", "_____no_output_____" ] ], [ [ "Normalizing by the number of tosses, we get:", "_____no_output_____" ] ], [ [ "x = nd.arange(1000).reshape((1,1000)) + 1\nestimates = counts / x\nprint(estimates[:,0])\nprint(estimates[:,1])\nprint(estimates[:,100])", "\n[ 0. 1. 0. 0. 0. 0.]\n<NDArray 6 @cpu(0)>\n\n[ 0. 0.5 0. 0. 0.5 0. ]\n<NDArray 6 @cpu(0)>\n\n[ 0.1980198 0.15841584 0.17821783 0.18811882 0.12871288 0.14851485]\n<NDArray 6 @cpu(0)>\n" ] ], [ [ "As you can see, after the first toss of the die, we get the extreme estimate that one of the numbers will be rolled with probability $1.0$ and that the others have probability $0$. After $100$ rolls, things already look a bit more reasonable.\nWe can visualize this convergence by using the plotting package `matplotlib`. 
\nIf you don't have it installed, now would be a good time to [install it](https://matplotlib.org/).\n", "_____no_output_____" ] ], [ [ "from matplotlib import pyplot as plt\nplt.plot(estimates[0, :].asnumpy(), label=\"Estimated P(die=1)\")\nplt.plot(estimates[1, :].asnumpy(), label=\"Estimated P(die=2)\")\nplt.plot(estimates[2, :].asnumpy(), label=\"Estimated P(die=3)\")\nplt.plot(estimates[3, :].asnumpy(), label=\"Estimated P(die=4)\")\nplt.plot(estimates[4, :].asnumpy(), label=\"Estimated P(die=5)\")\nplt.plot(estimates[5, :].asnumpy(), label=\"Estimated P(die=6)\")\nplt.axhline(y=0.16666, color='black', linestyle='dashed')\nplt.legend()\nplt.show()", "_____no_output_____" ] ], [ [ "Each solid curve corresponds to one of the six values of the die\nand gives our estimated probability that the die turns up that value \nas assessed after each of the 1000 turns. \nThe dashed black line gives the true underlying probability.\nAs we get more data, the solid curves converge towards the true answer.", "_____no_output_____" ], [ "<!-- What we can see is that the red curves pretty well capture the behavior of the 10 random traces of averages. This is the case since we are averaging numbers and their aggregate behavior is like that of a number with a lot less uncertainty. Looking at the red curves, they are given by $f(x) = \\pm 1/\\sqrt{x}$. (The reader might cry foul by noting that we just added Gaussian random variables which, quite obviously, lead to yet another Gaussian random variable. That said, the curves for sums of other random variables, such as $\\{0, 1\\}$ valued objects look identical in the limit.) -->\n\n\nIn our example of casting a die, we introduced the notion of a **random variable**. \nA random variable, which we denote here as $X$ can be pretty much any quantity is not determistic. \nRandom variables could take one value among a set of possibilites. \nWe denote sets with brackets, e.g., $\\{\\mathrm{cat}, \\mathrm{dog}, \\mathrm{rabbit}\\}$.\nThe items contained in the set are called *elements*,\nand we can say that an element $x$ is *in* the set S, by writing $x \\in S$.\nThe symbol $\\in$ is read as \"in\" and denotes membership. \nFor instance, we could truthfully say $\\mathrm{dog} \\in \\{\\mathrm{cat}, \\mathrm{dog}, \\mathrm{rabbit}\\}$.\nWhen dealing with the rolls of die, we are concerned with a variable $X \\in \\{1, 2, 3, 4, 5, 6\\}$. \n\nNote that there is a subtle difference between discrete random variables, like the sides of a dice, \nand continuous ones, like the weight and the height of a person. \nThere's little point in asking whether two people have exactly the same height. \nIf we take precise enough measurements you'll find that no two people on the planet have the exact same height. \nIn fact, if we take a fine enough measurement, \nyou will not have the same height when you wake up and when you go to sleep.\nSo there's no purpose in asking about the probability \nthat some one is $2.00139278291028719210196740527486202$ meters tall. \nThe probability is 0.\nIt makes more sense in this case to ask whether someone's height falls into a given interval, \nsay between 1.99 and 2.01 meters. \nIn these cases we quantify the likelihood that we see a value as a density. \nThe height of exactly 2.0 meters has no probability, but nonzero density. \nBetween any two different heights we have nonzero probability.\n\n\nThere are a few important axioms of probability that you'll want to remember:\n\n* For any event $z$, the probability is never negative, i.e. 
$\\Pr(Z=z) \\geq 0$.\n* For any two events $Z=z$ and $X=x$ the union is no more likely than the sum of the individual events, i.e. $\\Pr(Z=z \\cup X=x) \\leq \\Pr(Z=z) + \\Pr(X=x)$.\n* For any random variable, the probabilities of all the values it can take must sum to 1 $\\sum_{i=1}^n P(Z=z_i) = 1$.\n* For any two mutually exclusive events $Z=z$ and $X=x$, the probability that either happens is equal to the sum of their individual probabilities that $\\Pr(Z=z \\cup X=x) = \\Pr(Z=z) + \\Pr(X=z)$.", "_____no_output_____" ], [ "## Dealing with multiple random variables\n\nVery often, we'll want consider more than one random variable at a time. \nFor instance, we may want to model the relationship between diseases and symptoms.\nGiven a disease and symptom, say 'flu' and 'cough', \neither may or may not occur in a patient with some probability.\nWhile we hope that the probability of both would be close to zero,\nwe may want to estimate these probabilities and their relationships to each other\nso that we may apply our inferences to effect better medical care.\n\nAs a more complicated example, images contain millions of pixels, thus millions of random variables. \nAnd in many cases images will come with a label, identifying objects in the image.\nWe can also think of the label as a random variable.\nWe can even get crazy and think of all the metadata as random variables\nsuch as location, time, aperture, focal length, ISO, focus distance, camera type, etc.\nAll of these are random variables that occur jointly. \nWhen we deal with multiple random variables, \nthere are several quantities of interest.\nThe first is called the joint distribution $\\Pr(A, B)$. \nGiven any elements $a$ and $b$,\nthe joint distribution lets us answer,\nwhat is the probability that $A=a$ and $B=b$ simulataneously?\nIt might be clear that for any values $a$ and $b$, $\\Pr(A,B) \\leq \\Pr(A=a)$. \n\nThis has to be the case, since for $A$ and $B$ to happen, \n$A$ has to happen *and* $B$ also has to happen (and vice versa). \nThus $A,B$ cannot be more likely than $A$ or $B$ individually. \nThis brings us to an interesting ratio: $0 \\leq \\frac{\\Pr(A,B)}{\\Pr(A)} \\leq 1$. \nWe call this a **conditional probability** and denote it by $\\Pr(B|A)$, \nthe probability that $B$ happens, provided that $A$ has happened. \n\nUsing the definition of conditional probabilities, \nwe can derive one of the most useful and celebrated equations in statistics - Bayes' theorem. \nIt goes as follows: By construction, we have that $\\Pr(A, B) = \\Pr(B|A) \\Pr(A)$. \nBy symmetry, this also holds for $\\Pr(A,B) = \\Pr(A|B) \\Pr(B)$. \nSolving for one of the conditional variables we get:\n$$\\Pr(A|B) = \\frac{\\Pr(B|A) \\Pr(A)}{\\Pr(B)}$$\n\nThis is very useful if we want to infer one thing from another, \nsay cause and effect but we only know the properties in the reverse direction. \nOne important operation that we need to make this work is **marginalization**, i.e., \nthe operation of determining $\\Pr(A)$ and $\\Pr(B)$ from $\\Pr(A,B)$.\nWe can see that the probability of seeing $A$ amounts to accounting \nfor all possible choices of $B$ and aggregating the joint probabilities over all of them, i.e. \n\n$$\\Pr(A) = \\sum_{B'} \\Pr(A,B') \\text{ and } \\Pr(B) = \\sum_{A'} \\Pr(A',B)$$\n\nA really useful property to check is for **dependence** and **independence**. \nIndependence is when the occurrence of one event does not influence the occurrence of the other.\nIn this case $\\Pr(B|A) = \\Pr(B)$. 
Statisticians typically use $A \\perp\\!\\!\\!\\perp B$ to express this. \nFrom Bayes Theorem it follows immediately that also $\\Pr(A|B) = \\Pr(A)$. \nIn all other cases we call $A$ and $B$ dependent. \nFor instance, two successive rolls of a dice are independent. \nOn the other hand, the position of a light switch and the brightness in the room are not \n(they are not perfectly deterministic, though, \nsince we could always have a broken lightbulb, power failure, or a broken switch). \n\nLet's put our skills to the test. \nAssume that a doctor administers an AIDS test to a patient. \nThis test is fairly accurate and fails only with 1% probability \nif the patient is healthy by reporting him as diseased, \nand that it never fails to detect HIV if the patient actually has it. \nWe use $D$ to indicate the diagnosis and $H$ to denote the HIV status.\nWritten as a table the outcome $\\Pr(D|H)$ looks as follows:\n\n| | Patient is HIV positive | Patient is HIV negative |\n|:------------|------------------------:|------------------------:|\n|Test positive| 1 | 0.01 |\n|Test negative| 0 | 0.99 |\n\nNote that the column sums are all one (but the row sums aren't), \nsince the conditional probability needs to sum up to $1$, just like the probability. \nLet us work out the probability of the patient having AIDS if the test comes back positive. \nObviously this is going to depend on how common the disease is, since it affects the number of false alarms.\nAssume that the population is quite healthy, e.g. $\\Pr(\\text{HIV positive}) = 0.0015$. \nTo apply Bayes Theorem we need to determine \n\n$$\\Pr(\\text{Test positive}) = \\Pr(D=1|H=0) \\Pr(H=0) + \\Pr(D=1|H=1) \\Pr(H=1) = 0.01 \\cdot 0.9985 + 1 \\cdot 0.0015 = 0.011485$$\n\nHence we get $\\Pr(H = 1|D = 1) = \\frac{\\Pr(D=1|H=1) \\Pr(H=1)}{\\Pr(D=1)} = \\frac{1 \\cdot 0.0015}{0.011485} = 0.131$, in other words, there's only a 13.1% chance that the patient actually has AIDS, despite using a test that is 99% accurate! As we can see, statistics can be quite counterintuitive. ", "_____no_output_____" ], [ "## Conditional independence\n\nWhat should a patient do upon receiving such terrifying news? \nLikely, he/she would ask the physician to administer another test to get clarity. \nThe second test has different characteristics (it isn't as good as the first one). \n\n| | Patient is HIV positive | Patient is HIV negative |\n|:------------|------------------------:|------------------------:|\n|Test positive| 0.98 | 0.03 |\n|Test negative| 0.02 | 0.97 |\n\nUnfortunately, the second test comes back positive, too. \nLet us work out the requisite probabilities to invoke Bayes' Theorem. \n\n* $\\Pr(D_1 = 1 \\text{ and } D_2 = 1|H = 0) = 0.01 \\cdot 0.03 = 0.0001$\n* $\\Pr(D_1 = 1 \\text{ and } D_2 = 1|H = 1) = 1 \\cdot 0.98 = 0.98$\n* $\\Pr(D_1 = 1 \\text{ and } D_2 = 1) = 0.0001 \\cdot 0.9985 + 0.98 \\cdot 0.0015 = 0.00156985$\n* $\\Pr(H = 1|D_1 = 1 \\text{ and } D_2 = 1) = \\frac{0.98 \\cdot 0.0015}{0.00156985} = 0.936$\n\nThat is, the second test allowed us to gain much higher confidence that not all is well. \nDespite the second test being considerably less accurate than the first one, \nit still improved our estimate quite a bit. \n*Why couldn't we just run the first test a second time?* \nAfter all, the first test was more accurate. \nThe reason is that we needed a second test that confirmed *independently* of the first test that things were dire, indeed. In other words, we made the tacit assumption that $\\Pr(D_1, D_2|H) = \\Pr(D_1|H) \\Pr(D_2|H)$. 
Statisticians call such random variables **conditionally independent**. This is expressed as $D_1 \\perp\\!\\!\\!\\perp D_2 | H$. \n\n## Naive Bayes classification\n\nConditional independence is useful when dealing with data, since it simplifies a lot of equations. \nA popular algorithm is the Naive Bayes Classifier. \nThe key assumption in it is that the attributes are all independent of each other, given the labels. \nIn other words, we have:\n\n$$p(x|y) = \\prod_i p(x_i|y)$$\n\nUsing Bayes Theorem this leads to the classifier $p(y|x) = \\frac{\\prod_i p(x_i|y) p(y)}{p(x)}$. Unfortunately, this is still intractable, since we don't know $p(x)$. Fortunately, we don't need it, since we know that $\\sum_y p(y|x) = 1$, hence we can always recover the normalization from $p(y|x) \\propto \\prod_i p(x_i|y) p(y)$. After all that math, it's time for some code to show how to use a Naive Bayes classifier for distinguishing digits on the MNIST classification dataset. \n\nThe problem is that we don't actually know $p(y)$ and $p(x_i|y)$. So we need to *estimate* it given some training data first. This is what is called *training* the model. In the case of 10 possible classes we simply compute $n_y$, i.e. the number of occurrences of class $y$ and then divide it by the total number of occurrences. E.g. if we have a total of 60,000 pictures of digits and digit 4 occurs 5800 times, we estimate its probability as $\\frac{5800}{60000}$. Likewise, to get an idea of $p(x_i|y)$ we count how many times pixel $i$ is set for digit $y$ and then divide it by the number of occurrences of digit $y$. This is the probability that that very pixel will be switched on.", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# we go over one observation at a time (speed doesn't matter here)\ndef transform(data, label):\n return (nd.floor(data/128)).astype(np.float32), label.astype(np.float32)\nmnist_train = mx.gluon.data.vision.MNIST(train=True, transform=transform)\nmnist_test = mx.gluon.data.vision.MNIST(train=False, transform=transform)\n\n# Initialize the count statistics for p(y) and p(x_i|y)\n# We initialize all numbers with a count of 1 to ensure that we don't get a\n# division by zero. Statisticians call this Laplace smoothing.\nycount = nd.ones(shape=(10))\nxcount = nd.ones(shape=(784, 10))\n\n# Aggregate count statistics of how frequently a pixel is on (or off) for\n# zeros and ones.\nfor data, label in mnist_train:\n x = data.reshape((784,))\n y = int(label)\n ycount[y] += 1\n xcount[:, y] += x\n\n# normalize the probabilities p(x_i|y) (divide per pixel counts by total\n# count)\nfor i in range(10):\n xcount[:, i] = xcount[:, i]/ycount[i]\n\n# likewise, compute the probability p(y)\npy = ycount / nd.sum(ycount)", "_____no_output_____" ] ], [ [ "Now that we computed per-pixel counts of occurrence for all pixels, it's time to see how our model behaves. Time to plot it. We show the estimated probabilities of observing a switched-on pixel. These are some mean looking digits.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nfig, figarr = plt.subplots(1, 10, figsize=(15, 15))\nfor i in range(10):\n figarr[i].imshow(xcount[:, i].reshape((28, 28)).asnumpy(), cmap='hot')\n figarr[i].axes.get_xaxis().set_visible(False)\n figarr[i].axes.get_yaxis().set_visible(False)\n\nplt.show()\nprint(py)", "_____no_output_____" ] ], [ [ "Now we can compute the likelihoods of an image, given the model. This is statistican speak for $p(x|y)$, i.e. 
how likely it is to see a particular image under certain conditions (such as the label). Since this is computationally awkward (we might have to multiply many small numbers if many pixels have a small probability of occurring), we are better off computing its logarithm instead. That is, instead of $p(x|y) = \\prod_{i} p(x_i|y)$ we compute $\\log p(x|y) = \\sum_i \\log p(x_i|y)$. \n\n$$l_y := \\sum_i \\log p(x_i|y) = \\sum_i x_i \\log p(x_i = 1|y) + (1-x_i) \\log \\left(1-p(x_i=1|y)\\right)$$\n\nTo avoid recomputing logarithms all the time, we precompute them for all pixels. ", "_____no_output_____" ] ], [ [ "logxcount = nd.log(xcount)\nlogxcountneg = nd.log(1-xcount)\nlogpy = nd.log(py)\n\nfig, figarr = plt.subplots(2, 10, figsize=(15, 3))\n\n# show 10 images\nctr = 0\nfor data, label in mnist_test:\n x = data.reshape((784,))\n y = int(label)\n \n # we need to incorporate the prior probability p(y) since p(y|x) is\n # proportional to p(x|y) p(y)\n logpx = logpy.copy()\n for i in range(10):\n # compute the log probability for a digit\n logpx[i] += nd.dot(logxcount[:, i], x) + nd.dot(logxcountneg[:, i], 1-x)\n # normalize to prevent overflow or underflow by subtracting the largest\n # value\n logpx -= nd.max(logpx)\n # and compute the softmax using logpx\n px = nd.exp(logpx).asnumpy()\n px /= np.sum(px)\n\n # bar chart and image of digit\n figarr[1, ctr].bar(range(10), px)\n figarr[1, ctr].axes.get_yaxis().set_visible(False)\n figarr[0, ctr].imshow(x.reshape((28, 28)).asnumpy(), cmap='hot')\n figarr[0, ctr].axes.get_xaxis().set_visible(False)\n figarr[0, ctr].axes.get_yaxis().set_visible(False)\n ctr += 1\n if ctr == 10:\n break\n\nplt.show()", "_____no_output_____" ] ], [ [ "As we can see, this classifier is both incompetent and overly confident of its incorrect estimates. That is, even if it is horribly wrong, it generates probabilities close to 1 or 0. Not a classifier we should use very much nowadays any longer. While Naive Bayes classifiers used to be popular in the 80s and 90s, e.g. for spam filtering, their heydays are over. The poor performance is due to the incorrect statistical assumptions that we made in our model: we assumed that each and every pixel are *independently* generated, depending only on the label. This is clearly not how humans write digits, and this wrong assumption led to the downfall of our overly naive (Bayes) classifier.", "_____no_output_____" ], [ "## Sampling\n\nRandom numbers are just one form of random variables, and since computers are particularly good with numbers, pretty much everything else in code ultimately gets converted to numbers anyway. One of the basic tools needed to generate random numbers is to sample from a distribution. Let's start with what happens when we use a random number generator. ", "_____no_output_____" ] ], [ [ "import random\nfor i in range(10):\n print(random.random())", "0.970844720223\n0.11442244666\n0.476145849846\n0.154138063676\n0.925771401913\n0.347466944833\n0.288795056587\n0.855051122608\n0.32666729925\n0.932922304219\n" ] ], [ [ "### Uniform Distribution\n\nThese are some pretty random numbers. As we can see, their range is between 0 and 1, and they are evenly distributed. That is, there is (actually, should be, since this is not a *real* random number generator) no interval in which numbers are more likely than in any other. In other words, the chances of any of these numbers to fall into the interval, say $[0.2,0.3)$ are as high as in the interval $[.593264, .693264)$. 
The way they are generated internally is to produce a random integer first, and then divide it by its maximum range. If we want to have integers directly, try the following instead. It generates random numbers between 0 and 100.", "_____no_output_____" ] ], [ [ "for i in range(10):\n print(random.randint(1, 100))", "75\n23\n34\n85\n99\n66\n13\n42\n19\n14\n" ] ], [ [ "What if we wanted to check that ``randint`` is actually really uniform. Intuitively the best strategy would be to run it, say 1 million times, count how many times it generates each one of the values and to ensure that the result is uniform. ", "_____no_output_____" ] ], [ [ "import math\n\ncounts = np.zeros(100)\nfig, axes = plt.subplots(2, 3, figsize=(15, 8), sharex=True)\naxes = axes.reshape(6)\n# mangle subplots such that we can index them in a linear fashion rather than\n# a 2d grid\n\nfor i in range(1, 1000001):\n counts[random.randint(0, 99)] += 1\n if i in [10, 100, 1000, 10000, 100000, 1000000]:\n axes[int(math.log10(i))-1].bar(np.arange(1, 101), counts)\nplt.show()", "_____no_output_____" ] ], [ [ "What we can see from the above figures is that the initial number of counts looks *very* uneven. If we sample fewer than 100 draws from a distribution over 100 outcomes this is pretty much expected. But even for 1000 samples there is a significant variability between the draws. What we are really aiming for is a situation where the probability of drawing a number $x$ is given by $p(x)$. \n\n### The categorical distribution\n\nQuite obviously, drawing from a uniform distribution over a set of 100 outcomes is quite simple. But what if we have nonuniform probabilities? Let's start with a simple case, a biased coin which comes up heads with probability 0.35 and tails with probability 0.65. A simple way to sample from that is to generate a uniform random variable over $[0,1]$ and if the number is less than $0.35$, we output heads and otherwise we generate tails. Let's try this out.", "_____no_output_____" ] ], [ [ "# number of samples\nn = 1000000\ny = np.random.uniform(0, 1, n)\nx = np.arange(1, n+1)\n# count number of occurrences and divide by the number of total draws\np0 = np.cumsum(y < 0.35) / x\np1 = np.cumsum(y >= 0.35) / x\n\nplt.figure(figsize=(15, 8))\nplt.semilogx(x, p0)\nplt.semilogx(x, p1)\nplt.show()", "_____no_output_____" ] ], [ [ "As we can see, on average this sampler will generate 35% zeros and 65% ones. Now what if we have more than two possible outcomes? We can simply generalize this idea as follows. Given any probability distribution, e.g. \n$p = [0.1, 0.2, 0.05, 0.3, 0.25, 0.1]$ we can compute its cumulative distribution (python's ``cumsum`` will do this for you) $F = [0.1, 0.3, 0.35, 0.65, 0.9, 1]$. Once we have this we draw a random variable $x$ from the uniform distribution $U[0,1]$ and then find the interval where $F[i-1] \\leq x < F[i]$. We then return $i$ as the sample. By construction, the chances of hitting interval $[F[i-1], F[i])$ has probability $p(i)$. \n\nNote that there are many more efficient algorithms for sampling than the one above. For instance, binary search over $F$ will run in $O(\\log n)$ time for $n$ random variables. There are even more clever algorithms, such as the [Alias Method](https://en.wikipedia.org/wiki/Alias_method) to sample in constant time, after $O(n)$ preprocessing.", "_____no_output_____" ], [ "### The Normal distribution\n\nThe Normal distribution (aka the Gaussian distribution) is given by $p(x) = \\frac{1}{\\sqrt{2 \\pi}} \\exp\\left(-\\frac{1}{2} x^2\\right)$. 
Let's plot it to get a feel for it.", "_____no_output_____" ] ], [ [ "x = np.arange(-10, 10, 0.01)\np = (1/math.sqrt(2 * math.pi)) * np.exp(-0.5 * x**2)\nplt.figure(figsize=(10, 5))\nplt.plot(x, p)\nplt.show()", "_____no_output_____" ] ], [ [ "Sampling from this distribution is a lot less trivial. First off, the support is infinite, that is, for any $x$ the density $p(x)$ is positive. Secondly, the density is nonuniform. There are many tricks for sampling from it - the key idea in all algorithms is to stratify $p(x)$ in such a way as to map it to the uniform distribution $U[0,1]$. One way to do this is with the probability integral transform. \n\nDenote by $F(x) = \\int_{-\\infty}^x p(z) dz$ the cumulative distribution function (CDF) of $p$. This is in a way the continuous version of the cumulative sum that we used previously. In the same way we can now define the inverse map $F^{-1}(\\xi)$, where $\\xi$ is drawn uniformly. Unlike previously where we needed to find the correct interval for the vector $F$ (i.e. for the piecewise constant function), we now invert the function $F(x)$. \n\nIn practice, this is slightly more tricky since inverting the CDF is hard in the case of a Gaussian. It turns out that the *twodimensional* integral is much easier to deal with, thus yielding two normal random variables than one, albeit at the price of two uniformly distributed ones. For now, suffice it to say that there are built-in algorithms to address this. \n\nThe normal distribution has yet another desirable property. In a way all distributions converge to it, if we only average over a sufficiently large number of draws from any other distribution. To understand this in a bit more detail, we need to introduce three important things: expected values, means and variances. \n\n* The expected value $\\mathbb{E}_{x \\sim p(x)}[f(x)]$ of a function $f$ under a distribution $p$ is given by the integral $\\int_x p(x) f(x) dx$. That is, we average over all possible outcomes, as given by $p$. \n* A particularly important expected value is that for the function $f(x) = x$, i.e. $\\mu := \\mathbb{E}_{x \\sim p(x)}[x]$. It provides us with some idea about the typical values of $x$.\n* Another important quantity is the variance, i.e. the typical deviation from the mean \n$\\sigma^2 := \\mathbb{E}_{x \\sim p(x)}[(x-\\mu)^2]$. Simple math shows (check it as an exercise) that\n$\\sigma^2 = \\mathbb{E}_{x \\sim p(x)}[x^2] - \\mathbb{E}^2_{x \\sim p(x)}[x]$.\n\nThe above allows us to change both mean and variance of random variables. Quite obviously for some random variable $x$ with mean $\\mu$, the random variable $x + c$ has mean $\\mu + c$. Moreover, $\\gamma x$ has the variance $\\gamma^2 \\sigma^2$. Applying this to the normal distribution we see that one with mean $\\mu$ and variance $\\sigma^2$ has the form $p(x) = \\frac{1}{\\sqrt{2 \\sigma^2 \\pi}} \\exp\\left(-\\frac{1}{2 \\sigma^2} (x-\\mu)^2\\right)$. Note the scaling factor $\\frac{1}{\\sigma}$ - it arises from the fact that if we stretch the distribution by $\\sigma$, we need to lower it by $\\frac{1}{\\sigma}$ to retain the same probability mass (i.e. the weight under the distribution always needs to integrate out to 1). \n\nNow we are ready to state one of the most fundamental theorems in statistics, the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem). It states that for sufficiently well-behaved random variables, in particular random variables with well-defined mean and variance, the sum tends toward a normal distribution. 
To get some idea, let's repeat the experiment described in the beginning, but now using random variables with integer values of $\\{0, 1, 2\\}$. ", "_____no_output_____" ] ], [ [ "# generate 10 random sequences of 10,000 random normal variables N(0,1)\ntmp = np.random.uniform(size=(10000,10))\nx = 1.0 * (tmp > 0.3) + 1.0 * (tmp > 0.8)\nmean = 1 * 0.5 + 2 * 0.2\nvariance = 1 * 0.5 + 4 * 0.2 - mean**2\nprint('mean {}, variance {}'.format(mean, variance))\n# cumulative sum and normalization\ny = np.arange(1,10001).reshape(10000,1)\nz = np.cumsum(x,axis=0) / y\n\nplt.figure(figsize=(10,5))\nfor i in range(10):\n plt.semilogx(y,z[:,i])\n\nplt.semilogx(y,(variance**0.5) * np.power(y,-0.5) + mean,'r')\nplt.semilogx(y,-(variance**0.5) * np.power(y,-0.5) + mean,'r')\nplt.show() ", "mean 0.9, variance 0.49\n" ] ], [ [ "This looks very similar to the initial example, at least in the limit of averages of large numbers of variables. This is confirmed by theory. Denote by mean and variance of a random variable the quantities \n\n$$\\mu[p] := \\mathbf{E}_{x \\sim p(x)}[x] \\text{ and } \\sigma^2[p] := \\mathbf{E}_{x \\sim p(x)}[(x - \\mu[p])^2]$$\n\nThen we have that $\\lim_{n\\to \\infty} \\frac{1}{\\sqrt{n}} \\sum_{i=1}^n \\frac{x_i - \\mu}{\\sigma} \\to \\mathcal{N}(0, 1)$. In other words, regardless of what we started out with, we will always converge to a Gaussian. This is one of the reasons why Gaussians are so popular in statistics.\n\n\n### More distributions\n\nMany more useful distributions exist. We recommend consulting a statistics book or looking some of them up on Wikipedia for further detail. \n\n* **Binomial Distribution** It is used to describe the distribution over multiple draws from the same distribution, e.g. the number of heads when tossing a biased coin (i.e. a coin with probability $\\pi$ of returning heads) 10 times. The probability is given by $p(x) = {n \\choose x} \\pi^x (1-\\pi)^{n-x}$. \n* **Multinomial Distribution** Obviously we can have more than two outcomes, e.g. when rolling a dice multiple times. In this case the distribution is given by $p(x) = \\frac{n!}{\\prod_{i=1}^k x_i!} \\prod_{i=1}^k \\pi_i^{x_i}$.\n* **Poisson Distribution** It is used to model the occurrence of point events that happen with a given rate, e.g. the number of raindrops arriving within a given amount of time in an area (weird fact - the number of Prussian soldiers being killed by horses kicking them followed that distribution). Given a rate $\\lambda$, the number of occurrences is given by $p(x) = \\frac{1}{x!} \\lambda^x e^{-\\lambda}$.\n* **Beta, Dirichlet, Gamma, and Wishart Distributions** They are what statisticians call *conjugate* to the Binomial, Multinomial, Poisson and Gaussian respectively. Without going into detail, these distributions are often used as priors for coefficients of the latter set of distributions, e.g. a Beta distribution as a prior for modeling the probability for binomial outcomes. ", "_____no_output_____" ], [ "## Next\n[Autograd](../chapter01_crashcourse/autograd.ipynb)", "_____no_output_____" ], [ "For whinges or inquiries, [open an issue on GitHub.](https://github.com/zackchase/mxnet-the-straight-dope)", "_____no_output_____" ] ] ]
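The two-test Bayes calculation in the probability notebook above is easy to verify numerically. The sketch below is not part of the original record; it only plugs in the prior and test characteristics quoted in the text. One detail worth flagging: P(D1=1, D2=1 | H=0) is 0.01 · 0.03 = 0.0003 rather than the 0.0001 used in the text, so the exact two-test posterior comes out nearer 0.83 than 0.936.

```python
# Prior and test characteristics as quoted in the record above
p_h1 = 0.0015                                  # P(patient is HIV positive)
p_h0 = 1.0 - p_h1
p_d1_h1, p_d1_h0 = 1.00, 0.01                  # first test:  P(D1=1 | H)
p_d2_h1, p_d2_h0 = 0.98, 0.03                  # second test: P(D2=1 | H)

# Posterior after one positive test: Bayes' theorem with the evidence obtained by marginalization
posterior_1 = p_d1_h1 * p_h1 / (p_d1_h1 * p_h1 + p_d1_h0 * p_h0)
print(round(posterior_1, 3))                   # 0.131, matching the 13.1% in the text

# Posterior after two positive tests, assuming D1 and D2 are conditionally independent given H
joint_h1 = p_d1_h1 * p_d2_h1 * p_h1
joint_h0 = p_d1_h0 * p_d2_h0 * p_h0
posterior_2 = joint_h1 / (joint_h1 + joint_h0)
print(round(posterior_2, 3))                   # about 0.831; the text's 0.936 uses 0.0001 where 0.01*0.03 = 0.0003
```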
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e760ca199c5ff3e087b819cd0f77814c534079f0
5,467
ipynb
Jupyter Notebook
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
0253af9add4a1b4fb7de2aacfc874e9d191e9a46
[ "Apache-2.0" ]
8
2020-09-02T03:59:02.000Z
2022-01-08T23:36:19.000Z
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
0253af9add4a1b4fb7de2aacfc874e9d191e9a46
[ "Apache-2.0" ]
null
null
null
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
0253af9add4a1b4fb7de2aacfc874e9d191e9a46
[ "Apache-2.0" ]
3
2020-11-18T12:13:05.000Z
2021-02-24T19:31:50.000Z
20.94636
251
0.483629
[ [ [ "___\n\n<a href='https://www.udemy.com/user/joseportilla/'><img src='../Pierian_Data_Logo.png'/></a>\n___\n<center><em>Content Copyright by Pierian Data</em></center>", "_____no_output_____" ], [ "# Statements Assessment Solutions", "_____no_output_____" ], [ "_____\n**Use <code>for</code>, .split(), and <code>if</code> to create a Statement that will print out words that start with 's':**", "_____no_output_____" ] ], [ [ "st = 'Print only the words that start with s in this sentence'", "_____no_output_____" ], [ "for word in st.split():\n if word[0] == 's':\n print(word)", "start\ns\nsentence\n" ] ], [ [ "______\n**Use range() to print all the even numbers from 0 to 10.**", "_____no_output_____" ] ], [ [ "list(range(0,11,2))", "_____no_output_____" ] ], [ [ "___\n**Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.**", "_____no_output_____" ] ], [ [ "[x for x in range(1,51) if x%3 == 0]", "_____no_output_____" ] ], [ [ "_____\n**Go through the string below and if the length of a word is even print \"even!\"**", "_____no_output_____" ] ], [ [ "st = 'Print every word in this sentence that has an even number of letters'", "_____no_output_____" ], [ "for word in st.split():\n if len(word)%2 == 0:\n print(word+\" <-- has an even length!\")", "word <-- has an even length!\nin <-- has an even length!\nthis <-- has an even length!\nsentence <-- has an even length!\nthat <-- has an even length!\nan <-- has an even length!\neven <-- has an even length!\nnumber <-- has an even length!\nof <-- has an even length!\n" ] ], [ [ "____\n**Write a program that prints the integers from 1 to 100. But for multiples of three print \"Fizz\" instead of the number, and for the multiples of five print \"Buzz\". For numbers which are multiples of both three and five print \"FizzBuzz\".**", "_____no_output_____" ] ], [ [ "for num in range(1,101):\n if num % 3 == 0 and num % 5 == 0:\n print(\"FizzBuzz\")\n elif num % 3 == 0:\n print(\"Fizz\")\n elif num % 5 == 0:\n print(\"Buzz\")\n else:\n print(num)", "_____no_output_____" ] ], [ [ "____\n**Use a List Comprehension to create a list of the first letters of every word in the string below:**", "_____no_output_____" ] ], [ [ "st = 'Create a list of the first letters of every word in this string'", "_____no_output_____" ], [ "[word[0] for word in st.split()]", "_____no_output_____" ] ], [ [ "### Great Job!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e760cc750334191c8b517df9a29344a192c3d877
4,101
ipynb
Jupyter Notebook
AutoEncoders.ipynb
ameya03dot/Model-Templates
63718ccb41f97e443f207b0b7afc46d0508a5b2e
[ "MIT" ]
3
2021-04-07T16:02:49.000Z
2021-05-17T13:20:15.000Z
AutoEncoders.ipynb
ameya03dot/Model-Templates
63718ccb41f97e443f207b0b7afc46d0508a5b2e
[ "MIT" ]
null
null
null
AutoEncoders.ipynb
ameya03dot/Model-Templates
63718ccb41f97e443f207b0b7afc46d0508a5b2e
[ "MIT" ]
1
2021-04-16T20:52:45.000Z
2021-04-16T20:52:45.000Z
25.955696
114
0.514021
[ [ [ "import numpy as np\nimport pandas as pd\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nimport torch.optim as optim\nimport torch.utils.data\nfrom torch.autograd import Variable\n\nmovies = pd.read_csv('mn/movies.dat',sep = '::', header =None, engine = 'python', encoding = 'latin-1')\n\nmovies\n\nusers = pd.read_csv('mn/users.dat',sep = '::', header =None, engine = 'python', encoding = 'latin-1')\n\nusers\n\nratings = pd.read_csv('mn/ratings.dat',sep = '::', header =None, engine = 'python', encoding = 'latin-1')\n\nratings\n\n#Preparing the Training set and Test set\n\ntraining_set = pd.read_csv('boltz/u1.base',delimiter ='\\t')\n\ntraining_set\n\ntraining_set = np.array(training_set, dtype = 'int')\n\n\ntest_set = pd.read_csv('boltz/u1.test',delimiter ='\\t')\ntest_set = np.array(training_set, dtype = 'int')\n\n#Getting the number of users and movies\n\nnb_users = int(max(max(training_set[:,0]),max(test_set[:,0]))) \nnb_movies =int(max(max(training_set[:,1]),max(test_set[:,1])))\n\nnb_users\n\nnb_movies\n\n#Convert data into array with users in lines and movies in columns\n\ndef convert(data):\n new_data = []\n for id_users in range(1,nb_users+1):\n id_movies = data[:, 1][data[:,0]==id_users]\n id_ratings = data[:, 2][data[:,0]==id_users]\n ratings = np.zeros(nb_movies)\n ratings[id_movies-1] = id_ratings\n new_data.append(list(ratings))\n return new_data\ntraining_set = convert(training_set)\ntest_set = convert(test_set)\n \n\ntraining_set\n\n#Converting into torch\n\ntraining_set = torch.FloatTensor(training_set)\ntest_set = torch.FloatTensor(test_set)\n\n#Creating the architecture of the Neural Network\n\nclass SAE(nn.Module):\n def __init(self, ):\n super(SAE, self).__init__()\n self.fc1 = nn.Linear(nb_movies, 20)\n self.fc2 = nn.Linear(20, 10)\n self.fc3 = nn.Linear(10,20)\n self.fc4 = nn.Linear(20,nb_movies)\n self.activation = nn.Sigmoid()\n def forward(self, x):\n x = self.activation(self.fc1(x))\n x = self.activation(self.fc2(x))\n x = self.activation(self.fc3(x))\n x = self.fc4(x)\n return x\nsae = SAE()\n\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
e760cc8e44910e136c2e5aedba76dec19e3c50f5
37,034
ipynb
Jupyter Notebook
DP/Gamblers Problem Solution.ipynb
itsmeashutosh43/reinforcement-learning
c670592bad2ee200c8701df6342041086fda2bc7
[ "MIT" ]
null
null
null
DP/Gamblers Problem Solution.ipynb
itsmeashutosh43/reinforcement-learning
c670592bad2ee200c8701df6342041086fda2bc7
[ "MIT" ]
null
null
null
DP/Gamblers Problem Solution.ipynb
itsmeashutosh43/reinforcement-learning
c670592bad2ee200c8701df6342041086fda2bc7
[ "MIT" ]
1
2021-07-05T08:21:19.000Z
2021-07-05T08:21:19.000Z
127.703448
17,196
0.842712
[ [ [ "### This is Example 4.3. Gambler’s Problem from Sutton's book.\n\nA gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. \nIf the coin comes up heads, he wins as many dollars as he has staked on that flip; \nif it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, \nor loses by running out of money. \n\nOn each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. \nThis problem can be formulated as an undiscounted, episodic, finite MDP. \n\nThe state is the gambler’s capital, s ∈ {1, 2, . . . , 99}.\nThe actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. \nThe reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1.\n\nThe state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.\n", "_____no_output_____" ] ], [ [ "import numpy as np\nimport sys\nimport matplotlib.pyplot as plt\nif \"../\" not in sys.path:\n sys.path.append(\"../\") ", "_____no_output_____" ] ], [ [ "\n### Exercise 4.9 (programming)\n\nImplement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.", "_____no_output_____" ] ], [ [ "def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):\n \"\"\"\n Args:\n p_h: Probability of the coin coming up heads\n \"\"\"\n # The reward is zero on all transitions except those on which the gambler reaches his goal,\n # when it is +1.\n rewards = np.zeros(101)\n rewards[100] = 1 \n \n # We introduce two dummy states corresponding to termination with capital of 0 and 100\n V = np.zeros(101)\n \n def one_step_lookahead(s, V, rewards):\n \"\"\"\n Helper function to calculate the value for all action in a given state.\n \n Args:\n s: The gambler’s capital. Integer.\n V: The vector that contains values at each state. \n rewards: The reward vector.\n \n Returns:\n A vector containing the expected value of each action. \n Its length equals to the number of actions.\n \"\"\"\n A = np.zeros(101)\n stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).\n for a in stakes:\n # rewards[s+a], rewards[s-a] are immediate rewards.\n # V[s+a], V[s-a] are values of the next states.\n # This is the core of the Bellman equation: The expected value of your action is \n # the sum of immediate rewards and the value of the next state.\n A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)\n return A\n \n while True:\n # Stopping condition\n delta = 0\n # Update each state...\n for s in range(1, 100):\n # Do a one-step lookahead to find the best action\n A = one_step_lookahead(s, V, rewards)\n # print(s,A,V) # if you want to debug.\n best_action_value = np.max(A)\n # Calculate delta across all states seen so far\n delta = max(delta, np.abs(best_action_value - V[s]))\n # Update the value function. Ref: Sutton book eq. 4.10. 
\n V[s] = best_action_value \n # Check if we can stop \n if delta < theta:\n break\n \n # Create a deterministic policy using the optimal value function\n policy = np.zeros(100)\n for s in range(1, 100):\n # One step lookahead to find the best action for this state\n A = one_step_lookahead(s, V, rewards)\n best_action = np.argmax(A)\n # Always take the best action\n policy[s] = best_action\n \n return policy, V", "_____no_output_____" ], [ "policy, v = value_iteration_for_gamblers(0.25)\n\nprint(\"Optimized Policy:\")\nprint(policy)\nprint(\"\")\n\nprint(\"Optimized Value Function:\")\nprint(v)\nprint(\"\")", "Optimized Policy:\n[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.\n 18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.\n 11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.\n 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.\n 22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.\n 10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]\n\nOptimized Value Function:\n[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04\n 1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03\n 4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03\n 1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02\n 1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02\n 2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02\n 4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02\n 6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02\n 7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02\n 8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01\n 1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01\n 1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01\n 1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01\n 2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01\n 2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01\n 2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01\n 2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01\n 3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01\n 3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01\n 4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01\n 4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01\n 4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01\n 5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01\n 6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01\n 7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01\n 0.00000000e+00]\n\n" ] ], [ [ "### Show your results graphically, as in Figure 4.3.\n", "_____no_output_____" ] ], [ [ "# Plotting Final Policy (action stake) vs State (Capital)\n\n# x axis values\nx = range(100)\n# corresponding y axis values\ny = v[:100]\n \n# plotting the points \nplt.plot(x, y)\n \n# naming the x axis\nplt.xlabel('Capital')\n# naming the y axis\nplt.ylabel('Value Estimates')\n \n# giving a title to the graph\nplt.title('Final Policy (action stake) vs State (Capital)')\n \n# function to show the plot\nplt.show()", "_____no_output_____" ], [ "# Plotting Capital vs Final Policy\n\n# x axis values\nx = range(100)\n# corresponding y axis values\ny = policy\n \n# plotting the bars\nplt.bar(x, y, align='center', alpha=0.5)\n \n# naming the x axis\nplt.xlabel('Capital')\n# naming the y axis\nplt.ylabel('Final policy (stake)')\n \n# giving a title to the graph\nplt.title('Capital vs Final Policy')\n \n# function to show the plot\nplt.show()\n", 
"_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e760d145cbf99e078895aadd128770e4a67313b2
23,789
ipynb
Jupyter Notebook
Amazon Nike Shoes reviews data extraction to csv.ipynb
jasp9559/Web-scraping-the-Amazon-data-using-Python-into-CSV
d7a241db61aa671478bbf3287bd187444b6ca385
[ "Apache-2.0" ]
null
null
null
Amazon Nike Shoes reviews data extraction to csv.ipynb
jasp9559/Web-scraping-the-Amazon-data-using-Python-into-CSV
d7a241db61aa671478bbf3287bd187444b6ca385
[ "Apache-2.0" ]
null
null
null
Amazon Nike Shoes reviews data extraction to csv.ipynb
jasp9559/Web-scraping-the-Amazon-data-using-Python-into-CSV
d7a241db61aa671478bbf3287bd187444b6ca385
[ "Apache-2.0" ]
null
null
null
28.940389
929
0.538989
[ [ [ "Amazon web scraper", "_____no_output_____" ] ], [ [ "import csv\nfrom bs4 import BeautifulSoup", "_____no_output_____" ], [ "# firefox and Chrome\nfrom selenium import webdriver", "_____no_output_____" ] ], [ [ " Startup the webdriver", "_____no_output_____" ] ], [ [ "pip install webdriver-manager", "Requirement already satisfied: webdriver-manager in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (3.5.2)\nRequirement already satisfied: requests in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (from webdriver-manager) (2.27.1)\nRequirement already satisfied: configparser in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (from webdriver-manager) (5.2.0)\nRequirement already satisfied: crayons in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (from webdriver-manager) (0.4.0)Note: you may need to restart the kernel to use updated packages.\n\nRequirement already satisfied: colorama in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (from crayons->webdriver-manager) (0.4.4)\nRequirement already satisfied: certifi>=2017.4.17 in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (from requests->webdriver-manager) (2021.10.8)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (from requests->webdriver-manager) (1.26.7)\nRequirement already satisfied: charset-normalizer~=2.0.0 in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (from requests->webdriver-manager) (2.0.4)\nRequirement already satisfied: idna<4,>=2.5 in c:\\users\\bikem\\anaconda3\\envs\\selenium_3.8\\lib\\site-packages (from requests->webdriver-manager) (3.3)\n" ], [ "## we activate the webdriver for Chrome as we are using Google Chrome\nfrom webdriver_manager.chrome import ChromeDriverManager\n\ndriver = webdriver.Chrome(ChromeDriverManager().install())", "\n\n====== WebDriver manager ======\nCurrent google-chrome version is 97.0.4692\nGet LATEST chromedriver version for 97.0.4692 google-chrome\nDriver [C:\\Users\\bikem\\.wdm\\drivers\\chromedriver\\win32\\97.0.4692.71\\chromedriver.exe] found in cache\nC:\\Users\\bikem\\AppData\\Local\\Temp/ipykernel_16024/3985819463.py:4: DeprecationWarning: executable_path has been deprecated, please pass in a Service object\n driver = webdriver.Chrome(ChromeDriverManager().install())\n" ], [ "# Using webdriver we'll now open the Amazon website in chrome\nurl = 'https://www.amazon.in'\n\n# We'll use the get method of driver and pass in the URL\ndriver.get(url)", "_____no_output_____" ], [ "def get_url(search_term):\n '''\n This function fetches the URL of the item that you want to search\n '''\n template = 'https://www.amazon.in/s?k={}&crid=UOAX8JTJZ8XJ&ref=nb_sb_noss_1'\n # We'are replacing every space with '+' to adhere with the pattern \n search_term = search_term.replace(\" \",\"+\")\n return template.format(search_term)", "_____no_output_____" ], [ "# Checking whether the function is working properly or not\nurl = get_url('nike shoes')\nprint(url)", "https://www.amazon.in/s?k=nike+shoes&crid=UOAX8JTJZ8XJ&ref=nb_sb_noss_1\n" ], [ "driver.get(url)", "_____no_output_____" ] ], [ [ "**Extract the collection**", "_____no_output_____" ] ], [ [ "#taking the page source and trying to extract from html\nsoup = BeautifulSoup(driver.page_source, 'html.parser')", "_____no_output_____" ], [ "# assigning the specific identity of the component we need to extract from the website\n# in 
this case we need to extract the whole component that we search in the site\n# say the mobile phone in this case and the whole component containing the name, price, etc needs to be assigned\nresults = soup.find_all('div', {'data-component-type': 's-search-result'})", "_____no_output_____" ], [ "len(results)", "_____no_output_____" ], [ "# prototype the results\nitem = results[0]", "_____no_output_____" ], [ "item.find('div', 'a-section a-spacing-medium a-text-center').text", "_____no_output_____" ], [ "''' while we try and extract the first most obvious thing to select would be the name of the product\n in order to select that we see that the name component was under the h2 tag and under a, hence \n we extract that and assign to atag'''\natag = item.h2.a", "_____no_output_____" ], [ "atag.text.strip()", "_____no_output_____" ], [ "# we select that text from the atag and strip to select only the name text component\ndescription = atag.text.strip()", "_____no_output_____" ], [ "# we need the exact url point of this component to be extracted properly\natag.get('href')", "_____no_output_____" ], [ "# additionally we need this to be prefixed with the https amazon tag\nurl = 'https://www.amazon.in' + atag.get('href')", "_____no_output_____" ], [ "# now we need to extract the price of the product\nitem.find('span', 'a-price-whole').text", "_____no_output_____" ], [ "# now we need to extract the price of the product\nprice = item.find('span', 'a-price-whole').text", "_____no_output_____" ], [ "item.i.text", "_____no_output_____" ], [ "rating = item.i.text", "_____no_output_____" ], [ "# to extract the number of reviews given to the product\nitem.find('span', 'a-size-base').text", "_____no_output_____" ], [ "rating_count = item.find('span', 'a-size-base').text", "_____no_output_____" ] ], [ [ "## Generalize the pattern now", "_____no_output_____" ] ], [ [ "def extract_records(item):\n '''Extract and return data from a single record'''\n #description and url\n atag = item.h2.a\n description = atag.text.strip()\n url = \"https://www.amazon.in\" + atag.get(\"href\")\n \n # price\n price = item.find('span', 'a-price-whole').text\n \n # rank and rating\n rating = item.i.text\n \n item.find('span', 'a-size-base').text\n rating_count = item.find('span', 'a-size-base').text\n \n result = (description, price, rating, rating_count, url)\n \n return result", "_____no_output_____" ], [ "# now we try to apply the above to the url and try to extract\nrecords = []\nresults = soup.find_all('div', {'data-component-type': 's-search-result'})\n\nfor item in results:\n records.append(extract_records(item))", "_____no_output_____" ] ], [ [ "**We encounter the attribute error, this is basically because there are numerous results in the website page that need not match all the description as given for each of the products. Not all products might be having the same descriptions. Hence the attribute error occuring. 
We need to give exception for this error in the code**", "_____no_output_____" ], [ "## Error Handling", "_____no_output_____" ] ], [ [ "def extract_records(item):\n '''Extract and return data from a single record'''\n #description and url\n atag = item.h2.a\n description = atag.text.strip()\n url = \"https://www.amazon.in\" + atag.get(\"href\")\n \n '''Basically we put the exception for price here which most definitely \n should not be there return'''\n try:\n # price\n price = item.find('span', 'a-price-whole').text\n except AttributeError:\n price = ''\n \n \n ''' For the ratings although there might be missign details and \n we rather add the exception properly'''\n try:\n # rank and rating\n rating = item.i.text\n \n item.find('span', 'a-size-base').text\n rating_count = item.find('span', 'a-size-base').text\n except AttributeError:\n rating = ''\n rating_count = ''\n \n result = (description, price, rating, rating_count, url)\n \n return result", "_____no_output_____" ], [ "# additionally we need to check whether the results give any empty records\nrecords = []\nresults = soup.find_all('div', {'data-component-type': 's-search-result'})\n\nfor item in results:\n records.append(extract_records(item))", "_____no_output_____" ], [ "records[0]", "_____no_output_____" ], [ "for row in records:\n print(row[1])", "3,240\n\n3,763\n5,596\n1,560\n4,103\n5,598\n1,729\n2,580\n2,412\n5,072\n3,652\n3,614\n5,493\n6,293\n4,224\n1,399\n1,599\n1,599\n1,399\n3,101\n\n2,506\n3,294\n4,273\n2,464\n2,654\n3,694\n2,397\n3,287\n5,995\n2,747\n5,852\n3,596\n1,896\n2,700\n2,346\n2,995\n6,942\n2,694\n2,018\n3,795\n3,477\n2,036\n3,357\n2,408\n8,588\n3,326\n5,396\n5,260\n8,109\n3,179\n13,019\n6,999\n449\n1,399\n1,399\n1,499\n9,899\n1,599\n" ], [ "# next step is to get the next page in order to get most reviews\n# we need to take the component for the next button\ndef get_url(search_term):\n '''\n This function fetches the URL of the item that you want to search\n '''\n template = 'https://www.amazon.in/s?k={}&crid=UOAX8JTJZ8XJ&ref=nb_sb_noss_1'\n # We'are replacing every space with '+' to adhere with the pattern \n search_term = search_term.replace(\" \",\"+\")\n \n # add term query to url\n url = template.format(search_term)\n \n # add page query placeholder\n url += '&page{}'\n \n return url", "_____no_output_____" ], [ "## Putting all the pieces of code together\nimport csv\nfrom bs4 import BeautifulSoup\nfrom selenium import webdriver\nfrom webdriver_manager.chrome import ChromeDriverManager\n\ndef get_url(search_term):\n '''\n This function fetches the URL of the item that you want to search\n '''\n template = 'https://www.amazon.in/s?k={}&crid=UOAX8JTJZ8XJ&ref=nb_sb_noss_1'\n # We'are replacing every space with '+' to adhere with the pattern \n search_term = search_term.replace(\" \",\"+\")\n \n # add term query to url\n url = template.format(search_term)\n \n # add page query placeholder\n url += '&page{}'\n \n return url\n\ndef extract_records(item):\n '''Extract and return data from a single record'''\n #description and url\n atag = item.h2.a\n description = atag.text.strip()\n url = \"https://www.amazon.in\" + atag.get(\"href\")\n \n '''Basically we put the exception for price here which most definitely \n should not be there return'''\n try:\n # price\n price = item.find('span', 'a-price-whole').text\n except AttributeError:\n price = ''\n \n ''' For the ratings although there might be missign details and \n we rather add the exception properly'''\n try:\n # rank and rating\n rating = item.i.text\n \n 
item.find('span', 'a-size-base').text\n rating_count = item.find('span', 'a-size-base').text\n except AttributeError:\n rating = ''\n rating_count = ''\n \n result = (description, price, rating, rating_count, url)\n \n return result\n\ndef main(search_term):\n \"\"\"Run the main program routine\"\"\"\n driver = webdriver.Chrome(ChromeDriverManager().install())\n\n record = []\n url = get_url(search_term)\n \n for page in range(1, 6):\n driver.get(url.format(page))\n soup = BeautifulSoup(driver.page_source, 'html.parser')\n results = soup.find_all('div', {'data-component-type': 's-search-result'})\n \n for item in results:\n record = extract_records(item)\n if record:\n records.append(record)\n \n driver.close()\n \n #save the data to csv file\n with open('results.csv', 'w', newline='', encoding = 'utf-8') as f:\n writer = csv.writer(f)\n writer.writerow({'Description', 'Price_in_INR', 'Reviews', 'Review_count', 'Url'})\n writer.writerows(records)", "_____no_output_____" ], [ "main('nike shoes')", "\n\n====== WebDriver manager ======\nCurrent google-chrome version is 97.0.4692\nGet LATEST chromedriver version for 97.0.4692 google-chrome\nDriver [C:\\Users\\bikem\\.wdm\\drivers\\chromedriver\\win32\\97.0.4692.71\\chromedriver.exe] found in cache\nC:\\Users\\bikem\\AppData\\Local\\Temp/ipykernel_16024/2398463386.py:56: DeprecationWarning: executable_path has been deprecated, please pass in a Service object\n driver = webdriver.Chrome(ChromeDriverManager().install())\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e760d580a6d6679baf707a42e9ae619f055cd90f
529,946
ipynb
Jupyter Notebook
ADS-Spring2019/TestGradesInGraduateAdmission.ipynb
peperaj/XBUS-507-01.Applied_Data_Science
0928b9aa000bfaabef13a3a59c0c80c68307013b
[ "MIT" ]
9
2019-06-21T21:38:46.000Z
2022-01-11T18:16:22.000Z
ADS-Spring2019/TestGradesInGraduateAdmission.ipynb
peperaj/XBUS-507-01.Applied_Data_Science
0928b9aa000bfaabef13a3a59c0c80c68307013b
[ "MIT" ]
3
2019-08-24T18:56:27.000Z
2020-01-04T19:26:32.000Z
ADS-Spring2019/TestGradesInGraduateAdmission.ipynb
peperaj/XBUS-507-01.Applied_Data_Science
0928b9aa000bfaabef13a3a59c0c80c68307013b
[ "MIT" ]
13
2018-09-28T21:39:41.000Z
2022-01-11T18:16:25.000Z
41.318104
120,216
0.615791
[ [ [ "# Test the hypothesis whether GRE/TOEFL grades play pivotal role in graduate admissions", "_____no_output_____" ] ], [ [ "# for some basic operations\nimport numpy as np\nimport pandas as pd\n\n# for data visualizations\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nplt.style.use('fivethirtyeight')\n\n# for advanced visualizations\nimport plotly.offline as py\nfrom plotly.offline import init_notebook_mode, iplot\nimport plotly.graph_objs as go\nfrom plotly import tools\ninit_notebook_mode(connected = True)\nimport plotly.figure_factory as ff\n\n# for providing path\nimport os\nprint(os.listdir('/Users/sertan/Documents/DataScience_GT/TheProject/Bladder-Cancer-Detection/'))", "_____no_output_____" ], [ "# reading the data\n\ndata = pd.read_csv('/Users/sertan/Documents/DataScience_GT/TheProject/Bladder-Cancer-Detection/Admission_Predict_Ver1.1.csv')\n#data2 = pd.read_csv('/Users/sertan/Documents/DataScience_GT/TheProject/Bladder-Cancer-Detection/input/Admission_Predict.csv')\n\n# getting the shapes of the datasets\nprint(\"Shape of data1: \", data.shape)\n#print(\"Shape of data2 :\", data2.shape)\n\n# combining both the datasets as they have same columns\n\n#data = pd.concat([data, data2])\n\n# getting the shape of new dataset\ndata.head()", "Shape of data1: (500, 9)\n" ], [ "# checking if the data contains any NULL values\n\ndata.isnull().any().any()", "_____no_output_____" ], [ "data.describe()", "_____no_output_____" ], [ "# looking at the variations of LOR among the students\n\nplt.rcParams['figure.figsize'] = (18, 9)\nplt.style.use('dark_background')\n\nsns.countplot(data['LOR '], palette = 'PuBu')\nplt.title('Variations in Letter of Recommendations', fontsize = 30)\nplt.xlabel('LOR Score')\nplt.ylabel('count')\nplt.show()", "_____no_output_____" ], [ "# making a pie chart for the analysis of students rather they did research or not.\n\ndata_re = data['Research'].value_counts()\n\nlabel_re = data_re.index\nsize_re = data_re.values\n\ncolors = ['aqua', 'gold']\n\ntrace = go.Pie(\n labels = label_re, values = size_re, marker = dict(colors = colors), name = 'Research', hole = 0.3)\n\ndf = [trace]\n\nlayout1 = go.Layout(\n title = 'Research work done or not')\nfig = go.Figure(data = df, layout = layout1)\npy.iplot(fig)", "_____no_output_____" ], [ "# making a donut chart for the analysis of students with different university ratings\n\ndata_ur = data['University Rating'].value_counts()\n\nlabel_re = data_ur.index\nsize_re = data_ur.values\n\n\ntrace = go.Pie(\n labels = label_re,\n values = size_re,\n marker = dict(colors = ['gold' 'lightgreen', 'orange', 'yellow', 'pink']),\n name = 'University Ratings',\n hole = 0.2)\n\ndf2 = [trace]\n\nlayout1 = go.Layout(\n title = 'University Ratings of the Students')\nfig = go.Figure(data = df2, layout = layout1)\npy.iplot(fig)", "_____no_output_____" ], [ "trace = go.Box(\n x = data['University Rating'],\n y = data['GRE Score'],\n name = 'University Rating vs GRE Score',\n marker = dict(\n color = 'rgb(145, 165, 5)')\n)\n \n\ndf= [trace]\n\nlayout = go.Layout(\n boxmode = 'group',\n title = 'University Ratings vs GRE Score',\n \n)\n\nfig = go.Figure(data = df, layout = layout)\npy.iplot(fig)", "_____no_output_____" ], [ "plt.rcParams['figure.figsize'] = (18, 9)\nplt.style.use('ggplot')\n\nsns.boxenplot(data['University Rating'], data['TOEFL Score'], palette = 'RdPu')\nplt.title('University Ratings vs TOEFL Score', fontsize = 20)\nplt.show()", "_____no_output_____" ], [ "plt.rcParams['figure.figsize'] = (18, 
9)\nplt.style.use('ggplot')\n\nsns.swarmplot(data['University Rating'], data['CGPA'], palette = 'twilight')\nplt.title('University Ratings vs CGPA', fontsize = 20)\nplt.show()", "_____no_output_____" ], [ "plt.rcParams['figure.figsize'] = (18, 9)\nplt.style.use('ggplot')\n\nsns.violinplot(data['University Rating'], data['Chance of Admit '], palette = 'rainbow')\nplt.title('University Ratings vs Chance of Admission', fontsize = 20)\nplt.show()", "_____no_output_____" ], [ "# prepare data\n\ndata2 = data.loc[:,[\"GRE Score\", \"TOEFL Score\", \"Chance of Admit \"]]\ndata2[\"index\"] = np.arange(1,len(data2)+1)\n\n# scatter matrix\nfig = ff.create_scatterplotmatrix(data2, diag='box', index='index',colormap='Portland',\n colormap_type='cat',\n height=700, width=700)\niplot(fig)", "_____no_output_____" ] ], [ [ "# Based on the correlation plots above, we concluded that GRE/TOEFL grades do play a pivotal role in graduate admissions", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e760d626f2a5e758ce41b1d3b46828de9538ce84
11,756
ipynb
Jupyter Notebook
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
cbf0faf548dfc35d799898178bf7e8c3461e5776
[ "MIT" ]
null
null
null
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
cbf0faf548dfc35d799898178bf7e8c3461e5776
[ "MIT" ]
null
null
null
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
cbf0faf548dfc35d799898178bf7e8c3461e5776
[ "MIT" ]
null
null
null
21.030411
87
0.462657
[ [ [ "<img src='./img/intel-logo.jpg' width=30%, Fig1> \n\n# 파이썬 기초강의\n<font size=5><b>01. 연 습 문 제<b></font>\n\n<div align='right'>이 인 구 (Ike Lee)</div>\n", "_____no_output_____" ], [ "## Example : 1\n\n### 직육면체의 부피를 구해보자\n<img src='./img/volume.png' width=50%, Fig2>", "_____no_output_____" ] ], [ [ "# 변수 설정 \nlength = 5\nheight = 5\nwidth = 20", "_____no_output_____" ], [ "volume = length*width*height\nprint('직육면체의 부피 : %d'%volume)", "_____no_output_____" ], [ "length = 10 # 다시 할당하면 됨\nvolume = length*width*height\nprint('직육면체의 부피 : %d'%volume)", "_____no_output_____" ] ], [ [ "## Example : 2\n### for 를 사용해서 암컷 개를 찾으세요. and를 사용해서 일치하는것을 찾으세요 ", "_____no_output_____" ] ], [ [ "suspects = [['낙타', '포유류','암컷'], ['상어','어류','숫컷'], ['푸들','개','암컷']]\nfor suspect in suspects:\n if suspect[1] == '개' and suspect[2] =='암컷':\n print('범인은', suspect[0], '입니다')", "_____no_output_____" ], [ "volume = length*width*height\nprint('직육면체의 부피 : %d'%volume)", "_____no_output_____" ], [ "length = 10 # 다시 할당하면 됨\nvolume = length*width*height\nprint('직육면체의 부피 : %d'%volume)", "_____no_output_____" ] ], [ [ "## Example : 3\n### 연이율구하기 \n```\n 2017년 7월 2일 연이율 3% 계좌를 생성하여 3000000원을 입금한 경우\n 2018년 7월 2일 계좌 총액을 계산하여 출력하는 프로그램을 작성하십시오.\n 프로그램에서 입금액을 위한 변수, 연이율을 위한 변수를 만들어 사용하십시오.\n\n위의 프로그램을 입금액과 연이율을 입력받아 총액을 출력하도록 변경하십시오.\n언어 : python3\n입력 설명 :\n\n다음은 입금액과 연이율의 입력예입니다.\n===============================\n입금액(원), 연이율(%):: 4000, 3\n출력 설명 :\n\n다음과 같이 1년 후 총액을 출력합니다.\n===============================\n4120.0\n\n샘플 입력 : 4000, 3\n샘플 출력 : 4120.0\n``` ", "_____no_output_____" ] ], [ [ "money, ratio = eval(input('입금액(원), 연이율(%)::'))\nprint(money*(1+(1/100)*ratio))", "_____no_output_____" ] ], [ [ "## Example : 4\n### 삼각형 넓이를 구하시오\n```\n\n삼각형의 세변의 길이가 3,4,5인 경우 삼각형 넓이는 다음과 같이 계산합니다.\n x = (3 + 4 + 5)/2\n 넒이는 x(x-3)(x-4)(x-5) 의 양의 제곱근\n언어 : python3\n입력 설명 :\n\n다음과 같이 삼각형의 세변의 길이를 입력합니다.\n======================\n삼각형 세변의 길이(comma로 구분): 3,4,5\n출력 설명 :\n\n다음과 같이 삼각형의 넓이를 출력합니다.\n======================\n6.0\n\n샘플 입력 : 3,4,5\n샘플 출력 : 6.0\n```", "_____no_output_____" ] ], [ [ "a,b,c = eval(input())\nx = (a+b+c)/2\narea = (x*(x-a)*(x-b)*(x-c))**(0.5)\nprint(area)", "_____no_output_____" ] ], [ [ "<img src='./img/C1.jpg'> \n", "_____no_output_____" ], [ "## Example : 5\n### for 를 사용해서 암컷 개를 찾으세요. and를 사용해서 일치하는것을 찾으세요 ", "_____no_output_____" ] ], [ [ "suspects = [['낙타', '포유류','암컷'], ['상어','어류','숫컷'], ['푸들','개','암컷']]\nfor suspect in suspects:\n if suspect[1] == '개' and suspect[2] =='암컷':\n print('범인은', suspect[0], '입니다')", "_____no_output_____" ] ], [ [ "## Example : 6\n### 중복되지 않는 카드 두 장을 뽑도록 빈칸을 채우세요. ", "_____no_output_____" ] ], [ [ "import random\ncities = ['서울','부산','울산','인천' ]\nprint(random.sample(cities, 2))\n", "_____no_output_____" ] ], [ [ "## Example : 7\n### 다음중 하나를 무작위로 뽑아주세요! \nannimals = 얼룩말, 황소, 개구리, 참새.\n", "_____no_output_____" ] ], [ [ "#리스트[]\nimport random\nannimals = ['얼룩말','황소', '개구리', '참새']\nprint(random.choice(annimals))\n", "_____no_output_____" ] ], [ [ "## Example : 8\n### def 를 이용해서 서로에게 인사하는 문구를 만들어 보세요! \n가브리엘 님 안녕하세요? \\\n엘리스 님 안녕하세요? ", "_____no_output_____" ] ], [ [ "def welcome(name):\n print(name,'님 안녕하세요?')\n \nwelcome('가브리엘')\nwelcome('엘리스')\n", "_____no_output_____" ] ], [ [ "## Example : 9\n### 점수에 따라 학점을 출력 해주세요. \n철수의 점수는 75점 입니다. 몇 학점 인지 표시해 주세요. 
\nA학점은 80< score <=100 \nB학점은 60< score <=80 \nC학점은 40< score <=60 ", "_____no_output_____" ] ], [ [ "score =75\nif 80< score <=100:\n print('학점은 A 입니다')\nif 60< score <=80:\n print('학점은 B 입니다')\nif 40< score <=60:\n print('학점은 C 입니다')\n", "_____no_output_____" ] ], [ [ "## Example : 10\n### 변수를 사용해서 매출액을 계산해 주세요. \n주문서1 - 커피2잔, 홍차4잔, 레몬티5잔 \n주문서2 - 커피1잔, 홍차1잔, 레몬티5잔 \n주문서3 - 커피2잔, 홍차3잔, 레몬티1잔 ", "_____no_output_____" ] ], [ [ "coffee =4000\ntea = 3000\nlemon =200\norder1 = (coffee*2 + tea*4 + lemon*5)\norder2 = (coffee*1 + tea*1 + lemon*5)\norder3 = (coffee*2 + tea*3 + lemon*1)\nprint(order1+order2+order3)\n", "_____no_output_____" ] ], [ [ "## Example : 11\n### 5바퀴를 도는 레이싱 경주를 하고 있습니다. while 코드를 이용해서 트랙의 수를 카운트하고 5바퀴를 돌면 종료 멧세지를 주세요. \n반복할 때마다 몇 번째 바퀴인지 출력하세요. \\\n5바퀴를 돌면 종료 멧세지와 함께 종료해 주세요. \n", "_____no_output_____" ] ], [ [ "count = 0 \nwhile count <5:\n count =count +1\n print(count, \"번째 바퀴입니다.\")\nprint('경주가 종료되었습니다!')", "_____no_output_____" ] ], [ [ "## Example : 12\n### 정답을 맟춰보세요. \n미국이 수도는 어디인기요? \\\n보기에서 찾아서 답하게 하세요. \n 런던,오타와, 파리, 뉴욕\n틀린 답을 말하면 어느 나라의 수도인지 말해주세요. \n", "_____no_output_____" ] ], [ [ "while True:\n answer = input('런던,오타와,파리,뉴욕 중 미국이 수도는 어디일까요?')\n if answer == '뉴욕':\n print('정답입니다. 뉴욕은 미국의 수도 입니다')\n break\n elif answer == '오타와':\n print('오타와는 캐나다의 수도 입니다')\n elif answer == '파리':\n print('파리는 프랑스의 수도 입니다')\n elif answer == '런던':\n print('런던은 영국의 수도 입니다')\n else:\n print('보기에서 골라주세요')\n ", "_____no_output_____" ] ], [ [ "## Example : 13\n### 물건을 교환 해주세요 \n철수는 마트에서 형광등을 샀습니다. 그런데 LED 전구가 전기 효율이 좋아 형광등을 LED 전구로 교환 하고자 합니다. \n형광등 3개를 LED 3개로 바꾸어 주세요. \n형광등, 형광등, 형광등 ==> LED 전구, LED 전구, LED전구", "_____no_output_____" ] ], [ [ "전구 = ['형광등', '형광등', '형광등']\nfor i in range(3):\n 전구[i] = 'LED 전구'\nprint(전구)\n ", "_____no_output_____" ] ], [ [ "## Example : 14\n### 반복하기 \n동물원 원숭이 10 마리에게 인사하기. \nfor을 사용해서 10마리에게 한번에 인사하기 코드를 적어주세요.\n", "_____no_output_____" ] ], [ [ "for num in range(10):\n print ('안녕 원숭이', num) ", "_____no_output_____" ], [ "my_str ='My name is %s' % 'Lion'\nprint(my_str)", "_____no_output_____" ], [ "'%d %d' % (1,2)", "_____no_output_____" ], [ "'%f %f' % (1,2)", "_____no_output_____" ] ], [ [ "### print Options", "_____no_output_____" ] ], [ [ "print('집단지성', end='/')", "_____no_output_____" ], [ "print('집단지성', end='통합하자')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e760da6c235010056d347addb3225cea547dff78
17,584
ipynb
Jupyter Notebook
old_WIMpy/Wfunctions/.ipynb_checkpoints/Wfunctions_test-checkpoint.ipynb
LBJ-Wade/-WIMpy_NREFT
4b9901bc74fe0bb2044fe8c95369e55cfc3ef0ca
[ "MIT" ]
8
2017-07-26T03:21:55.000Z
2019-10-11T04:32:36.000Z
old_WIMpy/Wfunctions/.ipynb_checkpoints/Wfunctions_test-checkpoint.ipynb
LBJ-Wade/-WIMpy_NREFT
4b9901bc74fe0bb2044fe8c95369e55cfc3ef0ca
[ "MIT" ]
2
2020-10-25T05:26:06.000Z
2021-09-30T14:54:05.000Z
old_WIMpy/Wfunctions/.ipynb_checkpoints/Wfunctions_test-checkpoint.ipynb
LBJ-Wade/-WIMpy_NREFT
4b9901bc74fe0bb2044fe8c95369e55cfc3ef0ca
[ "MIT" ]
5
2018-05-21T01:01:04.000Z
2022-01-26T20:29:28.000Z
75.793103
12,532
0.821087
[ [ [ "import numpy as np\nfrom matplotlib import pylab as pl", "_____no_output_____" ], [ "import WD, WM, WMP2\nimport WP1\nimport WP2\nimport WS1\nimport WS2\nimport WS1D", "_____no_output_____" ], [ "print WD.__doc__", "This module 'WD' is auto-generated with f2py (version:2).\nFunctions:\n wd = calcwd(i,j,y,target)\n.\n" ], [ "print WD.calcwd(0.1,0.1,1.0,'Xe131')\nprint WM.calcwm(0.1,0.1,1.0,'Xe131')\nprint WMP2.calcwmp2(0.1,0.1,1.0,'Xe131')\nprint WP1.calcwp1(0.1,0.1,1.0,'Xe131')\nprint WP2.calcwp2(0.1,0.1,1.0,'Xe131')\nprint WS1.calcws1(0.1,0.1,1.0,'Xe131')\nprint WS2.calcws2(0.1,0.1,1.0,'Xe131')\nprint WS1D.calcws1d(0.1,0.1,1.0,'Xe131')", "0.00101607560646\n0.915474534035\n0.174012035131\n9.44748279608e-10\n0.0331197045743\n0.000788004428614\n0.00235111522488\n-0.000667707354296\n" ], [ "yvals = np.linspace(0, 2,100)\ny = 0.1\ntau1 = np.array([0,1])\ntau2 = np.array([0,1])\n\nX = np.vectorize(WD.calcwd)(0.1,0.1,yvals,'Xe131')\nWMthingy = np.vectorize(WM.calcwm)(tau1[0], tau2[0], y, 'Xe131')\nprint WMthingy", "736.724121094\n" ], [ "cp=1\ncn=1\nc_0 = 0.5*(cp + cn)\nprint cp", "1\n" ], [ "def calcWM(E, y, target=\"Xe131\", cp=1, cn=1):\n c = np.array([0.5*(cp + cn), 0.5*(cp - cn)])\n tau1 = np.array([0.,1.])\n tau2 = np.array([0.,1.])\n WMvals = []\n for i in range(0,2):\n for j in range(0,2):\n print tau1[i], tau2[j], c[i], c[j]\n WMvals.append(c[i]*c[j]*WM.calcwm(tau1[i], tau2[j], y, target))\n return sum(WMvals)\n\nprint calcWM(0.1, 0.1, \"Xe131\")", "0.0 0.0 1.0 1.0\n0.0 1.0 1.0 0.0\n1.0 0.0 0.0 1.0\n1.0 1.0 0.0 0.0\n736.724121094\n" ], [ "yvals = np.linspace(0, 2,100)\n\npl.figure()\n# pl.plot(yvals, np.vectorize(WD.calcwd)(0.1,0.1,yvals,'Xe131'))\n# pl.plot(yvals, np.vectorize(WM.calcwm)(0.1,0.1,yvals,'Xe131'))\n# pl.plot(yvals, np.vectorize(WMP2.calcwmp2)(0.1,0.1,yvals,'Xe131'))\n# pl.plot(yvals, np.vectorize(WP1.calcwp1)(0.1,0.1,yvals,'Xe131'))\n# pl.plot(yvals, np.vectorize(WP2.calcwp2)(0.1,0.1,yvals,'Iodine'))\n# pl.plot(yvals, np.vectorize(WS1.calcws1)(0.1,0.1,yvals,'Xe131'))\n# pl.plot(yvals, np.vectorize(WS2.calcws2)(0.1,0.1,yvals,'Xe131'))\n# pl.plot(yvals, np.vectorize(WS1D.calcws1d)(0.1,0.1,yvals,'Flourine'))\npl.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e760daf8bfa4cb478157753c52ad43ff6b62ce2d
7,184
ipynb
Jupyter Notebook
webscraping.ipynb
BSski/LDA-classification
0f0bb0511dd4378ab73a0909d290ab02766fc6a9
[ "CNRI-Python" ]
null
null
null
webscraping.ipynb
BSski/LDA-classification
0f0bb0511dd4378ab73a0909d290ab02766fc6a9
[ "CNRI-Python" ]
null
null
null
webscraping.ipynb
BSski/LDA-classification
0f0bb0511dd4378ab73a0909d290ab02766fc6a9
[ "CNRI-Python" ]
null
null
null
33.570093
154
0.502784
[ [ [ "!pip install requests\n!pip install bs4\nimport requests\nimport bs4\n\nfrom google.colab import drive\n\ndrive.mount('/gdrive')\n\nimport pandas\nfrom bs4 import BeautifulSoup\nimport requests\nimport time", "Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (2.23.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests) (2021.10.8)\nRequirement already satisfied: bs4 in /usr/local/lib/python3.7/dist-packages (0.0.1)\nRequirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.7/dist-packages (from bs4) (4.6.3)\n" ], [ "# UBER PART\nbase_url = 'https://www.consumeraffairs.com/travel/uber.html?page={}'\n\ncomment_list = []\ncomment_list = set(comment_list)\ncounter = 0\n\nwhile True:\n counter += 1\n\n res = requests.get(base_url.format(counter))\n soup = BeautifulSoup(res.text, 'html.parser')\n\n for item in soup.select('div[class=\"rvw-bd\"] p p'):\n if item.getText() not in comment_list:\n comment_list.add(item.getText())\n\n if counter == 30:\n break\n time.sleep(3)\n\n\nprint(len(comment_list))\n\n\ndf = pandas.DataFrame(comment_list)\nwith open('/gdrive/My Drive/uber_data.csv', 'w') as f:\n df.to_csv(f)", "Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (2.23.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests) (2020.11.8)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests) (3.0.4)\nRequirement already satisfied: bs4 in /usr/local/lib/python3.6/dist-packages (0.0.1)\nRequirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.6/dist-packages (from bs4) (4.6.3)\nDrive already mounted at /gdrive; to attempt to forcibly remount, call drive.mount(\"/gdrive\", force_remount=True).\n842\n" ], [ "# TAXI PART\nbase_url = 'https://www.reviews.co.uk/company-reviews/store/airports-taxi-transfers/{}'\n\ntaxi_comment_list = []\ntaxi_comment_list = set(taxi_comment_list)\ncounter = 0\n\nwhile True:\n counter += 1\n\n res = requests.get(base_url.format(counter))\n soup = BeautifulSoup(res.text, 'html.parser')\n\n for item in soup.select('span[class=\"Review__body\"]'):\n if item.getText() not in taxi_comment_list:\n taxi_comment_list.add(item.getText()[1:-1])\n\n if counter == 78:\n break\n time.sleep(3)\n\n\nprint(len(taxi_comment_list))\n\n\ndf_taxi = pandas.DataFrame(taxi_comment_list)\nwith open('/gdrive/My Drive/taxi_data.csv', 'w') as f:\n df_taxi.to_csv(f)", "1296\n" ], [ "# Concatenate both .csv files into one and save it to new .csv\nvertical_stack = pandas.concat([df_taxi, df], axis=0)\n\nwith open('/gdrive/My Drive/vertical_stack.csv', 'w') as f:\n vertical_stack.to_csv(f)", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
e760e630ca14dba23b5c6493ad79d357136ff79e
101,683
ipynb
Jupyter Notebook
16Nov17.ipynb
FHomewood/ScientificComputing
bc3477b4607b25a700f2d89ca4f01cb3ea0998c4
[ "IJG" ]
null
null
null
16Nov17.ipynb
FHomewood/ScientificComputing
bc3477b4607b25a700f2d89ca4f01cb3ea0998c4
[ "IJG" ]
null
null
null
16Nov17.ipynb
FHomewood/ScientificComputing
bc3477b4607b25a700f2d89ca4f01cb3ea0998c4
[ "IJG" ]
null
null
null
601.674556
35,994
0.933971
[ [ [ "#Numerical Differentiation\n#part 2a)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.interpolate import interp1d, barycentric_interpolate\n\ndef diff(f,x,h=0.00000000001):\n return (f(x+h)-f(x-h))/2/h\n\nx = [0. , 0.1 , 0.2 , 0.3 , 0.4 ]\ny = [0.000000,0.078348,0.138910,0.192916,0.244981]\n\nf_cubic = interp1d(x,y, kind = 'cubic')\n\nx0 = np.arange(0.,0.4,0.001)\ny0 = barycentric_interpolate(x,y,x0)\nz = np.arange(0.1,0.3,0.001)\nz0 = diff(f_cubic,z)\nplt.plot(x,y,'kx')\nplt.plot(x0,f_cubic(x0), 'r-')\nplt.show()\n\nplt.plot(z,z0,'k-', markersize = 10)\nplt.plot(0.2,diff(f_cubic,0.2),'ro',markersize=6, label = \"f'(0.2)\")\nplt.legend()\nplt.show()\nprint (diff(f_cubic,0.2))", "_____no_output_____" ] ], [ [ "$part \\ 2b)$", "_____no_output_____" ] ], [ [ "#Define Subroutines\ndef f(x):\n return np.sin(x) + np.exp(-x)\ndef df(x):\n return -np.sin(x) + np.exp(-x)\ndef g(x,h):\n return 1/h**2*(f(x+h)-2*f(x)+f(x-h))\ndef G(x,h, p):\n G = (2**p*g(x,h/2) - g(x,h))/(2**p-1)\n return G\n#\n\n#Computation\nt = np.arange(0,10,0.1)\nfor h in [0.1, 0.5]:\n plt.plot(t,f(t) ,'-', color = 'black', linewidth = 3, label = \"f(x)\")\n plt.plot(t,df(t) ,'-', color = 'grey' , linewidth = 2, label = \"f'(x)\")\n plt.plot(t,g(t,h) ,'--', color = 'red' , linewidth = 2, label = \"g(x)\")\n plt.plot(t,G(t,h, 2),'-.', color = 'green', linewidth = 2, label = \"G(x)\")\n plt.title(\"h = \" + str(h))\n plt.ylabel(\"y axis\")\n plt.xlabel(\"x axis\")\n plt.legend()\n plt.show()\n#\n\n#Output\n#", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ] ]
e760ec76bfd2bc33eea0271c4b46495cba81cf21
24,221
ipynb
Jupyter Notebook
examples/howto/server_embed/notebook_embed.ipynb
GangminLi/bokeh
67fd70589385d63e7db555691b43082643a82ada
[ "BSD-3-Clause" ]
null
null
null
examples/howto/server_embed/notebook_embed.ipynb
GangminLi/bokeh
67fd70589385d63e7db555691b43082643a82ada
[ "BSD-3-Clause" ]
null
null
null
examples/howto/server_embed/notebook_embed.ipynb
GangminLi/bokeh
67fd70589385d63e7db555691b43082643a82ada
[ "BSD-3-Clause" ]
null
null
null
50.460417
5,427
0.524297
[ [ [ "## Embedding a Bokeh server in a Notebook\n\nThis notebook shows how a Bokeh server application can be embedded inside a Jupyter notebook. ", "_____no_output_____" ] ], [ [ "import yaml\n\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, Slider\nfrom bokeh.plotting import figure\nfrom bokeh.themes import Theme\nfrom bokeh.io import show, output_notebook\n\nfrom bokeh.sampledata.sea_surface_temperature import sea_surface_temperature\n\noutput_notebook()", "_____no_output_____" ] ], [ [ "There are various application handlers that can be used to build up Bokeh documents. For example, there is a `ScriptHandler` that uses the code from a `.py` file to produce Bokeh documents. This is the handler that is used when we run `bokeh serve app.py`. In the notebook we can use a function to define a Bokehg application.\n\nHere is the function `bkapp(doc)` that defines our app:", "_____no_output_____" ] ], [ [ "def bkapp(doc):\n df = sea_surface_temperature.copy()\n source = ColumnDataSource(data=df)\n\n plot = figure(x_axis_type='datetime', y_range=(0, 25),\n y_axis_label='Temperature (Celsius)',\n title=\"Sea Surface Temperature at 43.18, -70.43\")\n plot.line('time', 'temperature', source=source)\n\n def callback(attr, old, new):\n if new == 0:\n data = df\n else:\n data = df.rolling('{0}D'.format(new)).mean()\n source.data = ColumnDataSource.from_df(data)\n\n slider = Slider(start=0, end=30, value=0, step=1, title=\"Smoothing by N Days\")\n slider.on_change('value', callback)\n\n doc.add_root(column(slider, plot))\n\n doc.theme = Theme(json=yaml.load(\"\"\"\n attrs:\n Figure:\n background_fill_color: \"#DDDDDD\"\n outline_line_color: white\n toolbar_location: above\n height: 500\n width: 800\n Grid:\n grid_line_dash: [6, 4]\n grid_line_color: white\n \"\"\", Loader=yaml.FullLoader))", "_____no_output_____" ] ], [ [ "Now we can display our application using ``show``, which will automatically create an ``Application`` that wraps ``bkapp`` using ``FunctionHandler``. The end result is that the Bokeh server will call ``bkapp`` to build new documents for every new sessions that is opened.\n\n**Note**: If the current notebook is not displayed at the default URL, you must update the `notebook_url` parameter in the comment below to match, and pass it to `show`.", "_____no_output_____" ] ], [ [ "show(bkapp) # notebook_url=\"http://localhost:8888\" ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e760eef26e3835ed39ae237dda7d34ccb64d857b
13,053
ipynb
Jupyter Notebook
kNN/.ipynb_checkpoints/radius-nearest-neighbors-model-checkpoint.ipynb
emma-d-cotter/ARTEMIS
1586365a23d856abc5d81ad89ec830e5e903af33
[ "BSD-3-Clause" ]
null
null
null
kNN/.ipynb_checkpoints/radius-nearest-neighbors-model-checkpoint.ipynb
emma-d-cotter/ARTEMIS
1586365a23d856abc5d81ad89ec830e5e903af33
[ "BSD-3-Clause" ]
null
null
null
kNN/.ipynb_checkpoints/radius-nearest-neighbors-model-checkpoint.ipynb
emma-d-cotter/ARTEMIS
1586365a23d856abc5d81ad89ec830e5e903af33
[ "BSD-3-Clause" ]
null
null
null
30.858156
120
0.490385
[ [ [ "import numpy as np\nfrom sklearn.neighbors import NearestNeighbors", "_____no_output_____" ], [ "r = 1 # Initial radius for model\nx = 1 # increment for expanding radius\nd = .01 # distance threshold for determing if two points near equidistant\n\n# existing points and corresponding classes\nexistingPoints = np.array([[2,3,4,5,6,7,8],[1,3,4,5,6,7,8],[6,2,3,4,5,6,7]]) # existing points in model\nexistingClasses = np.array([1,2,3])\n\n# initialize classifier\nneigh = NearestNeighbors(radius = r)\n\n# read in new point\nnewPoints = np.array([2,3,4,5,6,7,8],ndmin=2)\n\n# fit existing points to classifier\nneigh.fit(existingPoints)\n\n#distances,indices = neigh.radius_neighbors(newPoints)\n\nnewClass = classify(neigh,existingClasses,newPoints)\n\n# Add the new point and new class to the model. Class = 0 if point is unclassified\nexistingPoints = np.append(existingPoints,newPoints)\nexistingClasses = np.append(existingClasses,newClass)\n\nprint(newClass)", "2\n" ], [ "def classify(neigh,existingClasses,newPoints):\n \n distances,indices = neigh.radius_neighbors(newPoints)\n\n # if there are no points in the radius, expand radius by x and check again before classifying point.\n if len(indices)==0 or len(indices)==1: \n neigh.radius = r+x\n distances,indices = neigh.radius_neighbors(newPoints) \n \n # else if there are two points from different classes that are close to the same distance\n # (within distance threshold), expand radius to see if there is another very close point\n elif len(indices) ==2 and existingClasses[indices[0]]!=existingClasses[indices[1]]:\n if abs(distances[0]-distances[1]) <= d:\n neigh.radius = r+x\n \n # predict class of new data \n if len(indices)!=0:\n # calculate weights (arbitrary weights for now)\n weights = np.array([0,5,5])\n \n # sum weights for each class\n classes = existingClasses[indices[0]]\n classes = np.unique(classes[np.where(classes!=0)]) # ignore zero class (outliers)\n \n classWeight = np.zeros(len(classes)) # initialize weight array\n \n for i,cl in enumerate(classes):\n classWeight[i] = sum(weights[np.where(classes==cl)])\n \n newClass = classes[np.argmax(classWeight)] \n \n else: \n newClass = 0\n \n return(newClass)\n \n \n\n\n\n\n", "_____no_output_____" ], [ "## Since we are including PAMguard into feature space, use an enum to make np.array work", "_____no_output_____" ], [ "from datetime import datetime\n\nclass DetectedTarget:\n \n features_label = [\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\"]\n \n def __init__(self, features=[], source=\"\", date=datetime.now(), classification=None):\n self.features = features\n self.source = source\n self.date = date\n self.classification = classification", "_____no_output_____" ], [ "def rescaleFrom0To1(features):\n for i in range(features.shape[1]):\n if features[:,i].min() == features[:,i].max():\n features[:,i] = 0.5\n else:\n features[:,i] = (features[:,i] - features[:,i].min()) / (features[:,i].max() - features[:,i].min())\n return features", "_____no_output_____" ], [ "features", "_____no_output_____" ], [ "rescaleFrom0To1(features)", "[ 0. 1.]\n[ 0. 1.]\n[ 0. 1.]\n[ 0.5 0.5]\n[ 1. 0.]\n[ 1. 0.]\n[ 1. 
0.]\n" ], [ "from enum import Enum\nimport numpy as np\nfrom sklearn.neighbors import NearestNeighbors\nfrom datetime import datetime\nimport os.path\nimport csv\n\nclass Classes(Enum):\n Interesting = 1.1\n NotInteresting = 1.2\n \n FastLarge = 2.1\n FastSmall = 2.2\n SlowLarge = 2.3\n SlowSmall = 2.4\n \n # SchoolOfFish = 3.1\n # SingleFish = 3.2\n # Kelp = 3.3\n # DolphinOrPorpoise = 3.4\n\nclass DetectedTarget:\n \n features_label = [\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\"]\n \n def __init__(self, features=[], source=\"Unknown\", date=datetime.now(), classification=None):\n self.features = features\n self.source = source\n self.date = date\n self.classification = classification\n\nclass weightedNeighbors:\n \n def __init__(self):\n \n # Global variables\n self.INITIAL_RADIUS = 3.0\n self.SIZE_FEATURE_WEIGHT = 1.0\n self.SPEED_FEATURE_WEIGHT = 1.0\n self.SPEED_RELATIVE_TO_CURRENT_FEATURE_WEIGHT = 1.0\n self.TARGET_STRENGTH_FEATURE_WEIGHT = 1.0\n self.CURRENT_SPEED_FEATURE_WEIGHT = 1.0\n self.TIME_OF_DAY_FEATURE_WEIGHT = 1.0\n self.PASSIVE_ACOUSTICS_FEATURE_WEIGHT = 1.0\n self.RADIUS_INCREMENT = 0.1 # increment for expanding radius\n self.DISTANCE_THRESHOLD = 0.01 # distance threshold for determing if two points near equidistant\n \n # load current model and initialize NearestNeighbors model\n self.current_model_targets = self.load_detectedTargets()\n self.model = NearestNeighbors(radius=self.INITIAL_RADIUS)\n \n def load_detectedTargets(self):\n '''\n Load existing targets from current_model_targets.csv\n '''\n \n current_model_targets = []\n \n if os.path.isfile('current_model_targets.csv'):\n current_model_targets = []\n with open('current_model_targets.csv', 'r') as f:\n reader = csv.reader(f,delimiter = \";\")\n next(reader,None)\n for target in reader:\n current_model_targets.append( DetectedTarget(\n features=list(target[1:8]), source=target[8], date=target[10],\n classification=Classes(float(target[-1])))) \n\n else:\n print('No existing targets for model')\n \n return current_model_targets\n \n def fitModel(self):\n '''\n Fit current model targets to model\n '''\n self.model.fit(np.array(list(map(lambda x: x.features, self.current_model_targets))))\n \n def determine_weights(self,indices):\n '''\n This function will return the weights of points with the desired indices\n '''\n pass\n return np.array([0,5,5])\n \n \n def classify(self,newPoint):\n '''\n Predict class of new target detection(s)\n '''\n x = self.RADIUS_INCREMENT \n r = self.DISTANCE_THRESHOLD \n \n \n existingClasses = np.array(list(map(lambda x: x.classification, self.current_model_targets)))\n\n distances,indices = self.model.radius_neighbors(newPoint)\n\n # if there are no points in the radius, expand radius by x and check again before classifying point.\n if len(indices)==0 or len(indices)==1: \n neigh.radius = r+x\n distances,indices = self.model.radius_neighbors(newPoint) \n\n # else if there are two points from different classes that are close to the same distance\n # (within distance threshold), expand radius to see if there is another very close point\n elif len(indices) ==2 and existingClasses[indices[0]]!=existingClasses[indices[1]]:\n if abs(distances[0]-distances[1]) <= d:\n neigh.radius = r+x\n distances,indices = self.model.radius_neighbors(newPoint) \n\n # predict class of new data \n if len(indices)!=0:\n # calculate weights (arbitrary weights for now)\n weights = self.determine_weights(indices)\n\n # sum weights for each class\n classes = existingClasses[indices[0]]\n classes = 
np.unique(classes[np.where(classes!=0)]) # ignore zero class (outliers)\n\n classWeight = np.zeros(len(classes)) # initialize weight array\n\n for i,cl in enumerate(classes):\n classWeight[i] = sum(weights[np.where(classes==cl)])\n\n newClass = classes[np.argmax(classWeight)] \n\n else: \n newClass = 0\n \n return(newClass) ", "_____no_output_____" ], [ "neigh = weightedNeighbors()\nneigh.fitModel()\nneigh.classify(np.array([2,3,4,5,6,7,8],ndmin=2))", "['7', '6', '5', '4', '3', '2', '1']\n[['1' '2' '3' '4' '5' '6' '7']\n ['7' '6' '5' '4' '3' '2' '1']]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e760f431cdd083877426fb1f3792d88e13692fa3
5,103
ipynb
Jupyter Notebook
lectures/Untitled16.ipynb
Cheeto-exe/MATH50003NumericalAnalysis
ebe487264f23ca2b8d87af7b63c09253e8239da5
[ "MIT" ]
31
2022-01-10T18:38:03.000Z
2022-03-26T13:57:18.000Z
lectures/Untitled16.ipynb
Cheeto-exe/MATH50003NumericalAnalysis
ebe487264f23ca2b8d87af7b63c09253e8239da5
[ "MIT" ]
48
2022-01-12T10:19:51.000Z
2022-03-25T08:59:48.000Z
lectures/Untitled16.ipynb
Cheeto-exe/MATH50003NumericalAnalysis
ebe487264f23ca2b8d87af7b63c09253e8239da5
[ "MIT" ]
86
2022-01-10T17:17:02.000Z
2022-03-25T08:59:34.000Z
18.624088
69
0.451695
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e760fb9a7d48ac4ad62833420a6e4e2738f1f736
353,674
ipynb
Jupyter Notebook
notebooks/Victoria Phase and Amplitude.ipynb
SalishSeaCast/analysis-susan
52633f4fe82af6d7c69dff58f69f0da4f7933f48
[ "Apache-2.0" ]
null
null
null
notebooks/Victoria Phase and Amplitude.ipynb
SalishSeaCast/analysis-susan
52633f4fe82af6d7c69dff58f69f0da4f7933f48
[ "Apache-2.0" ]
null
null
null
notebooks/Victoria Phase and Amplitude.ipynb
SalishSeaCast/analysis-susan
52633f4fe82af6d7c69dff58f69f0da4f7933f48
[ "Apache-2.0" ]
null
null
null
893.116162
133,046
0.939235
[ [ [ "# imports\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom __future__ import division", "_____no_output_____" ], [ "M2_amp_obs_SoG = 0.902", "_____no_output_____" ], [ "pha_diff = {'ene': 86,\n 'wHoll': 87,\n 'llap': 87,\n 'wslip': 87,\n 'TS4': 86,\n 'smooth': 58,\n 'downto2': 82,\n 'downto5': 83,\n 'weaklog': 89,\n 'd10': 85,\n 'u20': 86,\n 'topog1': 80,\n 'base': 91,\n 'stronglog': 101,\n 'secondlog': 96,\n 'sticky': 90,\n 'deep' : 55,\n 'deep_fric': 83,\n 'GmO': 75,\n 'GmO_TS5': 81,\n 'GmO_TS6': 81,\n 'GmO_TS7': 77,\n 'GmO_TS8': 74,\n 'GmO_TS9': 75,\n 'GmO_TS10': 76,\n 'a1': 77,\n 'a2': 76,\n 'M17_a1': 94,\n 'M17_a2': 92,\n 'M17_a3': 88,\n 'M17_a4': 81,\n 'M17_a5': 77,\n 'M17_a6': 75,\n 'M17_a7': 73,\n 'M17_a9': 68,\n 'M17_a10': 73,\n 'M17_a11': 81,\n 'M17_a12': 75,\n 'M17_a13': 79,\n 'M17_a14': 77}\namp = {'ene': 0.25,\n 'wHoll': 0.255,\n 'llap': 0.29,\n 'wslip': 0.27,\n 'TS4': 0.26,\n 'smooth': 0.18,\n 'downto2': 0.24,\n 'downto5': 0.24,\n 'weaklog': 0.3,\n 'd10': 0.255,\n 'u20': 0.26,\n 'topog1': 0.28,\n 'base': 0.33,\n 'stronglog': 0.49,\n 'secondlog': 0.39,\n 'sticky': 0.40,\n 'deep': 0.19,\n 'deep_fric': 0.34,\n 'GmO': 0.36,\n 'GmO_TS5': 0.40,\n 'GmO_TS6': 0.40,\n 'GmO_TS7': 0.37,\n 'GmO_TS8': 0.34,\n 'GmO_TS9': 0.35,\n 'GmO_TS10': 0.36,\n 'a1': 0.37,\n 'a2': 0.36,\n 'M17_a1': 0.47,\n 'M17_a2': 0.45,\n 'M17_a3': 0.41,\n 'M17_a4': 0.36,\n 'M17_a5': 0.34,\n 'M17_a6': 0.31,\n 'M17_a7': 0.31,\n 'M17_a9': 0.30,\n 'M17_a10': 0.32,\n 'M17_a11': 0.37,\n 'M17_a12': 0.33,\n 'M17_a13': 0.36,\n 'M17_a14': 0.35}", "_____no_output_____" ], [ "bathymetry2 = ('llap', 'downto5', 'downto2', 'stronglog', 'weaklog', 'wslip', 'secondlog', 'base')", "_____no_output_____" ], [ "count = 0\nfig, ax = plt.subplots(1,1,figsize=(18, 4.5))\nfor key in pha_diff:\n count += 1\n if count <= 7:\n symbol = 'o'\n elif count <=14:\n symbol = '^'\n elif count <=21:\n symbol = '>'\n else:\n symbol = '<'\n ax.plot(pha_diff[key], amp[key], symbol, label=key)\nax.plot(75,0.37,'gs', label='Obs', markersize=10)\nax.plot(pha_diff['a2'], amp['a2'], 'mo', markersize=8, label='Proper Bottom')\nax.plot(92,0.33,'*',label='corr15')\nax.set_xlim((33,105))\nax.legend(loc='upper left')\nax.set_ylabel('Amplitude at Victoria (m)')\nax.set_xlabel('Phase difference Victoria to Pt Atkinson (deg)')", "_____no_output_____" ], [ "print (pha_diff['GmO'] + pha_diff['downto2'] - pha_diff['TS4'], 75, pha_diff['GmO_TS6']\n )\nprint (amp['GmO'] + amp['downto2'] - amp['TS4'], 0.37, amp['GmO_TS6']\n )\n", "71 75 81\n0.33999999999999997 0.37 0.4\n" ], [ "fig, ax = plt.subplots(1,1,figsize=(9,9))\nfor key in bathymetry2:\n ax.plot(pha_diff[key], amp[key]/M2_amp_obs_SoG, symbol, label=key, markersize=10)\nax.plot(75,0.37/M2_amp_obs_SoG,'s',label='Obs', markersize=20)\nax.plot(92,0.33/M2_amp_obs_SoG,'o',label='Nowcast v1.0', markersize=15)\nax.set_xlim((55,105))\nax.tick_params(axis='x', labelsize=16)\nax.tick_params(axis='y', labelsize=16)\nax.legend(loc='upper left', fontsize=16)\nax.set_ylabel('Amplitude Ratio at Victoria to Pt Atkinson', fontsize=16)\nax.set_xlabel('Phase difference Victoria to Pt Atkinson (deg)', fontsize=16)", "_____no_output_____" ], [ "fig, ax = plt.subplots(1,1,figsize=(9,9))\nfor key in bathymetry2:\n ax.plot(pha_diff[key], amp[key]/M2_amp_obs_SoG, '>b', markersize=10)\nfor key in ('smooth', 'sticky'):\n ax.plot(pha_diff[key], amp[key]/M2_amp_obs_SoG, '>m', markersize=10)\nfor key in ('deep', 'deep_fric' ):\n ax.plot(pha_diff[key], amp[key]/M2_amp_obs_SoG, '>y', 
markersize=10)\nax.plot(pha_diff['GmO'], amp[\"GmO\"]/M2_amp_obs_SoG, 'ro', markersize=15, label = 'New Bathymetry')\nax.plot(pha_diff['a1'], amp['a1']/M2_amp_obs_SoG, 'm*', markersize=10, label='Proper Bottom')\nax.plot(pha_diff['a2'], amp['a2']/M2_amp_obs_SoG, 'mo', markersize=10, label='Proper Bottom 2')\n\nax.plot(75,0.37/M2_amp_obs_SoG,'sg',label='Obs', markersize=20)\n#ax.plot(92,0.33/M2_amp_obs_SoG,'or',label='Nowcast v1.0', markersize=15)\nax.set_xlim((53,105))\nax.tick_params(axis='x', labelsize=16)\nax.tick_params(axis='y', labelsize=16)\nax.legend(loc='upper left', fontsize=16)\nax.set_ylabel('Amplitude Ratio at Victoria to Pt Atkinson', fontsize=16)\nax.set_xlabel('Phase difference Victoria to Pt Atkinson (deg)', fontsize=16)", "_____no_output_____" ], [ "fig, ax = plt.subplots(1,1,figsize=(9,9))\nfor key in ['GmO', 'GmO_TS5', 'GmO_TS6', 'GmO_TS7', 'GmO_TS8', 'GmO_TS9', 'GmO_TS10']:\n ax.plot(pha_diff[key], amp[key]/M2_amp_obs_SoG, '>', markersize=10, label=key)\nax.plot(75,0.37/M2_amp_obs_SoG,'sg',label='Obs', markersize=20)\nax.plot(pha_diff['a1'], amp['a1']/M2_amp_obs_SoG, 'm*', markersize=10, label='Proper Bottom')\nax.plot(pha_diff['a2'], amp['a2']/M2_amp_obs_SoG, 'mo', markersize=10, label='Proper Bottom 2')\n\nax.set_xlim((74,82))\nax.tick_params(axis='x', labelsize=16)\nax.tick_params(axis='y', labelsize=16)\nax.legend(loc='lower right', fontsize=16)\nax.set_ylabel('Amplitude Ratio at Victoria to Pt Atkinson', fontsize=16)\nax.set_xlabel('Phase difference Victoria to Pt Atkinson (deg)', fontsize=16)", "_____no_output_____" ], [ "6/40.*52\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7610fa40eb51145a6b4faa23f916bec4280483c
24,909
ipynb
Jupyter Notebook
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
b251e6fa12269aafa9effc272ec40c8d9db23158
[ "MIT" ]
null
null
null
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
b251e6fa12269aafa9effc272ec40c8d9db23158
[ "MIT" ]
null
null
null
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
b251e6fa12269aafa9effc272ec40c8d9db23158
[ "MIT" ]
null
null
null
33.982265
1,083
0.556425
[ [ [ "# What is Abstraction in OOP", "_____no_output_____" ], [ "<ul><li>Abstraction is the concept of object-oriented programming that “shows” only essential attributes and “hides” unnecessary information.</li><li>The main purpose of abstraction is hiding the unnecessary details from the users. </li><li> Abstraction is selecting data from a larger pool to show only relevant details of the object to the user. </li><li> It helps in reducing programming complexity and efforts. </li><li>It is one of the most important concepts of OOPs.</li></ul> ", "_____no_output_____" ], [ "# Abstraction in Python", "_____no_output_____" ], [ "<ul><li>Abstraction in python is defined as hiding the implementation of logic from the client and using the particular application. </li><li>It hides the irrelevant data specified in the project, reducing complexity and giving value to the efficiency.</li><li> Abstraction is made in Python using <b>Abstract classes</b> and their methods in the code.</li></ul>", "_____no_output_____" ], [ "## What is an Abstract Class?", "_____no_output_____" ], [ "<ul><li>Abstract Class is a type of class in OOPs, that declare one or more abstract methods. </li><li>These classes can have abstract methods as well as concrete methods. </li><li>A normal class cannot have abstract methods.</li><li>An abstract class is a class that contains at least one abstract method.</li></ul>", "_____no_output_____" ], [ "## What are Abstract Methods?\n<ul><li>Abstract Method is a method that has just the method definition but does not contain implementation.</li><li>A method without a body is known as an Abstract Method.</li><li>It must be declared in an abstract class.</li><li>The abstract method will never be final because the abstract class must implement all the abstract methods.</li></ul>", "_____no_output_____" ], [ "## When to use Abstract Methods & Abstract Class?\n<ul><li>Abstract methods are mostly declared where two or more subclasses are also doing the same thing in different ways through different implementations.</li><li>It also extends the same Abstract class and offers different implementations of the abstract methods.</li><li>Abstract classes help to describe generic types of behaviors and object-oriented programming class hierarchy. 
</li><li>It also describes subclasses to offer implementation details of the abstract class.</li></ul>", "_____no_output_____" ], [ "## Difference between Abstraction and Encapsulation", "_____no_output_____" ], [ "<table style=\"background-color:#ffe6e6\">\n <tr><th><b>Abstraction</b></th><th><b>Encapsulation</b></th></tr>\n <tr><td>Abstraction in Object Oriented Programming solves the issues at the design level.</td><td>Encapsulation solves it implementation level.</td></tr>\n <tr><td>Abstraction in Programming is about hiding unwanted details while showing most essential information.</td><td>Encapsulation means binding the code and data into a single unit.</td></tr>\n <tr><td>Data Abstraction in Java allows focussing on what the information object must contain</td><td>Encapsulation means hiding the internal details or mechanics of how an object does something for security reasons.</td></tr>\n</table>", "_____no_output_____" ], [ "## Advantages of Abstraction\n<ol><li>The main benefit of using an Abstraction in Programming is that it allows you to group several related classes as siblings.</li><li>\nAbstraction in Object Oriented Programming helps to reduce the complexity of the design and implementation process of software.</li></ol>", "_____no_output_____" ], [ "## How Abstract Base classes work : \n<ul><li>By default, Python does not provide abstract classes. Python comes with a module that provides the base for defining Abstract Base classes(ABC) and that module name is ABC. </li><li>ABC works by decorating methods of the base class as abstract and then registering concrete classes as implementations of the abstract base. </li><li>A method becomes abstract when decorated with the keyword @abstractmethod.</li></ul>", "_____no_output_____" ], [ "#### Syntax\n\nAbstract class Syntax is declared as:", "_____no_output_____" ] ], [ [ "from abc import ABC\n\n# declaration\nclass classname(ABC):\n def pau(self):\n pass\n \n ", "_____no_output_____" ] ], [ [ "Abstract method Syntax is declared as", "_____no_output_____" ] ], [ [ "def abstractmethod_name():\n pass\n ", "_____no_output_____" ] ], [ [ "### Few things to be noted in Python:\n\n<ul><li>In python, an abstract class can hold both an abstract method and a normal method.</li><li>\nThe second point is an abstract class is not initiated (no objects are created).</li><li>\nThe derived class implementation methods are defined in abstract base classes.</li></ul>", "_____no_output_____" ] ], [ [ "from ABC import abc\n\n# here abc and ABC are case-sensitive. 
When we swap it creates", "_____no_output_____" ] ], [ [ "### Code I:", "_____no_output_____" ] ], [ [ "from abc import ABC, abstractmethod\n\n# Abstract Class\nclass product(abc): \n \n # Normal Method\n def item_list(self, rate):\n print(\"amount submitted : \",rate)\n \n # Abstract Method\n @abstractmethod\n def product(self, rate): \n ", "_____no_output_____" ] ], [ [ "### Code II:\nA program to generate the volume of geometric shapes", "_____no_output_____" ] ], [ [ "from abc import ABC\n\nclass geometric(ABC):\n \n def volume(self):\n #abstract method\n pass\n \nclass Rect(geometric):\n length = 4\n width = 6\n height = 6\n \n def volume(self):\n return self.length * self.width *self.height\n \nclass Sphere(geometric):\n radius = 8\n def volume(self):\n return 1.3 * 3.14 * self.radius * self.radius *self.radius\n \nclass Cube(geometric):\n Edge = 5\n def volume(self):\n return self.Edge * self.Edge *self.Edge\n \nclass Triangle_3D:\n length = 5\n width = 4\n def volume(self):\n return 0.5 * self.length * self.width\n \nrr = Rect()\nss = Sphere()\ncc = Cube()\ntt = Triangle_3D()\nprint(\"Volume of a rectangle:\", rr.volume())\nprint(\"Volume of a circle:\", ss.volume())\nprint(\"Volume of a square:\", cc.volume())\nprint(\"Volume of a triangle:\", tt.volume())", "Volume of a rectangle: 144\nVolume of a circle: 2089.9840000000004\nVolume of a square: 125\nVolume of a triangle: 10.0\n" ] ], [ [ "### Code III\nA program to generate different invoices", "_____no_output_____" ] ], [ [ "from abc import ABC, abstractmethod\n\nclass Bill(ABC):\n def final_bill(self, pay):\n print('Purchase of the product: ', pay)\n \n @abstractmethod\n def Invoice(self, pay):\n pass\n \nclass Paycheque(Bill):\n def Invoice(self, pay):\n print('paycheque of: ', pay)\n \nclass CardPayment(Bill):\n def Invoice(self, pay):\n print('pay through card of: ', pay)\n \naa = Paycheque()\naa.Invoice(6500)\naa.final_bill(6500)\nprint(isinstance(aa,Invoice))\naa = CardPayment()\naa.Invoice(2600)\naa.final_bill(2600)\nprint(isinstance(aa,Invoice))", "_____no_output_____" ] ], [ [ "### Code IV:\n Python program showing abstract base class work", "_____no_output_____" ] ], [ [ "from abc import ABC, abstractmethod\n\nclass Animal(ABC):\n\n @abstractmethod\n def move(self):\n pass\n\nclass Human(Animal):\n \n def move(self):\n print(\"I can walk and run\")\n\nclass Snake(Animal):\n \n def move(self):\n print(\"I can crawl\")\n\nclass Dog(Animal):\n\n def move(self):\n print(\"I can bark\")\n\nclass Lion(Animal):\n \n def move(self):\n print(\"I can roar\")\n\n# Object Instantiation\nr = Human()\nr.move()\n\nk = Snake()\nk.move()\n\nd = Dog()\nd.move()\n\nm = Lion()\nm.move()\n", "I can walk and run\nI can crawl\nI can bark\nI can roar\n" ] ], [ [ "### Concrete Methods in Abstract Base Classes : \n<ul><li>Concrete (normal) classes contain only concrete (normal) methods whereas abstract classes may contain both concrete methods and abstract methods.</li><li> The concrete class provides an implementation of abstract methods, the abstract base class can also provide an implementation by invoking the methods via super().</li></ul>", "_____no_output_____" ], [ "### Code V:\nPython program invoking a method using super()", "_____no_output_____" ] ], [ [ "from abc import ABC, abstractmethod\n\nclass R(ABC):\n \n def rk(self):\n print(\"Abstract Base Class\")\n\nclass K(R):\n def rk(self):\n super().rk()\n print(\"subclass\")\n\n# Object instantiation\nr = K()\nr.rk()\n", "Abstract Base Class\nsubclass\n" ] ], [ [ "### Code VI:", 
"_____no_output_____" ] ], [ [ "from abc import ABC, abstractmethod\n\nclass Bank(ABC):\n def branch(self, Naira):\n print(\"Fees submitted : \",Naira)\n \n @abstractmethod\n def Bank(Naira):\n \n \nclass private(Bank):\n def Bank(naira):\n print(\"Total Naira Value here: \",Naira)\n \nclass public(bank):\n def Bank(Naira):\n print(\"Total Naira Value here:\",Naira)\n\nprivate.Bank(5000)\npublic.Bank(2000)\n\na = public()\n#a.branch(3500)", "_____no_output_____" ] ], [ [ "## Class Project I", "_____no_output_____" ], [ "Develop a python OOP program that creates an abstract base class called coup_de_ecriva. The base class will have one abstract method called <b>Fan_Page</b> and four subclassses namely; <b>FC_Cirok, Madiba_FC, Blue_Jay_FC and TSG_Walker</b>. The program will receive as input the name of the club the user supports and instantiate an object that will invoke the <b>Fan_Page</b> method in the subclass that prints Welcome to <b>\"club name\"</b>.\n\n<p><b>Hint:</b></p>\nThe subclasses will use <b>Single Inheritance</b> to inherit the abstract base class.\n ", "_____no_output_____" ] ], [ [ "from abc import ABC, abstractmethod\n\nclass coup_de_escriva(ABC):\n \n @abstractmethod\n def Fan_page(self):\n pass\n \nclass FC_Cirok(coup_de_escriva):\n def Fan_page(self):\n print(str(input(\"Enter your name\")))\n print(str(input(\"Which club do you support?\")))\n print(\"WELCOME TO CIROK FC!\")\n\nclass Madiba_FC(coup_de_escriva):\n def Fan_page(self):\n print(str(input(\"Enter your name\")))\n print(str(input(\"Which club do you support?\")))\n print(\"WELCOME TO MADIBA FC!\")\n\n \nclass Blue_Jay_FC(coup_de_escriva):\n def Fan_page(self):\n print(str(input(\"Enter your name\")))\n print(str(input(\"Which club do you support?\")))\n print(\"WELCOME TO THE BLUES!\")\n \nclass TSG_Walkers(coup_de_escriva):\n def Fan_page(self):\n print(str(input(\"Enter your name\")))\n print(str(input(\"Which club do you support?\")))\n print(\"WELCOME TO TSG WALKERS FC!\")\n\n \n \n \na = FC_Cirok()\na.Fan_page()\nb = Madiba_FC()\nb.Fan_page()\nc = Blue_Jay_FC()\nc.Fan_page()\nd = TSG_Walkers()\nd.Fan_page()\n\n\n \n ", "Enter your name Chima\n Chima\nWhich club do you support? Cirok\n Cirok\nWELCOME TO CIROK FC!\nEnter your name Toju\n Toju\nWhich club do you support? Madiba\n Madiba\nWELCOME TO MADIBA FC!\nEnter your name Daniel\n Daniel\nWhich club do you support? Bluejays\n Bluejays\nWELCOME TO THE BLUES!\nEnter your name Murewa\n Murewa\nWhich club do you support? TSG\n TSG\nWELCOME TO TSG WALKERS FC!\n" ] ], [ [ "## Class Project II", "_____no_output_____" ], [ "The Service Unit of PAU has contacted you to develop a program to manage some of the External Food Vendors. With your knowledge in python OOP develop a program to manage the PAU External Food Vendors. The program receives as input the vendor of interest and display the menu of the interested vendor. The External vendors are Faith hostel, Cooperative Hostel, and Student Center. 
Find below the menus:\n\n<table><tr><td>\n<table style=\"background-color:#47b5ff\">\n <tr><th colspan='2'>Cooperative Cafeteria</th></tr>\n <tr><th>Main Meal</th><th>Price (N)</th></tr>\n <tr><td>Jollof Rice and Stew</td><td>200</td></tr>\n <tr><td>White Rice and Stew</td><td>200</td></tr>\n <tr><td>Fried Rice</td><td>200</td></tr>\n <tr><td>Salad</td><td>100</td></tr>\n <tr><td>Platain</td><td>100</td></tr>\n</table>\n </td><td>\n<table style=\"background-color:pink\">\n <tr><th colspan='2'>Faith Hostel Cafeteria</th></tr>\n <tr><th>Main Meal</th><th>Price (N)</th></tr>\n <tr><td>Fried Rice</td><td>400</td></tr>\n <tr><td>White Rice and Stew</td><td>400</td></tr>\n <tr><td>Jollof Rice</td><td>400</td></tr>\n <tr><td>Beans</td><td>200</td></tr>\n <tr><td>Chicken</td><td>1000</td></tr>\n</table>\n </td><td>\n <table style=\"background-color:#fcf96c\">\n <tr><th colspan='2'>Student Centre Cafeteria</th></tr>\n <tr><th>Main Meal</th><th>Price (N)</th></tr>\n <tr><td>Chicken Fried Rice</td><td>800</td></tr>\n <tr><td>Pomo Sauce</td><td>300</td></tr>\n <tr><td>Spaghetti Jollof</td><td>500</td></tr>\n <tr><td>Amala/Ewedu</td><td>500</td></tr>\n <tr><td>Semo with Eforiro Soup</td><td>500</td></tr>\n</table>\n </td></tr>\n<table>\n \n<p><b>Hints:</b></p>\n <ul><li>The abstract base class is called <b>External_Vendors()</b>.</li><li>\n The abstract method is called <b>menu()</b>.</li><li>\nThe subclasses (the different vendors) will inherit the abstract base class.</li><li>\n Each subclass will have a normal method called <b>menu()</b>.</li></ul>\n \n ", "_____no_output_____" ] ], [ [ "from abc import ABC, abstractmethod\nclass External_Vendors(ABC):\n \n @abstractmethod\n def menu(self):\n pass\n\nclass Cooperative_cafeteria(External_Vendors):\n def menu(self):\n print(str(input(\"Which external vendor would you prefer?\")))\n print(\"Menu ; Jollof Rice and Stew, White Rice and Stew, Fried Rice, Salad, Platain\")\n\nclass Faith_Hostel_Cafeteria(External_Vendors):\n def menu(self):\n print(str(input(\"Which external vendor would you prefer?\")))\n print(\"Menu ; Jollof Rice , White Rice and Stew, Fried Rice, Beans, Chicken\")\n\nclass Student_centre_cafeteria(External_Vendors):\n def menu(self):\n print(str(input(\"Which external vendor would you prefer?\")))\n print(\"Menu ; Pomo sauce, Chicken Fried Rice, Spaghetti Jollof, Amala/Ewedu, Semo with Efo riro soup\")\n \na = Cooperative_cafeteria()\na.menu()\nb = Faith_Hostel_Cafeteria()\nb.menu()\nc = Student_centre_cafeteria()\nc.menu()\n ", "Which external vendor would you prefer? Cooperative\n Cooperative\nMenu ; Jollof Rice and Stew, White Rice and Stew, Fried Rice, Salad, Platain\nWhich external vendor would you prefer? Faith\n Faith\nMenu ; Jollof Rice , White Rice and Stew, Fried Rice, Beans, Chicken\nWhich external vendor would you prefer? Students Centre\n Students Centre\nMenu ; Pomo sauce, Chicken Fried Rice, Spaghetti Jollof, Amala/Ewedu, Semo with Efo riro soup\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e7614c3de714b070cb32d298c470aa97ff4ca21e
4,588
ipynb
Jupyter Notebook
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
f12ff29add95d0d312e0ace3e1960a147b819c2c
[ "Apache-2.0" ]
null
null
null
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
f12ff29add95d0d312e0ace3e1960a147b819c2c
[ "Apache-2.0" ]
null
null
null
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
f12ff29add95d0d312e0ace3e1960a147b819c2c
[ "Apache-2.0" ]
null
null
null
2,294
4,587
0.684612
[ [ [ "# Boston Housing KNN", "_____no_output_____" ] ], [ [ "import sys\nsys.path.append(\"..\")\nfrom pyspark.sql.types import BooleanType\nfrom pyspark.ml.feature import StandardScaler, VectorAssembler, BucketedRandomProjectionLSH\nfrom pyspark.ml.classification import LinearSVC\nfrom pyspark.sql import Row\nfrom pyspark.sql.session import SparkSession\nfrom pyspark.sql.functions import desc, expr\nfrom pyspark.ml.evaluation import BinaryClassificationEvaluator\nfrom helpers.path_translation import translate_to_file_string", "_____no_output_____" ], [ "inputFile = translate_to_file_string(\"../data/Boston_Housing_Data.csv\")", "_____no_output_____" ] ], [ [ "Spark session creation ", "_____no_output_____" ] ], [ [ "spark = (SparkSession\n .builder\n .appName(\"BostonHousingKNN\")\n .getOrCreate())", "_____no_output_____" ] ], [ [ "DataFrame creation using an ifered Schema ", "_____no_output_____" ] ], [ [ "df = spark.read.option(\"header\", \"true\") \\\n .option(\"inferSchema\", \"true\") \\\n .option(\"delimiter\", \";\") \\\n .csv(inputFile) \\\n .withColumn(\"CATBOOL\", expr(\"CAT\").cast(BooleanType()))\nprint(df.printSchema())", "_____no_output_____" ] ], [ [ "Prepare training and test data.", "_____no_output_____" ] ], [ [ "featureCols = df.columns.copy()\nfeatureCols.remove(\"MEDV\")\nfeatureCols.remove(\"CAT\")\nfeatureCols.remove(\"CATBOOL\") \nprint(featureCols)\n\nassembler = VectorAssembler(outputCol=\"features\", inputCols=featureCols)\nscaler = StandardScaler(inputCol=\"features\", outputCol=\"scaledFeatures\",\n withStd=True, withMean=False)", "_____no_output_____" ], [ "labledPointDataSet = assembler.transform(df)\nscaledDataSet = scaler.fit(labledPointDataSet).transform(labledPointDataSet)\nsplits = scaledDataSet.randomSplit([0.9, 0.1 ], 12345)\ntraining = splits[0]\ntest = splits[1]", "_____no_output_____" ] ], [ [ "LHS Euclidean Distance", "_____no_output_____" ] ], [ [ "# TODO optimize the params to minimize the test error\n# TODO try the MinHashLSH too\nlhsED = BucketedRandomProjectionLSH(inputCol=\"scaledFeatures\", outputCol=\"hashes\", bucketLength =2.0, numHashTables=3)", "_____no_output_____" ] ], [ [ "Train the model ", "_____no_output_____" ] ], [ [ "modelED = lhsED.fit(training)", "_____no_output_____" ] ], [ [ "Test the model", "_____no_output_____" ] ], [ [ "resultList = []\n# The Nearest neighbor testing\n# TODO add other aggregation methods \nfor row in test.collect() :\n neighbors = modelED.approxNearestNeighbors(training, row.scaledFeatures, 5)\n grouped = neighbors.groupBy(df.CAT).count()\n if grouped.count() > 0 :\n result = grouped.orderBy(desc(\"count\")).first().CAT\n newRow = Row(CAT=row.CAT, scaledFeatures=row.scaledFeatures, prediction=float (result))\n resultList.append(newRow)\t\n\npredictions = SparkSession.builder.getOrCreate().createDataFrame(resultList)\npredictions.show()", "_____no_output_____" ], [ "evaluator = BinaryClassificationEvaluator(labelCol=\"CAT\",rawPredictionCol=\"prediction\", metricName=\"areaUnderROC\")\naccuracy = evaluator.evaluate(predictions)\nprint(\"Test Error\",(1.0 - accuracy))", "_____no_output_____" ], [ "spark.stop()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
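The Boston housing notebook in the record above labels each test row with a plain majority vote over its five approximate neighbours and leaves other aggregation methods as a TODO. The sketch below shows one possible refinement, a distance-weighted vote, written in PySpark; it assumes the default `distCol` distance column that `approxNearestNeighbors` adds to its result, and the helper name `weighted_vote` is illustrative rather than part of the original notebook.

from pyspark.sql import functions as F

def weighted_vote(neighbors, label_col="CAT", dist_col="distCol", eps=1e-6):
    # Weight each neighbour by the inverse of its distance and sum the weights per label.
    weighted = neighbors.withColumn("weight", F.lit(1.0) / (F.col(dist_col) + F.lit(eps)))
    scores = weighted.groupBy(label_col).agg(F.sum("weight").alias("score"))
    top = scores.orderBy(F.desc("score")).first()
    return float(top[label_col]) if top is not None else None

# Inside the existing test loop this could replace the count-based vote:
# result = weighted_vote(modelED.approxNearestNeighbors(training, row.scaledFeatures, 5))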
e7615858df03ad576b82fb00f6390060e1474702
104,579
ipynb
Jupyter Notebook
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
98c5702ef011445b62a6a498048e8306d669c9a9
[ "MIT" ]
null
null
null
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
98c5702ef011445b62a6a498048e8306d669c9a9
[ "MIT" ]
null
null
null
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
98c5702ef011445b62a6a498048e8306d669c9a9
[ "MIT" ]
null
null
null
52.844366
4,177
0.517599
[ [ [ "import turicreate", "_____no_output_____" ] ], [ [ "# Load some text data - from wikipedia, pages on people", "_____no_output_____" ] ], [ [ "people = turicreate.SFrame('people_wiki.sframe')", "_____no_output_____" ], [ "people", "_____no_output_____" ] ], [ [ "# Explore the dataset and check out the text it contains", "_____no_output_____" ] ], [ [ "obama = people[people['name'] == 'Barack Obama']", "_____no_output_____" ], [ "obama", "_____no_output_____" ], [ "obama['text']", "_____no_output_____" ] ], [ [ "## Explore the entry for actor George Clooney", "_____no_output_____" ] ], [ [ "clooney = people[people['name'] == 'George Clooney']\nclooney['text']", "_____no_output_____" ] ], [ [ "# Word counts for Obama article", "_____no_output_____" ] ], [ [ "obama['word_count'] = turicreate.text_analytics.count_words(obama['text'])", "_____no_output_____" ], [ "obama['word_count']", "_____no_output_____" ] ], [ [ "## Sort the word counts for the Obama article", "_____no_output_____" ] ], [ [ "obama.stack('word_count',new_column_name=['word','count'])", "_____no_output_____" ], [ "obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])", "_____no_output_____" ], [ "obama_word_count_table", "_____no_output_____" ], [ "obama_word_count_table.sort('count',ascending=False)", "_____no_output_____" ] ], [ [ "## Compute TF-IDF for the corpus", "_____no_output_____" ] ], [ [ "people['word_count'] = turicreate.text_analytics.count_words(people['text'])", "_____no_output_____" ], [ "people", "_____no_output_____" ], [ "people['tfidf'] = turicreate.text_analytics.tf_idf(people['text'])", "_____no_output_____" ], [ "people", "_____no_output_____" ] ], [ [ "## Examine the TF-IDF for the Obama article", "_____no_output_____" ] ], [ [ "obama = people[people['name'] == 'Barack Obama']\nobama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)", "_____no_output_____" ] ], [ [ "# Manually compute distances between a few people", "_____no_output_____" ] ], [ [ "clinton = people[people['name'] == 'Bill Clinton']", "_____no_output_____" ], [ "beckham = people[people['name'] == 'David Beckham']", "_____no_output_____" ] ], [ [ "## Is Obama closer to Clinton than to Beckham?", "_____no_output_____" ] ], [ [ "turicreate.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])", "_____no_output_____" ], [ "# The smaller the cosine value, the more relevant it is.", "_____no_output_____" ], [ "turicreate.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])", "_____no_output_____" ] ], [ [ "# Build the nearest neighbor model for document retrieval", "_____no_output_____" ] ], [ [ "knn_model = turicreate.nearest_neighbors.create(people,features=['tfidf'],label='name')", "_____no_output_____" ] ], [ [ "# Applying the nearest-neighbors model for retrieval", "_____no_output_____", "## Who is closest to Obama?", "_____no_output_____" ] ], [ [ "knn_model.query(obama)", "_____no_output_____" ] ], [ [ "# Other examples of document retrieval", "_____no_output_____" ] ], [ [ "taylor_swift = people[people['name'] == 'Taylor Swift']", "_____no_output_____" ], [ "knn_model.query(taylor_swift)", "_____no_output_____" ], [ "jolie = people[people['name'] == 'Angelina Jolie']", "_____no_output_____" ], [ "knn_model.query(jolie)", "_____no_output_____" ], [ "arnold = people[people['name'] == 'Arnold Schwarzenegger']", "_____no_output_____" ], [ "knn_model.query(arnold)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e76189e3dc6cff8966af47817de84065f6748dba
479,299
ipynb
Jupyter Notebook
03_motion.ipynb
abostroem/AstronomicalData
8228f6b77e5723e4ddb6d5b98dbd744cbab996bc
[ "MIT" ]
null
null
null
03_motion.ipynb
abostroem/AstronomicalData
8228f6b77e5723e4ddb6d5b98dbd744cbab996bc
[ "MIT" ]
null
null
null
03_motion.ipynb
abostroem/AstronomicalData
8228f6b77e5723e4ddb6d5b98dbd744cbab996bc
[ "MIT" ]
null
null
null
252.129932
153,832
0.920382
[ [ [ "# Chapter 3\n\nThis is the third in a series of notebooks related to astronomy data.\n\nAs a running example, we are replicating parts of the analysis in a recent paper, \"[Off the beaten path: Gaia reveals GD-1 stars outside of the main stream](https://arxiv.org/abs/1805.00425)\" by Adrian M. Price-Whelan and Ana Bonaca.\n\nIn the first lesson, we wrote ADQL queries and used them to select and download data from the Gaia server.\n\nIn the second lesson, we wrote a query to select stars from the region of the sky where we expect GD-1 to be, and saved the results in a FITS file.\n\nNow we'll read that data back and implement the next step in the analysis, identifying stars with the proper motion we expect for GD-1.", "_____no_output_____" ], [ "## Outline\n\nHere are the steps in this lesson:\n\n1. We'll read back the results from the previous lesson, which we saved in a FITS file.\n\n2. Then we'll transform the coordinates and proper motion data from ICRS back to the coordinate frame of GD-1.\n\n3. We'll put those results into a Pandas `DataFrame`, which we'll use to select stars near the centerline of GD-1.\n\n4. Plotting the proper motion of those stars, we'll identify a region of proper motion for stars that are likely to be in GD-1.\n\n5. Finally, we'll select and plot the stars whose proper motion is in that region.\n\nAfter completing this lesson, you should be able to\n\n* Select rows and columns from an Astropy `Table`.\n\n* Use Matplotlib to make a scatter plot.\n\n* Use Gala to transform coordinates.\n\n* Make a Pandas `DataFrame` and use a Boolean `Series` to select rows.\n\n* Save a `DataFrame` in an HDF5 file.\n", "_____no_output_____" ], [ "## Installing libraries\n\nIf you are running this notebook on Colab, you can run the following cell to install Astroquery and the other libraries we'll use.\n\nIf you are running this notebook on your own computer, you might have to install these libraries yourself. See the instructions in the preface.", "_____no_output_____" ] ], [ [ "# If we're running on Colab, install libraries\n\nimport sys\nIN_COLAB = 'google.colab' in sys.modules\n\nif IN_COLAB:\n !pip install astroquery astro-gala pyia python-wget", "_____no_output_____" ] ], [ [ "## Reload the data\n\nIn the previous lesson, we ran a query on the Gaia server and downloaded data for roughly 100,000 stars. We saved the data in a FITS file so that now, picking up where we left off, we can read the data from a local file rather than running the query again.\n\nIf you ran the previous lesson successfully, you should already have a file called `gd1_results.fits` that contains the data we downloaded.\n\nIf not, you can run the following cell, which downloads the data from our repository.", "_____no_output_____" ] ], [ [ "import os\nfrom wget import download\n\nfilename = 'gd1_results.fits'\npath = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'\n\nif not os.path.exists(filename):\n print(download(path+filename))", "_____no_output_____" ] ], [ [ "Now here's how we can read the data from the file back into an Astropy `Table`:", "_____no_output_____" ] ], [ [ "from astropy.table import Table\n\nresults = Table.read(filename)", "_____no_output_____" ] ], [ [ "The result is an Astropy `Table`.\n\nWe can use `info` to refresh our memory of the contents.", "_____no_output_____" ] ], [ [ "results.info", "_____no_output_____" ] ], [ [ "## Selecting rows and columns\n\nIn this section we'll see operations for selecting columns and rows from an Astropy `Table`. 
You can find more information about these operations in the [Astropy documentation](https://docs.astropy.org/en/stable/table/access_table.html).\n\nWe can get the names of the columns like this:", "_____no_output_____" ] ], [ [ "results.colnames", "_____no_output_____" ] ], [ [ "And select an individual column like this:", "_____no_output_____" ] ], [ [ "results['ra']", "_____no_output_____" ] ], [ [ "The result is a `Column` object that contains the data, and also the data type, units, and name of the column.", "_____no_output_____" ] ], [ [ "type(results['ra'])", "_____no_output_____" ] ], [ [ "The rows in the `Table` are numbered from 0 to `n-1`, where `n` is the number of rows. We can select the first row like this:", "_____no_output_____" ] ], [ [ "results[0]", "_____no_output_____" ] ], [ [ "As you might have guessed, the result is a `Row` object.", "_____no_output_____" ] ], [ [ "type(results[0])", "_____no_output_____" ] ], [ [ "Notice that the bracket operator selects both columns and rows. You might wonder how it knows which to select.\n\nIf the expression in brackets is a string, it selects a column; if the expression is an integer, it selects a row.\n\nIf you apply the bracket operator twice, you can select a column and then an element from the column.", "_____no_output_____" ] ], [ [ "results['ra'][0]", "_____no_output_____" ] ], [ [ "Or you can select a row and then an element from the row.", "_____no_output_____" ] ], [ [ "results[0]['ra']", "_____no_output_____" ] ], [ [ "You get the same result either way.", "_____no_output_____" ], [ "## Scatter plot\n\nTo see what the results look like, we'll use a scatter plot. The library we'll use is [Matplotlib](https://matplotlib.org/), which is the most widely-used plotting library for Python.\n\nThe Matplotlib interface is based on MATLAB (hence the name), so if you know MATLAB, some of it will be familiar.\n\nWe'll import like this.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "Pyplot part of the Matplotlib library. It is conventional to import it using the shortened name `plt`.\n\nPyplot provides two functions that can make scatterplots, [plt.scatter](https://matplotlib.org/3.3.0/api/_as_gen/matplotlib.pyplot.scatter.html) and [plt.plot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html).\n\n* `scatter` is more versatile; for example, you can make every point in a scatter plot a different color.\n\n* `plot` is more limited, but for simple cases, it can be substantially faster. \n\nJake Vanderplas explains these differences in [The Python Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/04.02-simple-scatter-plots.html)\n\nSince we are plotting more than 100,000 points and they are all the same size and color, we'll use `plot`.\n\nHere's a scatter plot with right ascension on the x-axis and declination on the y-axis, both ICRS coordinates in degrees.", "_____no_output_____" ] ], [ [ "x = results['ra']\ny = results['dec']\nplt.plot(x, y, 'ko')\n\nplt.xlabel('ra (degree ICRS)')\nplt.ylabel('dec (degree ICRS)');", "_____no_output_____" ] ], [ [ "The arguments to `plt.plot` are `x`, `y`, and a string that specifies the style. In this case, the letters `ko` indicate that we want a black, round marker (`k` is for black because `b` is for blue).\n\nThe functions `xlabel` and `ylabel` put labels on the axes.\n\nThis scatter plot has a problem. 
It is \"[overplotted](https://python-graph-gallery.com/134-how-to-avoid-overplotting-with-python/)\", which means that there are so many overlapping points, we can't distinguish between high and low density areas.\n\nTo fix this, we can provide optional arguments to control the size and transparency of the points.", "_____no_output_____" ], [ "## Exercise\n\nIn the call to `plt.plot`, add the keyword argument `markersize=0.1` to make the markers smaller.\n\nThen add the argument `alpha=0.1` to make the markers nearly transparent.\n\nAdjust these arguments until you think the figure shows the data most clearly.\n\nNote: Once you have made these changes, you might notice that the figure shows stripes with lower density of stars. These stripes are caused by the way Gaia scans the sky, which [you can read about here](https://www.cosmos.esa.int/web/gaia/scanning-law). The dataset we are using, [Gaia Data Release 2](https://www.cosmos.esa.int/web/gaia/dr2), covers 22 months of observations; during this time, some parts of the sky were scanned more than others.", "_____no_output_____" ], [ "## Transform back\n\nRemember that we selected data from a rectangle of coordinates in the `GD1Koposov10` frame, then transformed them to ICRS when we constructed the query.\nThe coordinates in `results` are in ICRS.\n\nTo plot them, we will transform them back to the `GD1Koposov10` frame; that way, the axes of the figure are aligned with the GD-1, which will make it easy to select stars near the centerline of the stream.\n\nTo do that, we'll put the results into a `GaiaData` object, provided by the [pyia library](https://pyia.readthedocs.io/en/latest/api/pyia.GaiaData.html).\n\nTODO: Do we need pyia, or could we do this with astropy and gala?", "_____no_output_____" ] ], [ [ "from pyia import GaiaData\n\ngaia_data = GaiaData(results)\ntype(gaia_data)", "_____no_output_____" ] ], [ [ "Now we can extract sky coordinates from the `GaiaData` object, like this:", "_____no_output_____" ] ], [ [ "import astropy.units as u\n\nskycoord = gaia_data.get_skycoord(\n distance=8*u.kpc, \n radial_velocity=0*u.km/u.s)", "_____no_output_____" ] ], [ [ "We provide `distance` and `radial_velocity` to prepare the data for reflex correction, which we explain below.", "_____no_output_____" ] ], [ [ "type(skycoord)", "_____no_output_____" ] ], [ [ "The result is an Astropy `SkyCoord` object ([documentation here](https://docs.astropy.org/en/stable/api/astropy.coordinates.SkyCoord.html#astropy.coordinates.SkyCoord)), which provides `transform_to`, so we can transform the coordinates to other frames.", "_____no_output_____" ] ], [ [ "import gala.coordinates as gc\n\ntransformed = skycoord.transform_to(gc.GD1Koposov10)\ntype(transformed)", "_____no_output_____" ] ], [ [ "The result is another `SkyCoord` object, now in the `GD1Koposov10` frame.", "_____no_output_____" ], [ "The next step is to correct the proper motion measurements from Gaia for reflex due to the motion of our solar system around the Galactic center.\n\nWhen we created `skycoord`, we provided `distance` and `radial_velocity` as arguments, which means we ignore the measurements provided by Gaia and replace them with these fixed values.\n\nThat might seem like a strange thing to do, but here's the motivation:\n\n* Because the stars in GD-1 are so far away, the distance estimates we get from Gaia, which are based on parallax, are not very precise. So we replace them with our current best estimate of the mean distance to GD-1, about 8 kpc. 
See [Koposov, Rix, and Hogg, 2010](https://ui.adsabs.harvard.edu/abs/2010ApJ...712..260K/abstract).\n\n* For the other stars in the table, this distance estimate will be inaccurate, so reflex correction will not be correct. But that should have only a small effect on our ability to identify stars with the proper motion we expect for GD-1.\n\n* The measurement of radial velocity has no effect on the correction for proper motion; the value we provide is arbitrary, but we have to provide a value to avoid errors in the reflex correction calculation.\n\nWe are grateful to Adrian Price-Whelan for his help explaining this step in the analysis.", "_____no_output_____" ], [ "With this preparation, we can use `reflex_correct` from Gala ([documentation here](https://gala-astro.readthedocs.io/en/latest/api/gala.coordinates.reflex_correct.html)) to correct for solar reflex motion.", "_____no_output_____" ] ], [ [ "gd1_coord = gc.reflex_correct(transformed)\n\ntype(gd1_coord)", "_____no_output_____" ] ], [ [ "The result is a `SkyCoord` object that contains \n\n* The transformed coordinates as attributes named `phi1` and `phi2`, which represent right ascension and declination in the `GD1Koposov10` frame.\n\n* The transformed and corrected proper motions as `pm_phi1_cosphi2` and `pm_phi2`.\n\nWe can select the coordinates like this:", "_____no_output_____" ] ], [ [ "phi1 = gd1_coord.phi1\nphi2 = gd1_coord.phi2", "_____no_output_____" ] ], [ [ "And plot them like this:", "_____no_output_____" ] ], [ [ "plt.plot(phi1, phi2, 'ko', markersize=0.1, alpha=0.2)\n\nplt.xlabel('ra (degree GD1)')\nplt.ylabel('dec (degree GD1)');", "_____no_output_____" ] ], [ [ "Remember that we started with a rectangle in GD-1 coordinates. When transformed to ICRS, it's a non-rectangular polygon. Now that we have transformed back to GD-1 coordinates, it's a rectangle again.", "_____no_output_____" ], [ "## Pandas DataFrame\n\nAt this point we have three objects containing different subsets of the data.", "_____no_output_____" ] ], [ [ "type(results)", "_____no_output_____" ], [ "type(gaia_data)", "_____no_output_____" ], [ "type(gd1_coord)", "_____no_output_____" ] ], [ [ "On one hand, this makes sense, since each object provides different capabilities. But working with three different object types can be awkward.\n\nIt will be more convenient to choose one object and get all of the data into it. We'll use a Pandas DataFrame, for two reasons:\n\n1. It provides capabilities that are pretty much a superset of the other data structures, so it's the all-in-one solution.\n\n2. Pandas is a general-purpose tool that is useful in many domains, especially data science. If you are going to develop expertise in one tool, Pandas is a good choice.\n\nHowever, compared to an Astropy `Table`, Pandas has one big drawback: it does not keep the metadata associated with the table, including the units for the columns.", "_____no_output_____" ], [ "It's easy to convert a `Table` to a Pandas `DataFrame`.", "_____no_output_____" ] ], [ [ "import pandas as pd\n\ndf = results.to_pandas()\ndf.shape", "_____no_output_____" ] ], [ [ "`DataFrame` provides `shape`, which shows the number of rows and columns.\n\nIt also provides `head`, which displays the first few rows. 
It is useful for spot-checking large results as you go along.", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ] ], [ [ "Python detail: `shape` is an attribute, so we can display it's value without calling it as a function; `head` is a function, so we need the parentheses.", "_____no_output_____" ], [ "Now we can extract the columns we want from `gd1_coord` and add them as columns in the `DataFrame`. `phi1` and `phi2` contain the transformed coordinates.", "_____no_output_____" ] ], [ [ "df['phi1'] = gd1_coord.phi1\ndf['phi2'] = gd1_coord.phi2\ndf.shape", "_____no_output_____" ] ], [ [ "`pm_phi1_cosphi2` and `pm_phi2` contain the components of proper motion in the transformed frame.", "_____no_output_____" ] ], [ [ "df['pm_phi1'] = gd1_coord.pm_phi1_cosphi2\ndf['pm_phi2'] = gd1_coord.pm_phi2\ndf.shape", "_____no_output_____" ] ], [ [ "**Detail:** If you notice that `SkyCoord` has an attribute called `proper_motion`, you might wonder why we are not using it.\n\nWe could have: `proper_motion` contains the same data as `pm_phi1_cosphi2` and `pm_phi2`, but in a different format.", "_____no_output_____" ], [ "## Plot proper motion\n\nNow we are ready to replicate one of the panels in Figure 1 of the Price-Whelan and Bonaca paper, the one that shows the components of proper motion as a scatter plot:\n\n<img width=\"300\" src=\"https://github.com/datacarpentry/astronomy-python/raw/gh-pages/fig/gd1-1.png\">\n\nIn this figure, the shaded area is a high-density region of stars with the proper motion we expect for stars in GD-1. \n\n* Due to the nature of tidal streams, we expect the proper motion for most stars to be along the axis of the stream; that is, we expect motion in the direction of `phi2` to be near 0.\n\n* In the direction of `phi1`, we don't have a prior expectation for proper motion, except that it should form a cluster at a non-zero value. \n\nTo locate this cluster, we'll select stars near the centerline of GD-1 and plot their proper motion.", "_____no_output_____" ], [ "## Selecting the centerline\n\nAs we can see in the following figure, many stars in GD-1 are less than 1 degree of declination from the line `phi2=0`.\n\n<img src=\"https://github.com/datacarpentry/astronomy-python/raw/gh-pages/fig/gd1-4.png\">\n\nIf we select stars near this line, they are more likely to be in GD-1.\n\nWe'll start by selecting the `phi2` column from the `DataFrame`:", "_____no_output_____" ] ], [ [ "phi2 = df['phi2']\ntype(phi2)", "_____no_output_____" ] ], [ [ "The result is a `Series`, which is the structure Pandas uses to represent columns.\n\nWe can use a comparison operator, `>`, to compare the values in a `Series` to a constant.", "_____no_output_____" ] ], [ [ "phi2_min = -1.0 * u.deg\nphi2_max = 1.0 * u.deg\n\nmask = (df['phi2'] > phi2_min)\ntype(mask)", "_____no_output_____" ], [ "mask.dtype", "_____no_output_____" ] ], [ [ "The result is a `Series` of Boolean values, that is, `True` and `False`. 
", "_____no_output_____" ] ], [ [ "mask.head()", "_____no_output_____" ] ], [ [ "A Boolean `Series` is sometimes called a \"mask\" because we can use it to mask out some of the rows in a `DataFrame` and select the rest, like this:", "_____no_output_____" ] ], [ [ "subset = df[mask]\ntype(subset)", "_____no_output_____" ] ], [ [ "`subset` is a `DataFrame` that contains only the rows from `df` that correspond to `True` values in `mask`.\n\nThe previous mask selects all stars where `phi2` exceeds `phi2_min`; now we'll select stars where `phi2` falls between `phi2_min` and `phi2_max`.", "_____no_output_____" ] ], [ [ "phi_mask = ((df['phi2'] > phi2_min) & \n (df['phi2'] < phi2_max))", "_____no_output_____" ] ], [ [ "The `&` operator computes \"logical AND\", which means the result is true where elements from both Boolean `Series` are true.\n\nThe sum of a Boolean `Series` is the number of `True` values, so we can use `sum` to see how many stars are in the selected region.", "_____no_output_____" ] ], [ [ "phi_mask.sum()", "_____no_output_____" ] ], [ [ "And we can use `phi1_mask` to select stars near the centerline, which are more likely to be in GD-1.", "_____no_output_____" ] ], [ [ "centerline = df[phi_mask]\nlen(centerline)", "_____no_output_____" ] ], [ [ "Here's a scatter plot of proper motion for the selected stars.", "_____no_output_____" ] ], [ [ "pm1 = centerline['pm_phi1']\npm2 = centerline['pm_phi2']\n\nplt.plot(pm1, pm2, 'ko', markersize=0.1, alpha=0.1)\n \nplt.xlabel('Proper motion phi1 (GD1 frame)')\nplt.ylabel('Proper motion phi2 (GD1 frame)');", "_____no_output_____" ] ], [ [ "Looking at these results, we see a large cluster around (0, 0), and a smaller cluster near (0, -10).\n\nWe can use `xlim` and `ylim` to set the limits on the axes and zoom in on the region near (0, 0).", "_____no_output_____" ] ], [ [ "pm1 = centerline['pm_phi1']\npm2 = centerline['pm_phi2']\n\nplt.plot(pm1, pm2, 'ko', markersize=0.3, alpha=0.3)\n \nplt.xlabel('Proper motion phi1 (GD1 frame)')\nplt.ylabel('Proper motion phi2 (GD1 frame)')\n\nplt.xlim(-12, 8)\nplt.ylim(-10, 10);", "_____no_output_____" ] ], [ [ "Now we can see the smaller cluster more clearly.\n\nYou might notice that our figure is less dense than the one in the paper. That's because we started with a set of stars from a relatively small region. The figure in the paper is based on a region about 10 times bigger.\n\nIn the next lesson we'll go back and select stars from a larger region. 
But first we'll use the proper motion data to identify stars likely to be in GD-1.", "_____no_output_____" ], [ "## Filtering based on proper motion\n\nThe next step is to select stars in the \"overdense\" region of proper motion, which are candidates to be in GD-1.\n\nIn the original paper, Price-Whelan and Bonaca used a polygon to cover this region, as shown in this figure.\n\n<img width=\"300\" src=\"https://github.com/datacarpentry/astronomy-python/raw/gh-pages/fig/gd1-1.png\">\n\nWe'll use a simple rectangle for now, but in a later lesson we'll see how to select a polygonal region as well.\n\nHere are bounds on proper motion we chose by eye,", "_____no_output_____" ] ], [ [ "pm1_min = -8.9\npm1_max = -6.9\npm2_min = -2.2\npm2_max = 1.0", "_____no_output_____" ] ], [ [ "To draw these bounds, we'll make two lists containing the coordinates of the corners of the rectangle.", "_____no_output_____" ] ], [ [ "pm1_rect = [pm1_min, pm1_min, pm1_max, pm1_max, pm1_min] * u.mas/u.yr\npm2_rect = [pm2_min, pm2_max, pm2_max, pm2_min, pm2_min] * u.mas/u.yr", "_____no_output_____" ] ], [ [ "Here's what the plot looks like with the bounds we chose.", "_____no_output_____" ] ], [ [ "plt.plot(pm1, pm2, 'ko', markersize=0.3, alpha=0.3)\nplt.plot(pm1_rect, pm2_rect, '-')\n \nplt.xlabel('Proper motion phi1 (GD1 frame)')\nplt.ylabel('Proper motion phi2 (GD1 frame)')\n\nplt.xlim(-12, 8)\nplt.ylim(-10, 10);", "_____no_output_____" ] ], [ [ "To select rows that fall within these bounds, we'll use the following function, which uses Pandas operators to make a mask that selects rows where `series` falls between `low` and `high`.", "_____no_output_____" ] ], [ [ "def between(series, low, high):\n \"\"\"Make a Boolean Series.\n \n series: Pandas Series\n low: lower bound\n high: upper bound\n \n returns: Boolean Series\n \"\"\"\n return (series > low) & (series < high)", "_____no_output_____" ] ], [ [ "The following mask select stars with proper motion in the region we chose.", "_____no_output_____" ] ], [ [ "pm_mask = (between(df['pm_phi1'], pm1_min, pm1_max) & \n between(df['pm_phi2'], pm2_min, pm2_max))", "_____no_output_____" ] ], [ [ "Again, the sum of a Boolean series is the number of `True` values.", "_____no_output_____" ] ], [ [ "pm_mask.sum()", "_____no_output_____" ] ], [ [ "Now we can use this mask to select rows from `df`.", "_____no_output_____" ] ], [ [ "selected = df[pm_mask]\nlen(selected)", "_____no_output_____" ] ], [ [ "These are the stars we think are likely to be in GD-1. Let's see what they look like, plotting their coordinates (not their proper motion).", "_____no_output_____" ] ], [ [ "phi1 = selected['phi1']\nphi2 = selected['phi2']\n\nplt.plot(phi1, phi2, 'ko', markersize=0.5, alpha=0.5)\n\nplt.xlabel('ra (degree GD1)')\nplt.ylabel('dec (degree GD1)');", "_____no_output_____" ] ], [ [ "Now that's starting to look like a tidal stream!", "_____no_output_____" ], [ "## Saving the DataFrame\n\nAt this point we have run a successful query and cleaned up the results; this is a good time to save the data.\n\nTo save a Pandas `DataFrame`, one option is to convert it to an Astropy `Table`, like this:", "_____no_output_____" ] ], [ [ "selected_table = Table.from_pandas(selected)\ntype(selected_table)", "_____no_output_____" ] ], [ [ "Then we could write the `Table` to a FITS file, as we did in the previous lesson. 
\n\nBut Pandas provides functions to write DataFrames in other formats; to see what they are [find the functions here that begin with `to_`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).\n\nOne of the best options is HDF5, which is Version 5 of [Hierarchical Data Format](https://en.wikipedia.org/wiki/Hierarchical_Data_Format).\n\nHDF5 is a binary format, so files are small and fast to read and write (like FITS, but unlike XML).\n\nAn HDF5 file is similar to an SQL database in the sense that it can contain more than one table, although in HDF5 vocabulary, a table is called a Dataset. ([Multi-extension FITS files](https://www.stsci.edu/itt/review/dhb_2011/Intro/intro_ch23.html) can also contain more than one table.)\n\nAnd HDF5 stores the metadata associated with the table, including column names, row labels, and data types (like FITS).\n\nFinally, HDF5 is a cross-language standard, so if you write an HDF5 file with Pandas, you can read it back with many other software tools (more than FITS).", "_____no_output_____" ], [ "We can write a Pandas `DataFrame` to an HDF5 file like this:", "_____no_output_____" ] ], [ [ "filename = 'gd1_dataframe.hdf5'\n\ndf.to_hdf(filename, 'df', mode='w')", "_____no_output_____" ] ], [ [ "Because an HDF5 file can contain more than one Dataset, we have to provide a name, or \"key\", that identifies the Dataset in the file.\n\nWe could use any string as the key, but in this example I use the variable name `df`.", "_____no_output_____" ], [ "## Exercise \n\nWe're going to need `centerline` and `selected` later as well. Write a line or two of code to add it as a second Dataset in the HDF5 file.", "_____no_output_____" ] ], [ [ "# Solution\n\ncenterline.to_hdf(filename, 'centerline')\nselected.to_hdf(filename, 'selected')", "_____no_output_____" ] ], [ [ "**Detail:** Reading and writing HDF5 tables requires a library called `PyTables` that is not always installed with Pandas. You can install it with pip like this:\n\n```\npip install tables\n```\n\nIf you install it using Conda, the name of the package is `pytables`.\n\n```\nconda install pytables\n```", "_____no_output_____" ], [ "We can use `ls` to confirm that the file exists and check the size:", "_____no_output_____" ] ], [ [ "!ls -lh gd1_dataframe.hdf5", "-rw-rw-r-- 1 downey downey 17M Nov 18 19:06 gd1_dataframe.hdf5\r\n" ] ], [ [ "If you are using Windows, `ls` might not work; in that case, try:\n\n```\n!dir gd1_dataframe.hdf5\n```\n\nWe can read the file back like this:", "_____no_output_____" ] ], [ [ "read_back_df = pd.read_hdf(filename, 'df')\nread_back_df.shape", "_____no_output_____" ] ], [ [ "Pandas can write a variety of other formats, [which you can read about here](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html).", "_____no_output_____" ], [ "## Summary\n\nIn this lesson, we re-loaded the Gaia data we saved from a previous query.\n\nWe transformed the coordinates and proper motion from ICRS to a frame aligned with GD-1, and stored the results in a Pandas `DataFrame`.\n\nThen we replicated the selection process from the Price-Whelan and Bonaca paper:\n\n* We selected stars near the centerline of GD-1 and made a scatter plot of their proper motion.\n\n* We identified a region of proper motion that contains stars likely to be in GD-1.\n\n* We used a Boolean `Series` as a mask to select stars whose proper motion is in that region.\n\nSo far, we have used data from a relatively small region of the sky. 
In the next lesson, we'll write a query that selects stars based on proper motion, which will allow us to explore a larger region.", "_____no_output_____" ], [ "## Best practices\n\n* When you make a scatter plot, adjust the size of the markers and their transparency so the figure is not overplotted; otherwise it can misrepresent the data badly.\n\n* For simple scatter plots in Matplotlib, `plot` is faster than `scatter`.\n\n* An Astropy `Table` and a Pandas `DataFrame` are similar in many ways and they provide many of the same functions. They have pros and cons, but for many projects, either one would be a reasonable choice.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
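The GD-1 notebook in the record above poses an exercise about thinning out the overplotted sky scatter but includes no solution cell. One possible answer is sketched here; it simply reuses the `results` columns plotted earlier with the `markersize=0.1` and `alpha=0.1` values the exercise itself suggests, and the exact values are a matter of taste.

x = results['ra']
y = results['dec']

# Smaller, semi-transparent markers let the dense parts of the field show through.
plt.plot(x, y, 'ko', markersize=0.1, alpha=0.1)

plt.xlabel('ra (degree ICRS)')
plt.ylabel('dec (degree ICRS)');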
e761a51f1c1dac4a12a0af2b5a8b4491a08ebacf
212,514
ipynb
Jupyter Notebook
DC_GAN's.ipynb
RishavMishraRM/DC_GAN
429fe72c116c0c9b67ee4906a44630c58446decf
[ "MIT" ]
null
null
null
DC_GAN's.ipynb
RishavMishraRM/DC_GAN
429fe72c116c0c9b67ee4906a44630c58446decf
[ "MIT" ]
null
null
null
DC_GAN's.ipynb
RishavMishraRM/DC_GAN
429fe72c116c0c9b67ee4906a44630c58446decf
[ "MIT" ]
null
null
null
304.461318
38,186
0.90107
[ [ [ "import warnings\n\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "import keras\nfrom keras.datasets import mnist\nfrom keras.layers import *\nfrom keras.layers.advanced_activations import LeakyReLU\nfrom keras.models import Sequential, Model\nfrom keras.optimizers import Adam", "_____no_output_____" ], [ "(X_train, _), (_, _) = mnist.load_data()", "_____no_output_____" ], [ "X_train[:5]", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "X_train = (X_train-127.5)/127.5\n\nprint(X_train.min())\nprint(X_train.max())", "-1.0\n1.0\n" ], [ "TOTAL_EPOCHS = 50\nBATCH_SIZE = 256\nHALF_BATCH = 128\n\nNO_OF_BATCHES = int(X_train.shape[0]/BATCH_SIZE)\n\nNOISE_DIM = 100\n\nadam = Adam(lr = 2e-4, beta_1 = 0.5)", "_____no_output_____" ], [ "#Generator Modeling --> Up Sampling\n\ngenerator = Sequential()\ngenerator.add(Dense(units= 7*7*128, input_shape = (NOISE_DIM, )))\ngenerator.add(Reshape((7, 7, 128)))\ngenerator.add(LeakyReLU(0.2))\ngenerator.add(BatchNormalization())\n\n#(7, 7, 128) -> (14, 14, 64)\ngenerator.add(Conv2DTranspose(64, (3, 3), strides=(2, 2), padding='same'))\ngenerator.add(LeakyReLU(0.2))\ngenerator.add(BatchNormalization())\n\n#(14, 14, 64) --> (28, 28, 1)\ngenerator.add(Conv2DTranspose(1, (3, 3), strides=(2, 2), padding='same', activation='tanh'))\n\n\ngenerator.compile(loss = keras.losses.binary_crossentropy, optimizer=adam)\n\ngenerator.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense (Dense) (None, 6272) 633472 \n_________________________________________________________________\nreshape (Reshape) (None, 7, 7, 128) 0 \n_________________________________________________________________\nleaky_re_lu (LeakyReLU) (None, 7, 7, 128) 0 \n_________________________________________________________________\nbatch_normalization (BatchNo (None, 7, 7, 128) 512 \n_________________________________________________________________\nconv2d_transpose (Conv2DTran (None, 14, 14, 64) 73792 \n_________________________________________________________________\nleaky_re_lu_1 (LeakyReLU) (None, 14, 14, 64) 0 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 14, 14, 64) 256 \n_________________________________________________________________\nconv2d_transpose_1 (Conv2DTr (None, 28, 28, 1) 577 \n=================================================================\nTotal params: 708,609\nTrainable params: 708,225\nNon-trainable params: 384\n_________________________________________________________________\n" ], [ "#Discriminator Modeling --> Down Sampling\n\n# (28, 28, 1) --> (14, 14, 64)\ndiscriminator = Sequential()\ndiscriminator.add(Conv2D(64, kernel_size=(3, 3), strides=(2, 2), padding='same', input_shape = (28, 28, 1)))\ndiscriminator.add(LeakyReLU(0.2))\n\n#(14, 14, 64) --> (7, 7, 128)\ndiscriminator.add(Conv2D(128, kernel_size=(3, 3), strides=(2, 2), padding='same'))\ndiscriminator.add(LeakyReLU(0.2))\n\n#(7, 7, 128) --> 6272\ndiscriminator.add(Flatten())\ndiscriminator.add(Dense(100))\ndiscriminator.add(LeakyReLU(0.2))\n\ndiscriminator.add(Dense(1, activation='sigmoid'))\n\ndiscriminator.compile(loss = keras.losses.binary_crossentropy, optimizer=adam)\n\ndiscriminator.summary()", "Model: 
\"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 14, 14, 64) 640 \n_________________________________________________________________\nleaky_re_lu_2 (LeakyReLU) (None, 14, 14, 64) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 7, 7, 128) 73856 \n_________________________________________________________________\nleaky_re_lu_3 (LeakyReLU) (None, 7, 7, 128) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 6272) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 100) 627300 \n_________________________________________________________________\nleaky_re_lu_4 (LeakyReLU) (None, 100) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 1) 101 \n=================================================================\nTotal params: 701,897\nTrainable params: 701,897\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "## Combied Model \n\ndiscriminator.trainable = False\ngan_input = Input(shape = (NOISE_DIM, ))\n\ngenerated_img = generator(gan_input)\n\ngan_output = discriminator(generated_img)\n\nmodel = Model(gan_input, gan_output)\n\nmodel.compile(loss = \"binary_crossentropy\", optimizer=adam)", "_____no_output_____" ], [ "model.summary()", "Model: \"model\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, 100)] 0 \n_________________________________________________________________\nsequential (Sequential) (None, 28, 28, 1) 708609 \n_________________________________________________________________\nsequential_1 (Sequential) (None, 1) 701897 \n=================================================================\nTotal params: 1,410,506\nTrainable params: 708,225\nNon-trainable params: 702,281\n_________________________________________________________________\n" ], [ "#16.01\nX_train = X_train.reshape(-1, 28, 28, 1)", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "def display_images(samples = 25):\n\n noise = np.random.normal(0, 1, size=(samples, NOISE_DIM))\n\n generated_img = generator.predict(noise)\n\n plt.figure(figsize=(10, 10))\n\n for i in range(samples):\n plt.subplot(5, 5, i+1)\n plt.imshow(generated_img[i].reshape(28, 28), cmap=\"gray\")\n plt.axis('off')\n\n plt.show()", "_____no_output_____" ], [ "## Training Loop\n\nd_losses = []\ng_losses = []\nfor epoch in range(TOTAL_EPOCHS):\n\n epoch_d_loss = 0.0\n epoch_g_loss = 0.0\n\n for step in range(NO_OF_BATCHES):\n\n\n #++++++++++++++++++#\n #Step1. :- Train Discriminator\n discriminator.trainable = True\n\n #get the real data\n idx = np.random.randint(0, 60000, HALF_BATCH)\n real_imgs = X_train[idx]\n\n #get the fake data\n noise = np.random.normal(0, 1, size=(HALF_BATCH, NOISE_DIM))\n fake_imgs = generator.predict(noise)\n\n #Labels\n real_y = np.ones((HALF_BATCH, 1))*0.9\n fake_y = np.zeros((HALF_BATCH, 1))\n\n #now train D\n d_loss_real = discriminator.train_on_batch(real_imgs, real_y)\n d_loss_fake = discriminator.train_on_batch(fake_imgs, fake_y)\n\n d_loss = 0.5*d_loss_real + 0.5*d_loss_fake\n\n epoch_d_loss += d_loss\n\n\n #++++++++++++++++++++++#\n #Step2. 
:- Train Generator\n discriminator.trainable = False\n\n noise = np.random.normal(0, 1, size=(BATCH_SIZE, NOISE_DIM))\n ground_truth_y = np.ones((BATCH_SIZE, 1))\n\n g_loss = model.train_on_batch(noise, ground_truth_y)\n\n epoch_g_loss += g_loss\n\n #++++++++++++++++++++++#\n\n print(\"Epoch : {}\\t, Discriminator Loss : {}\\t, Generator Loss : {}\".format((epoch+1), epoch_d_loss/NO_OF_BATCHES, epoch_g_loss/BATCH_SIZE))\n\n d_losses.append(epoch_d_loss/NO_OF_BATCHES)\n g_losses.append(epoch_g_loss/NO_OF_BATCHES)\n\n\n if (epoch+1)%10==0:\n generator.save('generator.h5')\n display_images()", "Epoch : 1\t, Discriminator Loss : 0.25897651991146986\t, Generator Loss : 1.1500072128983447\nEpoch : 2\t, Discriminator Loss : 0.5677873889923605\t, Generator Loss : 1.9773283661343157\nEpoch : 3\t, Discriminator Loss : 0.5564669755279509\t, Generator Loss : 1.508944047614932\nEpoch : 4\t, Discriminator Loss : 0.5703185591687504\t, Generator Loss : 1.4899448323994875\nEpoch : 5\t, Discriminator Loss : 0.621234845274534\t, Generator Loss : 1.3262150567024946\nEpoch : 6\t, Discriminator Loss : 0.6406578118475075\t, Generator Loss : 1.202875533606857\nEpoch : 7\t, Discriminator Loss : 0.642724542472607\t, Generator Loss : 1.1239283843897283\nEpoch : 8\t, Discriminator Loss : 0.6468475127958844\t, Generator Loss : 1.076513821259141\nEpoch : 9\t, Discriminator Loss : 0.6445728270416586\t, Generator Loss : 1.0509009454399347\nEpoch : 10\t, Discriminator Loss : 0.6408960496385893\t, Generator Loss : 1.0446941344998777\n" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e761ad0e96398f366bc57fe77f086b4568677849
40,617
ipynb
Jupyter Notebook
notebooks/Prompt_Engineering_for_CIFAR.ipynb
KAndHisC/CLIP
51edf6c5afaf6a0c1f658c10134060433ce15694
[ "MIT" ]
null
null
null
notebooks/Prompt_Engineering_for_CIFAR.ipynb
KAndHisC/CLIP
51edf6c5afaf6a0c1f658c10134060433ce15694
[ "MIT" ]
null
null
null
notebooks/Prompt_Engineering_for_CIFAR.ipynb
KAndHisC/CLIP
51edf6c5afaf6a0c1f658c10134060433ce15694
[ "MIT" ]
null
null
null
36.330054
661
0.492282
[ [ [ "# Preparation for Colab\n\nMake sure you're running a GPU runtime; if not, select \"GPU\" as the hardware accelerator in Runtime > Change Runtime Type in the menu. The next cells will install the `clip` package and its dependencies, and check if PyTorch 1.7.1 or later is installed.", "_____no_output_____" ] ], [ [ "# ! pip install ftfy regex tqdm\n# ! pip install git+https://github.com/openai/CLIP.git", "Collecting ftfy\n Downloading ftfy-6.0.3.tar.gz (64 kB)\n\u001b[?25l\r\u001b[K |█████ | 10 kB 14.9 MB/s eta 0:00:01\r\u001b[K |██████████▏ | 20 kB 18.7 MB/s eta 0:00:01\r\u001b[K |███████████████▎ | 30 kB 9.0 MB/s eta 0:00:01\r\u001b[K |████████████████████▍ | 40 kB 4.1 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▌ | 51 kB 4.6 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▋ | 61 kB 4.7 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 64 kB 1.3 MB/s \n\u001b[?25hRequirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (2019.12.20)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (4.41.1)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from ftfy) (0.2.5)\nBuilding wheels for collected packages: ftfy\n Building wheel for ftfy (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for ftfy: filename=ftfy-6.0.3-py3-none-any.whl size=41934 sha256=90ec193331444b2c4ff1cd81935e7de42065b89d304db7efac67bcfd87c27873\n Stored in directory: /root/.cache/pip/wheels/19/f5/38/273eb3b5e76dfd850619312f693716ac4518b498f5ffb6f56d\nSuccessfully built ftfy\nInstalling collected packages: ftfy\nSuccessfully installed ftfy-6.0.3\nCollecting git+https://github.com/openai/CLIP.git\n Cloning https://github.com/openai/CLIP.git to /tmp/pip-req-build-hqnbveqi\n Running command git clone -q https://github.com/openai/CLIP.git /tmp/pip-req-build-hqnbveqi\nRequirement already satisfied: ftfy in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (6.0.3)\nRequirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (2019.12.20)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (4.41.1)\nRequirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (1.9.0+cu102)\nRequirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (0.10.0+cu102)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from ftfy->clip==1.0) (0.2.5)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->clip==1.0) (3.7.4.3)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision->clip==1.0) (1.19.5)\nRequirement already satisfied: pillow>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision->clip==1.0) (7.1.2)\nBuilding wheels for collected packages: clip\n Building wheel for clip (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for clip: filename=clip-1.0-py3-none-any.whl size=1369080 sha256=fda43d2b80cfb2b33c2d43e23ea5f53293a9a8b48d5f9e341de527f6adfbf5a3\n Stored in directory: /tmp/pip-ephem-wheel-cache-kmmplf44/wheels/fd/b9/c3/5b4470e35ed76e174bff77c92f91da82098d5e35fd5bc8cdac\nSuccessfully built clip\nInstalling collected packages: clip\nSuccessfully installed clip-1.0\n" ], [ "import numpy as np\nimport torch\nimport clip\n# from tqdm.notebook import tqdm\nfrom tqdm import tqdm\n\nprint(\"Torch version:\", torch.__version__)\n\nassert torch.__version__.split(\".\") >= [\"1\", \"7\", \"1\"], \"PyTorch 1.7.1 or later is required\"", "Torch version: 1.9.0\n" ] ], [ [ "# Loading the model\n\nDownload and instantiate a CLIP model using the `clip` module that we just installed.", "_____no_output_____" ] ], [ [ "print(clip.available_models())\nprint(os.getcwd())", "['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32', 'ViT-B/16']\n/localdata/workspace/CLIP/notebooks\n" ], [ "# model, preprocess = clip.load(\"ViT-B/32\")\nmodel, preprocess = clip.load(\"../models/200m0.988.pt\")", "_____no_output_____" ], [ "input_resolution = model.visual.input_resolution\ncontext_length = model.context_length\nvocab_size = model.vocab_size\n\nprint(\"Model parameters:\", f\"{np.sum([int(np.prod(p.shape)) for p in model.parameters()]):,}\")\nprint(\"Input resolution:\", input_resolution)\nprint(\"Context length:\", context_length)\nprint(\"Vocab size:\", vocab_size)", "Model parameters: 151,277,313\nInput resolution: 224\nContext length: 77\nVocab size: 49408\n" ] ], [ [ "# Preparing ImageNet labels and prompts\n\nThe following cell contains the 1,000 labels for the ImageNet dataset, followed by the text templates we'll use as \"prompt engineering\".", "_____no_output_____" ] ], [ [ "classes = [\n 'apple',\n 'aquarium fish',\n 'baby',\n 'bear',\n 'beaver',\n 'bed',\n 'bee',\n 'beetle',\n 'bicycle',\n 'bottle',\n 'bowl',\n 'boy',\n 'bridge',\n 'bus',\n 'butterfly',\n 'camel',\n 'can',\n 'castle',\n 'caterpillar',\n 'cattle',\n 'chair',\n 'chimpanzee',\n 'clock',\n 'cloud',\n 'cockroach',\n 'couch',\n 'crab',\n 'crocodile',\n 'cup',\n 'dinosaur',\n 'dolphin',\n 'elephant',\n 'flatfish',\n 'forest',\n 'fox',\n 'girl',\n 'hamster',\n 'house',\n 'kangaroo',\n 'keyboard',\n 'lamp',\n 'lawn mower',\n 'leopard',\n 'lion',\n 'lizard',\n 'lobster',\n 'man',\n 'maple tree',\n 'motorcycle',\n 'mountain',\n 'mouse',\n 'mushroom',\n 'oak tree',\n 'orange',\n 'orchid',\n 'otter',\n 'palm tree',\n 'pear',\n 'pickup truck',\n 'pine tree',\n 'plain',\n 'plate',\n 'poppy',\n 'porcupine',\n 'possum',\n 'rabbit',\n 'raccoon',\n 'ray',\n 'road',\n 'rocket',\n 'rose',\n 'sea',\n 'seal',\n 'shark',\n 'shrew',\n 'skunk',\n 'skyscraper',\n 'snail',\n 'snake',\n 'spider',\n 'squirrel',\n 'streetcar',\n 'sunflower',\n 'sweet pepper',\n 'table',\n 'tank',\n 'telephone',\n 'television',\n 'tiger',\n 'tractor',\n 'train',\n 'trout',\n 'tulip',\n 'turtle',\n 'wardrobe',\n 'whale',\n 'willow tree',\n 'wolf',\n 'woman',\n 'worm',\n]\n\ntemplates = [\n 'a photo of a {}.',\n 'a blurry photo of a {}.',\n 'a black and white photo of a {}.',\n 'a low contrast photo of a {}.',\n 'a high contrast photo of a {}.',\n 'a bad photo of a {}.',\n 'a good photo of a {}.',\n 'a photo of a small {}.',\n 'a photo of a big {}.',\n 'a photo of the {}.',\n 'a blurry photo of the {}.',\n 'a black and white photo of the {}.',\n 'a low contrast photo of the {}.',\n 'a high contrast photo of the {}.',\n 'a bad photo of the {}.',\n 'a good photo of 
the {}.',\n 'a photo of the small {}.',\n 'a photo of the big {}.',\n]\n", "_____no_output_____" ] ], [ [ "A subset of these class names are modified from the default ImageNet class names sourced from Anish Athalye's imagenet-simple-labels.\n\nThese edits were made via trial and error and concentrated on the lowest performing classes according to top_1 and top_5 accuracy on the ImageNet training set for the RN50, RN101, and RN50x4 models. These tweaks improve top_1 by 1.5% on ViT-B/32 over using the default class names. Alec got bored somewhere along the way as gains started to diminish and never finished updating / tweaking the list. He also didn't revisit this with the better performing RN50x16, RN50x64, or any of the ViT models. He thinks it's likely another 0.5% to 1% top_1 could be gained from further work here. It'd be interesting to more rigorously study / understand this.\n\nSome examples beyond the crane/crane -> construction crane / bird crane issue mentioned in Section 3.1.4 of the paper include:\n\n- CLIP interprets \"nail\" as \"fingernail\" so we changed the label to \"metal nail\".\n- ImageNet kite class refers to the bird of prey, not the flying toy, so we changed \"kite\" to \"kite (bird of prey)\"\n- The ImageNet class for red wolf seems to include a lot of mislabeled maned wolfs so we changed \"red wolf\" to \"red wolf or maned wolf\"", "_____no_output_____" ] ], [ [ "\n\nprint(f\"{len(classes)} classes, {len(templates)} templates\")", "100 classes, 18 templates\n" ] ], [ [ "A similar, intuition-guided trial and error based on the ImageNet training set was used for templates. This list is pretty haphazard and was gradually made / expanded over the course of about a year of the project and was revisited / tweaked every few months. A surprising / weird thing was adding templates intended to help ImageNet-R performance (specifying different possible renditions of an object) improved standard ImageNet accuracy too.\n\nAfter the 80 templates were \"locked\" for the paper, we ran sequential forward selection over the list of 80 templates. The search terminated after ensembling 7 templates and selected them in the order below.\n\n1. itap of a {}.\n2. a bad photo of the {}.\n3. a origami {}.\n4. a photo of the large {}.\n5. a {} in a video game.\n6. art of the {}.\n7. a photo of the small {}.\n\nSpeculating, we think it's interesting to see different scales (large and small), a difficult view (a bad photo), and \"abstract\" versions (origami, video game, art), were all selected for, but we haven't studied this in any detail. This subset performs a bit better than the full 80 ensemble reported in the paper, especially for the smaller models.", "_____no_output_____" ], [ "# Loading the Images\n\nThe ILSVRC2012 datasets are no longer available for download publicly. We instead download the ImageNet-V2 dataset by [Recht et al.](https://arxiv.org/abs/1902.10811).\n\nIf you have the ImageNet dataset downloaded, you can replace the dataset with the official torchvision loader, e.g.:\n\n```python\nimages = torchvision.datasets.ImageNet(\"path/to/imagenet\", split='val', transform=preprocess)\n```", "_____no_output_____" ] ], [ [ "# ! 
pip install git+https://github.com/modestyachts/ImageNetV2_pytorch\n\n# from imagenetv2_pytorch import ImageNetV2Dataset\n\n# images = ImageNetV2Dataset(transform=preprocess)\n\nfrom torchvision.datasets import CIFAR100\nimport os\ncifar100 = CIFAR100(root=os.path.expanduser(\"./data/cifar100\"), download=True, train=True, transform=preprocess)\n\nloader = torch.utils.data.DataLoader(cifar100, batch_size=32, num_workers=8)", "Files already downloaded and verified\n" ] ], [ [ "# Creating zero-shot classifier weights", "_____no_output_____" ] ], [ [ "def zeroshot_classifier(classnames, templates):\n with torch.no_grad():\n zeroshot_weights = []\n for classname in tqdm(classnames):\n texts = [template.format(classname) for template in templates] #format with class\n texts = clip.tokenize(texts).cuda() #tokenize\n class_embeddings = model.encode_text(texts) #embed with text encoder\n class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True)\n class_embedding = class_embeddings.mean(dim=0)\n class_embedding /= class_embedding.norm()\n zeroshot_weights.append(class_embedding)\n zeroshot_weights = torch.stack(zeroshot_weights, dim=1).cuda()\n return zeroshot_weights\n\n\nzeroshot_weights = zeroshot_classifier(classes, templates)", "100%|██████████| 100/100 [00:01<00:00, 74.94it/s]\n" ] ], [ [ "# Zero-shot prediction", "_____no_output_____" ] ], [ [ "def accuracy(output, target, topk=(1,)):\n pred = output.topk(max(topk), 1, True, True)[1].t()\n correct = pred.eq(target.view(1, -1).expand_as(pred))\n return [float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) for k in topk]", "_____no_output_____" ], [ "with torch.no_grad():\n top1, top5, n = 0., 0., 0.\n for i, (images, target) in enumerate(tqdm(loader)):\n images = images.cuda()\n target = target.cuda()\n \n # predict\n image_features = model.encode_image(images)\n image_features /= image_features.norm(dim=-1, keepdim=True)\n logits = 100. * image_features @ zeroshot_weights\n\n # measure accuracy\n acc1, acc5 = accuracy(logits, target, topk=(1, 5))\n top1 += acc1\n top5 += acc5\n n += images.size(0)\n\ntop1 = (top1 / n) * 100\ntop5 = (top5 / n) * 100 \n\nprint(f\"Top-1 accuracy: {top1:.2f}\")\nprint(f\"Top-5 accuracy: {top5:.2f}\")", "100%|██████████| 1563/1563 [00:43<00:00, 35.92it/s]" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
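The prompt-engineering notes in the CLIP notebook above list seven templates chosen by sequential forward selection but never show them in code. A minimal sketch follows; it only reuses that notebook's `zeroshot_classifier` helper and `classes` list, and whether this ImageNet-derived subset helps on CIFAR-100 is left as an open question rather than a claim.

# The seven templates quoted in the notebook's prompt-engineering discussion.
selected_templates = [
    'itap of a {}.',
    'a bad photo of the {}.',
    'a origami {}.',
    'a photo of the large {}.',
    'a {} in a video game.',
    'art of the {}.',
    'a photo of the small {}.',
]

# Build ensemble weights from only these prompts and evaluate with the same loop as before.
zeroshot_weights_small = zeroshot_classifier(classes, selected_templates)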
e761b12b8b64a3bbe02601e238873fe613f50f90
191,929
ipynb
Jupyter Notebook
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
a485ddafbb77098fc413e1ffb2bf4d2ece091b63
[ "AAL" ]
null
null
null
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
a485ddafbb77098fc413e1ffb2bf4d2ece091b63
[ "AAL" ]
null
null
null
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
a485ddafbb77098fc413e1ffb2bf4d2ece091b63
[ "AAL" ]
null
null
null
29.890827
634
0.406463
[ [ [ "## Car Accidents During COVID Stay-At-Home Periods Analysis\nThis notebook is using data collected and cleaned in the `car_accidents_eda.ipynb` notebook, and will be used for examining differences state-by-state for severity and frequency of car accidents. Tableau graphs will be used for final graphs, but this notebook gives a sense of direction.", "_____no_output_____" ], [ "### TABLE OF CONTENTS\n1. [Setup](#Setup)\n2. [Top 20 States Analysis](#Top-20-States-Analysis)\n - [California](#California)\n - [Texas](#Texas)\n - [Florida](#Florida)\n - [South Carolina](#South-Carolina)\n - [North Carolina](#North-Carolina)\n - [New York](#New-York)\n - [Pennsylvania](#Pennsylvania)\n - [Illinois](#Illinois)\n - [Virginia](#Virginia)\n - [Michigan](#Michigan)\n - [Georgia](#Georgia)\n - [Oregon](#Oregon)\n - [Minnesota](#Minnesota)\n - [Arizona](#Arizona)\n - [Tennessee](#Tennessee)\n - [Washington](#Washington)\n - [Ohio](#Ohio)\n - [Louisiana](#Louisiana)\n - [Oklahoma](#Oklahoma)\n - [New Jersey](#New-Jersey)\n3. [Top 20 States Averages](#Top-20-States-Averages)", "_____no_output_____" ], [ "First, import libraries and modules needed >", "_____no_output_____" ] ], [ [ "import pickle\nimport pandas as pd\nimport numpy as np", "_____no_output_____" ] ], [ [ "### Setup", "_____no_output_____" ], [ "First import the dataset created and prepared for analysis >", "_____no_output_____" ] ], [ [ "# open our master dataset for analysis\n\nwith open('pickle/car_accidents_master.pickle','rb') as read_file:\n car_accidents_master = pickle.load(read_file)", "_____no_output_____" ] ], [ [ "Note that we are looking at our dataset before it was narrowed for modeling (e.g. only severity 2 & 3, only using mapquest-sourced data). For the analysis we'll be doing here we can look at the full dataset.\n\nBut first we'll narrow to only the columns we'll be using, which are date, state, whether in period of shutdown, and severity of the accident >", "_____no_output_____" ] ], [ [ "accident_covid = car_accidents_master[['Severity','Date',\n 'State','Year','Month',\n 'Day','Day_Of_Week',\n 'Shut_Down'\n ]]", "_____no_output_____" ] ], [ [ "Now we'll look at amount of accidents in each states >", "_____no_output_____" ] ], [ [ "accident_covid.State.value_counts()", "_____no_output_____" ] ], [ [ "And then see if we narrowed the number of states, how many we'd need to get to at least 75% of total accidents >", "_____no_output_____" ] ], [ [ "accident_covid.State.value_counts()[:20].sum()/accident_covid.State.value_counts().sum()", "_____no_output_____" ], [ "accident_covid.State.value_counts()[:20]", "_____no_output_____" ] ], [ [ "First we'll focus on the state with the highest amount of accidents, as they will provided more data to then draw better conclusions from and also are of greater interest since more need to address car accidents. We'll focus on the 20 states that have the most car accidents, which also represent 86% of total accidents in our full dataset for the US. Those states are: CA, TX, FL, SC, NC, NY, PA, IL, VA, MI, GA, OR, MN, AZ, TN, WA, OH, LA, OK, and NJ. These states also vary in time period of shutdown and political leanings that may influence behavior around a shutdown period, so we have nice diversity for comparison.\n\nFor each state, we'll create dataframes for separate periods we want to analyze and compare. 
We'll compare stay-at-home order periods with past three years in that same period.\n\nFirst we'll create functions to get the data we'll need for each state >", "_____no_output_____" ] ], [ [ "def severity_percent(state_data):\n '''\n Takes in state data and returns\n a list of percentages for each \n severity level to total accidents.\n '''\n percentages = []\n if 1 not in sorted(state_data.Severity.unique()):\n percentages.append(0)\n \n for x in sorted(state_data.Severity.unique()):\n percent = state_data.Severity.value_counts()[x]/state_data.Severity.count()\n percentages.append(round((percent*100),2))\n \n if 4 not in sorted(state_data.Severity.unique()):\n percentages.append(0)\n \n return percentages", "_____no_output_____" ], [ "def change_freq(state_freq_data):\n '''\n Takes in the dataframe for fequency of \n car accidents for a state and outputs\n the percent change from stay-home 2020\n period compared to average of equivalent\n periods 2017-2019 rounded to 2 decimals.\n '''\n freq_2020 = state_freq_data.Num_Accidents[1]\n avg_freq = np.mean(state_freq_data.Num_Accidents[2:5])\n \n return round(((freq_2020 - avg_freq)/avg_freq * 100),2)", "_____no_output_____" ] ], [ [ "-\n## Top 20 States Analysis", "_____no_output_____" ], [ "### California", "_____no_output_____" ], [ "First create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nca_ttl = accident_covid[(accident_covid.State == 'CA')]\n\n# shutdown period\nca_sd = accident_covid[\n (accident_covid.State == 'CA') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \n# note CA is still in shutdown, so we're comparing to when dataset ended in 2020\n# not when their stay-at-home orders ended\n\nca_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-19') & \n (car_accidents_master['Date'] <= '2019-06-30') & \n (car_accidents_master['State'] == 'CA')]\n\n# 2018\nca_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-19') & \n (car_accidents_master['Date'] <= '2018-06-30') & \n (car_accidents_master['State'] == 'CA')]\n\n# 2017\nca_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-19') & \n (car_accidents_master['Date'] <= '2017-06-30') & \n (car_accidents_master['State'] == 'CA')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nca_freqs = [ca_ttl.shape[0],\n ca_sd.shape[0],\n ca_2019.shape[0],\n ca_2018.shape[0],\n ca_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nca_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': ca_freqs})\n\nca_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(ca_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 76.44%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nca_sp_ttl = severity_percent(ca_ttl)\nca_sp_sd = severity_percent(ca_sd)\nca_sp_2019 = 
severity_percent(ca_2019)\nca_sp_2018 = severity_percent(ca_2018)\nca_sp_2017 = severity_percent(ca_2017)\n\n# then again create a simple dataframe for comparison\n\nca_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': ca_sp_ttl,\n 'Shut_Down': ca_sp_sd,\n '2019': ca_sp_2019,\n '2018': ca_sp_2018,\n '2017': ca_sp_2017,\n })\n\nca_severity_data", "_____no_output_____" ] ], [ [ "#### CA Takeaways:\n- See increase in fequency during shut down period compared to same periods in past three years\n- Seeing increase in 'edges' of severity -- 1 & 4 -- of car accidents with lowering of 'middle' severities -- 2 & 3 -- for shutdown period compared to average ttl and past year's prior periods", "_____no_output_____" ], [ "-\n### Texas", "_____no_output_____" ], [ "First create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\ntx_ttl = accident_covid[(accident_covid.State == 'TX')]\n\n# shutdown period\ntx_sd = accident_covid[\n (accident_covid.State == 'TX') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \ntx_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-31') & \n (car_accidents_master['Date'] <= '2019-04-30') & \n (car_accidents_master['State'] == 'TX')]\n\n# 2018\ntx_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-31') & \n (car_accidents_master['Date'] <= '2018-04-30') & \n (car_accidents_master['State'] == 'TX')]\n\n# 2017\ntx_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-31') & \n (car_accidents_master['Date'] <= '2017-04-30') & \n (car_accidents_master['State'] == 'TX')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\ntx_freqs = [tx_ttl.shape[0],\n tx_sd.shape[0],\n tx_2019.shape[0],\n tx_2018.shape[0],\n tx_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\ntx_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': tx_freqs})\n\ntx_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(tx_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: -37.65%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\ntx_sp_ttl = severity_percent(tx_ttl)\ntx_sp_sd = severity_percent(tx_sd)\ntx_sp_2019 = severity_percent(tx_2019)\ntx_sp_2018 = severity_percent(tx_2018)\ntx_sp_2017 = severity_percent(tx_2017)\n\n# then again create a simple dataframe for comparison\n\ntx_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': tx_sp_ttl,\n 'Shut_Down': tx_sp_sd,\n '2019': tx_sp_2019,\n '2018': tx_sp_2018,\n '2017': tx_sp_2017\n })\n\ntx_severity_data", "_____no_output_____" ] ], [ [ "#### TX Takeaways:\n- Frequency: Unlike CA, TX had a drop in frequency during shutdown compared to past periods\n- Severity: Similar to CA, still the 'edges' of severity (1 & 4) growing, 4 in particular to a greater degree than CA ", "_____no_output_____" ], [ "-\n### 
Florida\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nfl_ttl = accident_covid[(accident_covid.State == 'FL')]\n\n# shutdown period\nfl_sd = accident_covid[\n (accident_covid.State == 'FL') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nfl_2019 = accident_covid[(car_accidents_master['Date'] > '2019-04-03') & \n (car_accidents_master['Date'] <= '2019-04-30') & \n (car_accidents_master['State'] == 'FL')]\n\n# 2018\nfl_2018 = accident_covid[(car_accidents_master['Date'] > '2018-04-03') & \n (car_accidents_master['Date'] <= '2018-04-30') & \n (car_accidents_master['State'] == 'FL')]\n\n# 2017\nfl_2017 = accident_covid[(car_accidents_master['Date'] > '2017-04-03') & \n (car_accidents_master['Date'] <= '2017-04-30') & \n (car_accidents_master['State'] == 'FL')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nfl_freqs = [fl_ttl.shape[0],\n fl_sd.shape[0],\n fl_2019.shape[0],\n fl_2018.shape[0],\n fl_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nfl_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': fl_freqs})\n\nfl_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(fl_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 6.62%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nfl_sp_ttl = severity_percent(fl_ttl)\nfl_sp_sd = severity_percent(fl_sd)\nfl_sp_2019 = severity_percent(fl_2019)\nfl_sp_2018 = severity_percent(fl_2018)\nfl_sp_2017 = severity_percent(fl_2017)\n\n# then again create a simple dataframe for comparison\n\nfl_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': fl_sp_ttl,\n 'Shut_Down': fl_sp_sd,\n '2019': fl_sp_2019,\n '2018': fl_sp_2018,\n '2017': fl_sp_2017\n })\n\nfl_severity_data", "_____no_output_____" ] ], [ [ "#### FL Takeaways:\n- Frequency: Not really much change positively or negatively for shutdown compared to past periods\n- Severity: Don't see a lot of difference in severity 4 as with CA & TX, but do see significant increase in 1s while 2 & 3 dropped", "_____no_output_____" ], [ "-\n### South Carolina\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nsc_ttl = accident_covid[(accident_covid.State == 'SC')]\n\n# shutdown period\nsc_sd = accident_covid[\n (accident_covid.State == 'SC') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nsc_2019 = accident_covid[(car_accidents_master['Date'] > '2019-04-06') & \n (car_accidents_master['Date'] <= '2019-05-04') & \n (car_accidents_master['State'] == 'SC')]\n\n# 2018\nsc_2018 = accident_covid[(car_accidents_master['Date'] > '2018-04-06') & \n (car_accidents_master['Date'] <= '2018-05-04') & \n (car_accidents_master['State'] == 'SC')]\n\n# 2017\nsc_2017 = accident_covid[(car_accidents_master['Date'] > '2017-04-06') & \n 
(car_accidents_master['Date'] <= '2017-05-04') & \n (car_accidents_master['State'] == 'SC')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nsc_freqs = [sc_ttl.shape[0],\n sc_sd.shape[0],\n sc_2019.shape[0],\n sc_2018.shape[0],\n sc_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nsc_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': sc_freqs})\n\nsc_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(sc_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 23.58%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nsc_sp_ttl = severity_percent(sc_ttl)\nsc_sp_sd = severity_percent(sc_sd)\nsc_sp_2019 = severity_percent(sc_2019)\nsc_sp_2018 = severity_percent(sc_2018)\nsc_sp_2017 = severity_percent(sc_2017)\n\n# then again create a simple dataframe for comparison\n\nsc_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': sc_sp_ttl,\n 'Shut_Down': sc_sp_sd,\n '2019': sc_sp_2019,\n '2018': sc_sp_2018,\n '2017': sc_sp_2017\n })\n\nsc_severity_data", "_____no_output_____" ] ], [ [ "#### SC Takeaways:\n- Frequency: no significant difference for shutdown period. 2017 seems low, but this may be due to less data collected that year. \n- Severity: in general, seemed to lower in severity. 
See more 1 & 2 and less 3 & 4", "_____no_output_____" ], [ "-\n### North Carolina\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nnc_ttl = accident_covid[(accident_covid.State == 'NC')]\n\n# shutdown period\nnc_sd = accident_covid[\n (accident_covid.State == 'NC') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nnc_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-30') & \n (car_accidents_master['Date'] <= '2019-05-22') & \n (car_accidents_master['State'] == 'NC')]\n\n# 2018\nnc_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-30') & \n (car_accidents_master['Date'] <= '2018-05-22') & \n (car_accidents_master['State'] == 'NC')]\n\n# 2017\nnc_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-30') & \n (car_accidents_master['Date'] <= '2017-05-22') & \n (car_accidents_master['State'] == 'NC')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nnc_freqs = [nc_ttl.shape[0],\n nc_sd.shape[0],\n nc_2019.shape[0],\n nc_2018.shape[0],\n nc_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nnc_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': nc_freqs})\n\nnc_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(nc_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 22.40%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nnc_sp_ttl = severity_percent(nc_ttl)\nnc_sp_sd = severity_percent(nc_sd)\nnc_sp_2019 = severity_percent(nc_2019)\nnc_sp_2018 = severity_percent(nc_2018)\nnc_sp_2017 = severity_percent(nc_2017)\n\n# then again create a simple dataframe for comparison\n\nnc_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': nc_sp_ttl,\n 'Shut_Down': nc_sp_sd,\n '2019': nc_sp_2019,\n '2018': nc_sp_2018,\n '2017': nc_sp_2017\n })\n\nnc_severity_data", "_____no_output_____" ] ], [ [ "#### NC Takeaways:\n- Fequency: No significant change during shutdown period\n- Severity: Do see an increase in level 4 severity, but greatest change is increase in severity 1 with drop in severity 2", "_____no_output_____" ], [ "-\n### New York\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nny_ttl = accident_covid[(accident_covid.State == 'NY')]\n\n# shutdown period\nny_sd = accident_covid[\n (accident_covid.State == 'NY') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nny_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-22') & \n (car_accidents_master['Date'] <= '2019-06-13') & \n (car_accidents_master['State'] == 'NY')]\n\n# 2018\nny_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-22') & \n (car_accidents_master['Date'] <= '2018-06-13') & \n (car_accidents_master['State'] == 'NY')]\n\n# 2017\nny_2017 = accident_covid[(car_accidents_master['Date'] > 
'2017-03-22') & \n (car_accidents_master['Date'] <= '2017-06-13') & \n (car_accidents_master['State'] == 'NY')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nny_freqs = [ny_ttl.shape[0],\n ny_sd.shape[0],\n ny_2019.shape[0],\n ny_2018.shape[0],\n ny_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nny_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': ny_freqs})\n\nny_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(ny_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 50.28%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nny_sp_ttl = severity_percent(ny_ttl)\nny_sp_sd = severity_percent(ny_sd)\nny_sp_2019 = severity_percent(ny_2019)\nny_sp_2018 = severity_percent(ny_2018)\nny_sp_2017 = severity_percent(ny_2017)\n\n# then again create a simple dataframe for comparison\n\nny_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': ny_sp_ttl,\n 'Shut_Down': ny_sp_sd,\n '2019': ny_sp_2019,\n '2018': ny_sp_2018,\n '2017': ny_sp_2017\n })\n\nny_severity_data", "_____no_output_____" ] ], [ [ "#### NY Takeaways:\n- Frequency: See an increase in shutdown period, although there was also an even larger jump from 2018 to 2019\n- Severity: See increase in edge severities (1 & 2)", "_____no_output_____" ], [ "-\n### Pennsylvania\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\npa_ttl = accident_covid[(accident_covid.State == 'PA')]\n\n# shutdown period\npa_sd = accident_covid[\n (accident_covid.State == 'PA') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \npa_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-23') & \n (car_accidents_master['Date'] <= '2019-06-04') & \n (car_accidents_master['State'] == 'PA')]\n\n# 2018\npa_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-23') & \n (car_accidents_master['Date'] <= '2018-06-04') & \n (car_accidents_master['State'] == 'PA')]\n\n# 2017\npa_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-23') & \n (car_accidents_master['Date'] <= '2017-06-04') & \n (car_accidents_master['State'] == 'PA')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\npa_freqs = [pa_ttl.shape[0],\n pa_sd.shape[0],\n pa_2019.shape[0],\n pa_2018.shape[0],\n pa_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\npa_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': pa_freqs})\n\npa_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 
stay-home compared to average past years\n\nchange = change_freq(pa_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 90.76%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\npa_sp_ttl = severity_percent(pa_ttl)\npa_sp_sd = severity_percent(pa_sd)\npa_sp_2019 = severity_percent(pa_2019)\npa_sp_2018 = severity_percent(pa_2018)\npa_sp_2017 = severity_percent(pa_2017)\n\n# then again create a simple dataframe for comparison\n\npa_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': pa_sp_ttl,\n 'Shut_Down': pa_sp_sd,\n '2019': pa_sp_2019,\n '2018': pa_sp_2018,\n '2017': pa_sp_2017\n })\n\npa_severity_data", "_____no_output_____" ] ], [ [ "#### PA Takeaways:\n- Frequency: See a fairly significant increase in accidents during shutdown period\n- Severity: See increase in severity 1 and drop in severity 2, but not that large", "_____no_output_____" ], [ "-\n### Illinois\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nil_ttl = accident_covid[(accident_covid.State == 'IL')]\n\n# shutdown period\nil_sd = accident_covid[\n (accident_covid.State == 'IL') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nil_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-25') & \n (car_accidents_master['Date'] <= '2019-05-31') & \n (car_accidents_master['State'] == 'IL')]\n\n# 2018\nil_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-25') & \n (car_accidents_master['Date'] <= '2018-05-31') & \n (car_accidents_master['State'] == 'IL')]\n\n# 2017\nil_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-25') & \n (car_accidents_master['Date'] <= '2017-05-31') & \n (car_accidents_master['State'] == 'IL')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nil_freqs = [il_ttl.shape[0],\n il_sd.shape[0],\n il_2019.shape[0],\n il_2018.shape[0],\n il_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nil_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': il_freqs})\n\nil_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(il_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 24.55%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nil_sp_ttl = severity_percent(il_ttl)\nil_sp_sd = severity_percent(il_sd)\nil_sp_2019 = severity_percent(il_2019)\nil_sp_2018 = severity_percent(il_2018)\nil_sp_2017 = severity_percent(il_2017)\n\n# then again create a simple dataframe for comparison\n\nil_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': 
il_sp_ttl,\n 'Shut_Down': il_sp_sd,\n '2019': il_sp_2019,\n '2018': il_sp_2018,\n '2017': il_sp_2017\n })\n\nil_severity_data", "_____no_output_____" ] ], [ [ "#### IL Takeaways:\n- Frequency: Increase during shutdown period\n- Severity: Large increase in severity 3 and decrese in severity 2. Also increase in severity 1", "_____no_output_____" ], [ "-\n### Virginia\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nva_ttl = accident_covid[(accident_covid.State == 'VA')]\n\n# shutdown period\nva_sd = accident_covid[\n (accident_covid.State == 'VA') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nva_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-30') & \n (car_accidents_master['Date'] <= '2019-06-10') & \n (car_accidents_master['State'] == 'VA')]\n\n# 2018\nva_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-30') & \n (car_accidents_master['Date'] <= '2018-06-10') & \n (car_accidents_master['State'] == 'VA')]\n\n# 2017\nva_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-30') & \n (car_accidents_master['Date'] <= '2017-06-10') & \n (car_accidents_master['State'] == 'VA')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nva_freqs = [va_ttl.shape[0],\n va_sd.shape[0],\n va_2019.shape[0],\n va_2018.shape[0],\n va_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nva_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': va_freqs})\n\nva_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(va_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 113.11%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nva_sp_ttl = severity_percent(va_ttl)\nva_sp_sd = severity_percent(va_sd)\nva_sp_2019 = severity_percent(va_2019)\nva_sp_2018 = severity_percent(va_2018)\nva_sp_2017 = severity_percent(va_2017)\n\n# then again create a simple dataframe for comparison\n\nva_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': va_sp_ttl,\n 'Shut_Down': va_sp_sd,\n '2019': va_sp_2019,\n '2018': va_sp_2018,\n '2017': va_sp_2017\n })\n\nva_severity_data", "_____no_output_____" ] ], [ [ "#### VA Takeaways:\n- Frequency: See a large increase in frequency during shutdown\n- Severity: Significant increase in severity 1 and drop in severity 3", "_____no_output_____" ], [ "-\n### Michigan\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nmi_ttl = accident_covid[(accident_covid.State == 'MI')]\n\n# shutdown period\nmi_sd = accident_covid[\n (accident_covid.State == 'MI') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nmi_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-24') & \n (car_accidents_master['Date'] <= '2019-05-28') & \n (car_accidents_master['State'] == 'MI')]\n\n# 
2018\nmi_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-24') & \n (car_accidents_master['Date'] <= '2018-05-28') & \n (car_accidents_master['State'] == 'MI')]\n\n# 2017\nmi_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-24') & \n (car_accidents_master['Date'] <= '2017-05-28') & \n (car_accidents_master['State'] == 'MI')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nmi_freqs = [mi_ttl.shape[0],\n mi_sd.shape[0],\n mi_2019.shape[0],\n mi_2018.shape[0],\n mi_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nmi_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': mi_freqs})\n\nmi_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(mi_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: -51.62%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nmi_sp_ttl = severity_percent(mi_ttl)\nmi_sp_sd = severity_percent(mi_sd)\nmi_sp_2019 = severity_percent(mi_2019)\nmi_sp_2018 = severity_percent(mi_2018)\nmi_sp_2017 = severity_percent(mi_2017)\n\n# then again create a simple dataframe for comparison\n\nmi_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': mi_sp_ttl,\n 'Shut_Down': mi_sp_sd,\n '2019': mi_sp_2019,\n '2018': mi_sp_2018,\n '2017': mi_sp_2017\n })\n\nmi_severity_data", "_____no_output_____" ] ], [ [ "#### MI Takeaways:\n- Frequency: Drop in number of accidents\n- Severity: Increase in edges of severity 1 & 4 -- most dramatic with severity 4", "_____no_output_____" ], [ "-\n### Georgia\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nga_ttl = accident_covid[(accident_covid.State == 'GA')]\n\n# shutdown period\nga_sd = accident_covid[\n (accident_covid.State == 'GA') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nga_2019 = accident_covid[(car_accidents_master['Date'] > '2019-04-03') & \n (car_accidents_master['Date'] <= '2019-04-30') & \n (car_accidents_master['State'] == 'GA')]\n\n# 2018\nga_2018 = accident_covid[(car_accidents_master['Date'] > '2018-04-03') & \n (car_accidents_master['Date'] <= '2018-04-30') & \n (car_accidents_master['State'] == 'GA')]\n\n# 2017\nga_2017 = accident_covid[(car_accidents_master['Date'] > '2017-04-03') & \n (car_accidents_master['Date'] <= '2017-04-30') & \n (car_accidents_master['State'] == 'GA')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nga_freqs = [ga_ttl.shape[0],\n ga_sd.shape[0],\n ga_2019.shape[0],\n ga_2018.shape[0],\n ga_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare 
them\nga_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': ga_freqs})\n\nga_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(ga_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: -13.71%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nga_sp_ttl = severity_percent(ga_ttl)\nga_sp_sd = severity_percent(ga_sd)\nga_sp_2019 = severity_percent(ga_2019)\nga_sp_2018 = severity_percent(ga_2018)\nga_sp_2017 = severity_percent(ga_2017)\n\n# then again create a simple dataframe for comparison\n\nga_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': ga_sp_ttl,\n 'Shut_Down': ga_sp_sd,\n '2019': ga_sp_2019,\n '2018': ga_sp_2018,\n '2017': ga_sp_2017\n })\n\nga_severity_data", "_____no_output_____" ] ], [ [ "-\n### Oregon\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nor_ttl = accident_covid[(accident_covid.State == 'OR')]\n\n# shutdown period\nor_sd = accident_covid[\n (accident_covid.State == 'OR') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nor_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-23') & \n (car_accidents_master['Date'] <= '2019-06-30') & \n (car_accidents_master['State'] == 'OR')]\n\n# 2018\nor_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-23') & \n (car_accidents_master['Date'] <= '2018-06-30') & \n (car_accidents_master['State'] == 'OR')]\n\n# 2017\nor_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-23') & \n (car_accidents_master['Date'] <= '2017-06-30') & \n (car_accidents_master['State'] == 'OR')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >m", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nor_freqs = [or_ttl.shape[0],\n or_sd.shape[0],\n or_2019.shape[0],\n or_2018.shape[0],\n or_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nor_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': or_freqs})\n\nor_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(or_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 187.12%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nor_sp_ttl = severity_percent(or_ttl)\nor_sp_sd = severity_percent(or_sd)\nor_sp_2019 = severity_percent(or_2019)\nor_sp_2018 = severity_percent(or_2018)\nor_sp_2017 = severity_percent(or_2017)\n\n# then again create a simple dataframe for comparison\n\nor_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': or_sp_ttl,\n 
'Shut_Down': or_sp_sd,\n '2019': or_sp_2019,\n '2018': or_sp_2018,\n '2017': or_sp_2017\n })\n\nor_severity_data", "_____no_output_____" ] ], [ [ "-\n### Minnesota\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nmn_ttl = accident_covid[(accident_covid.State == 'MN')]\n\n# shutdown period\nmn_sd = accident_covid[\n (accident_covid.State == 'MN') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nmn_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-27') & \n (car_accidents_master['Date'] <= '2019-05-17') & \n (car_accidents_master['State'] == 'MN')]\n\n# 2018\nmn_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-27') & \n (car_accidents_master['Date'] <= '2018-05-17') & \n (car_accidents_master['State'] == 'MN')]\n\n# 2017\nmn_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-27') & \n (car_accidents_master['Date'] <= '2017-05-17') & \n (car_accidents_master['State'] == 'MN')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nmn_freqs = [mn_ttl.shape[0],\n mn_sd.shape[0],\n mn_2019.shape[0],\n mn_2018.shape[0],\n mn_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nmn_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': mn_freqs})\n\nmn_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(mn_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 83.18%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nmn_sp_ttl = severity_percent(mn_ttl)\nmn_sp_sd = severity_percent(mn_sd)\nmn_sp_2019 = severity_percent(mn_2019)\nmn_sp_2018 = severity_percent(mn_2018)\nmn_sp_2017 = severity_percent(mn_2017)\n\n# then again create a simple dataframe for comparison\n\nmn_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': mn_sp_ttl,\n 'Shut_Down': mn_sp_sd,\n '2019': mn_sp_2019,\n '2018': mn_sp_2018,\n '2017': mn_sp_2017\n })\n\nmn_severity_data", "_____no_output_____" ] ], [ [ "-\n### Arizona\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\naz_ttl = accident_covid[(accident_covid.State == 'AZ')]\n\n# shutdown period\naz_sd = accident_covid[\n (accident_covid.State == 'AZ') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \naz_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-31') & \n (car_accidents_master['Date'] <= '2019-05-15') & \n (car_accidents_master['State'] == 'AZ')]\n\n# 2018\naz_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-31') & \n (car_accidents_master['Date'] <= '2018-05-15') & \n (car_accidents_master['State'] == 'AZ')]\n\n# 2017\naz_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-31') & \n (car_accidents_master['Date'] <= '2017-05-15') & \n (car_accidents_master['State'] == 'AZ')]", "_____no_output_____" ] ], [ [ "#### 
Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\naz_freqs = [az_ttl.shape[0],\n az_sd.shape[0],\n az_2019.shape[0],\n az_2018.shape[0],\n az_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\naz_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': az_freqs})\n\naz_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(az_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 152.00%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\naz_sp_ttl = severity_percent(az_ttl)\naz_sp_sd = severity_percent(az_sd)\naz_sp_2019 = severity_percent(az_2019)\naz_sp_2018 = severity_percent(az_2018)\naz_sp_2017 = severity_percent(az_2017)\n\n# then again create a simple dataframe for comparison\n\naz_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': az_sp_ttl,\n 'Shut_Down': az_sp_sd,\n '2019': az_sp_2019,\n '2018': az_sp_2018,\n '2017': az_sp_2017\n })\n\naz_severity_data", "_____no_output_____" ] ], [ [ "-\n### Tennessee\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\ntn_ttl = accident_covid[(accident_covid.State == 'TN')]\n\n# shutdown period\ntn_sd = accident_covid[\n (accident_covid.State == 'TN') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \ntn_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-31') & \n (car_accidents_master['Date'] <= '2019-04-30') & \n (car_accidents_master['State'] == 'TN')]\n\n# 2018\ntn_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-31') & \n (car_accidents_master['Date'] <= '2018-04-30') & \n (car_accidents_master['State'] == 'TN')]\n\n# 2017\ntn_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-31') & \n (car_accidents_master['Date'] <= '2017-04-30') & \n (car_accidents_master['State'] == 'TN')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\ntn_freqs = [tn_ttl.shape[0],\n tn_sd.shape[0],\n tn_2019.shape[0],\n tn_2018.shape[0],\n tn_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\ntn_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': tn_freqs})\n\ntn_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(tn_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 2.54%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", 
"_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\ntn_sp_ttl = severity_percent(tn_ttl)\ntn_sp_sd = severity_percent(tn_sd)\ntn_sp_2019 = severity_percent(tn_2019)\ntn_sp_2018 = severity_percent(tn_2018)\ntn_sp_2017 = severity_percent(tn_2017)\n\n# then again create a simple dataframe for comparison\n\ntn_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': tn_sp_ttl,\n 'Shut_Down': tn_sp_sd,\n '2019': tn_sp_2019,\n '2018': tn_sp_2018,\n '2017': tn_sp_2017\n })\n\ntn_severity_data", "_____no_output_____" ] ], [ [ "-\n### Washington\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nwa_ttl = accident_covid[(accident_covid.State == 'WA')]\n\n# shutdown period\nwa_sd = accident_covid[\n (accident_covid.State == 'WA') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nwa_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-23') & \n (car_accidents_master['Date'] <= '2019-05-31') & \n (car_accidents_master['State'] == 'WA')]\n\n# 2018\nwa_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-23') & \n (car_accidents_master['Date'] <= '2018-05-31') & \n (car_accidents_master['State'] == 'WA')]\n\n# 2017\nwa_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-23') & \n (car_accidents_master['Date'] <= '2017-05-31') & \n (car_accidents_master['State'] == 'WA')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nwa_freqs = [wa_ttl.shape[0],\n wa_sd.shape[0],\n wa_2019.shape[0],\n wa_2018.shape[0],\n wa_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nwa_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': wa_freqs})\n\nwa_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(wa_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: -16.54%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nwa_sp_ttl = severity_percent(wa_ttl)\nwa_sp_sd = severity_percent(wa_sd)\nwa_sp_2019 = severity_percent(wa_2019)\nwa_sp_2018 = severity_percent(wa_2018)\nwa_sp_2017 = severity_percent(wa_2017)\n\n# then again create a simple dataframe for comparison\n\nwa_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': wa_sp_ttl,\n 'Shut_Down': wa_sp_sd,\n '2019': wa_sp_2019,\n '2018': wa_sp_2018,\n '2017': wa_sp_2017\n })\n\nwa_severity_data", "_____no_output_____" ] ], [ [ "-\n### Ohio\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\noh_ttl = accident_covid[(accident_covid.State == 'OH')]\n\n# shutdown period\noh_sd = accident_covid[\n (accident_covid.State == 'OH') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \noh_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-22') & \n 
(car_accidents_master['Date'] <= '2019-06-13') & \n (car_accidents_master['State'] == 'OH')]\n\n# 2018\noh_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-22') & \n (car_accidents_master['Date'] <= '2018-06-13') & \n (car_accidents_master['State'] == 'OH')]\n\n# 2017\noh_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-22') & \n (car_accidents_master['Date'] <= '2017-06-13') & \n (car_accidents_master['State'] == 'OH')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\noh_freqs = [oh_ttl.shape[0],\n oh_sd.shape[0],\n oh_2019.shape[0],\n oh_2018.shape[0],\n oh_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\noh_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': oh_freqs})\n\noh_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(oh_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 117.88%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\noh_sp_ttl = severity_percent(oh_ttl)\noh_sp_sd = severity_percent(oh_sd)\noh_sp_2019 = severity_percent(oh_2019)\noh_sp_2018 = severity_percent(oh_2018)\noh_sp_2017 = severity_percent(oh_2017)\n\n# then again create a simple dataframe for comparison\n\noh_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': oh_sp_ttl,\n 'Shut_Down': oh_sp_sd,\n '2019': oh_sp_2019,\n '2018': oh_sp_2018,\n '2017': oh_sp_2017\n })\n\noh_severity_data", "_____no_output_____" ] ], [ [ "-\n### Louisiana\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nla_ttl = accident_covid[(accident_covid.State == 'LA')]\n\n# shutdown period\nla_sd = accident_covid[\n (accident_covid.State == 'LA') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nla_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-22') & \n (car_accidents_master['Date'] <= '2019-05-15') & \n (car_accidents_master['State'] == 'LA')]\n\n# 2018\nla_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-22') & \n (car_accidents_master['Date'] <= '2018-05-15') & \n (car_accidents_master['State'] == 'LA')]\n\n# 2017\nla_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-22') & \n (car_accidents_master['Date'] <= '2017-05-15') & \n (car_accidents_master['State'] == 'LA')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nla_freqs = [la_ttl.shape[0],\n la_sd.shape[0],\n la_2019.shape[0],\n la_2018.shape[0],\n la_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nla_freq_data = pd.DataFrame({'Timeframe': 
['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': la_freqs})\n\nla_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(la_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 13.45%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nla_sp_ttl = severity_percent(la_ttl)\nla_sp_sd = severity_percent(la_sd)\nla_sp_2019 = severity_percent(la_2019)\nla_sp_2018 = severity_percent(la_2018)\nla_sp_2017 = severity_percent(la_2017)\n\n# then again create a simple dataframe for comparison\n\nla_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': la_sp_ttl,\n 'Shut_Down': la_sp_sd,\n '2019': la_sp_2019,\n '2018': la_sp_2018,\n '2017': la_sp_2017\n })\n\nla_severity_data", "_____no_output_____" ] ], [ [ "-\n### Oklahoma\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nok_ttl = accident_covid[(accident_covid.State == 'OK')]\n\n# shutdown period\nok_sd = accident_covid[\n (accident_covid.State == 'OK') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nok_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-24') & \n (car_accidents_master['Date'] <= '2019-05-06') & \n (car_accidents_master['State'] == 'OK')]\n\n# 2018\nok_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-24') & \n (car_accidents_master['Date'] <= '2018-05-06') & \n (car_accidents_master['State'] == 'OK')]\n\n# 2017\nok_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-24') & \n (car_accidents_master['Date'] <= '2017-05-06') & \n (car_accidents_master['State'] == 'OK')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nok_freqs = [ok_ttl.shape[0],\n ok_sd.shape[0],\n ok_2019.shape[0],\n ok_2018.shape[0],\n ok_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nok_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': ok_freqs})\n\nok_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(ok_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 4.37%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nok_sp_ttl = severity_percent(ok_ttl)\nok_sp_sd = severity_percent(ok_sd)\nok_sp_2019 = severity_percent(ok_2019)\nok_sp_2018 = severity_percent(ok_2018)\nok_sp_2017 = severity_percent(ok_2017)\n\n# then again create a simple dataframe for comparison\n\nok_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': ok_sp_ttl,\n 'Shut_Down': ok_sp_sd,\n '2019': ok_sp_2019,\n '2018': 
ok_sp_2018,\n '2017': ok_sp_2017\n })\n\nok_severity_data", "_____no_output_____" ] ], [ [ "-\n### New Jersey\n\nFirst create periods we'll want to compare for the state >", "_____no_output_____" ] ], [ [ "# full state period\nnj_ttl = accident_covid[(accident_covid.State == 'NJ')]\n\n# shutdown period\nnj_sd = accident_covid[\n (accident_covid.State == 'NJ') & \n (accident_covid.Shut_Down == 1)]\n\n# 2019 \nnj_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-21') & \n (car_accidents_master['Date'] <= '2019-06-30') & \n (car_accidents_master['State'] == 'NJ')]\n\n# 2018\nnj_2018 = accident_covid[(car_accidents_master['Date'] > '2018-03-21') & \n (car_accidents_master['Date'] <= '2018-06-30') & \n (car_accidents_master['State'] == 'NJ')]\n\n# 2017\nnj_2017 = accident_covid[(car_accidents_master['Date'] > '2017-03-21') & \n (car_accidents_master['Date'] <= '2017-06-30') & \n (car_accidents_master['State'] == 'NJ')]", "_____no_output_____" ] ], [ [ "#### Frequency\nNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# find total frequencies for state total and in each period \n\nnj_freqs = [nj_ttl.shape[0],\n nj_sd.shape[0],\n nj_2019.shape[0],\n nj_2018.shape[0],\n nj_2017.shape[0]\n ]\n\n# then develop a simple dataframe to compare them\nnj_freq_data = pd.DataFrame({'Timeframe': ['Total','Shut_Down',2019,2018,2017],\n 'Num_Accidents': nj_freqs})\n\nnj_freq_data", "_____no_output_____" ], [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = change_freq(nj_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 18.74%\n" ] ], [ [ "#### Severity\nNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >", "_____no_output_____" ] ], [ [ "# first find percentages of severity ranks to total accidents for \n# total and for each period \n\nnj_sp_ttl = severity_percent(nj_ttl)\nnj_sp_sd = severity_percent(nj_sd)\nnj_sp_2019 = severity_percent(nj_2019)\nnj_sp_2018 = severity_percent(nj_2018)\nnj_sp_2017 = severity_percent(nj_2017)\n\n# then again create a simple dataframe for comparison\n\nnj_severity_data = pd.DataFrame({'Severity_Rank': [1,2,3,4],\n 'TTL': nj_sp_ttl,\n 'Shut_Down': nj_sp_sd,\n '2019': nj_sp_2019,\n '2018': nj_sp_2018,\n '2017': nj_sp_2017\n })\n\nnj_severity_data", "_____no_output_____" ] ], [ [ "-\n### Top 20 States Averages\n\nCombining numbers for the Top 20 states to see average trends for those states >", "_____no_output_____" ], [ "#### Frequency", "_____no_output_____" ] ], [ [ "# combine fequency data for the top 20 states \n\ntop20_freq_data = (ca_freq_data + \n tx_freq_data +\n fl_freq_data +\n sc_freq_data +\n nc_freq_data +\n ny_freq_data +\n pa_freq_data +\n il_freq_data +\n va_freq_data +\n mi_freq_data +\n ga_freq_data + \n or_freq_data +\n mn_freq_data +\n az_freq_data +\n tn_freq_data +\n wa_freq_data +\n oh_freq_data +\n la_freq_data +\n ok_freq_data +\n nj_freq_data)\n\ntop20_freq_data['Timeframe'] = ['Total','Shut_Down',2019,2018,2017]\n\ntop20_freq_data", "_____no_output_____" ] ], [ [ "Now find change in 2020 shutdown period compared to average of past three years >", "_____no_output_____" ] ], [ [ "# percent change in frequency in 2020 stay-home compared to average past years\n\nchange = 
change_freq(top20_freq_data)\nprint(\"Change Frequency: %.2f%%\" % (change))", "Change Frequency: 51.77%\n" ] ], [ [ "#### Severity\n\nFor average severity rank for the top 20 states, we will include weighting for each state based on the state's number of accidents compared to all accidents for the top 20 states", "_____no_output_____" ] ], [ [ "# finding weight for each state to then include in next cell\n\nlen(accident_covid[accident_covid.State == 'NJ'])/accident_covid.State.value_counts()[:20].sum()", "_____no_output_____" ], [ "# add all state severity dfs together\n\ntop20_severity_data = (ca_severity_data * 27 +\n tx_severity_data * 11 +\n fl_severity_data * 9 +\n sc_severity_data * 6 +\n nc_severity_data * 5 +\n ny_severity_data * 5 +\n pa_severity_data * 4 +\n il_severity_data * 3 +\n va_severity_data * 3 +\n mi_severity_data * 3 +\n ga_severity_data * 3 +\n or_severity_data * 3 +\n mn_severity_data * 3 +\n az_severity_data * 3 +\n tn_severity_data * 2 +\n wa_severity_data * 2 +\n oh_severity_data * 2 +\n la_severity_data * 2 +\n ok_severity_data * 2 +\n nj_severity_data * 2 )\n\n# get average for the 20 states\ntop20_severity_data = round(top20_severity_data.div(104),2)\n\ntop20_severity_data", "_____no_output_____" ], [ "# (2020 value - average 2017-2019) / average 2017-2019 * 100\n\nsev1_2020 = top20_severity_data.Shut_Down[0]\n\nsev1_avg_past = np.mean(top20_severity_data['2017'][0] + \n top20_severity_data['2018'][0] + \n top20_severity_data['2019'][0])\n\nprint('Percent Change in 2020 stay-home period vs. average 2017-2019:')\n((sev1_2020 - sev1_avg_past) / sev1_avg_past) * 100", "Percent Change in 2020 stay-home period vs. average 2017-2019:\n" ] ], [ [ "#### Top 20 Takeaways:\n- Frequency: See a 51% increase during shutdown period compared to past periods\n- Severity: See greatest shift in severity 1 -- an increase of 8,870%. Severity 4's are also showing an increase compared to 2019 and 2018, but not to 2017. Changes in severity 4 seem to be more on a state-by-state basis", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
e761c643d0840d6268f78394a88afbaf62d99271
537,836
ipynb
Jupyter Notebook
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
4ebe63afa28ef424cdbedbba9df45636d4df624f
[ "MIT" ]
null
null
null
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
4ebe63afa28ef424cdbedbba9df45636d4df624f
[ "MIT" ]
null
null
null
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
4ebe63afa28ef424cdbedbba9df45636d4df624f
[ "MIT" ]
null
null
null
466.466609
436,476
0.93113
[ [ [ "<img src=\"images/usm.jpg\" width=\"480\" height=\"240\" align=\"left\"/>", "_____no_output_____" ], [ "# MAT281 - Modelos de clasificación", "_____no_output_____" ], [ "## Objetivos de la clase\n\n* Aprender conceptos básicos de los modelos de clasificación en python.", "_____no_output_____" ], [ "## Contenidos\n\n* [Modelos de clasificación](#c1)\n* [Ejemplos con python](#c2)\n", "_____no_output_____" ], [ "## I.- Modelos de clasificación\n\nLos modelos de clasificacion son ocupadas para predecir valores categóricos, por ejemplo, determinar la especie de una flor basado en el largo (y ancho) de su pétalo (y sépalo).\nDentro de los modelos de clasificación, el modelo más básico (y no por eso menos importante) es el **modelo de regresión logística**.\n\n\n### Regresión logística\n\nLa **regresión logística** puede usarse para tratar de correlacionar la probabilidad de una variable cualitativa binaria (asumiremos que puede tomar los valores reales **0** y **1**) con una variable escalar $x$. \n\nLa idea es que la regresión logística aproxime la probabilidad de obtener **0** (no ocurre cierto suceso) o **1** (ocurre el suceso) con el valor de la variable explicativa $x$. \n\nEn esas condiciones, la probabilidad aproximada del suceso se aproximará mediante una función logística del tipo:\n\n$$\\pi(x) =\\dfrac{e^{\\beta_0+\\beta_1x}}{e^{\\beta_0+\\beta_1x}+1}=\\dfrac{1}{e^{-(\\beta_0+\\beta_1x)}+1}$$\n\nque puede reducirse al cálculo de una regresión lineal para la función logit de la probabilidad:\n\n$$g(x) = ln(\\dfrac{\\pi(x)}{1- \\pi(x)})=\\beta_0+\\beta_1 x$$\n\nEs decir, el problema de regresión logística puede ser visto como un problema de regresión lineal.\n\n<img src=\"http://res.cloudinary.com/dyd911kmh/image/upload/f_auto,q_auto:best/v1534281070/linear_vs_logistic_regression_edxw03.png\" width=\"560\" height=\"480\" align=\"center\"/>", "_____no_output_____" ], [ "### ¿ Cómo se mide el error para elos modelos de clasificación?\n\nLos modelos de clasificacion son ocupadas para predecir valores categóricos, por ejemplo, determinar la especie de una flor basado en el largo (y ancho) de su pétalo (y sépalo).Para este caso, es necesario introducir el concepto de matriz de confusión.\n\nLa **matriz de confusión** es una herramienta que permite la visualización del desempeño de un algoritmo \nPara la clasificación de dos clases (por ejemplo, 0 y 1), se tiene la siguiente matriz de confusión:\n\n<img src=\"https://static.tildacdn.com/tild6630-3965-4833-b932-646530343464/9.svg\n\" width=\"480\" height=\"360\" align=\"rigt\"/>\n\nAcá se define:\n\n* **TP = Verdadero positivo**: el modelo predijo la clase positiva correctamente, para ser una clase positiva.\n* **FP = Falso positivo**: el modelo predijo la clase negativa incorrectamente, para ser una clase positiva.\n* **FN = Falso negativo**: el modelo predijo incorrectamente que la clase positiva sería la clase negativa.\n* **TN = Verdadero negativo**: el modelo predijo la clase negativa correctamente, para ser la clase negativa.\n\nEn este contexto, los valores TP Y TN muestran los valores correctos que tuve al momento de realizar la predicción, mientras que los valores de de FN Y FP denotan los valores que me equivoque de clase.\n\nLos conceptos de FN y FP se pueden interpretar con la siguiente imagen:\n\n<img src=\"https://static.tildacdn.com/tild6436-6637-4562-b738-613433303838/error.jpg\n\" width=\"480\" height=\"360\" align=\"rigt\"/>\n\n\nEn este contexto, se busca maximizar el número al máximo la suma de los elementos TP Y TN, 
mientras que se busca disminuir la suma de los elementos de FN y FP. Para esto se definen las siguientes métricas:\n\n\n1. **Accuracy**\n\n$$accuracy(y,\\hat{y}) = \\dfrac{TP+TN}{TP+TN+FP+FN}$$\n\n2. **Recall**:\n\n$$recall(y,\\hat{y}) = \\dfrac{TP}{TP+FN}$$\n\n3. **Precision**:\n\n$$precision(y,\\hat{y}) = \\dfrac{TP}{TP+FP} $$\n\n4. **F-score**:\n\n$$fscore(y,\\hat{y}) = 2\\times \\dfrac{precision(y,\\hat{y})\\times recall(y,\\hat{y})}{precision(y,\\hat{y})+recall(y,\\hat{y})} $$", "_____no_output_____" ], [ "## II.- Ejemplos con python", "_____no_output_____" ], [ "### a) Dataset Iris (regresión logística)\n\nVeamos un pequeño ejemplo de como se implementa en python. En este ejemplo voy a utilizar el dataset **Iris** que ya viene junto con Scikit-learn y es ideal para practicar con regresiones logística ; el mismo contiene los tipos de flores basado en en largo y ancho de su sépalo y pétalo.", "_____no_output_____" ] ], [ [ "# librerias\n \nimport os\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn import datasets\nfrom sklearn.model_selection import train_test_split\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns \npd.set_option('display.max_columns', 500) # Ver más columnas de los dataframes\n\n# Ver gráficos de matplotlib en jupyter notebook/lab\n%matplotlib inline", "_____no_output_____" ], [ "# cargar datos\niris = datasets.load_iris()\nprint(iris.DESCR)", ".. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n :Attribute Information:\n - sepal length in cm\n - sepal width in cm\n - petal length in cm\n - petal width in cm\n - class:\n - Iris-Setosa\n - Iris-Versicolour\n - Iris-Virginica\n \n :Summary Statistics:\n\n ============== ==== ==== ======= ===== ====================\n Min Max Mean SD Class Correlation\n ============== ==== ==== ======= ===== ====================\n sepal length: 4.3 7.9 5.84 0.83 0.7826\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n ============== ==== ==== ======= ===== ====================\n\n :Missing Attribute Values: None\n :Class Distribution: 33.3% for each of 3 classes.\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%[email protected])\n :Date: July, 1988\n\nThe famous Iris database, first used by Sir R.A. Fisher. The dataset is taken\nfrom Fisher's paper. Note that it's the same as in R, but not as in the UCI\nMachine Learning Repository, which has two wrong data points.\n\nThis is perhaps the best known database to be found in the\npattern recognition literature. Fisher's paper is a classic in the field and\nis referenced frequently to this day. (See Duda & Hart, for example.) The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant. One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\n.. topic:: References\n\n - Fisher, R.A. \"The use of multiple measurements in taxonomic problems\"\n Annual Eugenics, 7, Part II, 179-188 (1936); also in \"Contributions to\n Mathematical Statistics\" (John Wiley, NY, 1950).\n - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n - Dasarathy, B.V. 
(1980) \"Nosing Around the Neighborhood: A New System\n Structure and Classification Rule for Recognition in Partially Exposed\n Environments\". IEEE Transactions on Pattern Analysis and Machine\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\n - Gates, G.W. (1972) \"The Reduced Nearest Neighbor Rule\". IEEE Transactions\n on Information Theory, May 1972, 431-433.\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al\"s AUTOCLASS II\n conceptual clustering system finds 3 classes in the data.\n - Many, many more ...\n" ], [ "# dejar en formato dataframe\n\niris_df = pd.DataFrame(iris.data, columns=iris.feature_names)\niris_df['TARGET'] = iris.target\niris_df.head() # estructura de nuestro dataset.", "_____no_output_____" ] ], [ [ "Para ver gráficamente el modelo de regresión logística, ajustemos el modelo solo a dos variables: petal length (cm), petal width (cm).", "_____no_output_____" ] ], [ [ "# datos \nfrom sklearn.linear_model import LogisticRegression\n\nX = iris_df[['sepal length (cm)', 'sepal width (cm)']]\nY = iris_df['TARGET']\n\n# split dataset\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state = 2) ", "_____no_output_____" ], [ "# print rows train and test sets\nprint('Separando informacion:\\n')\nprint('numero de filas data original : ',len(X))\nprint('numero de filas train set : ',len(X_train))\nprint('numero de filas test set : ',len(X_test))", "Separando informacion:\n\nnumero de filas data original : 150\nnumero de filas train set : 120\nnumero de filas test set : 30\n" ], [ "# Creando el modelo\nrlog = LogisticRegression()\nrlog.fit(X_train, Y_train) # ajustando el modelo", "_____no_output_____" ] ], [ [ "Grafiquemos nuestro resultados:\n", "_____no_output_____" ] ], [ [ "# grafica de la regresion logistica \nplt.figure(figsize=(12,4))\n\n# dataframe a matriz\nX = X.values\nY = Y.values\n\nx_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\ny_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n\nh = .02 # step size in the mesh\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\nZ = rlog.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1, figsize=(4, 3))\nplt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)\n\n# Plot also the training points\nplt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)\nplt.xlabel('Sepal length')\nplt.ylabel('Sepal width')\nplt.show()", "_____no_output_____" ] ], [ [ "Gráficamente podemos decir que el modelo se ajusta bastante bien, puesto que las clasificaciones son adecuadas y el modelo no se confunde entre una clase y otra. Por otro lado, existe valores numéricos que también nos pueden ayudar a convensernos de estos, que son las métricas que se habian definidos con anterioridad. 
\n\nPara ello, instanciaremos las distintas metricas del archivo **metrics_classification.py** y calcularemos sus distintos valores.", "_____no_output_____" ] ], [ [ "# metrics\n\nfrom metrics_classification import *\nfrom sklearn.metrics import confusion_matrix\n\ny_true = list(Y_test)\ny_pred = list(rlog.predict(X_test))\n\nprint('Valores:\\n')\nprint('originales: ', y_true)\nprint('predicho: ', y_pred)\n\n\nprint('\\nMatriz de confusion:\\n ')\nprint(confusion_matrix(y_true,y_pred))\n\n# ejemplo \ndf_temp = pd.DataFrame(\n {\n 'y':y_true,\n 'yhat':y_pred\n }\n)\n\ndf_metrics = summary_metrics(df_temp)\nprint(\"\\nMetricas para los regresores : 'sepal length (cm)' y 'sepal width (cm)'\")\nprint(\"\")\nprint(df_metrics)", "Valores:\n\noriginales: [0, 0, 2, 0, 0, 2, 0, 2, 2, 0, 0, 0, 0, 0, 1, 1, 0, 1, 2, 1, 1, 1, 2, 1, 1, 0, 0, 2, 0, 2]\npredicho: [0, 0, 1, 0, 0, 2, 0, 2, 2, 0, 0, 0, 0, 0, 2, 2, 1, 1, 2, 1, 2, 2, 2, 1, 1, 0, 0, 1, 0, 2]\n\nMatriz de confusion:\n \n[[13 1 0]\n [ 0 4 4]\n [ 0 2 6]]\n\nMetricas para los regresores : 'sepal length (cm)' y 'sepal width (cm)'\n\n accuracy recall precision fscore\n0 0.7667 0.7262 0.7238 0.721\n" ] ], [ [ "Basado en las métricas y en la gráfica, podemos concluir que el ajuste realizado es bastante asertado. \n\nOtra forma de convencernos que nuestro ajuste es correcto es analizando la [curva AUC–ROC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic).\n\nLa curva AUC-ROC es la métrica de selección del modelo para el problema de clasificación bi-multi class. ROC es una curva de probabilidad para diferentes clases. ROC nos dice qué tan bueno es el modelo para distinguir las clases dadas, en términos de la probabilidad predicha.\n\nUna curva ROC típica tiene una tasa de falsos positivos (FPR) en el eje X y una tasa de verdaderos positivos (TPR) en el eje Y.\n\n<img src=\"https://s3.amazonaws.com/stackabuse/media/understanding-roc-curves-python-3.png\" width=\"480\" height=\"480\" align=\"center\"/>", "_____no_output_____" ], [ "El área cubierta por la curva es el área entre la línea naranja (ROC) y el eje. Esta área cubierta es AUC. Cuanto más grande sea el área cubierta, mejores serán los modelos de aprendizaje automático para distinguir las clases dadas. El valor ideal para AUC es 1.\n\nBasado en este concepto, calculemos la curva AUC-ROC para nuestro ejemplo. 
Cabe destacar que esta curva es efectiva solo para clasificación binaria, por lo que para efectos prácticos convertiremos nuestro **TARGET** en binarios (0 ó 1).\n\nPara efectos prácticos tranformaremos la clase objetivo (en este caso, la clase **0**) a **1**, y el resto de las clases (clase **1** y **2**) las dejaremos en la clase **0**.", "_____no_output_____" ] ], [ [ "from sklearn.metrics import roc_curve\nfrom sklearn.metrics import roc_auc_score", "_____no_output_____" ], [ "# graficar curva roc\ndef plot_roc_curve(fpr, tpr):\n plt.figure(figsize=(9,4))\n plt.plot(fpr, tpr, color='orange', label='ROC')\n plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--')\n plt.xlabel('False Positive Rate')\n plt.ylabel('True Positive Rate')\n plt.title('Receiver Operating Characteristic (ROC) Curve')\n plt.legend()\n plt.show()", "_____no_output_____" ], [ "# separar clase 0 del resto\nX = iris_df[['sepal length (cm)', 'sepal width (cm)']]\nY = iris_df['TARGET'].apply(lambda x: 1 if x ==2 else 0)\nmodel = LogisticRegression()\n\n# split dataset\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state = 2)\n\n# ajustar modelo \nmodel.fit(X_train,Y_train)", "_____no_output_____" ], [ "# calcular score AUC\n\nprobs = model.predict_proba(X_test) # predecir probabilidades para X_test\nprobs_tp = probs[:, 1] # mantener solo las probabilidades de la clase positiva \n \nauc = roc_auc_score(Y_test, probs_tp) # calcular score AUC \n\nprint('AUC: %.2f' % auc)", "AUC: 0.93\n" ], [ "# calcular curva ROC\nfpr, tpr, thresholds = roc_curve(Y_test, probs_tp) # obtener curva ROC\nplot_roc_curve(fpr, tpr)", "_____no_output_____" ] ], [ [ "### b) Varios modelos de clasificación\n\nExisten varios modelos de clasificación que podemos ir comparando unos con otros, dentro de los cuales estacamos los siguientes:\n\n\n* [Regresión Logística](https://es.wikipedia.org/wiki/Regresi%C3%B3n_log%C3%ADstica)\n* [Arboles de Decision](https://es.wikipedia.org/wiki/%C3%81rbol_de_decisi%C3%B3n)\n* [Random Forest](https://es.wikipedia.org/wiki/Random_forest)\n* [SVM](https://es.wikipedia.org/wiki/M%C3%A1quinas_de_vectores_de_soporte) \n\nNos basaremos en un ejemplo de **sklearn** que muestra los resultados de aplicar estos cuatro modelos sobre tres conjunto de datos distintos ( **make_moons**, **make_circles**, **make_classification**). 
Además, se crea un rutina para comparar los resultados de las distintas métricas.", "_____no_output_____" ], [ "### a) Gráficos\n\nSimilar al gráfico aplicado al conjunto de datos Iris, aca se realiza el mismo ejercicio pero para tres conjunto de datos sobre los distintos modelos.", "_____no_output_____" ] ], [ [ "from sklearn.datasets import make_moons, make_circles, make_classification\nfrom sklearn.preprocessing import StandardScaler\n\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom matplotlib.colors import ListedColormap\n\n\nh = .02 # step size in the mesh\n\nnames = [\"Logistic\",\n \"RBF SVM\", \n \"Decision Tree\", \n \"Random Forest\"\n]\n\nclassifiers = [\n LogisticRegression(),\n SVC(gamma=2, C=1),\n DecisionTreeClassifier(max_depth=5),\n RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),\n]\n\n\n\nX, y = make_classification(n_features=2, n_redundant=0, n_informative=2,\n random_state=1, n_clusters_per_class=1)\nrng = np.random.RandomState(2)\nX += 2 * rng.uniform(size=X.shape)\nlinearly_separable = (X, y)\n\ndatasets = [make_moons(noise=0.3, random_state=0),\n make_circles(noise=0.2, factor=0.5, random_state=1),\n linearly_separable\n ]\n\n\n\nfigure = plt.figure(figsize=(27, 9))\ni = 1\n\n\n\n# iterate over datasets\nfor ds_cnt, ds in enumerate(datasets):\n # preprocess dataset, split into training and test part\n X, y = ds\n X = StandardScaler().fit_transform(X)\n X_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=.4, random_state=42)\n\n x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n\n # just plot the dataset first\n cm = plt.cm.RdBu\n cm_bright = ListedColormap(['#FF0000', '#0000FF'])\n ax = plt.subplot(len(datasets), len(classifiers) + 1, i)\n if ds_cnt == 0:\n ax.set_title(\"Input data\")\n # Plot the training points\n ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,\n edgecolors='k')\n # Plot the testing points\n ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6,\n edgecolors='k')\n ax.set_xlim(xx.min(), xx.max())\n ax.set_ylim(yy.min(), yy.max())\n ax.set_xticks(())\n ax.set_yticks(())\n i += 1\n\n # iterate over classifiers\n for name, clf in zip(names, classifiers):\n ax = plt.subplot(len(datasets), len(classifiers) + 1, i)\n clf.fit(X_train, y_train)\n score = clf.score(X_test, y_test)\n\n # Plot the decision boundary. 
For that, we will assign a color to each\n # point in the mesh [x_min, x_max]x[y_min, y_max].\n if hasattr(clf, \"decision_function\"):\n Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])\n else:\n Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n\n # Put the result into a color plot\n Z = Z.reshape(xx.shape)\n ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)\n\n # Plot the training points\n ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,\n edgecolors='k')\n # Plot the testing points\n ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,\n edgecolors='k', alpha=0.6)\n\n ax.set_xlim(xx.min(), xx.max())\n ax.set_ylim(yy.min(), yy.max())\n ax.set_xticks(())\n ax.set_yticks(())\n if ds_cnt == 0:\n ax.set_title(name)\n ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),\n size=15, horizontalalignment='right')\n i += 1\n\nplt.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "### b) Métricas\n\nDado que el sistema de calcular métricas sigue el mismo formato, solo cambiando el conjunto de datos y el modelo, se decide realizar una clase que automatice este proceso.", "_____no_output_____" ] ], [ [ "from metrics_classification import *\n\nclass SklearnClassificationModels:\n def __init__(self,model,name_model):\n\n self.model = model\n self.name_model = name_model\n \n @staticmethod\n def test_train_model(X,y,n_size):\n X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=n_size , random_state=42)\n return X_train, X_test, y_train, y_test\n \n def fit_model(self,X,y,test_size):\n X_train, X_test, y_train, y_test = self.test_train_model(X,y,test_size )\n return self.model.fit(X_train, y_train) \n \n def df_testig(self,X,y,test_size):\n X_train, X_test, y_train, y_test = self.test_train_model(X,y,test_size )\n model_fit = self.model.fit(X_train, y_train)\n preds = model_fit.predict(X_test)\n df_temp = pd.DataFrame(\n {\n 'y':y_test,\n 'yhat': model_fit.predict(X_test)\n }\n )\n \n return df_temp\n \n def metrics(self,X,y,test_size):\n df_temp = self.df_testig(X,y,test_size)\n df_metrics = summary_metrics(df_temp)\n df_metrics['model'] = self.name_model\n \n return df_metrics\n", "_____no_output_____" ], [ "# metrics \n\nimport itertools\n\n# nombre modelos\nnames_models = [\"Logistic\",\n \"RBF SVM\", \n \"Decision Tree\", \n \"Random Forest\"\n]\n\n# modelos\nclassifiers = [\n LogisticRegression(),\n SVC(gamma=2, C=1),\n DecisionTreeClassifier(max_depth=5),\n RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),\n]\n\n# datasets\nnames_dataset = ['make_moons',\n 'make_circles',\n 'linearly_separable'\n ]\n\nX, y = make_classification(n_features=2, n_redundant=0, n_informative=2,\n random_state=1, n_clusters_per_class=1)\nrng = np.random.RandomState(2)\nX += 2 * rng.uniform(size=X.shape)\nlinearly_separable = (X, y)\n\ndatasets = [make_moons(noise=0.3, random_state=0),\n make_circles(noise=0.2, factor=0.5, random_state=1),\n linearly_separable\n ]\n\n\n# juntar informacion\nlist_models = list(zip(names_models,classifiers))\nlist_dataset = list(zip(names_dataset,datasets))\n\nframes = []\nfor x in itertools.product(list_models, list_dataset):\n \n name_model = x[0][0]\n classifier = x[0][1]\n \n name_dataset = x[1][0]\n dataset = x[1][1]\n \n X = dataset[0]\n Y = dataset[1]\n \n fit_model = SklearnClassificationModels( classifier,name_model)\n df = fit_model.metrics(X,Y,0.2)\n df['dataset'] = name_dataset\n \n frames.append(df)", 
"/home/falfaro/.cache/pypoetry/virtualenvs/pymessi-xyyw3p3f-py3.6/lib/python3.6/site-packages/sklearn/metrics/_classification.py:1221: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.\n _warn_prf(average, modifier, msg_start, len(result))\n" ], [ "# juntar resultados\npd.concat(frames)", "_____no_output_____" ] ], [ [ "## Referencia\n\n1. [Supervised learning](https://scikit-learn.org/stable/supervised_learning.html)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
e761e11a7ec401595691b6dc6d1c2d2e36a7f742
76,191
ipynb
Jupyter Notebook
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
a2948d9a007eda8d4575a3ce3f093d042b08c7c8
[ "Apache-2.0" ]
1
2021-06-20T07:44:45.000Z
2021-06-20T07:44:45.000Z
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
a2948d9a007eda8d4575a3ce3f093d042b08c7c8
[ "Apache-2.0" ]
null
null
null
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
a2948d9a007eda8d4575a3ce3f093d042b08c7c8
[ "Apache-2.0" ]
null
null
null
32.013025
278
0.462797
[ [ [ "## Constant features", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.feature_selection import VarianceThreshold", "_____no_output_____" ] ], [ [ "## Read Data", "_____no_output_____" ] ], [ [ "data.head(5)", "_____no_output_____" ] ], [ [ "### Train - Test Split", "_____no_output_____" ] ], [ [ "data.convert_dtypes().dtypes", "_____no_output_____" ], [ "# separate dataset into train and test\nX_train, X_test, y_train, y_test = train_test_split(\n data.drop(labels=['Label_code'], axis=1), # drop the target\n data['Label_code'], # just the target\n test_size=0.2,\n random_state=0)\nX_train.shape, X_test.shape", "_____no_output_____" ] ], [ [ "### Using VarianceThreshold from Scikit-learn\n\nThe VarianceThreshold from sklearn provides a simple baseline approach to feature selection. It removes all features which variance doesn’t meet a certain threshold. By default, it removes all zero-variance features, i.e., features that have the same value in all samples.", "_____no_output_____" ] ], [ [ "sel = VarianceThreshold(threshold=0.01)\nsel.fit(X_train) # fit finds the features with zero variance", "_____no_output_____" ], [ "# get_support is a boolean vector that indicates which features are retained\n# if we sum over get_support, we get the number of features that are not constant\n# (if necessary, print the result of sel.get_support() to understand its output)\nsum(sel.get_support())", "_____no_output_____" ], [ "# now let's print the number of constant feautures\n# (see how we use ~ to exclude non-constant features)\nconstant = X_train.columns[~sel.get_support()]\nlen(constant)", "_____no_output_____" ] ], [ [ "We can see that 0 columns / variables are constant. 
This means that 0 variables show the same value, just one value, for all the observations of the training set.", "_____no_output_____" ] ], [ [ "# let's print the constant variable names\nconstant", "_____no_output_____" ], [ "# let's visualise the values of one of the constant variables\n# as an example\nX_train['Protocol_code'].unique()", "_____no_output_____" ], [ "# we can do the same for every feature:\nfor col in constant:\n print(col, X_train[col].unique())", "_____no_output_____" ] ], [ [ "We then use the transform() method of the VarianceThreshold to reduce the training and testing sets to their non-constant features.\n\nNote that VarianceThreshold returns a NumPy array without feature names, so we need to capture the names first, and reconstitute the dataframe in a later step.", "_____no_output_____" ] ], [ [ "# capture non-constant feature names\nfeat_names = X_train.columns[sel.get_support()]", "_____no_output_____" ], [ "X_train = sel.transform(X_train)\nX_test = sel.transform(X_test)\nX_train.shape, X_test.shape", "_____no_output_____" ] ], [ [ "We now have 23 variables.", "_____no_output_____" ] ], [ [ "# X_train is a NumPy array\nX_train", "_____no_output_____" ], [ "# reconstitute the dataframe\nX_train = pd.DataFrame(X_train, columns=feat_names)\nX_train.head()", "_____no_output_____" ] ], [ [ "In the Kyoto dataset, no features were classified as constant, so the original 23 features of the dataset remain", "_____no_output_____", "## Standardize Data" ], [ "from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler().fit(X_train)\nX_train = scaler.transform(X_train)", "_____no_output_____" ] ], [ [ "## Hyperparameter Optimization", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import GridSearchCV\n\nclass EstimatorSelectionHelper:\n \n def __init__(self, models, params):\n self.models = models\n self.params = params\n self.keys = models.keys()\n self.grid_searches = {}\n \n def fit(self, X, y, **grid_kwargs):\n for key in self.keys:\n print('Running GridSearchCV for %s.' 
% key)\n model = self.models[key]\n params = self.params[key]\n grid_search = GridSearchCV(model, params, **grid_kwargs)\n grid_search.fit(X, y)\n self.grid_searches[key] = grid_search\n print('Done.')\n \n def score_summary(self, sort_by='mean_test_score'):\n frames = []\n for name, grid_search in self.grid_searches.items():\n frame = pd.DataFrame(grid_search.cv_results_)\n frame = frame.filter(regex='^(?!.*param_).*$')\n frame['estimator'] = len(frame)*[name]\n frames.append(frame)\n df = pd.concat(frames)\n \n df = df.sort_values([sort_by], ascending=False)\n df = df.reset_index()\n df = df.drop(['rank_test_score', 'index'], 1)\n \n columns = df.columns.tolist()\n columns.remove('estimator')\n columns = ['estimator']+columns\n df = df[columns]\n return df", "_____no_output_____" ], [ "from sklearn import linear_model\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom catboost import CatBoostClassifier\n\n\nmodels = { \n 'LogisticRegression': linear_model.LogisticRegression(max_iter=1000),\n 'GaussianNB': GaussianNB(),\n 'RandomForest': RandomForestClassifier(random_state=123),\n 'KNN': KNeighborsClassifier(n_jobs=-1),\n 'CatBoost': CatBoostClassifier(),\n}\n\nparams = { \n 'LogisticRegression': {'C':[0.1,0.5,1,2,3,4,5,10,20,25]},\n 'GaussianNB': {'var_smoothing':[1e-9,1e-8,1e-7,1e-6,1e-5,1e-4]},\n 'RandomForest': {'max_depth': [70,80,90,100],'n_estimators': [100,1000]},\n 'KNN': {'n_neighbors':[2,3,4,5],'leaf_size':[1,2,3],'weights':['uniform', 'distance'],\n 'algorithm':['auto','ball_tree','kd_tree','brute']},\n 'CatBoost': {'depth':[4,5,6,7],'learning_rate':[0.01,0.02,0.03,0.04],'iterations':[10,20,30,40,50]}\n}", "_____no_output_____" ], [ "%%time\nhelper = EstimatorSelectionHelper(models, params)\nhelper.fit(X_test, y_test, scoring='f1', n_jobs=2)", "Running GridSearchCV for LogisticRegression.\n" ], [ "helper.score_summary()", "_____no_output_____" ], [ "df_gridsearchcv_summary = helper.score_summary()", "_____no_output_____" ], [ "df_gridsearchcv_summary.to_csv(\"gridsearchcv_summaryBase.csv\", index=False)", "_____no_output_____" ] ], [ [ "## Classifiers", "_____no_output_____" ] ], [ [ "from sklearn import linear_model\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom catboost import CatBoostClassifier", "_____no_output_____" ] ], [ [ "## Metrics Evaluation", "_____no_output_____" ] ], [ [ "from sklearn.metrics import accuracy_score\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import roc_curve, f1_score\nfrom sklearn import metrics\nfrom sklearn.model_selection import cross_val_score", "_____no_output_____" ] ], [ [ "### Logistic Regression", "_____no_output_____" ] ], [ [ "%%time\nclf_LR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=1).fit(X_train, y_train)", "CPU times: user 77.6 ms, sys: 211 ms, total: 289 ms\nWall time: 3.2 s\n" ], [ "pred_y_test = clf_LR.predict(X_test)\nprint('Accuracy:', accuracy_score(y_test, pred_y_test))\n\nf1 = f1_score(y_test, pred_y_test)\nprint('F1 Score:', f1)\n\nfpr, tpr, thresholds = roc_curve(y_test, pred_y_test)\nprint('FPR:', fpr[1])\nprint('TPR:', tpr[1])", "Accuracy: 0.4527830397807424\nF1 Score: 0.2463502636691646\nFPR: 0.5989054991991457\nTPR: 0.9503211991434689\n" ] ], [ [ "### Naive Bayes", "_____no_output_____" ] ], [ [ "%%time\nclf_NB = GaussianNB(var_smoothing=1e-05).fit(X_train, 
y_train)", "CPU times: user 40.1 ms, sys: 7.54 ms, total: 47.7 ms\nWall time: 45.9 ms\n" ], [ "pred_y_testNB = clf_NB.predict(X_test)\nprint('Accuracy:', accuracy_score(y_test, pred_y_testNB))\n\nf1 = f1_score(y_test, pred_y_testNB)\nprint('F1 Score:', f1)\n\nfpr, tpr, thresholds = roc_curve(y_test, pred_y_testNB)\nprint('FPR:', fpr[1])\nprint('TPR:', tpr[1])", "Accuracy: 0.9045584619725122\nF1 Score: 0.0\nFPR: 0.0014682327816337426\nTPR: 0.0\n" ] ], [ [ "### Random Forest", "_____no_output_____" ] ], [ [ "%%time\nclf_RF = RandomForestClassifier(random_state=0,max_depth=70,n_estimators=100).fit(X_train, y_train)", "CPU times: user 6.16 s, sys: 45.8 ms, total: 6.2 s\nWall time: 6.2 s\n" ], [ "pred_y_testRF = clf_RF.predict(X_test)\nprint('Accuracy:', accuracy_score(y_test, pred_y_testRF))\n\nf1 = f1_score(y_test, pred_y_testRF, average='weighted', zero_division=0)\nprint('F1 Score:', f1)\n\nfpr, tpr, thresholds = roc_curve(y_test, pred_y_testRF)\nprint('FPR:', fpr[1])\nprint('TPR:', tpr[1])", "Accuracy: 0.9058885171899561\nF1 Score: 0.8611563563923045\nFPR: 1.0\nTPR: 1.0\n" ] ], [ [ "### KNN", "_____no_output_____" ] ], [ [ "%%time\nclf_KNN = KNeighborsClassifier(algorithm='auto',leaf_size=1,n_neighbors=2,weights='uniform').fit(X_train, y_train)", "CPU times: user 9.96 s, sys: 61.3 ms, total: 10 s\nWall time: 9.98 s\n" ], [ "pred_y_testKNN = clf_KNN.predict(X_test)\nprint('accuracy_score:', accuracy_score(y_test, pred_y_testKNN))\n\nf1 = f1_score(y_test, pred_y_testKNN)\nprint('f1:', f1)\n\nfpr, tpr, thresholds = roc_curve(y_test, pred_y_testKNN)\nprint('fpr:', fpr[1])\nprint('tpr:', tpr[1])", "accuracy_score: 0.9058885171899561\nf1: 0.0\nfpr: 1.0\ntpr: 1.0\n" ] ], [ [ "### CatBoost", "_____no_output_____" ] ], [ [ "%%time\nclf_CB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04).fit(X_train, y_train)", "0:\tlearn: 0.5950021\ttotal: 21.5ms\tremaining: 1.05s\n1:\tlearn: 0.4943009\ttotal: 39.9ms\tremaining: 958ms\n2:\tlearn: 0.4235946\ttotal: 57.5ms\tremaining: 901ms\n3:\tlearn: 0.3620619\ttotal: 75.7ms\tremaining: 871ms\n4:\tlearn: 0.3121757\ttotal: 93.6ms\tremaining: 842ms\n5:\tlearn: 0.2371811\ttotal: 111ms\tremaining: 816ms\n6:\tlearn: 0.2070297\ttotal: 129ms\tremaining: 795ms\n7:\tlearn: 0.1610239\ttotal: 148ms\tremaining: 779ms\n8:\tlearn: 0.1441127\ttotal: 166ms\tremaining: 756ms\n9:\tlearn: 0.1140228\ttotal: 186ms\tremaining: 744ms\n10:\tlearn: 0.1028733\ttotal: 203ms\tremaining: 721ms\n11:\tlearn: 0.0790135\ttotal: 221ms\tremaining: 701ms\n12:\tlearn: 0.0614288\ttotal: 239ms\tremaining: 681ms\n13:\tlearn: 0.0478516\ttotal: 259ms\tremaining: 666ms\n14:\tlearn: 0.0375712\ttotal: 277ms\tremaining: 647ms\n15:\tlearn: 0.0312555\ttotal: 295ms\tremaining: 627ms\n16:\tlearn: 0.0249900\ttotal: 312ms\tremaining: 606ms\n17:\tlearn: 0.0196449\ttotal: 330ms\tremaining: 586ms\n18:\tlearn: 0.0159795\ttotal: 348ms\tremaining: 567ms\n19:\tlearn: 0.0128696\ttotal: 365ms\tremaining: 548ms\n20:\tlearn: 0.0106125\ttotal: 384ms\tremaining: 530ms\n21:\tlearn: 0.0090042\ttotal: 402ms\tremaining: 512ms\n22:\tlearn: 0.0074177\ttotal: 420ms\tremaining: 493ms\n23:\tlearn: 0.0062532\ttotal: 438ms\tremaining: 474ms\n24:\tlearn: 0.0052067\ttotal: 456ms\tremaining: 456ms\n25:\tlearn: 0.0044424\ttotal: 474ms\tremaining: 437ms\n26:\tlearn: 0.0037750\ttotal: 491ms\tremaining: 418ms\n27:\tlearn: 0.0032876\ttotal: 508ms\tremaining: 399ms\n28:\tlearn: 0.0028467\ttotal: 526ms\tremaining: 381ms\n29:\tlearn: 0.0024856\ttotal: 543ms\tremaining: 362ms\n30:\tlearn: 0.0022038\ttotal: 
561ms\tremaining: 344ms\n31:\tlearn: 0.0019416\ttotal: 580ms\tremaining: 326ms\n32:\tlearn: 0.0017309\ttotal: 598ms\tremaining: 308ms\n33:\tlearn: 0.0015386\ttotal: 615ms\tremaining: 289ms\n34:\tlearn: 0.0013602\ttotal: 632ms\tremaining: 271ms\n35:\tlearn: 0.0012191\ttotal: 650ms\tremaining: 253ms\n36:\tlearn: 0.0010849\ttotal: 668ms\tremaining: 235ms\n37:\tlearn: 0.0009851\ttotal: 687ms\tremaining: 217ms\n38:\tlearn: 0.0008939\ttotal: 704ms\tremaining: 199ms\n39:\tlearn: 0.0008057\ttotal: 722ms\tremaining: 180ms\n40:\tlearn: 0.0007422\ttotal: 739ms\tremaining: 162ms\n41:\tlearn: 0.0006801\ttotal: 756ms\tremaining: 144ms\n42:\tlearn: 0.0006254\ttotal: 774ms\tremaining: 126ms\n43:\tlearn: 0.0005768\ttotal: 791ms\tremaining: 108ms\n44:\tlearn: 0.0005253\ttotal: 808ms\tremaining: 89.8ms\n45:\tlearn: 0.0004894\ttotal: 827ms\tremaining: 71.9ms\n46:\tlearn: 0.0004579\ttotal: 846ms\tremaining: 54ms\n47:\tlearn: 0.0004291\ttotal: 865ms\tremaining: 36ms\n48:\tlearn: 0.0004010\ttotal: 883ms\tremaining: 18ms\n49:\tlearn: 0.0003927\ttotal: 899ms\tremaining: 0us\nCPU times: user 6.4 s, sys: 1.79 s, total: 8.18 s\nWall time: 961 ms\n" ], [ "pred_y_testCB = clf_CB.predict(X_test)\nprint('Accuracy:', accuracy_score(y_test, pred_y_testCB))\n\nf1 = f1_score(y_test, pred_y_testCB, average='weighted', zero_division=0)\nprint('F1 Score:', f1)\n\nfpr, tpr, thresholds = roc_curve(y_test, pred_y_testCB)\nprint('FPR:', fpr[1])\nprint('TPR:', tpr[1])", "Accuracy: 0.9058885171899561\nF1 Score: 0.8611563563923045\nFPR: 1.0\nTPR: 1.0\n" ] ], [ [ "## Model Evaluation", "_____no_output_____" ] ], [ [ "import pandas as pd, numpy as np\ntest_df = pd.read_csv(\"../Kyoto_Test.csv\")\ntest_df.shape", "_____no_output_____" ], [ "# Create feature matrix X and target vextor y\ny_eval = test_df['Label_code']\nX_eval = test_df.drop(columns=['Label_code'])", "_____no_output_____" ] ], [ [ "### Model Evaluation - Logistic Regression", "_____no_output_____" ] ], [ [ "modelLR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=1)\nmodelLR.fit(X_train, y_train)", "_____no_output_____" ], [ "# Predict on the new unseen test data\ny_evalpredLR = modelLR.predict(X_eval)\ny_predLR = modelLR.predict(X_test)", "_____no_output_____" ], [ "train_scoreLR = modelLR.score(X_train, y_train)\ntest_scoreLR = modelLR.score(X_test, y_test)\nprint(\"Training accuracy is \", train_scoreLR)\nprint(\"Testing accuracy is \", test_scoreLR)", "Training accuracy is 0.9304038531296602\nTesting accuracy is 0.4527830397807424\n" ], [ "from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score\nprint('Performance measures for test:')\nprint('--------')\nprint('Accuracy:', test_scoreLR)\nprint('F1 Score:',f1_score(y_test, y_predLR))\nprint('Precision Score:',precision_score(y_test, y_predLR))\nprint('Recall Score:', recall_score(y_test, y_predLR))\nprint('Confusion Matrix:\\n', confusion_matrix(y_test, y_predLR))", "Performance measures for test:\n--------\nAccuracy: 0.4527830397807424\nF1 Score: 0.2463502636691646\nPrecision Score: 0.14151785714285714\nRecall Score: 0.9503211991434689\nConfusion Matrix:\n [[ 9015 13461]\n [ 116 2219]]\n" ] ], [ [ "### Cross validation - Logistic Regression", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\nfrom sklearn import metrics\n\naccuracy = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='accuracy')\nprint(\"Accuracy: %0.5f (+/- %0.5f)\" % (accuracy.mean(), accuracy.std() * 2))\n\nf = cross_val_score(modelLR, X_eval, y_eval, cv=10, 
scoring='f1')\nprint(\"F1 Score: %0.5f (+/- %0.5f)\" % (f.mean(), f.std() * 2))\n\nprecision = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='precision')\nprint(\"Precision: %0.5f (+/- %0.5f)\" % (precision.mean(), precision.std() * 2))\n\nrecall = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='recall')\nprint(\"Recall: %0.5f (+/- %0.5f)\" % (recall.mean(), recall.std() * 2))", "Accuracy: 0.90021 (+/- 0.00142)\nF1 Score: 0.00064 (+/- 0.00385)\nPrecision: 0.00769 (+/- 0.04615)\nRecall: 0.00033 (+/- 0.00201)\n" ] ], [ [ "### Model Evaluation - Naive Bayes", "_____no_output_____" ] ], [ [ "modelNB = GaussianNB(var_smoothing=1e-05)\nmodelNB.fit(X_train, y_train)", "_____no_output_____" ], [ "# Predict on the new unseen test data\ny_evalpredNB = modelNB.predict(X_eval)\ny_predNB = modelNB.predict(X_test)", "_____no_output_____" ], [ "train_scoreNB = modelNB.score(X_train, y_train)\ntest_scoreNB = modelNB.score(X_test, y_test)\nprint(\"Training accuracy is \", train_scoreNB)\nprint(\"Testing accuracy is \", test_scoreNB)", "Training accuracy is 0.3204626979968562\nTesting accuracy is 0.9045584619725122\n" ], [ "from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score\nprint('Performance measures for test:')\nprint('--------')\nprint('Accuracy:', test_scoreNB)\nprint('F1 Score:',f1_score(y_test, y_predNB))\nprint('Precision Score:',precision_score(y_test, y_predNB))\nprint('Recall Score:', recall_score(y_test, y_predNB))\nprint('Confusion Matrix:\\n', confusion_matrix(y_test, y_predNB))", "Performance measures for test:\n--------\nAccuracy: 0.9045584619725122\nF1 Score: 0.0\nPrecision Score: 0.0\nRecall Score: 0.0\nConfusion Matrix:\n [[22443 33]\n [ 2335 0]]\n" ] ], [ [ "### Cross validation - Naive Bayes", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\nfrom sklearn import metrics\n\naccuracy = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='accuracy')\nprint(\"Accuracy: %0.5f (+/- %0.5f)\" % (accuracy.mean(), accuracy.std() * 2))\n\nf = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='f1')\nprint(\"F1 Score: %0.5f (+/- %0.5f)\" % (f.mean(), f.std() * 2))\n\nprecision = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='precision')\nprint(\"Precision: %0.5f (+/- %0.5f)\" % (precision.mean(), precision.std() * 2))\n\nrecall = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='recall')\nprint(\"Recall: %0.5f (+/- %0.5f)\" % (recall.mean(), recall.std() * 2))", "Accuracy: 0.51851 (+/- 0.28039)\nF1 Score: 0.25979 (+/- 0.02808)\nPrecision: 0.21404 (+/- 0.38891)\nRecall: 0.86306 (+/- 0.46302)\n" ] ], [ [ "### Model Evaluation - Random Forest", "_____no_output_____" ] ], [ [ "modelRF = RandomForestClassifier(random_state=0,max_depth=70,n_estimators=100)\nmodelRF.fit(X_train, y_train)", "_____no_output_____" ], [ "# Predict on the new unseen test data\ny_evalpredRF = modelRF.predict(X_eval)\ny_predRF = modelRF.predict(X_test)", "_____no_output_____" ], [ "train_scoreRF = modelRF.score(X_train, y_train)\ntest_scoreRF = modelRF.score(X_test, y_test)\nprint(\"Training accuracy is \", train_scoreRF)\nprint(\"Testing accuracy is \", test_scoreRF)", "Training accuracy is 1.0\nTesting accuracy is 0.9058885171899561\n" ], [ "import warnings\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score\nprint('Performance measures for test:')\nprint('--------')\nprint('Accuracy:', 
test_scoreRF)\nprint('F1 Score:', f1_score(y_test, y_predRF, average='weighted', zero_division=1))\nprint('Precision Score:', precision_score(y_test, y_predRF, average='weighted', zero_division=1))\nprint('Recall Score:', recall_score(y_test, y_predRF, average='weighted', zero_division=1))\nprint('Confusion Matrix:\\n', confusion_matrix(y_test, y_predRF))", "Performance measures for test:\n--------\nAccuracy: 0.9058885171899561\nF1 Score: 0.8611563563923045\nPrecision Score: 0.9147454883866614\nRecall Score: 0.9058885171899561\nConfusion Matrix:\n [[22476 0]\n [ 2335 0]]\n" ] ], [ [ "### Cross validation - Random Forest", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\nfrom sklearn import metrics\n\naccuracy = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='accuracy')\nprint(\"Accuracy: %0.5f (+/- %0.5f)\" % (accuracy.mean(), accuracy.std() * 2))\n\nf = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='f1')\nprint(\"F1 Score: %0.5f (+/- %0.5f)\" % (f.mean(), f.std() * 2))\n\nprecision = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='precision')\nprint(\"Precision: %0.5f (+/- %0.5f)\" % (precision.mean(), precision.std() * 2))\n\nrecall = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='recall')\nprint(\"Recall: %0.5f (+/- %0.5f)\" % (recall.mean(), recall.std() * 2))", "Accuracy: 0.99929 (+/- 0.00060)\nF1 Score: 0.99631 (+/- 0.00312)\nPrecision: 0.99950 (+/- 0.00154)\nRecall: 0.99315 (+/- 0.00676)\n" ] ], [ [ "### Model Evaluation - KNN", "_____no_output_____" ] ], [ [ "modelKNN = KNeighborsClassifier(algorithm='auto',leaf_size=1,n_neighbors=2,weights='uniform')\nmodelKNN.fit(X_train, y_train)", "_____no_output_____" ], [ "# Predict on the new unseen test data\ny_evalpredKNN = modelKNN.predict(X_eval)\ny_predKNN = modelKNN.predict(X_test)", "_____no_output_____" ], [ "train_scoreKNN = modelKNN.score(X_train, y_train)\ntest_scoreKNN = modelKNN.score(X_test, y_test)\nprint(\"Training accuracy is \", train_scoreKNN)\nprint(\"Testing accuracy is \", test_scoreKNN)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score\nprint('Performance measures for test:')\nprint('--------')\nprint('Accuracy:', test_scoreKNN)\nprint('F1 Score:', f1_score(y_test, y_predKNN, average='weighted', zero_division=1))\nprint('Precision Score:', precision_score(y_test, y_predKNN, average='weighted', zero_division=1))\nprint('Recall Score:', recall_score(y_test, y_predKNN, average='weighted', zero_division=1))\nprint('Confusion Matrix:\\n', confusion_matrix(y_test, y_predKNN))", "_____no_output_____" ] ], [ [ "### Cross validation - KNN", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\nfrom sklearn import metrics\n\naccuracy = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='accuracy')\nprint(\"Accuracy: %0.5f (+/- %0.5f)\" % (accuracy.mean(), accuracy.std() * 2))\n\nf = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='f1')\nprint(\"F1 Score: %0.5f (+/- %0.5f)\" % (f.mean(), f.std() * 2))\n\nprecision = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='precision')\nprint(\"Precision: %0.5f (+/- %0.5f)\" % (precision.mean(), precision.std() * 2))\n\nrecall = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='recall')\nprint(\"Recall: %0.5f (+/- %0.5f)\" % (recall.mean(), recall.std() * 2))", "_____no_output_____" ] ], [ [ "### Model Evaluation - CatBoost", "_____no_output_____" ] ], [ [ "modelCB = 
CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04)\nmodelCB.fit(X_train, y_train)", "_____no_output_____" ], [ "# Predict on the new unseen test data\ny_evalpredCB = modelCB.predict(X_eval)\ny_predCB = modelCB.predict(X_test)", "_____no_output_____" ], [ "train_scoreCB = modelCB.score(X_train, y_train)\ntest_scoreCB = modelCB.score(X_test, y_test)\nprint(\"Training accuracy is \", train_scoreCB)\nprint(\"Testing accuracy is \", test_scoreCB)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score\nprint('Performance measures for test:')\nprint('--------')\nprint('Accuracy:', test_scoreCB)\nprint('F1 Score:',f1_score(y_test, y_predCB, average='weighted', zero_division=1))\nprint('Precision Score:',precision_score(y_test, y_predCB, average='weighted', zero_division=1))\nprint('Recall Score:', recall_score(y_test, y_predCB, average='weighted', zero_division=1))\nprint('Confusion Matrix:\\n', confusion_matrix(y_test, y_predCB))", "_____no_output_____" ] ], [ [ "### Cross validation - CatBoost", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\nfrom sklearn import metrics\n\naccuracy = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='accuracy')\nf = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='f1')\nprecision = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='precision')\nrecall = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='recall')", "_____no_output_____" ], [ "print(\"Accuracy: %0.5f (+/- %0.5f)\" % (accuracy.mean(), accuracy.std() * 2))\nprint(\"F1 Score: %0.5f (+/- %0.5f)\" % (f.mean(), f.std() * 2))\nprint(\"Precision: %0.5f (+/- %0.5f)\" % (precision.mean(), precision.std() * 2))\nprint(\"Recall: %0.5f (+/- %0.5f)\" % (recall.mean(), recall.std() * 2))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e761e3e9b979106f7470d411490ad9a63d65f1be
46,198
ipynb
Jupyter Notebook
ConfusionMatrices across different models/ConfusionMatrix_Between_AmazonRekognition&PytorchModel.ipynb
UNT-5214-P3/EmotionRecognition
f605307d5a84f88e9310f79876de8248d78b8e7f
[ "MIT" ]
null
null
null
ConfusionMatrices across different models/ConfusionMatrix_Between_AmazonRekognition&PytorchModel.ipynb
UNT-5214-P3/EmotionRecognition
f605307d5a84f88e9310f79876de8248d78b8e7f
[ "MIT" ]
null
null
null
ConfusionMatrices across different models/ConfusionMatrix_Between_AmazonRekognition&PytorchModel.ipynb
UNT-5214-P3/EmotionRecognition
f605307d5a84f88e9310f79876de8248d78b8e7f
[ "MIT" ]
1
2020-10-26T00:13:16.000Z
2020-10-26T00:13:16.000Z
51.388209
27,200
0.731742
[ [ [ "amazon_rekognition_labels = ['Angry',\n 'Angry',\n 'Fear',\n 'Disgust',\n 'Sad',\n 'Neutral',\n 'Surprise',\n 'Angry',\n 'Fear',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Surprise',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Fear',\n 'Neutral',\n 'Neutral',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Sad',\n 'Fear',\n 'Sad',\n 'Disgust',\n 'Disgust',\n 'Neutral',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Angry',\n 'Neutral',\n 'Neutral',\n 'Angry',\n 'Fear',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Disgust',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Fear',\n 'Neutral',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Sad',\n 'Neutral',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Angry',\n 'Surprise',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Angry',\n 'Disgust',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Disgust',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Surprise',\n 'Disgust',\n 'Angry',\n 'Angry',\n 'Neutral',\n 'Disgust',\n 'Fear',\n 'Disgust',\n 'Disgust',\n 'Angry',\n 'Neutral',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Angry',\n 'Neutral',\n 'Disgust',\n 'Sad',\n 'Angry',\n 'Disgust',\n 'Angry',\n 'Disgust',\n 'Disgust',\n 'Angry',\n 'Neutral',\n 'Angry',\n 'Neutral',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Neutral',\n 'Neutral',\n 'Disgust',\n 'Disgust',\n 'Angry',\n 'Disgust',\n 'Angry',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Fear',\n 'Surprise',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Neutral',\n 'Disgust',\n 'Angry',\n 'Neutral',\n 'Fear',\n 'Angry',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Angry',\n 'Disgust',\n 'Angry',\n 'Disgust',\n 'Angry',\n 'Surprise',\n 'Disgust',\n 'Angry',\n 'Disgust',\n 'Neutral',\n 'Disgust',\n 'Neutral',\n 'Sad',\n 'Angry',\n 'Fear',\n 'Surprise',\n 'Angry',\n 'Angry',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Neutral',\n 'Angry',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Neutral',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Disgust',\n 'Happy',\n 'Neutral',\n 'Angry',\n 'Angry',\n 'Fear',\n 'Fear',\n 'Neutral',\n 'Surprise',\n 'Angry',\n 'Angry',\n 'Fear',\n 'Surprise',\n 'Fear',\n 'Fear',\n 'Neutral',\n 'Fear',\n 'Fear',\n 'Neutral',\n 'Fear',\n 'Neutral',\n 'Surprise',\n 'Sad',\n 'Fear',\n 'Angry',\n 'Neutral',\n 'Fear',\n 'Neutral',\n 'Sad',\n 'Fear',\n 'Surprise',\n 'Disgust',\n 'Sad',\n 'Neutral',\n 'Sad',\n 'Surprise',\n 'Sad',\n 'Sad',\n 'Sad',\n 'Fear',\n 'Fear',\n 'Happy',\n 'Sad',\n 'Sad',\n 'Fear',\n 'Sad',\n 'Angry',\n 'Fear',\n 'Sad',\n 'Neutral',\n 'Fear',\n 'Sad',\n 'Sad',\n 'Sad',\n 'Angry',\n 'Fear',\n 'Neutral',\n 'Sad',\n 'Surprise',\n 'Neutral',\n 'Sad',\n 'Fear',\n 'Fear',\n 'Neutral',\n 'Fear',\n 'Angry',\n 'Sad',\n 'Fear',\n 'Angry',\n 'Surprise',\n 'Fear',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Surprise',\n 'Angry',\n 'Sad',\n 'Happy',\n 'Angry',\n 'Fear',\n 'Neutral',\n 'Angry',\n 'Disgust',\n 'Angry',\n 'Neutral',\n 'Surprise',\n 'Fear',\n 'Sad',\n 'Fear',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Surprise',\n 'Sad',\n 'Neutral',\n 'Surprise',\n 'Fear',\n 'Fear',\n 'Neutral',\n 'Surprise',\n 'Neutral',\n 'Surprise',\n 'Sad',\n 'Surprise',\n 'Happy',\n 
'Neutral',\n 'Surprise',\n 'Happy',\n 'Neutral',\n 'Happy',\n 'Happy',\n 'Disgust',\n 'Happy',\n 'Surprise',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Surprise',\n 'Happy',\n 'Surprise',\n 'Neutral',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Fear',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Angry',\n 'Happy',\n 'Angry',\n 'Neutral',\n 'Happy',\n 'Disgust',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Surprise',\n 'Surprise',\n 'Happy',\n 'Happy',\n 'Sad',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Disgust',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Surprise',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Surprise',\n 'Surprise',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Surprise',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Disgust',\n 'Surprise',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Surprise',\n 'Disgust',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Surprise',\n 'Surprise',\n 'Happy',\n 'Disgust',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Happy',\n 'Surprise',\n 'Happy',\n 'Neutral',\n 'Sad',\n 'Sad',\n 'Neutral',\n 'Sad',\n 'Angry',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Sad',\n 'Neutral',\n 'Angry',\n 'Neutral',\n 'Neutral',\n 'Disgust',\n 'Neutral',\n 'Sad',\n 'Sad',\n 'Angry',\n 'Surprise',\n 'Sad',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Disgust',\n 'Neutral',\n 'Disgust',\n 'Neutral',\n 'Surprise',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Surprise',\n 'Fear',\n 'Angry',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Angry',\n 'Neutral',\n 'Neutral',\n 'Fear',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Angry',\n 'Sad',\n 'Sad',\n 'Sad',\n 'Angry',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Disgust',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Sad',\n 'Sad',\n 'Sad',\n 'Sad',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Angry',\n 'Fear',\n 'Sad',\n 'Sad',\n 'Neutral',\n 'Sad',\n 'Happy',\n 'Disgust',\n 'Sad',\n 'Neutral',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Neutral',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Neutral',\n 'Surprise',\n 'Surprise',\n 'Happy',\n 'Sad',\n 'Surprise',\n 'Surprise',\n 'Neutral',\n 'Surprise',\n 'Neutral',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Neutral',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Neutral',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Fear',\n 'Neutral',\n 'Angry',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Sad',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Angry',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Fear',\n 'Fear',\n 
'Neutral',\n 'Surprise',\n 'Surprise',\n 'Surprise',\n 'Happy',\n 'Neutral',\n 'Neutral',\n 'Disgust',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Disgust',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Happy',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Angry',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Angry',\n 'Happy',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Sad',\n 'Neutral',\n 'Surprise',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Happy',\n 'Neutral',\n 'Happy',\n 'Sad',\n 'Disgust',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Fear',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Disgust',\n 'Disgust',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Surprise',\n 'Neutral',\n 'Neutral',\n 'Happy',\n 'Disgust',\n 'Neutral',\n 'Sad',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Happy',\n 'Disgust',\n 'Happy',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Happy',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Neutral',\n 'Sad',\n 'Happy',\n 'Disgust',\n 'Neutral',\n 'Neutral',\n 'Happy',\n 'Angry',\n 'Neutral',\n 'Happy']", "_____no_output_____" ], [ "len(amazon_rekognition_labels)", "_____no_output_____" ], [ "import torch\nimport torchvision\nimport torchvision.transforms as transforms\nfrom torchvision.datasets.folder import pil_loader\nimport pickle\nPkl_Filename = \"P3ModelPyTorch.pkl\"\nwith open(Pkl_Filename, 'rb') as file: \n SavedModel = pickle.load(file)", "_____no_output_____" ], [ "transform_test = transforms.Compose([\n transforms.Resize((128,128)),\n transforms.ToTensor(),\n transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),\n ])", "_____no_output_____" ], [ "\ndef pytorch_model_prediction(im):\n newPic = transform_test(pil_loader(im))\n newPic = newPic.unsqueeze(0)\n outputNew = SavedModel(newPic)\n _, predicted = torch.max(outputNew.data, 1)\n if predicted == 0:\n return 'Anger'\n elif predicted == 1:\n return 'Disgust'\n elif predicted == 2:\n return 'Fear'\n elif predicted == 3:\n return 'Happy'\n elif predicted == 4:\n return 'Sad'\n elif predicted == 5:\n return 'Surprise'\n elif predicted == 6:\n return 'Neutral'\n \n# sm = torch.nn.Softmax(dim=1)\n# probabilities = sm(outputNew)\n# b = max(list(max(probabilities.data)))\n# print('{:f}'.format(b))", "_____no_output_____" ], [ "import os\n\nroot_path = '/Users/nvvankad/Documents/Personal/Masters/CSCE 5214 - Software Development for AI/P3 Project/test_data'\nfolders = ['0Angry', '1Disgust', '2Fear', '3Happy', '4Sad', '5Surprise', '6Neutral']\npytorch_predicted_labels = []\n\nfor folder in folders:\n for currentpath, folders, files in os.walk(os.path.join(root_path, folder)):\n for file in files:\n file_path = os.path.join(currentpath, file)\n if '.DS_Store' in file_path:\n continue\n pytorch_predicted_labels.append(pytorch_model_prediction(file_path))", "_____no_output_____" ], [ "len(pytorch_predicted_labels)", "_____no_output_____" ], [ "# Sample confusion matrix between true vs true.\nimport pandas as pd\nimport seaborn as sn\nimport matplotlib.pyplot as plt\n\ndata = {'amazon_rekogntion_predictions': amazon_rekognition_labels,\n 'pytorch_predictions': pytorch_predicted_labels\n }\n\ndf = pd.DataFrame(data, columns=['amazon_rekogntion_predictions','pytorch_predictions'])\nconfusion_matrix = pd.crosstab(df['amazon_rekogntion_predictions'], 
df['pytorch_predictions'], rownames=['Amazon Rekognition Predictions'], colnames=['Pytorch Model Predictions'])\n\nsn.heatmap(confusion_matrix, annot=True, fmt = \".0f\")\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e761eb8a06f6be50cad38d8e7aa6db010a7c697d
3,268
ipynb
Jupyter Notebook
Practical_Lab_Exam_1_.ipynb
Nickamaes/Linear-Algebra_58109
5faedb8edfc5890016738f348fb7dbfccfab6f91
[ "Apache-2.0" ]
null
null
null
Practical_Lab_Exam_1_.ipynb
Nickamaes/Linear-Algebra_58109
5faedb8edfc5890016738f348fb7dbfccfab6f91
[ "Apache-2.0" ]
null
null
null
Practical_Lab_Exam_1_.ipynb
Nickamaes/Linear-Algebra_58109
5faedb8edfc5890016738f348fb7dbfccfab6f91
[ "Apache-2.0" ]
null
null
null
24.571429
248
0.394125
[ [ [ "<a href=\"https://colab.research.google.com/github/Nickamaes/Linear-Algebra_58109/blob/main/Practical_Lab_Exam_1_.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "###Problem 1", "_____no_output_____" ] ], [ [ "import numpy as np\nA = np.array([[1,2,3],[4,5,6]])\nB = np.array([[1,2],[3,4],[5,6]]) \nC = np.array([[1,2,3],[4,5,6],[7,8,9]])\nD = np.array([[1,2],[3,4]]) \n\na = np.dot(A,B)\nb = np.add(D,D)\nc = 2*C\n\nprint(\"A. AB\")\nprint(a)\nprint(\"\\nB. D+D\")\nprint(b)\nprint(\"\\nC. 2*C\")\nprint(c)", "A. AB\n[[22 28]\n [49 64]]\n\nB. D+D\n[[2 4]\n [6 8]]\n\nC. 2*C\n[[ 2 4 6]\n [ 8 10 12]\n [14 16 18]]\n" ] ], [ [ "###Problem 2", "_____no_output_____" ] ], [ [ "import numpy as np\nX = np.array([5,3,-1])\n\nprint(X)\nprint('\\ntype of array', type(X))\nprint('size of array : ', X.size)\nprint('shape of array: ', X.shape)\nprint('dimension of array:', X.ndim)", "[ 5 3 -1]\n\ntype of array <class 'numpy.ndarray'>\nsize of array : 3\nshape of array: (3,)\ndimension of array: 1\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e761ef6d2ae09c5189141e5353bb18b6e4479f99
121,800
ipynb
Jupyter Notebook
model-fitting.ipynb
talsiddiqui/Titanic
bfa8cfc7cd657ca50426d4c3270c7bbbeb709ae9
[ "MIT" ]
null
null
null
model-fitting.ipynb
talsiddiqui/Titanic
bfa8cfc7cd657ca50426d4c3270c7bbbeb709ae9
[ "MIT" ]
null
null
null
model-fitting.ipynb
talsiddiqui/Titanic
bfa8cfc7cd657ca50426d4c3270c7bbbeb709ae9
[ "MIT" ]
null
null
null
80.822827
21,580
0.769425
[ [ [ "# Supervised Learning Decision Tree Classification Model Fitting", "_____no_output_____" ], [ "In this notebook, I will be applying the following techniques to predict Titanic survival:\n\n1. Decision Tree Classification (with default settings)\n2. Decision Tree Classification with the following optimal hyperparameters identified by using 5-fold Cross Validation\n * Max Depth\n * Min Samples Split\n * Min Samples Leaf\n * Max Features\n3. Random Forest\n\nLet's start by importing the necessary libraries.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np", "_____no_output_____" ] ], [ [ "Now import the train and test data sets. Take a quick glance at the data and get summary statistics for our variables.", "_____no_output_____" ] ], [ [ "train = pd.read_csv(\"data/new_train.csv\", index_col=0)\ntrain.head(5)", "_____no_output_____" ], [ "train.describe()", "_____no_output_____" ], [ "test = pd.read_csv(\"data/new_test.csv\", index_col=0)\ntest.head(5)", "_____no_output_____" ], [ "test.describe()", "_____no_output_____" ] ], [ [ "Feature columns from our data:", "_____no_output_____" ] ], [ [ "feature_cols = ['Pclass','LastName','Title','Sex','Age','SibSp','Parch','Fare','Embarked','TicketNumber','OtherName']", "_____no_output_____" ] ], [ [ "## Decision Tree Classification (with default settings)", "_____no_output_____" ], [ "Using the new_test and new_train CSV files, predict the Titanic survival with default model settings.", "_____no_output_____" ] ], [ [ "from sklearn import tree ", "_____no_output_____" ], [ "model = tree.DecisionTreeClassifier(random_state=42)\n# Prepare data for model fitting\nX = train.loc[:, feature_cols]\ny = train.Survived", "_____no_output_____" ], [ "# Fit a model\nmodel.fit(X, y)", "_____no_output_____" ], [ "X_test = test.loc[:, feature_cols]\nSurvived = model.predict(X_test)", "_____no_output_____" ], [ "test[\"Survived\"] = Survived", "_____no_output_____" ], [ "test.drop(['Pclass','LastName','Title','Sex','Age','SibSp','Parch','Fare','Embarked','TicketNumber','OtherName'],\n axis=1, inplace=True)", "_____no_output_____" ], [ "test.to_csv(\"data/submission.csv\", sep=',')", "_____no_output_____" ] ], [ [ "Result of the 1st submission is not bad:", "_____no_output_____" ], [ "![](img/submission_01.png)", "_____no_output_____" ], [ "But I am curious if it can be improved if we applied better hyperparameters.", "_____no_output_____" ], [ "## Decision Tree Classification with optimal hyperparameters\n### Identified using 5-fold Cross Validation", "_____no_output_____" ], [ "Let's start by re-importing the train and test data sets so we have a clean, new data set.", "_____no_output_____" ] ], [ [ "train = pd.read_csv(\"data/new_train.csv\", index_col=0)\ntest = pd.read_csv(\"data/new_test.csv\", index_col=0)\nX = train.loc[:, feature_cols]\ny = train.Survived", "_____no_output_____" ] ], [ [ "### max_depth", "_____no_output_____" ], [ "The first parameter we will optimize is `max_depth`. 
Let's test 40 possible values from 1 to 40 for `max_depth` to identify the ideal value.", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split, cross_val_score\n\nmax_depth = np.arange(1,41,1)\nscores = []\n\n# split the data set into 80% training data and 20% test data\nXtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2)\n\nfor i in max_depth:\n model = tree.DecisionTreeClassifier(max_depth=i, random_state=42)\n avg_score = np.mean(cross_val_score(model, Xtrain, ytrain, cv=5))\n scores.append(avg_score)\n \n# plotting validation performance\nplt.plot(max_depth, scores)\nplt.xlabel(\"Max Depth\")\nplt.ylabel(\"Classification accuracy\")\n\n# Use best value of max_depth and report performance on the test split\nbest_max_depth = max_depth[np.argmax(scores)]\nbest_model = tree.DecisionTreeClassifier(max_depth=best_max_depth, random_state=42)\nbest_model.fit(Xtrain,ytrain)\n\n# reporting best accuracy\nprint(\"Value of max_depth with the best accuracy:\", best_max_depth)\nprint(\"Performance on test split:\", best_model.score(Xtest,ytest))", "Value of max_depth with the best accuracy: 3\nPerformance on test split: 0.8603351955307262\n" ] ], [ [ "### min_samples_split", "_____no_output_____" ], [ "The second parameter we will optimize is `min_samples_split`. Let's again test 40 possible values but this time, from 2 to 160 for `min_samples_split` to identify the ideal value.", "_____no_output_____" ] ], [ [ "min_samples_split = np.arange(2,160,4)\nscores = []\n\n# split the data set into 80% training data and 20% test data\nXtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2)\n\nfor i in min_samples_split:\n model = tree.DecisionTreeClassifier(min_samples_split=i, random_state=42)\n avg_score = np.mean(cross_val_score(model, Xtrain, ytrain, cv=5))\n scores.append(avg_score)\n \n# plotting validation performance\nplt.plot(min_samples_split, scores)\nplt.xlabel(\"Min Samples Split\")\nplt.ylabel(\"Classification accuracy\")\n\n# Use best value of min_samples_split and report performance on the test split\nbest_min_samples_split = min_samples_split[np.argmax(scores)]\nbest_model = tree.DecisionTreeClassifier(min_samples_split=best_min_samples_split, random_state=42)\nbest_model.fit(Xtrain,ytrain)\n\n# reporting best accuracy\nprint(\"Value of min_samples_split with the best accuracy:\", best_min_samples_split)\nprint(\"Performance on test split:\", best_model.score(Xtest,ytest))", "Value of min_samples_split with the best accuracy: 86\nPerformance on test split: 0.7988826815642458\n" ] ], [ [ "### min_samples_leaf", "_____no_output_____" ], [ "The third parameter we will optimize is `min_samples_leaf`. 
Similar to `max_depth`, let's test 40 possible values from 1 to 40 for `min_samples_leaf` to identify the ideal value.", "_____no_output_____" ] ], [ [ "min_samples_leaf = np.arange(1,41,1)\nscores = []\n\n# split the data set into 80% training data and 20% test data\nXtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2)\n\nfor i in min_samples_leaf:\n model = tree.DecisionTreeClassifier(min_samples_leaf=i, random_state=42)\n avg_score = np.mean(cross_val_score(model, Xtrain, ytrain, cv=5))\n scores.append(avg_score)\n \n# plotting validation performance\nplt.plot(min_samples_leaf, scores)\nplt.xlabel(\"Min Samples Leaf\")\nplt.ylabel(\"Classification accuracy\")\n\n# Use best value of min_samples_leaf and report performance on the test split\nbest_min_samples_leaf = min_samples_leaf[np.argmax(scores)]\nbest_model = tree.DecisionTreeClassifier(min_samples_leaf=best_min_samples_leaf, random_state=42)\nbest_model.fit(Xtrain,ytrain)\n\n# reporting best accuracy\nprint(\"Value of min_samples_leaf with the best accuracy:\", best_min_samples_leaf)\nprint(\"Performance on test split:\", best_model.score(Xtest,ytest))", "Value of min_samples_leaf with the best accuracy: 10\nPerformance on test split: 0.8324022346368715\n" ] ], [ [ "### max_features", "_____no_output_____" ], [ "The final parameter we will optimize is `max_features`", "_____no_output_____" ] ], [ [ "max_features = np.arange(1,len(feature_cols),1)\nscores = []\n\n# split the data set into 80% training data and 20% test data\nXtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2)\n\nfor i in max_features:\n model = tree.DecisionTreeClassifier(max_features=i, random_state=42)\n avg_score = np.mean(cross_val_score(model, Xtrain, ytrain, cv=5))\n scores.append(avg_score)\n \n# plotting validation performance\nplt.plot(max_features, scores)\nplt.xlabel(\"Max Features\")\nplt.ylabel(\"Classification accuracy\")\n\n# Use best value of max_features and report performance on the test split\nbest_max_features = max_features[np.argmax(scores)]\nbest_model = tree.DecisionTreeClassifier(max_features=best_max_features, random_state=42)\nbest_model.fit(Xtrain,ytrain)\n\n# reporting best accuracy\nprint(\"Value of max_features with the best accuracy:\", best_max_features)\nprint(\"Performance on test split:\", best_model.score(Xtest,ytest))", "Value of max_features with the best accuracy: 10\nPerformance on test split: 0.7821229050279329\n" ] ], [ [ "Let's take our _best_ optimized hyperparameters and create the _best_ model. 
Then, fit the model and make preditions.", "_____no_output_____" ] ], [ [ "best_model = tree.DecisionTreeClassifier(max_depth = best_max_depth,\n max_features = best_max_features,\n min_samples_split = best_min_samples_split,\n min_samples_leaf = best_min_samples_leaf,\n random_state=42)", "_____no_output_____" ], [ "# Fit a model\nbest_model.fit(X, y)", "_____no_output_____" ], [ "X_test = test.loc[:, feature_cols]\nSurvived = best_model.predict(X_test)", "_____no_output_____" ], [ "test[\"Survived\"] = Survived", "_____no_output_____" ] ], [ [ "Prepare the predictions for submission.", "_____no_output_____" ] ], [ [ "test.drop(['Pclass','LastName','Title','Sex','Age','SibSp','Parch','Fare','Embarked','TicketNumber','OtherName'],\n axis=1, inplace=True)", "_____no_output_____" ], [ "test.to_csv(\"data/submission.csv\", sep=',')", "_____no_output_____" ] ], [ [ "The results are an improvement:", "_____no_output_____" ], [ "![](img/submission_02.png)", "_____no_output_____" ], [ "We can observe a 3.35% improvement of accuracy using optimized hyperparameters.", "_____no_output_____" ], [ "## Random Forest", "_____no_output_____" ], [ "A combination of multiple decision tree classifiers creates a very powerful classifier -- Random Forest. Let's predict using it now!", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestClassifier", "C:\\Users\\talha\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.\n from numpy.core.umath_tests import inner1d\n" ], [ "train = pd.read_csv(\"data/new_train.csv\", index_col=0)\ntest = pd.read_csv(\"data/new_test.csv\", index_col=0)\nX = train.loc[:, feature_cols]\ny = train.Survived", "_____no_output_____" ] ], [ [ "Let's do a multi-dimensional search for ideal hyperparameters.", "_____no_output_____" ] ], [ [ "# hyperparameters\nn_estimators = np.arange(2,100,4)\nmax_depth = np.arange(1,41,1)\nmax_features = np.arange(1,len(feature_cols),1)\n\n# split the data set into 80% training data and 20% test data\nXtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2)\n\n# accuracy\ntrain_accuracy = np.zeros((len(n_estimators),len(max_depth),len(max_features)))\ntest_accuracy = np.zeros((len(n_estimators),len(max_depth),len(max_features)))\n\n# search for best hyperparameters\nfor i,ne in enumerate(n_estimators):\n for j,md in enumerate(max_depth):\n for k,mf in enumerate(max_features):\n model = RandomForestClassifier()\n model.fit(Xtrain,ytrain)\n train_accuracy[i,j,k] = model.score(Xtrain,ytrain)\n test_accuracy[i,j,k] = model.score(Xtest,ytest)\n \n# Identify best value of hyperparameters\nideal_hp_index = np.unravel_index(np.argmax(test_accuracy, axis=None), test_accuracy.shape)\nn_est = n_estimators[ideal_hp_index[0]]\nmax_dep = max_depth[ideal_hp_index[1]]\nmax_feat = max_features[ideal_hp_index[2]]\n\n# reporting best accuracy\nprint(\"n_estimators:\",n_est)\nprint(\"max_depth:\",max_dep)\nprint(\"max_features:\",max_feat)\nprint(\"Generated a test accuracy of\", test_accuracy[ideal_hp_index])", "n_estimators: 46\nmax_depth: 37\nmax_features: 1\nGenerated a test accuracy of 0.8770949720670391\n" ], [ "best_random_forest = RandomForestClassifier(n_estimators=n_est,\n max_features = max_feat,\n max_depth = max_dep)", "_____no_output_____" ], [ "# Fit a model\nbest_random_forest.fit(X, y)", "_____no_output_____" ], [ "X_test = test.loc[:, feature_cols]\nSurvived = 
best_random_forest.predict(X_test)", "_____no_output_____" ], [ "test[\"Survived\"] = Survived", "_____no_output_____" ], [ "test.drop(['Pclass','LastName','Title','Sex','Age','SibSp','Parch','Fare','Embarked','TicketNumber','OtherName'],\n axis=1, inplace=True)", "_____no_output_____" ], [ "test.to_csv(\"data/submission.csv\", sep=',')", "_____no_output_____" ] ], [ [ "![](img/submission_03.png)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e761f1178d0c0473042520c552bab61d828e1faa
313,554
ipynb
Jupyter Notebook
Load and Run.ipynb
jasonnoy/COMP5329
fc17c80b1ac41d788cc0a92d3a033dbe2f9b8b81
[ "MIT" ]
null
null
null
Load and Run.ipynb
jasonnoy/COMP5329
fc17c80b1ac41d788cc0a92d3a033dbe2f9b8b81
[ "MIT" ]
null
null
null
Load and Run.ipynb
jasonnoy/COMP5329
fc17c80b1ac41d788cc0a92d3a033dbe2f9b8b81
[ "MIT" ]
null
null
null
313,554
313,554
0.757694
[ [ [ "#mount google drive to colab if not mounted before\nfrom google.colab import files\nfrom google.colab import drive \ndrive.mount('/content/drive')", "Mounted at /content/drive\n" ], [ "!wget https://miil-public-eu.oss-eu-central-1.aliyuncs.com/model-zoo/ASL/MS_COCO_TRresNet_M_224_81.8.pth", "--2021-05-13 14:15:46-- https://miil-public-eu.oss-eu-central-1.aliyuncs.com/model-zoo/ASL/MS_COCO_TRresNet_M_224_81.8.pth\nResolving miil-public-eu.oss-eu-central-1.aliyuncs.com (miil-public-eu.oss-eu-central-1.aliyuncs.com)... 47.254.187.26\nConnecting to miil-public-eu.oss-eu-central-1.aliyuncs.com (miil-public-eu.oss-eu-central-1.aliyuncs.com)|47.254.187.26|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 118405727 (113M) [application/octet-stream]\nSaving to: ‘MS_COCO_TRresNet_M_224_81.8.pth’\n\nMS_COCO_TRresNet_M_ 100%[===================>] 112.92M 6.71MB/s in 15s \n\n2021-05-13 14:16:01 (7.70 MB/s) - ‘MS_COCO_TRresNet_M_224_81.8.pth’ saved [118405727/118405727]\n\n" ], [ "# must update before use!\n!pip install --upgrade --force-reinstall --no-deps kaggle\n\nfrom google.colab import files\n\nfiles.upload()\n\n! mkdir ~/.kaggle\n\n! cp kaggle.json ~/.kaggle/\n\n! chmod 600 ~/.kaggle/kaggle.json\n\n! kaggle datasets list\n\n! kaggle competitions download -c '2021s1comp5329assignment2'\n\n! unzip 2021s1comp5329assignment2.zip", "Collecting kaggle\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/3a/e7/3bac01547d2ed3d308ac92a0878fbdb0ed0f3d41fb1906c319ccbba1bfbc/kaggle-1.5.12.tar.gz (58kB)\n\r\u001b[K |█████▋ | 10kB 21.0MB/s eta 0:00:01\r\u001b[K |███████████▏ | 20kB 27.3MB/s eta 0:00:01\r\u001b[K |████████████████▊ | 30kB 23.3MB/s eta 0:00:01\r\u001b[K |██████████████████████▎ | 40kB 17.8MB/s eta 0:00:01\r\u001b[K |███████████████████████████▉ | 51kB 8.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 61kB 2.9MB/s \n\u001b[?25hBuilding wheels for collected packages: kaggle\n Building wheel for kaggle (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for kaggle: filename=kaggle-1.5.12-cp37-none-any.whl size=73053 sha256=845b2d5bf587ede2fa2f7551abf4bd0bf23549e4021098d6366244c6dce8ed3a\n Stored in directory: /root/.cache/pip/wheels/a1/6a/26/d30b7499ff85a4a4593377a87ecf55f7d08af42f0de9b60303\nSuccessfully built kaggle\nInstalling collected packages: kaggle\n Found existing installation: kaggle 1.5.12\n Uninstalling kaggle-1.5.12:\n Successfully uninstalled kaggle-1.5.12\nSuccessfully installed kaggle-1.5.12\n" ], [ "from google.colab import drive\ndrive.mount('/content/drive')", "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n" ], [ "ls /content/COMP5329S1A2Dataset/data/17657.jpg", "/content/COMP5329S1A2Dataset/data/17657.jpg\n" ], [ "import sys\nsys.path.append('/content/drive/MyDrive/Colab Notebooks/ASL-main')", "_____no_output_____" ], [ "%pip install inplace-abn", "Collecting inplace-abn\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/a4/27/c5791febcdd9af346b66dff19759898476f148177c02b02a72e07ca8aba0/inplace-abn-1.1.0.tar.gz (137kB)\n\r\u001b[K |██▍ | 10kB 22.4MB/s eta 0:00:01\r\u001b[K |████▊ | 20kB 25.8MB/s eta 0:00:01\r\u001b[K |███████▏ | 30kB 30.7MB/s eta 0:00:01\r\u001b[K |█████████▌ | 40kB 22.8MB/s eta 0:00:01\r\u001b[K |████████████ | 51kB 8.3MB/s eta 0:00:01\r\u001b[K |██████████████▎ | 61kB 8.3MB/s eta 0:00:01\r\u001b[K |████████████████▊ | 71kB 8.5MB/s eta 0:00:01\r\u001b[K |███████████████████ | 81kB 9.5MB/s eta 0:00:01\r\u001b[K |█████████████████████▌ | 92kB 10.3MB/s eta 0:00:01\r\u001b[K |███████████████████████▉ | 102kB 11.1MB/s eta 0:00:01\r\u001b[K |██████████████████████████▎ | 112kB 11.1MB/s eta 0:00:01\r\u001b[K |████████████████████████████▋ | 122kB 11.1MB/s eta 0:00:01\r\u001b[K |███████████████████████████████ | 133kB 11.1MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 143kB 11.1MB/s \n\u001b[?25hBuilding wheels for collected packages: inplace-abn\n Building wheel for inplace-abn (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for inplace-abn: filename=inplace_abn-1.1.0-cp37-cp37m-linux_x86_64.whl size=3042384 sha256=358e9a13babe5b282aa1a9dc258478f4b91547d6ba9f2476cf51312cb33dc2c8\n Stored in directory: /root/.cache/pip/wheels/90/e6/ce/baadcff0441c600caa5874d4d3322a7909e724fb7abab21a15\nSuccessfully built inplace-abn\nInstalling collected packages: inplace-abn\nSuccessfully installed inplace-abn-1.1.0\n" ], [ "!python '/content/drive/MyDrive/Colab Notebooks/ASL-main/train1.py' -j 0 '/content/drive/MyDrive/Colab Notebooks/ASL-main/COMP5329S1A2Dataset'", "creating model...\ndone\n\nNumber of training images: 25500\nNumber of validation images: 4500\nlen(val_dataset)): 4500\nlen(train_dataset)): 25500\n24032.jpg\n/content/drive/MyDrive/Colab Notebooks/ASL-main/imagedata.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\n return torch.tensor(image, dtype=torch.float32), torch.tensor(targets, dtype=torch.float32)\n904.jpg\n15119.jpg\n16471.jpg\n6368.jpg\n21482.jpg\n21737.jpg\n23668.jpg\n10956.jpg\n12391.jpg\n10234.jpg\n15874.jpg\n20559.jpg\n8018.jpg\n3010.jpg\n23760.jpg\n3320.jpg\n21361.jpg\n18547.jpg\n17986.jpg\n23136.jpg\n9112.jpg\n17962.jpg\n2333.jpg\n9392.jpg\n20822.jpg\n11204.jpg\n12890.jpg\n12322.jpg\n2704.jpg\n17856.jpg\n16307.jpg\n13275.jpg\n2183.jpg\n15446.jpg\n12274.jpg\n10043.jpg\n8772.jpg\n21012.jpg\n20334.jpg\n24977.jpg\n23179.jpg\n18033.jpg\n15744.jpg\n12494.jpg\n7825.jpg\n24352.jpg\n16843.jpg\n649.jpg\n23423.jpg\n13477.jpg\n19057.jpg\n9601.jpg\n13948.jpg\n15107.jpg\n5656.jpg\n10329.jpg\n20903.jpg\n9549.jpg\n5859.jpg\n3155.jpg\n25143.jpg\n19760.jpg\n726.jpg\n25201.jpg\n2386.jpg\n4509.jpg\n23706.jpg\n15492.jpg\n13121.jpg\n774.jpg\n23791.jpg\n22256.jpg\n9126.jpg\n7843.jpg\n452.jpg\n11851.jpg\n13517.jpg\n7882.jpg\n1595.jpg\n5121.jpg\n5001.jpg\n21628.jpg\n6315.jpg\n24248.jpg\n2803.jpg\n15912.jpg\n13136.jpg\n1944.jpg\n15400.jpg\n6566.jpg\n6445.jpg\n6349.jpg\n8959.jpg\n25371.jpg\n15077.jpg\n10242.jpg\n14223.jpg\n15179.jpg\n22926.jpg\n1746.jpg\n20651.jpg\n11953.jpg\n11360.jpg\n12834.jpg\n13690.jpg\n12491.jpg\n15700.jpg\n5002.jpg\n19790.jpg\n2645.jpg\n17040.jpg\n7142.jpg\n13995.jpg\n13580.jpg\n24800.jpg\n24934.jpg\n20418.jpg\n9449.jpg\n538.jpg\n2888.jpg\n11082.jpg\n1249.jpg\n259.jpg\n10996.jpg\n11434.jpg\n1872.jpg\n7787.jpg\ntorch.Size([128, 3, 200, 200])\ntorch.Size([128, 20])\nTraceback (most recent call last):\n File \"/content/drive/MyDrive/Colab Notebooks/ASL-main/train1.py\", line 165, in <module>\n main()\n File \"/content/drive/MyDrive/Colab Notebooks/ASL-main/train1.py\", line 67, in main\n train_multi_label(model, train_loader, val_loader, args.lr)\n File \"/content/drive/MyDrive/Colab Notebooks/ASL-main/train1.py\", line 98, in train_multi_label\n loss = criterion(output, target)\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 889, in _call_impl\n result = self.forward(*input, **kwargs)\n File \"/content/drive/MyDrive/Colab Notebooks/ASL-main/src/loss_functions/losses.py\", line 33, in forward\n los_pos = y * torch.log(xs_pos.clamp(min=self.eps))\nRuntimeError: The size of tensor a (128) must match the size of tensor b (20) at non-singleton dimension 1\n" ], [ "import pands as pd\ndata_root_dir='/content/drive/MyDrive/Colab Notebooks/ASL-main/COMP5329S1A2Dataset'\n\ntrain_csv = pd.read_csv(os.path.join(data_root_dir, 'train.csv'))\n\n# read data form file, make binary array 
for all classes\ntrain_csv_image_names = train_csv[:]['ImageID']\n\ntrain_csv_label = train_csv[:]['Labels'].str.split(' ')\n\nprint(train_csv_label[:10])\n\nnp_train_csv_label = np.zeros((train_csv_label.shape[0],20), dtype=int)\nfor outter_it in range(train_csv_label.shape[0]):\n for inner_it in train_csv_label[outter_it]:\n np_train_csv_label[outter_it][int(inner_it)] = 1\n\nprint(np_train_csv_label[:10])", "_____no_output_____" ], [ "import torch\nmodel = torch.hub.load('pytorch/vision:v0.9.0', 'resnet18', pretrained=True)\nmodel = torch.hub.load('pytorch/vision:v0.9.0', 'resnet34', pretrained=True)\nmodel = torch.hub.load('pytorch/vision:v0.9.0', 'resnet50', pretrained=True)\nmodel = torch.hub.load('pytorch/vision:v0.9.0', 'googlenet', pretrained=True)", "Downloading: \"https://github.com/pytorch/vision/archive/v0.9.0.zip\" to /root/.cache/torch/hub/v0.9.0.zip\nDownloading: \"https://download.pytorch.org/models/resnet18-5c106cde.pth\" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth\n" ], [ "# https://github.com/jiangqy/Customized-DataLoader-pytorch/blob/master/multi_label_classifier.py\n# https://debuggercafe.com/multi-label-image-classification-with-pytorch-and-deep-learning/", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e761f258784a8966e97acc521b46e98ba1ba7bca
255,038
ipynb
Jupyter Notebook
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
411d8d920ea794ba3a69fad6fd21e272c4fe5fb2
[ "MIT" ]
null
null
null
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
411d8d920ea794ba3a69fad6fd21e272c4fe5fb2
[ "MIT" ]
null
null
null
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
411d8d920ea794ba3a69fad6fd21e272c4fe5fb2
[ "MIT" ]
null
null
null
93.420513
28,874
0.649307
[ [ [ "<a href=\"https://colab.research.google.com/github/jacKlinc/german_char_recogniser/blob/main/mdl_german_char_classifier.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "#hide\n!pip install -Uqq fastbook\nimport fastbook\nfastbook.setup_book()", "_____no_output_____" ], [ "from fastbook import *\nfrom fastai.vision.all import *", "_____no_output_____" ], [ "path = Path('gdrive/MyDrive/Colab Notebooks')\nPath.BASE_PATH = path", "_____no_output_____" ], [ "a = (path/'A').ls().sorted()\nb = (path/'B').ls().sorted()", "_____no_output_____" ], [ "len(b)", "_____no_output_____" ], [ "img = Image.open(a[100])\nimg", "_____no_output_____" ], [ "img_t = tensor(img)", "_____no_output_____" ], [ "df = pd.DataFrame(img_t)\ndf.style.set_properties(**{'font-size': '6pt'}).background_gradient('Greys')", "_____no_output_____" ] ], [ [ "The cell above shows how the image is simply made of black and white and pixels; the darker the pixel, the higher the number in the cell.", "_____no_output_____" ], [ "## Pixel Similarity\nFind the mean shape of A and comapre each new image to that.", "_____no_output_____" ] ], [ [ "# get first 64 for testing\na_tensor = [tensor(Image.open(x)) for x in a[:64]]\nb_tensor = [tensor(Image.open(x)) for x in b[:64]]", "_____no_output_____" ] ], [ [ "To make this into one big matrix a neural network can use we need to stack all the tensors on top of each other.", "_____no_output_____" ] ], [ [ "stacked_a = torch.stack(a_tensor)\nstacked_b = torch.stack(b_tensor)", "_____no_output_____" ] ], [ [ "The data must be converted from a 0-255 scale to 0-1 for the neural network. The tensors are integers so they must be converted to floats too.", "_____no_output_____" ] ], [ [ "stacked_a = stacked_a.float()/255\nstacked_b = stacked_b.float()/255", "_____no_output_____" ] ], [ [ "Find the mean of each matrix.", "_____no_output_____" ] ], [ [ "mean_a = stacked_a.mean(0)\nmean_b = stacked_b.mean(0)", "_____no_output_____" ] ], [ [ "Getting the distance from the ideal (mean) character is going to determine how it is classified. There are two methods we can use:\n\n1. L1 norm - absolute mean value. This subtracts the mean from each value in the tensor and makes it postive.\n1. L2 norm - Root Mean Squared Error (RMSE). This subtracts the mean in the tensor but squares the difference. This makes for smaller errors having little weight and larger ones having quite an effect.\n\n", "_____no_output_____" ] ], [ [ "a0 = stacked_a[0]\nshow_image(a0)", "_____no_output_____" ], [ "dist_a_abs = (a0 - mean_a).abs().mean()\ndist_a_rmse = ((a0 - mean_a)**2).mean().sqrt()\ndist_a_abs, dist_a_rmse", "_____no_output_____" ], [ "dist_b_abs = (a0 - mean_b).abs().mean()\ndist_b_rmse = ((a0 - mean_b)**2).mean().sqrt()\ndist_b_abs, dist_b_rmse", "_____no_output_____" ] ], [ [ "Using either of the above approaches, the average distance between the \"a\" example and the ideal \"a\" is less than that between the \"b\" example. Either can be used to classify. Defining a function for each might be the most convenient way to proceed.", "_____no_output_____" ] ], [ [ "def abs_dist(ex, mean):\n return (ex - mean).abs().mean((-1, -2))\n\ndef rmse_dist(ex, mean):\n return ((ex - mean)**2).mean().sqrt()\n\nabs_dist(a0, mean_b), rmse_dist(a0, mean_b)", "_____no_output_____" ] ], [ [ "Now the error between an example and the mean can be found easily. 
Defining a way to compare the error to other character is the next step.\n\nThe code below checks if the absolute mean error between an \"a\" example and the mean of \"a\" is less than that of a \"b\" example.", "_____no_output_____" ] ], [ [ "abs_dist(a0, mean_a) < abs_dist(a0, mean_b)", "_____no_output_____" ], [ "def is_a(ex_a):\n return (abs_dist(ex_a, mean_a) < abs_dist(ex_a, mean_b)).float()\n\nis_a(a0) ", "_____no_output_____" ] ], [ [ "### Create Validation Set\nGather validation data to measure accuracy of classification.", "_____no_output_____" ] ], [ [ "valid_stacked_a = torch.stack([tensor(Image.open(x)) \n for x in a[65:128]]).float()/255\n \nvalid_stacked_b = torch.stack([tensor(Image.open(x)) \n for x in b[65:128]]).float()/255\n\nvalid_stacked_a.shape, valid_stacked_b.shape", "_____no_output_____" ], [ "accuracy_a = is_a(valid_stacked_a) .mean()\naccuracy_b = (1 - is_a(valid_stacked_b)).mean()\n\naccuracy_a, accuracy_b", "_____no_output_____" ] ], [ [ "Next we need to find the distance from each value in the validation matrix from the ideal \"a\". This could be acheived by looping over the each value but this would be inefficient. \n\nLuckily, broadcasting exists, this is where PyTorch has tensors of different ranks and it expands the smaller one (```mean_a```) to the size of the larger one (```valid_stacked_a```) so that each value is operated on.\n\nThis makes things much faster because the operation is passed directly through C or if running on a GPU, it can be millions of times faster.", "_____no_output_____" ] ], [ [ "valid_a_dist = abs_dist(valid_stacked_a, mean_a)\nvalid_a_dist, valid_a_dist.shape", "_____no_output_____" ] ], [ [ "### Stochastic Gradient Descent (SGD)\nAlgorithm used to optimise weights and parameters so that the loss function is minimised. Gradient descent wants to find the lowest point on the graph so that the losses are also low.\n\n**7 Steps:**\n1. Init weights\n1. Predict character\n1. Calculate loss\n1. Calculate gradient\n1. Make step\n1. Repeat 2.\n1. End when converged or out of time\n\nTo calculate this gradient, we need to find the derivative of the loss function. The derivative will basically find the slope of the curve for a given weight, each value for the gradient will return in a matrix.", "_____no_output_____" ] ], [ [ "xt = tensor(3.).requires_grad_()", "_____no_output_____" ] ], [ [ "The ```requires_grad_()``` function marks the variable so that when it is used in future it knows that the gradient needs to be calculated.", "_____no_output_____" ] ], [ [ "def f(x): return x**2\n\nyt = f(xt)\nyt", "_____no_output_____" ] ], [ [ "Calculate the gradients now", "_____no_output_____" ] ], [ [ "yt.backward()", "_____no_output_____" ] ], [ [ "The ```.backward()``` function applies the backpropagation algorithm to each layer of the network", "_____no_output_____" ] ], [ [ "xt.grad", "_____no_output_____" ] ], [ [ "The gradients only tell us the slope of our weights and don't really provide anything actionable to use. This is where steps are used.", "_____no_output_____" ], [ "### Steps\nThis step of the process uses the gradients to attempt to lower the loss function by \"jumping\" down the curve. 
The size of the jump is called the learning late ```lr```, the bigger it is the more risky it is, as it could overshoot and increase the loss; too small and training the algorithm would take too long and would be expensive in real-world applications.\n\nThe formula can be described here:\n\n``` w = w - gradient(w) * lr```\n\nwhere w is each weight.\n\nSteps are taken until convergence or until you run out of time. ", "_____no_output_____" ], [ "### Sigmoid Function\nThe issue with above approach is the output will be linear meaning I get a number back when I predict, this isn't much use for telling me whether it's an \"a\" or a \"b\". Enter sigmoid, this gets a function and polarises the output so that a yes/no, a/b result is found.", "_____no_output_____" ] ], [ [ "def sigmoid(x):\n return 1 / (1 + torch.exp(-x))\n\nplot_function(torch.sigmoid, title='Sigmoid', min=-4, max=4)", "_____no_output_____" ] ], [ [ "After the midpoint (0.5) all values below are 0 and all above are 1. This can now be used in the loss function.", "_____no_output_____" ] ], [ [ "def char_loss(preds, targets):\n preds = preds.sigmoid()\n # return mean of either: preds or (1-preds)\n return torch.where(preds == 1, 1 - preds, preds).mean()", "_____no_output_____" ] ], [ [ "### Mini-Batches\nThe loss function has been defined, we're ready to run gradient descent on the data. Well maybe not there's a few more things to iron out. The issue with running gradient descent on all our data is that would take quite a while; instead we could run it on each item, this would be much faster. The issue with operated on each example is that it doesn't account for other examples.\n\n\nThe Goldielox solution is to use mini-batches, it's faster than calculating the loss for the entire set but more accurate than doing it on each datum. FastAI provides the ```DataLoader``` to handle this.", "_____no_output_____" ] ], [ [ "col = range(30)\n# bs=batch size\ndl = DataLoader(col, bs=5, shuffle=True)\nlist(dl)", "_____no_output_____" ] ], [ [ "## Implementation\n", "_____no_output_____" ] ], [ [ "path = Path('gdrive/MyDrive/Colab Notebooks/characters')\nPath.BASE_PATH = path", "_____no_output_____" ], [ "fns = get_image_files(path)\nfns", "_____no_output_____" ], [ "failed = verify_images(fns)\nfailed", "_____no_output_____" ] ], [ [ "Delete failed images", "_____no_output_____" ] ], [ [ "failed.map(Path.unlink);", "_____no_output_____" ] ], [ [ "### Data Curation", "_____no_output_____" ] ], [ [ "characters = DataBlock(\n blocks=(ImageBlock, CategoryBlock), \n get_items=get_image_files, \n splitter=RandomSplitter(valid_pct=0.2, seed=42),\n get_y=parent_label)", "_____no_output_____" ] ], [ [ "This defines how the data will be retreived and organised. 
The ```splitter``` divides the data into the training and validation sets.", "_____no_output_____" ] ], [ [ "dls = characters.dataloaders(path)", "_____no_output_____" ], [ "dls.valid.show_batch(max_n=4, nrows=1)", "_____no_output_____" ] ], [ [ "### Model Training", "_____no_output_____" ] ], [ [ "characters = characters.new(\n # item_tfms=RandomResizedCrop(224, min_scale=0.5),\n batch_tfms=aug_transforms())\ndls = characters.dataloaders(path)", "_____no_output_____" ], [ "learn = cnn_learner(dls, resnet18, metrics=error_rate)\nlearn.fine_tune(5)", "Downloading: \"https://download.pytorch.org/models/resnet18-5c106cde.pth\" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth\n" ], [ "interp = ClassificationInterpretation.from_learner(learn)\ninterp.plot_confusion_matrix()", "_____no_output_____" ], [ "interp.plot_top_losses(3, nrows=1)", "_____no_output_____" ] ], [ [ "### Model Inference\nExport model to app.", "_____no_output_____" ] ], [ [ "learn.export()", "_____no_output_____" ], [ "path = Path()\npath.ls(file_exts='.pkl')", "_____no_output_____" ], [ "%mv export.pkl gdrive/MyDrive/Colab\\ Notebooks/characters", "_____no_output_____" ], [ "%ls gdrive/MyDrive/Colab\\ Notebooks/characters", "\u001b[0m\u001b[01;34mA\u001b[0m/ \u001b[01;34mB\u001b[0m/ export.pkl\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e761f4cac5e8f392de397d461c0af1bf4111ad8c
3,988
ipynb
Jupyter Notebook
Parallel-Computing/.ipynb_checkpoints/PSUM+RPSUM-checkpoint.ipynb
Rothdyt/codes-for-courses
a2dfea516ebc7cabef31a5169533b6da352e7ccb
[ "MIT" ]
4
2018-09-23T00:00:13.000Z
2018-11-02T22:56:35.000Z
Parallel-Computing/.ipynb_checkpoints/PSUM+RPSUM-checkpoint.ipynb
Rothdyt/codes-for-courses
a2dfea516ebc7cabef31a5169533b6da352e7ccb
[ "MIT" ]
null
null
null
Parallel-Computing/.ipynb_checkpoints/PSUM+RPSUM-checkpoint.ipynb
Rothdyt/codes-for-courses
a2dfea516ebc7cabef31a5169533b6da352e7ccb
[ "MIT" ]
null
null
null
34.678261
126
0.495486
[ [ [ "import numpy as np\nimport pandas as pd\nimport multiprocessing as mp\nfrom multiprocessing import Pool, freeze_support\nfrom itertools import repeat\nimport time\n\ndef softT(m, t, zeta):\n \"\"\"\n Soft-Threshold for solving univariate lasso.\n \"\"\"\n # m: float; m_j in the thesis\n # t: float; truncation threshold\n # zeta: float; penalty factor, the same as the `lambda` in the thesis.\n if (zeta < abs(m)):\n z = (-m - zeta) / (2 * t) if (m < 0) else (-m + zeta) / (2 * t)\n else:\n z = 0\n return z\n\ndef unpack_args(X, Y, beta, beta0, zeta, chunks = 4):\n args_iterator = []\n n, _ = X.shape\n chunk_size = np.int(n/chunks) + 1\n for i in range(chunks):\n args_iterator.append((X[i*chunk_size:(i+1)*chunk_size], Y[i*chunk_size:(i+1)*chunk_size], beta, beta0, zeta))\n return args_iterator\n\ndef calculate_object_function(X, Y, beta, beta0, zeta):\n \"\"\"\n Calculate objetc function value.\n @Input:\n X: dataframe; n * p\n Y: dataframe; n * 1\n beta: 2-d np.array; p*1\n beta0: float\n zeta: float\n @Output:\n fval: 2-d np.array; n*1\n \"\"\"\n X_by_beta_plus_beta0 = np.matmul(X, beta) + beta0\n log_likelihood = np.sum(Y * X_by_beta_plus_beta0 - np.log(1 + np.exp(X_by_beta_plus_beta0)))\n fval = (-log_likelihood) / (2 * n) + zeta * np.sum(np.abs(beta)) \n return fval\n\n\nif __name__ == \"__main__\":\n data = pd.read_csv(\"./toyexample.csv\")\n data = data.drop([\"Unnamed: 0\"], axis=1)\n X = data.drop('y', axis=1)\n Y = data.iloc[:,-1].values.reshape(-1,1)\n n, p = X.shape\n # rand_seed = np.random.RandomState(1234)\n # n, p = 1000, 100\n # X = rand_seed.beta(1,2,size=(n, p))\n # Y = rand_seed.binomial(n=1, p=0.2, size=n).reshape(-1,1)\n beta = np.ones(p).reshape(-1,1)\n beta0 = 1\n zeta = 0.1\n parallel = False\n print(\"Shape of X is n={}, p={}\".format(n,p))\n serial_start = time.time()\n serila_result = calculate_object_function(X, Y, beta, beta0, zeta)\n serial_end = time.time()\n print(\"Serial took: [{}] s\".format(serial_end - serial_start))\n if parallel:\n num_cores = mp.cpu_count()\n pool_start = time.time()\n args = unpack_args(X, Y, beta, beta0, zeta, chunks=10)\n t = time.time()\n with Pool(processes = num_cores) as pool:\n pool_result = pool.starmap(calculate_object_function, args)\n tt = time.time()\n pool_end = time.time()\n print(\"Pool took: [{}] s\".format(pool_end - pool_start))\n \n try:\n print(\"Difference betwwen two results: {}\".format((np.sum(pool_result) - np.sum(serila_result))/n))\n print(tt-t)\n except NameError:\n pass", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
e762013d8be60aea48f18d5fe9e8b00f073c5193
29,452
ipynb
Jupyter Notebook
nbgrader/docs/source/user_guide/managing_assignment_files.ipynb
dechristo/nbgrader
fa75fcb2fb7fa24451966b5ae2a44480f88d508b
[ "BSD-3-Clause-Clear" ]
2
2021-09-11T20:32:18.000Z
2021-09-11T20:32:37.000Z
nbgrader/docs/source/user_guide/managing_assignment_files.ipynb
dechristo/nbgrader
fa75fcb2fb7fa24451966b5ae2a44480f88d508b
[ "BSD-3-Clause-Clear" ]
null
null
null
nbgrader/docs/source/user_guide/managing_assignment_files.ipynb
dechristo/nbgrader
fa75fcb2fb7fa24451966b5ae2a44480f88d508b
[ "BSD-3-Clause-Clear" ]
1
2019-09-13T07:46:09.000Z
2019-09-13T07:46:09.000Z
28.87451
520
0.590486
[ [ [ "# Managing assignment files", "_____no_output_____" ], [ "Distributing assignments to students and collecting them can be a logistical nightmare. If you are running nbgrader on a server, some of this pain can be relieved by relying on nbgrader's built-in functionality for releasing and collecting assignments on the instructor's side, and fetching and submitting assignments on the student's side.", "_____no_output_____" ] ], [ [ ".. contents:: Table of Contents\n :depth: 2", "_____no_output_____" ] ], [ [ "## Setting up the exchange", "_____no_output_____" ], [ "After an assignment has been created using `nbgrader assign`, the instructor must actually release that assignment to students. If the class is being taught on a single filesystem, then the instructor may use `nbgrader release` to copy the assignment files to a shared location on the filesystem for students to then download.\n\nFirst, we must specify a few configuration options. To do this, we'll create a `nbgrader_config.py` file that will get automatically loaded when we run `nbgrader`:", "_____no_output_____" ] ], [ [ "%%file nbgrader_config.py\n\nc = get_config()\n\nc.Exchange.course_id = \"example_course\"\nc.Exchange.root = \"/tmp/exchange\"", "Writing nbgrader_config.py\n" ] ], [ [ "In the config file, we've specified the \"exchange\" directory to be `/tmp/exchange`. This directory must exist before running `nbgrader`, and it *must* be readable and writable by all users, so we'll first create it and configure the appropriate permissions:", "_____no_output_____" ] ], [ [ "%%bash\n\n# remove existing directory, so we can start fresh for demo purposes\nrm -rf /tmp/exchange\n\n# create the exchange directory, with write permissions for everyone\nmkdir /tmp/exchange\nchmod ugo+rw /tmp/exchange", "_____no_output_____" ] ], [ [ "## Releasing assignments", "_____no_output_____" ] ], [ [ ".. seealso::\n\n :doc:`creating_and_grading_assignments`\n Details on generating assignments\n\n :doc:`/command_line_tools/nbgrader-release`\n Command line options for ``nbgrader release``\n\n :doc:`/command_line_tools/nbgrader-list`\n Command line options for ``nbgrader list``\n\n :doc:`philosophy`\n More details on how the nbgrader hierarchy is structured.\n\n :doc:`/configuration/config_options`\n Details on ``nbgrader_config.py``", "_____no_output_____" ] ], [ [ "### From the formgrader", "_____no_output_____" ], [ "Using the formgrader extension, you may release assignments by clicking on the \"release\" button:\n\n![](images/manage_assignments5.png)\n\n**Note** that for the \"release\" button to become available, the `course_id` option must be set in `nbgrader_config.py`.\nOnce completed, you will see a pop-up window with log output:\n\n![](images/release_assignment.png)\n\nIf you decide you want to \"un-release\" an assignment, you may do so by clicking again on the \"release\" button (which is now an \"x\"). **However, note that students who have already downloaded the assignment will still have access to their downloaded copy. 
Unreleasing an assignment only prevents more students from downloading it.**\n\n![](images/manage_assignments6.png)", "_____no_output_____" ], [ "### From the command line", "_____no_output_____" ], [ "Now that we have the directory created, we can actually run `nbgrader release` (and as with the other nbgrader commands for instructors, this must be run from the root of the course directory):", "_____no_output_____" ] ], [ [ "%%bash\n\nnbgrader release \"ps1\"", "[ReleaseApp | INFO] Source: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/release/./ps1\n[ReleaseApp | INFO] Destination: /tmp/exchange/example_course/outbound/ps1\n[ReleaseApp | INFO] Released as: example_course ps1\n" ] ], [ [ "Finally, you can verify that the assignment has been appropriately released by running the `nbgrader list` command:", "_____no_output_____" ] ], [ [ "%%bash\n\nnbgrader list", "[ListApp | INFO] Released assignments:\n[ListApp | INFO] example_course ps1\n" ] ], [ [ "Note that there should only ever be *one* instructor who runs the `nbgrader release` and `nbgrader collect` commands (and there should probably only be one instructor -- the same instructor -- who runs `nbgrader assign`, `nbgrader autograde` and the formgrader as well). However this does not mean that only one instructor can do the grading, it just means that only one instructor manages the assignment files. Other instructors can still perform grading by accessing the notebook where the formgrader is running.", "_____no_output_____" ] ], [ [ ".. _fetching-assignments:", "_____no_output_____" ] ], [ [ "## Fetching assignments", "_____no_output_____" ] ], [ [ ".. seealso::\n\n :doc:`/command_line_tools/nbgrader-fetch`\n Command line options for ``nbgrader fetch``\n\n :doc:`/command_line_tools/nbgrader-list`\n Command line options for ``nbgrader list``\n\n :doc:`/configuration/config_options`\n Details on ``nbgrader_config.py``", "_____no_output_____" ] ], [ [ "From the student's perspective, they can list what assignments have been released, and then fetch a copy of the assignment to work on. First, we'll create a temporary directory to represent the student's home directory:", "_____no_output_____" ] ], [ [ "%%bash\n\n# remove the fake student home directory if it exists, for demo purposes\nrm -rf /tmp/student_home\n\n# create the fake student home directory and switch to it\nmkdir /tmp/student_home", "_____no_output_____" ] ], [ [ "If you are not using the default exchange directory (as is the case here), you will additionally need to provide your students with a configuration file that sets the appropriate directory for them:", "_____no_output_____" ] ], [ [ "%%file /tmp/student_home/nbgrader_config.py\n\nc = get_config()\nc.Exchange.root = '/tmp/exchange'\nc.Exchange.course_id = \"example_course\"", "Writing /tmp/student_home/nbgrader_config.py\n" ] ], [ [ "### From the notebook dashboard", "_____no_output_____" ] ], [ [ ".. warning::\n\n The \"Assignment List\" extension is not fully compatible with multiple\n courses on the same server. Please see :ref:`multiple-classes` for details.\n\nAlternatively, students can fetch assignments using the assignment list notebook server extension. 
You must have installed the extension by following the instructions :doc:`here </user_guide/installation>`, after which you should see an \"Assignments\" tab in dashboard:", "_____no_output_____" ] ], [ [ "![](images/assignment_list_released.png)", "_____no_output_____" ], [ "The image above shows that there has been one assignment released (\"ps1\") for the class \"example_course\". To get this assignment, students can click the \"Fetch\" button (analogous to running `nbgrader fetch ps1 --course example_course`. **Note: this assumes nbgrader is always run from the root of the notebook server, which on JupyterHub is most likely the root of the user's home directory.**\n\nAfter the assignment is fetched, it will appear in the list of \"Downloaded assignments\":\n\n![](images/assignment_list_downloaded.png)", "_____no_output_____" ], [ "Students can click on the name of the assignment to expand it and see all the notebooks in the assignment:\n\n![](images/assignment_list_downloaded_expanded.png)\n\nClicking on a particular notebook will open it in a new tab in the browser.", "_____no_output_____" ], [ "### From the command line", "_____no_output_____" ], [ "From the student's perspective, they can see what assignments have been released using `nbgrader list`, and passing the name of the class:", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\nnbgrader list", "[ListApp | INFO] Released assignments:\n[ListApp | INFO] example_course ps1\n" ] ], [ [ "They can then fetch an assignment for that class using `nbgrader fetch` and passing the name of the class and the name of the assignment:", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\nnbgrader fetch \"ps1\"", "[FetchApp | INFO] Source: /tmp/exchange/example_course/outbound/ps1\n[FetchApp | INFO] Destination: /private/tmp/student_home/ps1\n[FetchApp | INFO] Fetched as: example_course ps1\n" ] ], [ [ "Note that running `nbgrader fetch` copies the assignment files from the exchange directory to the local directory, and therefore can be used from any directory:", "_____no_output_____" ] ], [ [ "%%bash\n\nls -l \"/tmp/student_home/ps1\"", "total 40\n-rw-r--r-- 1 jhamrick wheel 5733 Apr 22 15:29 jupyter.png\n-rw-r--r-- 1 jhamrick wheel 8126 Apr 22 15:29 problem1.ipynb\n-rw-r--r-- 1 jhamrick wheel 2318 Apr 22 15:29 problem2.ipynb\n" ] ], [ [ "Additionally, the `nbgrader fetch` (as well as `nbgrader submit`) command also does not rely on having access to the nbgrader database -- the database is only used by instructors.", "_____no_output_____" ], [ "## Submitting assignments", "_____no_output_____" ] ], [ [ ".. seealso::\n\n :doc:`/command_line_tools/nbgrader-submit`\n Command line options for ``nbgrader fetch``\n\n :doc:`/command_line_tools/nbgrader-list`\n Command line options for ``nbgrader list``\n\n :doc:`/configuration/config_options`\n Details on ``nbgrader_config.py``", "_____no_output_____" ] ], [ [ "### From the notebook dashboard", "_____no_output_____" ] ], [ [ ".. warning::\n\n The \"Assignment List\" extension is not fully compatible with multiple\n courses on the same server. Please see :ref:`multiple-classes` for details.\n\nAlternatively, students can submit assignments using the assignment list notebook server extension. You must have installed the extension by following the instructions `here <https://github.com/jupyter/nbgrader>`__. 
Students must have also downloaded the assignments (see :ref:`fetching-assignments`).", "_____no_output_____" ] ], [ [ "After students have worked on the assignment for a while, but before submitting, they can validate that their notebooks pass the tests by clicking the \"Validate\" button (analogous to running `nbgrader validate`). If any tests fail, they will see a warning:\n\n![](images/assignment_list_validate_failed.png)", "_____no_output_____" ], [ "If there are no errors, they will see that the validation passes:\n\n![](images/assignment_list_validate_succeeded.png)", "_____no_output_____" ] ], [ [ ".. note::\n\n If the notebook has been released with hidden tests removed from the source version\n (see :ref:`autograder-tests-cell-hidden-tests`) then this validation is only done against the tests the students can\n see in the release version.", "_____no_output_____" ] ], [ [ "Once students have validated all the notebooks, they can click the \"Submit\" button to submit the assignment (analogous to running `nbgrader submit ps1 --course example_course`). Afterwards, it will show up in the list of submitted assignments (and also still in the list of downloaded assignments):\n\n![](images/assignment_list_submitted.png)", "_____no_output_____" ], [ "Students may submit an assignment as many times as they'd like. All copies of a submission will show up in the submitted assignments list, and when the instructor collects the assignments, they will get the most recent version of the assignment:\n\n![](images/assignment_list_submitted_again.png)", "_____no_output_____" ], [ "Similarly, if the ``strict`` option (in the student's ``nbgrader_config.py`` file) is set to ``True``, the students will not be able to submit an assignment with missing notebooks (for a given assignment):\n\n![](images/assignment_list_submit_error.jpg)", "_____no_output_____" ], [ "### From the command line", "_____no_output_____" ], [ "First, as a reminder, here is what the student's `nbgrader_config.py` file looks like:", "_____no_output_____" ] ], [ [ "%%bash\n\ncat /tmp/student_home/nbgrader_config.py", "\nc = get_config()\nc.Exchange.root = '/tmp/exchange'\nc.Exchange.course_id = \"example_course\"" ] ], [ [ "After working on an assignment, the student can submit their version for grading using `nbgrader submit` and passing the name of the assignment and the name of the class:", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\nnbgrader submit \"ps1\"", "[SubmitApp | INFO] Source: /private/tmp/student_home/ps1\n[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:40.397476 UTC\n[SubmitApp | INFO] Submitted as: example_course ps1 2018-04-22 14:29:40.397476 UTC\n" ] ], [ [ "Note that \"the name of the assignment\" really corresponds to \"the name of a folder\". 
It just happens that, in our current directory, there is a folder called \"ps1\":", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\nls -l \"/tmp/student_home\"", "total 8\ndrwxr-xr-x 3 jhamrick wheel 96 Apr 22 15:29 Library\n-rw-r--r-- 1 jhamrick wheel 91 Apr 22 15:29 nbgrader_config.py\ndrwxr-xr-x 5 jhamrick wheel 160 Apr 22 15:29 ps1\n" ] ], [ [ "Students can see what assignments they have submitted using `nbgrader list --inbound`:", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\nnbgrader list --inbound", "[ListApp | INFO] Submitted assignments:\n[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC\n" ] ], [ [ "Importantly, students can run `nbgrader submit` as many times as they want, and all submitted copies of the assignment will be preserved:", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\nnbgrader submit \"ps1\"", "[SubmitApp | INFO] Source: /private/tmp/student_home/ps1\n[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:43.070290 UTC\n[SubmitApp | INFO] Submitted as: example_course ps1 2018-04-22 14:29:43.070290 UTC\n" ] ], [ [ "We can see all versions that have been submitted by again running `nbgrader list --inbound`:", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\nnbgrader list --inbound", "[ListApp | INFO] Submitted assignments:\n[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC\n[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:43.070290 UTC\n" ] ], [ [ "Note that the `nbgrader submit` (as well as `nbgrader fetch`) command also does not rely on having access to the nbgrader database -- the database is only used by instructors.", "_____no_output_____" ], [ "``nbgrader`` requires that the submitted notebook names match the released notebook names for each assignment. For example if a student were to rename one of the given assignment notebooks:", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\n# assume the student renamed the assignment file\nmv ps1/problem1.ipynb ps1/myproblem1.ipynb\n\nnbgrader submit \"ps1\"", "[SubmitApp | INFO] Source: /private/tmp/student_home/ps1\n[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:46.167901 UTC\n[SubmitApp | WARNING] Possible missing notebooks and/or extra notebooks submitted for assignment ps1:\n Expected:\n \tproblem1.ipynb: MISSING\n \tproblem2.ipynb: FOUND\n Submitted:\n \tmyproblem1.ipynb: EXTRA\n \tproblem2.ipynb: OK\n[SubmitApp | INFO] Submitted as: example_course ps1 2018-04-22 14:29:46.167901 UTC\n" ] ], [ [ "By default this assignment will still be submitted however only the \"FOUND\" notebooks (for the given assignment) can be ``autograded`` and will appear on the ``formgrade`` extension. 
\"EXTRA\" notebooks will not be ``autograded`` and will not appear on the ``formgrade`` extension.", "_____no_output_____" ], [ "To ensure that students cannot submit an assignment with missing notebooks (for a given assignment) the ``strict`` option, in the student's ``nbgrader_config.py`` file, can be set to ``True``:", "_____no_output_____" ] ], [ [ "%%file /tmp/student_home/nbgrader_config.py\n\nc = get_config()\nc.Exchange.root = '/tmp/exchange'\nc.Exchange.course_id = \"example_course\"\nc.ExchangeSubmit.strict = True", "Overwriting /tmp/student_home/nbgrader_config.py\n" ], [ "%%bash\nexport HOME=/tmp/student_home && cd $HOME\n\nnbgrader submit \"ps1\"", "[SubmitApp | INFO] Source: /private/tmp/student_home/ps1\n[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:47.497419 UTC\n[SubmitApp | CRITICAL] Assignment ps1 not submitted. There are missing notebooks for the submission:\n Expected:\n \tproblem1.ipynb: MISSING\n \tproblem2.ipynb: FOUND\n Submitted:\n \tmyproblem1.ipynb: EXTRA\n \tproblem2.ipynb: OK\n[SubmitApp | ERROR] nbgrader submit failed\n" ] ], [ [ "## Collecting assignments", "_____no_output_____" ] ], [ [ ".. seealso::\n\n :doc:`creating_and_grading_assignments`\n Details on grading assignments after they have been collected\n\n :doc:`/command_line_tools/nbgrader-collect`\n Command line options for ``nbgrader fetch``\n\n :doc:`/command_line_tools/nbgrader-list`\n Command line options for ``nbgrader list``\n\n :doc:`philosophy`\n More details on how the nbgrader hierarchy is structured.\n\n :doc:`/configuration/config_options`\n Details on ``nbgrader_config.py``", "_____no_output_____" ] ], [ [ "First, as a reminder, here is what the instructor's `nbgrader_config.py` file looks like:", "_____no_output_____" ] ], [ [ "%%bash\n\ncat nbgrader_config.py", "\nc = get_config()\n\nc.Exchange.course_id = \"example_course\"\nc.Exchange.root = \"/tmp/exchange\"" ] ], [ [ "### From the formgrader", "_____no_output_____" ], [ "From the formgrader extension, we can collect submissions by clicking on the \"collect\" button:\n\n![](images/manage_assignments7.png)\n\nAs with releasing, this will display a pop-up window when the operation is complete, telling you how many submissions were collected:\n\n![](images/collect_assignment.png)\n\nFrom here, you can click on the number of submissions to grade the collected submissions:\n\n![](images/manage_assignments8.png)", "_____no_output_____" ], [ "### From the command line", "_____no_output_____" ], [ "After students have submitted their assignments, the instructor can view what has been submitted with `nbgrader list --inbound`:", "_____no_output_____" ] ], [ [ "%%bash\n\nnbgrader list --inbound", "[ListApp | INFO] Submitted assignments:\n[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC\n[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:43.070290 UTC\n[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:46.167901 UTC\n" ] ], [ [ "The instructor can then collect all submitted assignments with `nbgrader collect` and passing the name of the assignment (and as with the other nbgrader commands for instructors, this must be run from the root of the course directory):", "_____no_output_____" ] ], [ [ "%%bash\n\nnbgrader collect \"ps1\"", "[CollectApp | INFO] Processing 1 submissions of 'ps1' for course 'example_course'\n[CollectApp | INFO] Collecting submission: jhamrick ps1\n" ] ], [ [ "This will copy the student submissions to the `submitted` folder 
in a way that is automatically compatible with `nbgrader autograde`:", "_____no_output_____" ] ], [ [ "%%bash\n\nls -l submitted", "total 0\ndrwxr-xr-x 3 jhamrick staff 96 May 31 2017 bitdiddle\ndrwxr-xr-x 3 jhamrick staff 96 May 31 2017 hacker\ndrwxr-xr-x 3 jhamrick staff 96 Apr 22 15:29 jhamrick\n" ] ], [ [ "Note that there should only ever be *one* instructor who runs the `nbgrader release` and `nbgrader collect` commands (and there should probably only be one instructor -- the same instructor -- who runs `nbgrader assign`, `nbgrader autograde` and the formgrader as well). However this does not mean that only one instructor can do the grading, it just means that only one instructor manages the assignment files. Other instructors can still perform grading by accessing the notebook server running the formgrader.", "_____no_output_____" ] ] ]
[ "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e762058c6a5eaa47ca6c32c848e7e6a5b5a0a93c
10,818
ipynb
Jupyter Notebook
IIT Mandi/3rd Week/IITMANDI.Assignment2(Week 3)-checkpoint.ipynb
thechiragthakur/Data-Science-Using-Python
d30a282b3b40a5d499f9a6a27aa8dd33c537312f
[ "MIT" ]
null
null
null
IIT Mandi/3rd Week/IITMANDI.Assignment2(Week 3)-checkpoint.ipynb
thechiragthakur/Data-Science-Using-Python
d30a282b3b40a5d499f9a6a27aa8dd33c537312f
[ "MIT" ]
null
null
null
IIT Mandi/3rd Week/IITMANDI.Assignment2(Week 3)-checkpoint.ipynb
thechiragthakur/Data-Science-Using-Python
d30a282b3b40a5d499f9a6a27aa8dd33c537312f
[ "MIT" ]
null
null
null
29.237838
182
0.557219
[ [ [ "import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.metrics import accuracy_score,confusion_matrix\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "import pandas as pd\n#imported the dataset given to us\npid=pd.read_csv(\"C:\\\\Users\\\\Micontroller Lab N16\\\\IIT MANDI\\\\3rd Week\\\\pima-indians-diabetes.csv\",sep=',')\n\n#made a copy of the original dataset\npid1=pid.copy()\nprint(pid1)", "_____no_output_____" ], [ "pid.columns\n#created a list of all the columns present in the dataset\npid_col=list(pid.columns)\npid2=pid.copy() #made a copy of original dataset without attribute class\npid_col\npid_col1=pid_col.copy() #we do not want to bring changes in the class column so we removed it from the copied dataset\npid_col1.remove('class')\npid2.drop([\"class\"], axis = 1, inplace = True)\nprint(pid_col1)\nprint('\\n\\n')\nprint(pid2)", "_____no_output_____" ], [ "X1 = pid2 # X denotes the input functions and here class defines whether the person is ill or not\nprint(X1)\ny1 = pid['class'] #y denotes the output functions\nprint(y1)", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split #As given we are assigning 70% of data for training and 30% for testing\nX1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, train_size = 0.7,random_state = 42)\n\nprint(X1_train)\nprint(y1_train)\nprint(X1_test)\nprint(y1_test)", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.metrics import accuracy_score,confusion_matrix\nneighbors=[1,3,5,7,9,11,13,15,17,19,21]\ntrain_accuracy = np.empty(len(neighbors))\ntest_accuracy = np.empty(len(neighbors))\nacc=[]\n \n# Loop over K values\nfor i, k in enumerate(neighbors):\n knn = KNeighborsClassifier(n_neighbors=k)\n knn.fit(X1_train, y1_train)\n print('Predicted Outcomes for neighbours =',k,'are', knn.predict(X1_test))\n print('\\n')\n \n print('Accuracy = ',knn.score(X1_test, y1_test))\n if ((knn.score(X1_test, y1_test))>=0):\n acc.append(knn.score(X1_test, y1_test))\n print('\\n')\n \n matrix = confusion_matrix(y1_test,knn.predict(X1_test))\n print('Confusion Matrix = ',matrix)\n print('\\n\\n')\n \n \n # Compute traning and test data accuracy\n train_accuracy[i] = knn.score(X1_train, y1_train)\n test_accuracy[i] = knn.score(X1_test, y1_test)\n\nprint(acc)\nprint('\\n')\nprint('maximum accuracy is =', max(acc)*100)\n\n \n# Generate plot\nplt.plot(neighbors, test_accuracy, label = 'Testing dataset Accuracy')\nplt.plot(neighbors, train_accuracy, label = 'Training dataset Accuracy')\n\n\nplt.legend()\nplt.xlabel('n_neighbors')\nplt.ylabel('Accuracy')\nplt.show()\n\n", "_____no_output_____" ], [ "X2 = pid2 # X denotes the input functions and here class defines whether the person is ill or not\nprint(X1)\ny2 = pid['class'] #y denotes the output functions\nprint(y1)", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split #As given we are assigning 70% of data for training and 30% for testing\nX2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, train_size = 0.7,random_state = 42)\n\nprint(X2_train)\nprint(y2_train)\nprint(X2_test)\nprint(y2_test)", "_____no_output_____" ], [ "model=GaussianNB()\nmodel.fit(X2_train, y2_train)", "_____no_output_____" ], [ "y_pred=model.predict(X2_test)\ny_pred", "_____no_output_____" ], [ "accuracy=accuracy_score(y2_test, 
y_pred)*100\nprint('accuracy = ',accuracy,'%')\nprint('\\n')\nmatrix=confusion_matrix(y2_test,y_pred)\nprint('Confusion Matrix = ',matrix)", "_____no_output_____" ] ], [ [ "Common way of speeding up a machine learning algorithm is by using Principal Component Analysis (PCA). \nIf your learning algorithm is too slow because the input dimension is too high, then using PCA to speed it up can be a reasonable choice.", "_____no_output_____" ] ], [ [ "X3 = pid2 # X denotes the input functions and here class defines whether the person is ill or not\nprint(X1)\ny3 = pid['class'] #y denotes the output functions\nprint(y1)", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split #As given we are assigning 70% of data for training and 30% for testing\nX3_train, X3_test, y3_train, y3_test = train_test_split(X3, y3, train_size = 0.7,random_state = 42)\n\nprint(X3_train)\nprint('\\n\\n')\nprint(y3_train)\nprint('\\n\\n')\nprint(X3_test)\nprint('\\n\\n')\nprint(y3_test)", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\n\n# Fit on training set only.\nscaler.fit(X3_train)\nprint(X3_train)\n\n# Apply transform to both the training set and the test set.\nX3_train = scaler.transform(X3_train) #expecting 2D array so only X values can be passed\nX3_test = scaler.transform(X3_test)\n", "_____no_output_____" ], [ " \n\n#the code has .95 for the number of components parameter. It means that scikit-learn choose the minimum number of principal components such that 95% of the variance is retained", "_____no_output_____" ], [ " #number of components PCA choose after fitting the model ", "_____no_output_____" ], [ "from sklearn.decomposition import PCA\n# Make an instance of the Model\ndimension = [0.25,0.40,0.60,0.70,0.80,0.90,0.95,0.99]\naccu=[]\nfor z in enumerate(dimension):\n \n pca = PCA(z) \n\n pca.fit(X3_train)\n pca.n_components_ \n\n X3_train = pca.transform(X3_train)\n X3_test= pca.transform(X3_test)\n\n\n from sklearn.neighbors import KNeighborsClassifier\n from sklearn.metrics import accuracy_score,confusion_matrix\n neighbors=[1,3,5,7,9,11,13,15,17,19,21]\n train_accuracy = np.empty(len(neighbors))\n test_accuracy = np.empty(len(neighbors))\n acc=[]\n\n # Loop over K values\n for i, k in enumerate(neighbors):\n knn = KNeighborsClassifier(n_neighbors=k)\n knn.fit(X3_train, y3_train)\n print('Predicted Outcomes for neighbours =',k,'are', knn.predict(X3_test))\n print('\\n')\n\n print('Accuracy = ',knn.score(X3_test, y3_test))\n if ((knn.score(X3_test, y3_test))>=0):\n acc.append(knn.score(X3_test, y3_test)) \n\n print('\\n')\n\n matrix = confusion_matrix(y3_test,knn.predict(X3_test))\n print('Confusion Matrix = ',matrix)\n print('\\n\\n')\n\n\n # Compute traning and test data accuracy\n train_accuracy[i] = knn.score(X3_train, y3_train)\n test_accuracy[i] = knn.score(X3_test, y3_test)\n\n print(acc)\n print('\\n')\n print('maximum accuracy is =', max(acc)*100)\n\n \n# Generate plot\n#plt.plot(neighbors, test_accuracy, label = 'Testing dataset Accuracy')\n#plt.plot(neighbors, train_accuracy, label = 'Training dataset Accuracy')\n\n\n#plt.legend()\n#plt.xlabel('n_neighbors')\n#plt.ylabel('Accuracy')\n#plt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e76214e03dd014c59316f5fe36e4dd82a4d1ade5
13,670
ipynb
Jupyter Notebook
NLP/Attention/2/C4_W2_lecture_notebook_Transformer_Decoder.ipynb
verneh/DataSci
cbbecbd780368c2a4567aaf9ec8d66f7c7cdfa06
[ "MIT" ]
362
2020-10-08T07:34:25.000Z
2022-03-30T05:11:30.000Z
NLP/Attention/2/C4_W2_lecture_notebook_Transformer_Decoder.ipynb
verneh/DataSci
cbbecbd780368c2a4567aaf9ec8d66f7c7cdfa06
[ "MIT" ]
7
2020-07-07T16:10:23.000Z
2021-06-04T08:17:55.000Z
NLP/Attention/2/C4_W2_lecture_notebook_Transformer_Decoder.ipynb
verneh/DataSci
cbbecbd780368c2a4567aaf9ec8d66f7c7cdfa06
[ "MIT" ]
238
2020-10-08T12:01:31.000Z
2022-03-25T08:10:42.000Z
44.096774
939
0.599415
[ [ [ "# The Transformer Decoder: Ungraded Lab Notebook\n\nIn this notebook, you'll explore the transformer decoder and how to implement it with Trax. \n\n## Background\n\nIn the last lecture notebook, you saw how to translate the mathematics of attention into NumPy code. Here, you'll see how multi-head causal attention fits into a GPT-2 transformer decoder, and how to build one with Trax layers. In the assignment notebook, you'll implement causal attention from scratch, but here, you'll exploit the handy-dandy `tl.CausalAttention()` layer.\n\nThe schematic below illustrates the components and flow of a transformer decoder. Note that while the algorithm diagram flows from the bottom to the top, the overview and subsequent Trax layer codes are top-down.\n\n<img src=\"transformer_decoder_lnb_figs/C4_W2_L6_transformer-decoder_S01_transformer-decoder.png\" width=\"1000\"/>", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "import sys\nimport os\n\nimport time\nimport numpy as np\nimport gin\n\nimport textwrap\nwrapper = textwrap.TextWrapper(width=70)\n\nimport trax\nfrom trax import layers as tl\nfrom trax.fastmath import numpy as jnp\n\n# to print the entire np array\nnp.set_printoptions(threshold=sys.maxsize)", "INFO:tensorflow:tokens_length=568 inputs_length=512 targets_length=114 noise_density=0.15 mean_noise_span_length=3.0 \n" ] ], [ [ "## Sentence gets embedded, add positional encoding\nEmbed the words, then create vectors representing each word's position in each sentence $\\in \\{ 0, 1, 2, \\ldots , K\\}$ = `range(max_len)`, where `max_len` = $K+1$)", "_____no_output_____" ] ], [ [ "def PositionalEncoder(vocab_size, d_model, dropout, max_len, mode):\n \"\"\"Returns a list of layers that: \n 1. takes a block of text as input, \n 2. embeds the words in that text, and \n 3. adds positional encoding, \n i.e. associates a number in range(max_len) with \n each word in each sentence of embedded input text \n \n The input is a list of tokenized blocks of text\n \n Args:\n vocab_size (int): vocab size.\n d_model (int): depth of embedding.\n dropout (float): dropout rate (how much to drop out).\n max_len (int): maximum symbol length for positional encoding.\n mode (str): 'train' or 'eval'.\n \"\"\"\n # Embedding inputs and positional encoder\n return [ \n # Add embedding layer of dimension (vocab_size, d_model)\n tl.Embedding(vocab_size, d_model), \n # Use dropout with rate and mode specified\n tl.Dropout(rate=dropout, mode=mode), \n # Add positional encoding layer with maximum input length and mode specified\n tl.PositionalEncoding(max_len=max_len, mode=mode)] ", "_____no_output_____" ] ], [ [ "## Multi-head causal attention\n\nThe layers and array dimensions involved in multi-head causal attention (which looks at previous words in the input text) are summarized in the figure below: \n\n<img src=\"transformer_decoder_lnb_figs/C4_W2_L5_multi-head-attention_S05_multi-head-attention-concatenation_stripped.png\" width=\"1000\"/>\n\n`tl.CausalAttention()` does all of this for you! You might be wondering, though, whether you need to pass in your input text 3 times, since for causal attention, the queries Q, keys K, and values V all come from the same source. Fortunately, `tl.CausalAttention()` handles this as well by making use of the [`tl.Branch()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#module-trax.layers.combinators) combinator layer. In general, each branch within a `tl.Branch()` layer performs parallel operations on copies of the layer's inputs. 
For causal attention, each branch (representing Q, K, and V) applies a linear transformation (i.e. a dense layer without a subsequent activation) to its copy of the input, then splits that result into heads. You can see the syntax for this in the screenshot from the `trax.layers.attention.py` [source code](https://github.com/google/trax/blob/master/trax/layers/attention.py) below: \n\n<img src=\"transformer_decoder_lnb_figs/use-of-tl-Branch-in-tl-CausalAttention.png\" width=\"500\"/>", "_____no_output_____" ], [ "## Feed-forward layer \n* Typically ends with a ReLU activation, but we'll leave open the possibility of a different activation\n* Most of the parameters are here", "_____no_output_____" ] ], [ [ "def FeedForward(d_model, d_ff, dropout, mode, ff_activation):\n \"\"\"Returns a list of layers that implements a feed-forward block.\n\n The input is an activation tensor.\n\n Args:\n d_model (int): depth of embedding.\n d_ff (int): depth of feed-forward layer.\n dropout (float): dropout rate (how much to drop out).\n mode (str): 'train' or 'eval'.\n ff_activation (function): the non-linearity in feed-forward layer.\n\n Returns:\n list: list of trax.layers.combinators.Serial that maps an activation tensor to an activation tensor.\n \"\"\"\n \n # Create feed-forward block (list) with two dense layers with dropout and input normalized\n return [ \n # Normalize layer inputs\n tl.LayerNorm(), \n # Add first feed forward (dense) layer (don't forget to set the correct value for n_units)\n tl.Dense(d_ff), \n # Add activation function passed in as a parameter (you need to call it!)\n ff_activation(), # Generally ReLU\n # Add dropout with rate and mode specified (i.e., don't use dropout during evaluation)\n tl.Dropout(rate=dropout, mode=mode), \n # Add second feed forward layer (don't forget to set the correct value for n_units)\n tl.Dense(d_model), \n # Add dropout with rate and mode specified (i.e., don't use dropout during evaluation)\n tl.Dropout(rate=dropout, mode=mode) \n ]", "_____no_output_____" ] ], [ [ "## Decoder block\nHere, we return a list containing two residual blocks. The first wraps around the causal attention layer, whose inputs are normalized and to which we apply dropout regulation. The second wraps around the feed-forward layer. You may notice that the second call to `tl.Residual()` doesn't call a normalization layer before calling the feed-forward layer. This is because the normalization layer is included in the feed-forward layer.", "_____no_output_____" ] ], [ [ "def DecoderBlock(d_model, d_ff, n_heads,\n dropout, mode, ff_activation):\n \"\"\"Returns a list of layers that implements a Transformer decoder block.\n\n The input is an activation tensor.\n\n Args:\n d_model (int): depth of embedding.\n d_ff (int): depth of feed-forward layer.\n n_heads (int): number of attention heads.\n dropout (float): dropout rate (how much to drop out).\n mode (str): 'train' or 'eval'.\n ff_activation (function): the non-linearity in feed-forward layer.\n\n Returns:\n list: list of trax.layers.combinators.Serial that maps an activation tensor to an activation tensor.\n \"\"\"\n \n # Add list of two Residual blocks: the attention with normalization and dropout and feed-forward blocks\n return [\n tl.Residual(\n # Normalize layer input\n tl.LayerNorm(), \n # Add causal attention \n tl.CausalAttention(d_feature, n_heads=n_heads, dropout=dropout, mode=mode) \n ),\n tl.Residual(\n # Add feed-forward block\n # We don't need to normalize the layer inputs here. 
The feed-forward block takes care of that for us.\n FeedForward(d_model, d_ff, dropout, mode, ff_activation)\n ),\n ]", "_____no_output_____" ] ], [ [ "## The transformer decoder: putting it all together\n## A.k.a. repeat N times, dense layer and softmax for output", "_____no_output_____" ] ], [ [ "def TransformerLM(vocab_size=33300,\n d_model=512,\n d_ff=2048,\n n_layers=6,\n n_heads=8,\n dropout=0.1,\n max_len=4096,\n mode='train',\n ff_activation=tl.Relu):\n \"\"\"Returns a Transformer language model.\n\n The input to the model is a tensor of tokens. (This model uses only the\n decoder part of the overall Transformer.)\n\n Args:\n vocab_size (int): vocab size.\n d_model (int): depth of embedding.\n d_ff (int): depth of feed-forward layer.\n n_layers (int): number of decoder layers.\n n_heads (int): number of attention heads.\n dropout (float): dropout rate (how much to drop out).\n max_len (int): maximum symbol length for positional encoding.\n mode (str): 'train', 'eval' or 'predict', predict mode is for fast inference.\n ff_activation (function): the non-linearity in feed-forward layer.\n\n Returns:\n trax.layers.combinators.Serial: A Transformer language model as a layer that maps from a tensor of tokens\n to activations over a vocab set.\n \"\"\"\n \n # Create stack (list) of decoder blocks with n_layers with necessary parameters\n decoder_blocks = [ \n DecoderBlock(d_model, d_ff, n_heads, dropout, mode, ff_activation) for _ in range(n_layers)] \n\n # Create the complete model as written in the figure\n return tl.Serial(\n # Use teacher forcing (feed output of previous step to current step)\n tl.ShiftRight(mode=mode), \n # Add embedding inputs and positional encoder\n PositionalEncoder(vocab_size, d_model, dropout, max_len, mode),\n # Add decoder blocks\n decoder_blocks, \n # Normalize layer\n tl.LayerNorm(), \n\n # Add dense layer of vocab_size (since need to select a word to translate to)\n # (a.k.a., logits layer. Note: activation already set by ff_activation)\n tl.Dense(vocab_size), \n # Get probabilities with Logsoftmax\n tl.LogSoftmax() \n )", "_____no_output_____" ] ], [ [ "## Concluding remarks\n\nIn this week's assignment, you'll see how to train a transformer decoder on the [cnn_dailymail](https://www.tensorflow.org/datasets/catalog/cnn_dailymail) dataset, available from TensorFlow Datasets (part of TensorFlow Data Services). Because training such a model from scratch is time-intensive, you'll use a pre-trained model to summarize documents later in the assignment. Due to time and storage concerns, we will also not train the decoder on a different summarization dataset in this lab. If you have the time and space, we encourage you to explore the other [summarization](https://www.tensorflow.org/datasets/catalog/overview#summarization) datasets at TensorFlow Datasets. Which of them might suit your purposes better than the `cnn_dailymail` dataset? Where else can you find datasets for text summarization models?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7622d6e1d757ad944c3fe0a616c99700e5f1ade
8,547
ipynb
Jupyter Notebook
pipeline/misc/rds_to_vcf.ipynb
floutt/xqtl-pipeline
5ee370b6ff783166f0715d0b8c7c66497fc98a93
[ "MIT" ]
null
null
null
pipeline/misc/rds_to_vcf.ipynb
floutt/xqtl-pipeline
5ee370b6ff783166f0715d0b8c7c66497fc98a93
[ "MIT" ]
null
null
null
pipeline/misc/rds_to_vcf.ipynb
floutt/xqtl-pipeline
5ee370b6ff783166f0715d0b8c7c66497fc98a93
[ "MIT" ]
null
null
null
39.569444
202
0.507781
[ [ [ "# Summary statistics in VCF format\n\nmodified from the create_vcf of mrcieu/gwasvcf package to transform the mash output matrixs from the rds format into a vcf file, with a effect size = to the coef and the se = 1, named as EF:SE.\n\nInput:\na collection of gene-level rds file, each file is a matrix of mash output, with colnames = studies, rownames = snps, snps shall be in the form of chr:pos_alt_ref,\nA list of aforementioned MASH output\n\noutput:\nA collection of gene-level vcf output:vcf file and corresponding index\na list of aforementioned vcf\n\nRequired R packages:\n dplyr\n readr\n VariantAnnotation\n \n\nThe output format of this workflow is following this specification https://github.com/MRCIEU/gwas-vcf-specification, with each study stands for a column", "_____no_output_____" ] ], [ [ "[global]\nimport glob\n# single column file each line is the data filename\nparameter: analysis_units = path\n# Path to data directory\nparameter: data_dir = \"/\"\n# data file suffix\nparameter: data_suffix = \"\"\n# Path to work directory where output locates\nparameter: wd = path(\"./output\")\n# An identifier for your run of analysis\nparameter: name = \"\"\n\nregions = [x.replace(\"\\\"\", \"\" ).strip().split() for x in open(analysis_units).readlines() if x.strip() and not x.strip().startswith('#')]\ngenes = regions\n# Containers that contains the necessary packages\nparameter: container = 'gaow/twas'\n", "_____no_output_____" ], [ "[rds_to_vcf_1]\ninput: genes, group_by = 1\noutput: vcf = f'{wd:a}/mash_vcf/{_input:bn}.vcf.bgz'\ntask: trunk_workers = 1, walltime = '1h', trunk_size = 1, mem = '10G', cores = 1, tags = f'{_output:bn}'\nR: expand = '$[ ]', stdout = f\"{_output[0]:nn}.stdout\", stderr = f\"{_output[0]:nn}.stderr\"\n library(\"dplyr\")\n library(\"stringr\")\n library(\"readr\")\n library(\"purrr\")\n ## Define a wrapper, modified from the gwasvcf packages, to create the vcf of needed.\n \n create_vcf = function (chrom, pos, nea, ea, snp = NULL, ea_af = NULL, effect = NULL, \n se = NULL, pval = NULL, n = NULL, ncase = NULL, name = NULL) \n {\n stopifnot(length(chrom) == length(pos))\n if (is.null(snp)) {\n snp <- paste0(chrom, \":\", pos)\n }\n snp <- paste0(chrom, \":\", pos)\n nsnp <- length(chrom)\n gen <- list()\n ## Setupt data content for each sample column\n if (!is.null(ea_af)) \n gen[[\"AF\"]] <- matrix(ea_af, nsnp)\n if (!is.null(effect)) \n gen[[\"ES\"]] <- matrix(effect, nsnp)\n if (!is.null(se)) \n gen[[\"SE\"]] <- matrix(se, nsnp)\n if (!is.null(pval)) \n gen[[\"LP\"]] <- matrix(-log10(pval), nsnp)\n if (!is.null(n)) \n gen[[\"SS\"]] <- matrix(n, nsnp)\n if (!is.null(ncase)) \n gen[[\"NC\"]] <- matrix(ncase, nsnp)\n gen <- S4Vectors::SimpleList(gen)\n \n ## Setup snps info for the fix columns\n gr <- GenomicRanges::GRanges(chrom, IRanges::IRanges(start = pos, \n end = pos + pmax(nchar(nea), nchar(ea)) - 1, names = snp))\n ## Setup meta informations\n coldata <- S4Vectors::DataFrame(Studies = name, row.names = name)\n hdr <- VariantAnnotation::VCFHeader(header = IRanges::DataFrameList(fileformat = S4Vectors::DataFrame(Value = \"VCFv4.2\", \n row.names = \"fileformat\")), sample = name)\n VariantAnnotation::geno(hdr) <- S4Vectors::DataFrame(Number = c(\"A\", \n \"A\", \"A\", \"A\", \"A\", \"A\"), Type = c(\"Float\", \"Float\", \n \"Float\", \"Float\", \"Float\", \"Float\"), Description = c(\"Effect size estimate relative to the alternative allele\", \n \"Standard error of effect size estimate\", \"-log10 p-value for effect estimate\", \n \"Alternate allele 
frequency in the association study\", \n \"Sample size used to estimate genetic effect\", \"Number of cases used to estimate genetic effect\"), \n row.names = c(\"ES\", \"SE\", \"LP\", \"AF\", \"SS\", \"NC\"))\n ## Save only the meta information in the sample columns \n VariantAnnotation::geno(hdr) <- subset(VariantAnnotation::geno(hdr), \n rownames(VariantAnnotation::geno(hdr)) %in% names(gen))\n ## Save VCF values\n vcf <- VariantAnnotation::VCF(rowRanges = gr, colData = coldata, \n exptData = list(header = hdr), geno = gen)\n VariantAnnotation::alt(vcf) <- Biostrings::DNAStringSetList(as.list(ea))\n VariantAnnotation::ref(vcf) <- Biostrings::DNAStringSet(nea)\n ## Write fixed values\n VariantAnnotation::fixed(vcf)$FILTER <- \"PASS\"\n return(sort(vcf))\n }\n \n input = readRDS($[_input:r])\n input_effect = input$PosteriorMean\n if(is.null(input$PosteriorSD)){\n input$PosteriorSD = matrix(1,nrow = nrow(input_effect),ncol = ncol(input_effect) )\n }\n input_se = input$PosteriorSD\n df = tibble(snps = input$snps)\n df = df%>%mutate( chr = map_dbl(snps,~str_remove(read.table(text = .x,sep = \":\",as.is = T)$V1, \"chr\")%>%as.numeric),\n pos_alt_ref = map_chr(snps,~read.table(text = .x,sep = \":\",as.is = TRUE)$V2),\n pos = map_dbl(pos_alt_ref,~read.table(text = .x,sep = \"_\",as.is = TRUE)$V1),\n alt = map_chr(pos_alt_ref,~read.table(text = .x,sep = \"_\",as.is = TRUE, colClass = \"character\")$V2),\n ref = map_chr(pos_alt_ref,~read.table(text = .x,sep = \"_\",as.is = TRUE, colClass = \"character\")$V3))\n \n \n vcf = create_vcf(\n chrom = df$chr,\n pos = df$pos,\n ea = df$alt,\n nea = df$ref,\n effect = input_effect ,\n se = input_se,\n name = colnames(input_effect))\n \n VariantAnnotation::writeVcf(vcf,$[_output:nr],index = TRUE)\n ", "_____no_output_____" ], [ "[rds_to_vcf_2]\ninput: group_by = \"all\"\noutput: vcf_list = f'{_input[0]:d}/vcf_output_list.txt'\nbash: expand = '${ }', stdout = f\"{_output[0]:nn}.stdout\", stderr = f\"{_output[0]:nn}.stderr\"\n cd ${_input[0]:d}\n ls *.vcf.bgz > ${_output}", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
e76248a001c44c608267a6dd6fff4474e2da8198
102,988
ipynb
Jupyter Notebook
cdl/cbpdndl_parcns_clr.ipynb
bwohlberg/sporco-notebooks
9248e7d5b3ec24d34c6d39a6b38295d5c062dc75
[ "BSD-3-Clause" ]
16
2018-03-26T19:55:40.000Z
2022-03-29T10:17:33.000Z
cdl/cbpdndl_parcns_clr.ipynb
bwohlberg/sporco-notebooks
9248e7d5b3ec24d34c6d39a6b38295d5c062dc75
[ "BSD-3-Clause" ]
2
2018-05-01T03:28:58.000Z
2018-05-06T09:59:57.000Z
cdl/cbpdndl_parcns_clr.ipynb
bwohlberg/sporco-notebooks
9248e7d5b3ec24d34c6d39a6b38295d5c062dc75
[ "BSD-3-Clause" ]
3
2019-02-02T18:48:07.000Z
2020-10-29T09:29:30.000Z
62.874237
58,736
0.794122
[ [ [ "Convolutional Dictionary Learning\n=================================\n\nThis example demonstrates the use of [prlcnscdl.ConvBPDNDictLearn_Consensus](http://sporco.rtfd.org/en/latest/modules/sporco.dictlrn.prlcnscdl.html#sporco.dictlrn.prlcnscdl.ConvBPDNDictLearn_Consensus) for learning a convolutional dictionary from a set of colour training images [[51]](http://sporco.rtfd.org/en/latest/zreferences.html#id54). The dictionary learning algorithm is based on the ADMM consensus dictionary update [[1]](http://sporco.rtfd.org/en/latest/zreferences.html#id44) [[26]](http://sporco.rtfd.org/en/latest/zreferences.html#id25).", "_____no_output_____" ] ], [ [ "from __future__ import print_function\nfrom builtins import input\n\nimport pyfftw # See https://github.com/pyFFTW/pyFFTW/issues/40\nimport numpy as np\n\nfrom sporco.dictlrn import prlcnscdl\nfrom sporco import util\nfrom sporco import signal\nfrom sporco import plot\nplot.config_notebook_plotting()", "_____no_output_____" ] ], [ [ "Load training images.", "_____no_output_____" ] ], [ [ "exim = util.ExampleImages(scaled=True, zoom=0.25)\nS1 = exim.image('barbara.png', idxexp=np.s_[10:522, 100:612])\nS2 = exim.image('kodim23.png', idxexp=np.s_[:, 60:572])\nS3 = exim.image('monarch.png', idxexp=np.s_[:, 160:672])\nS4 = exim.image('sail.png', idxexp=np.s_[:, 210:722])\nS5 = exim.image('tulips.png', idxexp=np.s_[:, 30:542])\nS = np.stack((S1, S2, S3, S4, S5), axis=3)", "_____no_output_____" ] ], [ [ "Highpass filter training images.", "_____no_output_____" ] ], [ [ "npd = 16\nfltlmbd = 5\nsl, sh = signal.tikhonov_filter(S, fltlmbd, npd)", "_____no_output_____" ] ], [ [ "Construct initial dictionary.", "_____no_output_____" ] ], [ [ "np.random.seed(12345)\nD0 = np.random.randn(8, 8, 3, 64)", "_____no_output_____" ] ], [ [ "Set regularization parameter and options for dictionary learning solver.", "_____no_output_____" ] ], [ [ "lmbda = 0.2\nopt = prlcnscdl.ConvBPDNDictLearn_Consensus.Options({'Verbose': True,\n 'MaxMainIter': 200,\n 'CBPDN': {'rho': 50.0*lmbda + 0.5},\n 'CCMOD': {'rho': 1.0, 'ZeroMean': True}})", "_____no_output_____" ] ], [ [ "Create solver object and solve.", "_____no_output_____" ] ], [ [ "d = prlcnscdl.ConvBPDNDictLearn_Consensus(D0, sh, lmbda, opt)\nD1 = d.solve()\nprint(\"ConvBPDNDictLearn_Consensus solve time: %.2fs\" %\n d.timer.elapsed('solve'))", "Itn Fnc DFid Regℓ1 \n----------------------------------\n" ] ], [ [ "Display initial and final dictionaries.", "_____no_output_____" ] ], [ [ "D1 = D1.squeeze()\nfig = plot.figure(figsize=(14, 7))\nplot.subplot(1, 2, 1)\nplot.imview(util.tiledict(D0), title='D0', fig=fig)\nplot.subplot(1, 2, 2)\nplot.imview(util.tiledict(D1), title='D1', fig=fig)\nfig.show()", "_____no_output_____" ] ], [ [ "Get iterations statistics from solver object and plot functional value", "_____no_output_____" ] ], [ [ "its = d.getitstat()\nplot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7624bd9a10455428b4eee952b77defc3ae207e5
322,612
ipynb
Jupyter Notebook
international_cases.ipynb
debsankha/covid-19
b77aacd9bc2074275d8f5c8e715f14760000a222
[ "MIT" ]
null
null
null
international_cases.ipynb
debsankha/covid-19
b77aacd9bc2074275d8f5c8e715f14760000a222
[ "MIT" ]
null
null
null
international_cases.ipynb
debsankha/covid-19
b77aacd9bc2074275d8f5c8e715f14760000a222
[ "MIT" ]
null
null
null
846.750656
311,268
0.944689
[ [ [ "A little notebook to help visualise the official numbers for personal use. Absolutely no guarantees are made.\n\n**This is not a replacement for expert advice. Please listen to your local health authorities.**\n\nThe data is dynamically loaded from: https://github.com/CSSEGISandData/COVID-19", "_____no_output_____" ] ], [ [ "%matplotlib inline\n%config InlineBackend.figure_format ='retina'", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport pandas as pd\n\nfrom jhu_helpers import *", "_____no_output_____" ], [ "jhu = aggregte_jhu_by_state(*get_jhu_data())", "_____no_output_____" ], [ "#jhu.confirmed.columns.tolist() # print a list of all countries in the data set", "_____no_output_____" ], [ "# look at recent numbers from highly affected countries\nget_aggregate_top_n(jhu.confirmed)", "_____no_output_____" ], [ "# choose a random list of countries to plot\nplot_countries = [\n 'China',\n 'Italy',\n 'Singapore', \n 'US',\n 'France',\n 'Germany',\n]", "_____no_output_____" ], [ "plt.close(1)\nfig1, ax1 = plt.subplots(nrows=2, ncols=2, figsize=(10,8), sharex=True, num=1)\n\njhu.confirmed[plot_countries].plot(ax=ax1[0,0], logy=True)\nax1[0,0].set_ylabel('Confirmed')\n\nsmooth_rate_d = 1\njhu.infection_rate[plot_countries].rolling(smooth_rate_d, center=True, min_periods=1).mean().plot(ax=ax1[1,0], logy=False)\nax1[1,0].set_ylabel('Infection Rate per Infected')\n\njhu.recovered[plot_countries].plot(ax=ax1[0,1], logy=True)\nax1[0,1].set_ylabel('Recovered')\n\njhu.deaths[plot_countries].plot(ax=ax1[1,1], logy=True)\nax1[1,1].set_ylabel('Deaths')\n\nfig1.tight_layout()", "_____no_output_____" ], [ "# save the above figure\n#fig1.savefig('sars-covid-19_timeseries.png')", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7626571f538e6d5d2a7c1f8f6b854f3e1650919
10,882
ipynb
Jupyter Notebook
Python Advance Programming Assignment/Assignment_19.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
75b9304095cc79cf399bed35eab5bba88195e108
[ "MIT" ]
null
null
null
Python Advance Programming Assignment/Assignment_19.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
75b9304095cc79cf399bed35eab5bba88195e108
[ "MIT" ]
null
null
null
Python Advance Programming Assignment/Assignment_19.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
75b9304095cc79cf399bed35eab5bba88195e108
[ "MIT" ]
null
null
null
24.290179
116
0.455063
[ [ [ "Question 1:", "_____no_output_____" ] ], [ [ "Create a checker board generator, which takes as inputs n and 2 elements to generate\nan n x n checkerboard with those two elements as alternating squares.\n\nExamples\n\nchecker_board(2, 7, 6) [\n [7, 6],\n [6, 7]\n]\n\nchecker_board(3, \"A\", \"B\") [\n [\"A\", \"B\", \"A\"],\n [\"B\", \"A\", \"B\"],\n [\"A\", \"B\", \"A\"]\n]\n\nchecker_board(4, \"c\", \"d\") [\n [\"c\", \"d\", \"c\", \"d\"],\n [\"d\", \"c\", \"d\", \"c\"],\n [\"c\", \"d\", \"c\", \"d\"],\n [\"d\", \"c\", \"d\", \"c\"]\n]\n\nchecker_board(4, \"c\", \"c\") \"invalid\"", "_____no_output_____" ] ], [ [ "Answer :", "_____no_output_____" ] ], [ [ "def checker_board(n,a,b):\n \n if a == b:\n return \"invalid\"\n \n board = []\n for i in range(n):\n temp = []\n for j in range(n):\n temp.append(a)\n a, b = b, a\n b, a = temp[0:2]\n board.append(temp) \n \n return board\n \n \nfor i in checker_board(2, 7, 6):print(i)\nprint()\n\nfor i in checker_board(3, \"A\", \"B\"):print(i)\nprint()\n\nfor i in checker_board(4, \"c\", \"d\"):print(i)\nprint()\n\nchecker_board(4, \"c\", \"c\")", "[7, 6]\n[6, 7]\n\n['A', 'B', 'A']\n['B', 'A', 'B']\n['A', 'B', 'A']\n\n['c', 'd', 'c', 'd']\n['d', 'c', 'd', 'c']\n['c', 'd', 'c', 'd']\n['d', 'c', 'd', 'c']\n\n" ] ], [ [ "Question 2:", "_____no_output_____" ] ], [ [ "A string is an almost-palindrome if, by changing only one character, you\ncan make it a palindrome. Create a function that returns True if a string is an \nalmost-palindrome and False otherwise.\n\nExamples\n\nalmost_palindrome(\"abcdcbg\") True\n# Transformed to \"abcdcba\" by changing \"g\" to \"a\".\n\nalmost_palindrome(\"abccia\") True\n# Transformed to \"abccba\" by changing \"i\" to \"b\".\n\nalmost_palindrome(\"abcdaaa\") False\n# Can't be transformed to a palindrome in exactly 1 turn.\n\nalmost_palindrome(\"1234312\") False", "_____no_output_____" ] ], [ [ "Answer :", "_____no_output_____" ] ], [ [ "import string\n\ndef isPalindrome(str_):\n return str_ == str_[::-1]\n\ndef almost_palindrome(str_):\n \n check = string.ascii_lowercase + \"0123456789\"\n \n for i in str_:\n for j in check:\n temp = str_.replace(i, j, 1)\n if isPalindrome(temp):\n return True\n \n return False\n\n\nprint(almost_palindrome(\"abcdcbg\"))\nprint(almost_palindrome(\"abccia\"))\nprint(almost_palindrome(\"abcdaaa\"))\nprint(almost_palindrome(\"1234312\"))", "True\nTrue\nFalse\nFalse\n" ] ], [ [ "Question 3:", "_____no_output_____" ] ], [ [ "Create a function that finds how many prime numbers there are, up to the given integer.\n\nExamples\n\nprime_numbers(10) 4\n# 2, 3, 5 and 7\nprime_numbers(20) 8\n# 2, 3, 5, 7, 11, 13, 17 and 19\nprime_numbers(30) 10\n# 2, 3, 5, 7, 11, 13, 17, 19, 23 and 29", "_____no_output_____" ] ], [ [ "Answer :", "_____no_output_____" ] ], [ [ "def isPrime(n):\n if n <= 1:\n return False\n \n for i in range(2, int(n**(1/2))+1):\n if n % i == 0:\n return False\n return True\n\n\ndef prime_numbers(n):\n if n<2:\n return 0\n return sum([isPrime(i) for i in range(2,n+1)])\n\n\nprint(prime_numbers(10))\nprint(prime_numbers(20))\nprint(prime_numbers(30))", "4\n8\n10\n" ] ], [ [ "Question 4:", "_____no_output_____" ] ], [ [ " If today was Monday, in two days, it would be Wednesday. Create a function that takes in a list of days\nas input and the number of days to increment by. 
\nReturn a list of days after n number of days has passed.\n\nExamples\n\nafter_n_days([\"Thursday\", \"Monday\"], 4) [\"Monday\", \"Friday\"]\nafter_n_days([\"Sunday\", \"Sunday\", \"Sunday\"], 1) [\"Monday\", \"Monday\", \"Monday\"]\nafter_n_days([\"Monday\", \"Tuesday\", \"Friday\"], 1) [\"Tuesday\", \"Wednesday\", \"Saturday\"]", "_____no_output_____" ] ], [ [ "Answer :", "_____no_output_____" ] ], [ [ "def after_n_days(lst,n):\n \n new_lst = []\n \n if n>=7:\n _, n = divmod(n,7)\n \n for i in lst:\n days = [\"Sunday\", \"Monday\", \"Tuesday\", \"Wednesday\", \"Thursday\", \"Friday\", \"Saturday\"]\n idx = days.index(i)\n days = [days[idx]]+days[idx+1::]+days[0:idx]\n new_lst.append(days[n])\n \n return new_lst\n\n\nprint(after_n_days([\"Thursday\", \"Monday\"], 4))\nprint(after_n_days([\"Sunday\", \"Sunday\", \"Sunday\"], 1))\nprint(after_n_days([\"Monday\", \"Tuesday\", \"Friday\"], 1))", "['Monday', 'Friday']\n['Monday', 'Monday', 'Monday']\n['Tuesday', 'Wednesday', 'Saturday']\n" ] ], [ [ "Question 5:", "_____no_output_____" ] ], [ [ "You are in the process of creating a chat application and want to add an\nanonymous name feature. This anonymous name feature will create an alias that \nconsists of two capitalized words beginning with the same letter as the users first name.\nCreate a function that determines if the list of users is mapped to a list of\nanonymous names correctly.\n\nExamples\n\nis_correct_aliases([\"Adrian M.\", \"Harriet S.\", \"Mandy T.\"],\n [\"Amazing Artichoke\", \"Hopeful Hedgehog\", \"Marvelous Mouse\"]) True\n\nis_correct_aliases([\"Rachel F.\", \"Pam G.\", \"Fred Z.\", \"Nancy K.\"],\n [\"Reassuring Rat\", \"Peaceful Panda\", \"Fantastic Frog\", \"Notable Nickel\"]) True\n\nis_correct_aliases([\"Beth T.\"], [\"Brandishing Mimosa\"]) False\n\n# Both words in \"Brandishing Mimosa\" should begin with a \"B\" - \"Brandishing Beaver\" would do the trick.", "_____no_output_____" ] ], [ [ "Answer :", "_____no_output_____" ] ], [ [ "def is_correct_aliases(lst1,lst2):\n \n bool_ = []\n for i, j in zip(lst1,lst2):\n temp = j.split()\n bool_.append(i[0] == temp[0][0] and i[0] == temp[1][0])\n return all(bool_)\n\nprint(is_correct_aliases([\"Adrian M.\", \"Harriet S.\", \"Mandy T.\"],\n [\"Amazing Artichoke\", \"Hopeful Hedgehog\", \"Marvelous Mouse\"]))\nprint(is_correct_aliases([\"Rachel F.\", \"Pam G.\", \"Fred Z.\", \"Nancy K.\"],\n [\"Reassuring Rat\", \"Peaceful Panda\", \"Fantastic Frog\", \"Notable Nickel\"]))\nprint(is_correct_aliases([\"Beth T.\"], [\"Brandishing Mimosa\"]))", "True\nTrue\nFalse\n" ] ] ]
[ "markdown", "raw", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "raw", "markdown", "code" ]
[ [ "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ] ]
e7626ddb411071f263c398b22e4e81d4305fd60b
13,228
ipynb
Jupyter Notebook
jupyter/enterprise/healthcare/Disambiguation.ipynb
richardclarus/spark-nlp-workshop
64abb0ca9db5385d5ba38343bd3312ad4cc57845
[ "Apache-2.0" ]
1
2021-01-23T15:24:45.000Z
2021-01-23T15:24:45.000Z
jupyter/enterprise/healthcare/Disambiguation.ipynb
richardclarus/spark-nlp-workshop
64abb0ca9db5385d5ba38343bd3312ad4cc57845
[ "Apache-2.0" ]
null
null
null
jupyter/enterprise/healthcare/Disambiguation.ipynb
richardclarus/spark-nlp-workshop
64abb0ca9db5385d5ba38343bd3312ad4cc57845
[ "Apache-2.0" ]
1
2021-01-23T15:24:52.000Z
2021-01-23T15:24:52.000Z
50.48855
628
0.424176
[ [ [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/enterprise/healthcare/Disambiguation.ipynb)", "_____no_output_____" ] ], [ [ "import json\n\nwith open('251keys.json') as f:\n license_keys = json.load(f)\n\nlicense_keys.keys()\n", "_____no_output_____" ], [ "\n# Install java\nimport os\n\n! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null\nos.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\"\nos.environ[\"PATH\"] = os.environ[\"JAVA_HOME\"] + \"/bin:\" + os.environ[\"PATH\"]\n! java -version\n\n# Install pyspark\n! pip install --ignore-installed -q pyspark==2.4.4\n\nsecret = license_keys['secret']\nos.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']\nos.environ['JSL_OCR_LICENSE'] = license_keys['JSL_OCR_LICENSE']\nos.environ['AWS_ACCESS_KEY_ID']= license_keys['AWS_ACCESS_KEY_ID']\nos.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']\n\n! python -m pip install --upgrade spark-nlp-jsl==2.5.1rc1 --extra-index-url https://pypi.johnsnowlabs.com/$secret\n\nimport sparknlp\n\nprint (sparknlp.version())\n\nimport json\nfrom pyspark.ml import Pipeline\nfrom pyspark.sql import SparkSession\n\n\nfrom sparknlp.annotator import *\nfrom sparknlp_jsl.annotator import *\nfrom sparknlp.base import *\nimport sparknlp_jsl\n\n\n\ndef start(secret):\n builder = SparkSession.builder \\\n .appName(\"Spark NLP Licensed\") \\\n .master(\"local[*]\") \\\n .config(\"spark.driver.memory\", \"16G\") \\\n .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\") \\\n .config(\"spark.kryoserializer.buffer.max\", \"2000M\") \\\n .config(\"spark.jars.packages\", \"com.johnsnowlabs.nlp:spark-nlp_2.11:2.5.1\") \\\n .config(\"spark.jars\", \"https://pypi.johnsnowlabs.com/\"+secret+\"/spark-nlp-jsl-2.5.1rc1.jar\")\n \n return builder.getOrCreate()\n\n\nspark = start(secret) # if you want to start the session with custom params as in start function above\n# sparknlp_jsl.start(secret)", "openjdk version \"1.8.0_252\"\nOpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)\nOpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)\n\u001b[K |████████████████████████████████| 215.7MB 59kB/s \n\u001b[K |████████████████████████████████| 204kB 51.8MB/s \n\u001b[?25h Building wheel for pyspark (setup.py) ... 
\u001b[?25l\u001b[?25hdone\nLooking in indexes: https://pypi.org/simple, https://pypi.johnsnowlabs.com/9hk9l8ybo1\nCollecting spark-nlp-jsl==2.5.1rc1\n Downloading https://pypi.johnsnowlabs.com/9hk9l8ybo1/spark-nlp-jsl/spark_nlp_jsl-2.5.1rc1-py3-none-any.whl\nCollecting spark-nlp==2.5.1\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/df/b4/db653f8080a446de8ce981b262d85c85c61de7e920930726da0d1c6b4c65/spark_nlp-2.5.1-py2.py3-none-any.whl (121kB)\n\u001b[K |████████████████████████████████| 122kB 2.8MB/s \n\u001b[?25hRequirement already satisfied, skipping upgrade: pyspark==2.4.4 in /usr/local/lib/python3.6/dist-packages (from spark-nlp-jsl==2.5.1rc1) (2.4.4)\nRequirement already satisfied, skipping upgrade: py4j==0.10.7 in /usr/local/lib/python3.6/dist-packages (from pyspark==2.4.4->spark-nlp-jsl==2.5.1rc1) (0.10.7)\nInstalling collected packages: spark-nlp, spark-nlp-jsl\nSuccessfully installed spark-nlp-2.5.1 spark-nlp-jsl-2.5.1rc1\n2.5.1\n" ], [ "# Sample data\ntext = \"The show also had a contestant named Donald Trump \" \\\n + \"who later defeated Christina Aguilera on the way to become Female Vocalist Champion in the 1989 edition of Star Search in the United States. \"\ndata = spark.createDataFrame([\n [text]]) \\\n .toDF(\"text\").cache()\n", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "# Preprocessing pipeline\nda = DocumentAssembler().setInputCol(\"text\").setOutputCol(\"document\")\nsd = SentenceDetector().setInputCols(\"document\").setOutputCol(\"sentence\")\ntk = Tokenizer().setInputCols(\"sentence\").setOutputCol(\"token\")\nemb = WordEmbeddingsModel.pretrained().setOutputCol(\"embs\")\nsemb = SentenceEmbeddings().setInputCols(\"sentence\",\"embs\").setOutputCol(\"sentence_embeddings\")\nner = NerDLModel.pretrained().setInputCols(\"sentence\",\"token\",\"embs\").setOutputCol(\"ner\")\nnc = NerConverter().setInputCols(\"sentence\",\"token\",\"ner\").setOutputCol(\"ner_chunk\").setWhiteList([\"PER\"])\ndisambiguator = NerDisambiguator() \\\n .setS3KnowledgeBaseName(\"i-per\") \\\n .setInputCols(\"ner_chunk\", \"sentence_embeddings\") \\\n .setOutputCol(\"disambiguation\") \\\n .setNumFirstChars(5)\npl = Pipeline().setStages([da,sd,tk,emb,semb,ner,nc,disambiguator])\ndata = pl.fit(data).transform(data)\ndata.selectExpr(\"explode(disambiguation)\").show(10, False)", "glove_100d download started this may take some time.\nApproximate size to download 145.3 MB\n[OK!]\nner_dl download started this may take some time.\nApproximate size to download 13.6 MB\n[OK!]\n+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n|col 
|\n+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n|[disambiguation, 37, 48, http://en.wikipedia.org/?curid=4848272, http://en.wikipedia.org/?curid=31698421, http://en.wikipedia.org/?curid=55907961, [chunk -> Donald Trump, titles -> donald trump ::::: donald crump ::::: donald frump, links -> http://en.wikipedia.org/?curid=4848272 ::::: http://en.wikipedia.org/?curid=31698421 ::::: http://en.wikipedia.org/?curid=55907961, beginInText -> 37, scores -> 0.9637175040283449, 0.9555978783336097, 0.10186673596888873, categories -> Businesspeople, Politicians, Businesspeople, Businesspeople, Politicians, ids -> 4848272, 31698421, 55907961, endInText -> 48], []]|\n|[disambiguation, 69, 86, http://en.wikipedia.org/?curid=144171, http://en.wikipedia.org/?curid=6636454, [chunk -> Christina Aguilera, titles -> christina aguilera ::::: christina aguilar, links -> http://en.wikipedia.org/?curid=144171 ::::: http://en.wikipedia.org/?curid=6636454, beginInText -> 69, scores -> 0.975820790095742, 0.9726838470180229, categories -> Musicians, Singers, Actors, Businesspeople, Musicians, Singers, ids -> 144171, 6636454, endInText -> 86], []] |\n+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e76273beb1eeb1c25c294d500b9e2acc2cfa4c84
13,865
ipynb
Jupyter Notebook
examples/Supply chain physics.ipynb
aletuf93/logproj
ca80637dab7237123899d6972fba96e53916760a
[ "MIT" ]
null
null
null
examples/Supply chain physics.ipynb
aletuf93/logproj
ca80637dab7237123899d6972fba96e53916760a
[ "MIT" ]
null
null
null
examples/Supply chain physics.ipynb
aletuf93/logproj
ca80637dab7237123899d6972fba96e53916760a
[ "MIT" ]
1
2021-10-02T16:54:12.000Z
2021-10-02T16:54:12.000Z
33.817073
1,853
0.561918
[ [ [ "# Supply chain physics\n*This notebook illustrates methods to investigate the physics of a supply chain*\n***\nAlessandro Tufano 2020", "_____no_output_____" ], [ "### Import packages", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n", "_____no_output_____" ] ], [ [ "### Generate empirical demand and production \nWe define an yearly sample of production quantity $x$, and demand quantity $d$", "_____no_output_____" ] ], [ [ "number_of_sample = 365 #days\nmu_production = 105 #units per day\nsigma_production = 1 # units per day\n\nmu_demand = 100 #units per day\nsigma_demand = 0.3 # units per day\n\n\nx = np.random.normal(mu_production,sigma_production,number_of_sample) \n#d = np.random.normal(mu_demand,sigma_demand,number_of_sample) \nd = brownian(x0=mu_demand, n=365, dt=1, delta=sigma_demand, out=None) #demand stochastic process\n# represent demand\nplt.hist(d,color='orange')\nplt.hist(x,color='skyblue')\n\nplt.title('Production and Demand histogram')\nplt.xlabel('Daily rate')\nplt.ylabel('Frequency')\nplt.legend(['Demand','Production'])\n\nx = np.array(x)\nd = np.array(d)\n\nplt.figure()\nplt.plot(d)\nplt.title(\"Demand curve $d$\")\nplt.xlabel('Time in days')\nplt.ylabel('Numbar of parts')\n\nplt.figure()\nplt.plot(x)\nplt.title(\"Production curve $x$\")\nplt.xlabel('Time in days')\nplt.ylabel('Number of parts')", "_____no_output_____" ] ], [ [ "### Define the inventory function $q$ \nThe empirical inventory function $q$ is defined as the differende between production and demand, plus the residual inventory. \n$q_t = q_{t-1} + x_t - d_t$", "_____no_output_____" ] ], [ [ "q = [mu_production] #initial inventory with production mean value\nfor i in range(0,len(d)):\n inventory_value = q[i] + x[i] - d[i] \n if inventory_value <0 : \n inventory_value=0\n q.append(inventory_value)\n \nplt.plot(q)\nplt.xlabel('days')\nplt.ylabel('Inventory quantity $q$')\nplt.title('Inventory function $q$')\n\nq = np.array(q)", "_____no_output_____" ] ], [ [ "### Define pull and push forces (the momentum $p=\\dot{q}$) \nBy using continuous notation we obtain the derivative $\\dot{q}=p=x-d$. The derivative of the inventory represents the *momentum* of the supply chain, i.e. the speed a which the inventory values goes up (production), and down (demand). We use the term **productivity** to identify the momentum $p$. 
The forces changing the value of the productivity are called **movements** $\\dot{p}$.", "_____no_output_____" ] ], [ [ "p1 = [q[i]-q[i-1] for i in range(1,len(q))]\np2 = [x[i]-d[i] for i in range(1,len(d))]\nplt.plot(p1)\nplt.plot(p2)\nplt.xlabel('days')\nplt.ylabel('Value')\nplt.title('Momentum function $p$')\n\np = np.array(p1) # keep the empirical momentum (derived from the inventory q) for use in the next cells", "_____no_output_____" ] ], [ [ "### Define a linear potential $V(q)$ \nWe introduce a linear potential to describe the amount of *energy* related with a given quantity of the inventory $q$.", "_____no_output_____" ] ], [ [ "F0 = 0.1\n\n#eta = 1.2\n#lam = mu_demand\n#F0=eta*lam\n\nprint(F0)\n\nV_q = -F0*q\nV_q = V_q[0:-1]", "_____no_output_____" ] ], [ [ "### Define the energy conservation function using the Lagrangian and the Hamiltonian \nWe use the Lagrangian to describe the energy conservation equation (the corresponding Hamiltonian $H = \\frac{1}{2}\\dot{q}^2 + V(q)$ is computed in the next cell).\n$L(q,\\dot{q}) = \\frac{1}{2}\\dot{q}^2 - V(q)$", "_____no_output_____" ] ], [ [ "H = (p**2)/2 - F0*q[0:-1]\n\nplt.plot(H)\nplt.xlabel('days')\nplt.ylabel('value')\nplt.title('Function $H$')", "_____no_output_____" ] ], [ [ "### Obtain the inventory $q$, given $H$", "_____no_output_____" ] ], [ [ "S_q = [H[i-1] + H[i] for i in range(1,len(H))]\nplt.plot(S_q)\nplt.xlabel('days')\nplt.ylabel('value')\nplt.title('Function $S[q]$')\n\n\n#compare with q\nplt.plot(q)\nplt.xlabel('days')\nplt.ylabel('Inventory quantity $q$')\nplt.title('Inventory function $q$')\n\nplt.legend(['Model inventory','Empirical inventory'])", "_____no_output_____" ] ], [ [ "# Inventory control", "_____no_output_____" ], [ "### Define the Brownian process", "_____no_output_____" ] ], [ [ "from math import sqrt\nfrom scipy.stats import norm\nimport numpy as np\n\n\ndef brownian(x0, n, dt, delta, out=None):\n    \"\"\"\n    Generate an instance of Brownian motion (i.e. the Wiener process):\n\n    X(t) = X(0) + N(0, delta**2 * t; 0, t)\n\n    where N(a,b; t0, t1) is a normally distributed random variable with mean a and\n    variance b. The parameters t0 and t1 make explicit the statistical\n    independence of N on different time intervals; that is, if [t0, t1) and\n    [t2, t3) are disjoint intervals, then N(a, b; t0, t1) and N(a, b; t2, t3)\n    are independent.\n    \n    Written as an iteration scheme,\n\n    X(t + dt) = X(t) + N(0, delta**2 * dt; t, t+dt)\n\n\n    If `x0` is an array (or array-like), each value in `x0` is treated as\n    an initial condition, and the value returned is a numpy array with one\n    more dimension than `x0`.\n\n    Arguments\n    ---------\n    x0 : float or numpy array (or something that can be converted to a numpy array\n         using numpy.asarray(x0)).\n        The initial condition(s) (i.e. position(s)) of the Brownian motion.\n    n : int\n        The number of steps to take.\n    dt : float\n        The time step.\n    delta : float\n        delta determines the \"speed\" of the Brownian motion. The random variable\n        of the position at time t, X(t), has a normal distribution whose mean is\n        the position at time t=0 and whose variance is delta**2*t.\n    out : numpy array or None\n        If `out` is not None, it specifies the array in which to put the\n        result. 
If `out` is None, a new numpy array is created and returned.\n\n    Returns\n    -------\n    A numpy array of floats with shape `x0.shape + (n,)`.\n    \n    Note that the initial value `x0` is not included in the returned array.\n    \"\"\"\n\n    x0 = np.asarray(x0)\n\n    # For each element of x0, generate a sample of n numbers from a\n    # normal distribution.\n    r = norm.rvs(size=x0.shape + (n,), scale=delta*sqrt(dt))\n\n    # If `out` was not given, create an output array.\n    if out is None:\n        out = np.empty(r.shape)\n\n    # This computes the Brownian motion by forming the cumulative sum of\n    # the random samples. \n    np.cumsum(r, axis=-1, out=out)\n\n    # Add the initial condition.\n    out += np.expand_dims(x0, axis=-1)\n\n    return out", "_____no_output_____" ] ], [ [ "### Define the supply chain control model", "_____no_output_____" ] ], [ [ "# supply chain control model\ndef supply_chain_control_model(p,beta,eta,F0):\n    #p is the productivity function defined as the derivative of q\n    #beta is the diffusion coefficient, i.e. the delta of the Brownian process; the std of the demand can be used\n    #eta represents the flexibility of the production. It is the number of days to reach a target inventory\n    #F0 is the potential \n    \n    Fr_t = brownian(x0=F0, n=365, dt=1, delta=beta, out=None) #demand stochastic process\n    p_dot = F0 -eta*p + Fr_t\n    return p_dot, Fr_t", "_____no_output_____" ], [ "#identify the sensitivity of the inventory control with different values of eta\nfor eta in [0.1,1,2,7,30]:\n    p_dot, Fr_t = supply_chain_control_model(p=p,beta = sigma_demand,eta=eta,F0=F0)\n    \n\n    plt.figure()\n    plt.plot(Fr_t)\n    plt.plot(p)\n    plt.plot(p_dot)\n    plt.title(f\"Inventory control with eta={eta}\")\n    plt.legend(['Demand','Productivity','Movements $\\\\dot{p}$'])", "_____no_output_____" ], [ "p_dot, Fr_t = supply_chain_control_model(p=p,beta = sigma_demand,eta=1,F0=0.9)\np_model = [p_dot[i-1] + p_dot[i] for i in range(1,len(p_dot))]\nq_model = [p_model[i-1] + p_model[i] for i in range(1,len(p_model))]\n\n\nplt.plot(q_model)\nplt.plot(p_model)\nplt.legend(['$q$: inventory','$p$: productivity'])\n\np_mean = np.mean(p_model)\np_std = np.std(p_model)\n\nprint(f\"Movements mean: {p_mean}, std: {p_std}\")\n\nq_mean = np.mean(q_model)\nq_std = np.std(q_model)\n\nprint(f\"Inventory mean: {q_mean}, std: {q_std}\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7627a18650339ed3a74c016f761d0f031f12883
2,431
ipynb
Jupyter Notebook
variable_exploration/mk/1_Ingestion_Wrangling/0_data_pull.ipynb
georgetown-analytics/Airbnb-Price-Prediction
2798c1a8607b2d8d493b1e6cdd7933413c73755f
[ "MIT" ]
null
null
null
variable_exploration/mk/1_Ingestion_Wrangling/0_data_pull.ipynb
georgetown-analytics/Airbnb-Price-Prediction
2798c1a8607b2d8d493b1e6cdd7933413c73755f
[ "MIT" ]
null
null
null
variable_exploration/mk/1_Ingestion_Wrangling/0_data_pull.ipynb
georgetown-analytics/Airbnb-Price-Prediction
2798c1a8607b2d8d493b1e6cdd7933413c73755f
[ "MIT" ]
3
2020-04-12T04:50:17.000Z
2022-02-11T20:12:17.000Z
25.589474
232
0.590292
[ [ [ "# Ingest Data\nOriginal data was from inside airbnb, to secure the files, they were copied to google drive", "_____no_output_____" ] ], [ [ "import os\nfrom google_drive_downloader import GoogleDriveDownloader as gdd", "_____no_output_____" ] ], [ [ "# Pull files from Google Drive\nlistings shared url: https://drive.google.com/file/d/1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO/view?usp=sharing\n\ncalendar shared url: https://drive.google.com/file/d/1VjlSWEr4vaJHdT9o2OF9N2Ga0X2b22v9/view?usp=sharing\n\nreviews shared url: https://drive.google.com/file/d/1_ojDocAs_LtcBLNxDHqH_TSBWjPz-Zme/view?usp=sharing\n\n", "_____no_output_____" ] ], [ [ "# gdd.download_file_from_google_drive(file_id='1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO',\n# dest_path='../data/gdrive/listings.csv.gz'", "_____no_output_____" ], [ "#source_dest = {'../data/gdrive/listings.csv.gz':'1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO'}\n\nsource_dest = {'../data/gdrive/listings.csv.gz':'1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO', '../data/gdrive/calendar.csv.gz':'1VjlSWEr4vaJHdT9o2OF9N2Ga0X2b22v9', '../data/gdrive/reviews.csv.gz':'1_ojDocAs_LtcBLNxDHqH_TSBWjPz-Zme'}\n\n", "_____no_output_____" ], [ "for k, v in source_dest.items():\n gdd.download_file_from_google_drive(file_id=v, dest_path=k)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e76283c133eabcdcd853dd97e572195c6e0ace56
8,913
ipynb
Jupyter Notebook
deep_fake.ipynb
hamdirhibi/Deep-fake
c25664140759b460701e39353bd5bfafdd6e7a98
[ "MIT" ]
null
null
null
deep_fake.ipynb
hamdirhibi/Deep-fake
c25664140759b460701e39353bd5bfafdd6e7a98
[ "MIT" ]
null
null
null
deep_fake.ipynb
hamdirhibi/Deep-fake
c25664140759b460701e39353bd5bfafdd6e7a98
[ "MIT" ]
null
null
null
29.71
286
0.511388
[ [ [ "# Demo for paper \"First Order Motion Model for Image Animation\"\n\n---\n\n", "_____no_output_____" ], [ "**Clone repository**", "_____no_output_____" ] ], [ [ "!git clone https://github.com/hamdirhibi/Deep-fake", "_____no_output_____" ], [ "cd Deep-fake", "_____no_output_____" ] ], [ [ "\n\n```\n# This is formatted as code\n```\n\n**Mount your Google drive folder on Colab**", "_____no_output_____" ] ], [ [ "from google.colab import drive\ndrive.mount('/content/gdrive')", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ], [ "**Add folder https://drive.google.com/drive/folders/157-wifsuylAkO1E4hBGO_QXyn22mDXET?usp=sharing to your google drive.\nAlternativelly you can use this mirror link https://drive.google.com/drive/folders/157-wifsuylAkO1E4hBGO_QXyn22mDXET?usp=sharing", "_____no_output_____" ], [ "**Load driving video and source image**", "_____no_output_____" ] ], [ [ "import imageio\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom skimage.transform import resize\nfrom IPython.display import HTML\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nsource_image = imageio.imread('/content/gdrive/My Drive/first-order-motion-model/rayen.jpg')\nreader = imageio.get_reader('/content/gdrive/My Drive/first-order-motion-model/hamdi.mp4')\n\n\n#Resize image and video to 256x256\n\nsource_image = resize(source_image, (256, 256))[..., :3]\n\nfps = reader.get_meta_data()['fps']\ndriving_video = []\ntry:\n for im in reader:\n driving_video.append(im)\nexcept RuntimeError:\n pass\nreader.close()\n\ndriving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]\n\ndef display(source, driving, generated=None):\n fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6))\n\n ims = []\n for i in range(len(driving)):\n cols = [source]\n cols.append(driving[i])\n if generated is not None:\n cols.append(generated[i])\n im = plt.imshow(np.concatenate(cols, axis=1), animated=True)\n plt.axis('off')\n ims.append([im])\n\n ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000)\n plt.close()\n return ani\n \n\nHTML(display(source_image, driving_video).to_html5_video())", "_____no_output_____" ] ], [ [ "**Create a model and load checkpoints**", "_____no_output_____" ] ], [ [ "from demo import load_checkpoints\ngenerator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml', \n checkpoint_path='/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar')", "_____no_output_____" ] ], [ [ "** **bold text**Perform image animation**", "_____no_output_____" ] ], [ [ "from demo import make_animation\nfrom skimage import img_as_ubyte\n\npredictions = make_animation(source_image, driving_video, generator, kp_detector, relative=True)\n\n#save resulting video\nimageio.mimsave('../generated.mp4', [img_as_ubyte(frame) for frame in predictions], fps=fps)\n#video can be downloaded from /content folder\n\nHTML(display(source_image, driving_video, predictions).to_html5_video())", "_____no_output_____" ] ], [ [ "**In the cell above we use relative keypoint displacement to animate the objects. We can use absolute coordinates instead, but in this way all the object proporions will be inherited from the driving video. 
For example, Putin's haircut will be extended to match Trump's haircut.**", "_____no_output_____" ] ], [ [ "predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=False, adapt_movement_scale=True)\nHTML(display(source_image, driving_video, predictions).to_html5_video())", "_____no_output_____" ] ], [ [ "## Running on your data\n\n**First we need to crop a face from both the source image and the video. A simple graphic editor like Paint can be used for cropping from the image, while cropping from the video is more complicated. You can use ffmpeg for this.**", "_____no_output_____" ] ], [ [ "!ffmpeg -i /content/gdrive/My\\ Drive/first-order-motion-model/07.mkv -ss 00:08:57.50 -t 00:00:08 -filter:v \"crop=600:600:760:50\" -async 1 hinton.mp4", "_____no_output_____" ] ], [ [ "**Another possibility is to use some screen recording tool, or, if you need to crop many images at once, use a face detector (https://github.com/1adrianb/face-alignment); see https://github.com/AliaksandrSiarohin/video-preprocessing for the preprocessing of VoxCeleb.** ", "_____no_output_____" ] ], [ [ "source_image = imageio.imread('/content/gdrive/My Drive/first-order-motion-model/09.png')\ndriving_video = imageio.mimread('hinton.mp4', memtest=False)\n\n\n#Resize image and video to 256x256\n\nsource_image = resize(source_image, (256, 256))[..., :3]\ndriving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]\n\npredictions = make_animation(source_image, driving_video, generator, kp_detector, relative=True,\n                             adapt_movement_scale=True)\n\nHTML(display(source_image, driving_video, predictions).to_html5_video())", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e76292b663ed7e9d88eee84e05d794a219977367
10,209
ipynb
Jupyter Notebook
scripts/Introd. ciencia de dados - Parte 3.ipynb
adrianosantospb/jatic2017
619ee4dc786ebb01935adf118053cb339cdaec61
[ "MIT" ]
null
null
null
scripts/Introd. ciencia de dados - Parte 3.ipynb
adrianosantospb/jatic2017
619ee4dc786ebb01935adf118053cb339cdaec61
[ "MIT" ]
null
null
null
scripts/Introd. ciencia de dados - Parte 3.ipynb
adrianosantospb/jatic2017
619ee4dc786ebb01935adf118053cb339cdaec61
[ "MIT" ]
null
null
null
27.296791
140
0.43922
[ [ [ "### Realizando limpeza dos dados\n##### Por Adriano Santos\n\n###### Dentre as atividades que um cientista de dados deve realizar, o processo de limpeza e tratamento é uma das mais importantes.\n\nNesta aula aprenderemos a:\n\n* Remover informações de um DataFrame;\n", "_____no_output_____" ] ], [ [ "# Carregando módulos \nimport pandas as pd", "_____no_output_____" ], [ "# Importando os dados para manipulação\ndf = pd.read_csv('../Dados/WHO.csv', delimiter=',')\nprint (df.head())", " Country Region Population Under15 Over60 \\\n0 Afghanistan Eastern Mediterranean 29825 47.42 3.82 \n1 Albania Europe 3162 21.33 14.93 \n2 Algeria Africa 38482 27.42 7.17 \n3 Andorra Europe 78 15.20 22.86 \n4 Angola Africa 20821 47.58 3.84 \n\n FertilityRate LifeExpectancy ChildMortality CellularSubscribers \\\n0 5.40 60 98.5 54.26 \n1 1.75 74 16.7 96.39 \n2 2.83 73 20.0 98.99 \n3 NaN 82 3.2 75.49 \n4 6.10 51 163.5 48.38 \n\n LiteracyRate GNI PrimarySchoolEnrollmentMale \\\n0 NaN 1140.0 NaN \n1 NaN 8820.0 NaN \n2 NaN 8310.0 98.2 \n3 NaN NaN 78.4 \n4 70.1 5230.0 93.1 \n\n PrimarySchoolEnrollmentFemale \n0 NaN \n1 NaN \n2 96.4 \n3 79.4 \n4 78.2 \n" ] ], [ [ "##### Verificando a existência de dados missing (NaN)", "_____no_output_____" ] ], [ [ "# O any() possibibilitará saber, coluna a coluna, se qualquer um dos valores é inexistente.\ndf.isnull().any()\n", "_____no_output_____" ], [ "# Possibilitirá se existe alguma coluna em branco.\nprint (df.isnull().all())\nprint ('Número de registros:', df.shape)", "Country False\nRegion False\nPopulation False\nUnder15 False\nOver60 False\nFertilityRate False\nLifeExpectancy False\nChildMortality False\nCellularSubscribers False\nLiteracyRate False\nGNI False\nPrimarySchoolEnrollmentMale False\nPrimarySchoolEnrollmentFemale False\ndtype: bool\nNúmero de registros: (194, 13)\n" ], [ "# O comando dropna() remove do DataFrame qualquer linha que tenha pelo menos um NaN.\ndf.dropna(inplace=True) # thresh=2 -> se tiver mais de dois NaN; how='all' se tiver uma linha completa em NaN\nprint ('Número de registros:', df['Country'].count())\ndf.isnull().any()", "Número de registros: 50\n" ], [ "# Carrega os dados novamente para iniciarmos outras formas de ajustes de dados.\ndf = pd.read_csv('../Dados/WHO.csv', delimiter=',')\nprint (df.shape)", "(194, 13)\n" ], [ "# Caso você queira preencher os valores inexistentes, você deve usar a função fillna()\ndf.isnull().any()\n# Iremos preencher os valores NaN da coluna \nprint('Registros:', df['FertilityRate'].count())\ndf['FertilityRate'].head()", "Registros: 183\n" ], [ "# Preenchendo os valores com os valores da média\ndf['FertilityRate'].fillna(df['FertilityRate'].mean(),inplace=True)\nprint('Número de registros:', df['FertilityRate'].count())\ndf['FertilityRate'].head()\n", "Número de registros: 194\n" ] ], [ [ "###### Comandos para remoção de coluna", "_____no_output_____" ] ], [ [ "# Para remover, faça:\ndf.drop('CellularSubscribers', axis=1, inplace=True) # axis 1 = coluna; axis 0 = linha.\nprint (df.columns)\n", "Index(['Country', 'Region', 'Population', 'Under15', 'Over60', 'FertilityRate',\n 'LifeExpectancy', 'ChildMortality', 'LiteracyRate', 'GNI',\n 'PrimarySchoolEnrollmentMale', 'PrimarySchoolEnrollmentFemale'],\n dtype='object')\n" ], [ "# Avaliando se existe duplicata \nprint(df.duplicated('Region').head())", "0 False\n1 False\n2 False\n3 True\n4 True\ndtype: bool\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7629657a0e463a618ea1122b516a69a3ce7e025
188,045
ipynb
Jupyter Notebook
Chapter10/R4-Calculating-distance-between-events.ipynb
paulorobertolds/Python-Feature-Engineering-Cookbook
192e97743c7a586ee8a776a4fe5501edec908a26
[ "MIT" ]
245
2019-12-24T01:54:34.000Z
2022-03-25T02:59:45.000Z
Chapter10/R4-Calculating-distance-between-events.ipynb
vlasvlasvlas/Python-Feature-Engineering-Cookbook
e140311b506cc156ab8b7dbe2862b4ba722139b8
[ "MIT" ]
3
2020-02-21T19:06:35.000Z
2021-09-29T07:08:58.000Z
Chapter10/R4-Calculating-distance-between-events.ipynb
vlasvlasvlas/Python-Feature-Engineering-Cookbook
e140311b506cc156ab8b7dbe2862b4ba722139b8
[ "MIT" ]
130
2019-12-24T18:18:54.000Z
2022-03-10T10:07:55.000Z
93.928571
39,648
0.796655
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy.signal import find_peaks", "_____no_output_____" ], [ "# load the appliances energy prediction data set\n\ndata = pd.read_csv('energydata_complete.csv')\n\ndata.head()", "_____no_output_____" ], [ "# parse as datetime data type\n\ndata['date'] = pd.to_datetime(data['date'])", "_____no_output_____" ], [ "# determine time between transactions, in this case, energy records\n\ndata['time_since_previous'] = data['date'].diff()\n\ndata['time_since_previous'] = data['time_since_previous']/np.timedelta64(1,'m')\n\ndata[['date', 'time_since_previous']].head(10)", "_____no_output_____" ], [ "# extract day and month from datetime variable\n\ndata[['day', 'month']] = pd.DataFrame([(x.day, x.month) for x in data['date']])\n\ndata.head()", "_____no_output_____" ], [ "# make the datetime variable the index of the series\n\ndata.index = data['date']", "_____no_output_____" ], [ "# Plot mean energy consumption by appliances per day\n# we are going to work with this data\n\ndata.groupby(['month', 'day'])['Appliances'].mean().plot()", "_____no_output_____" ], [ "# Plot mean energy consumption by lights per day\n# we are going to work with this data\n\ndata.groupby(['month', 'day'])['lights'].mean().plot()", "_____no_output_____" ], [ "# create pandas series with the mean energy consumption per day\n\n# electricity consumption by appliances per day\nelec_pday = data.groupby(['month', 'day'])['Appliances'].mean()\n\n# light energy consumption per day\nlight_pday = data.groupby(['month', 'day'])['lights'].mean()", "_____no_output_____" ], [ "# find the peaks, that is, the local maxima\n\npeaks, _ = find_peaks(elec_pday.values, height=60)\n\npeaks", "_____no_output_____" ], [ "# compare the shape of the time series with that of the selected\n# local maxima series\n\nelec_pday.shape, elec_pday[peaks].shape", "_____no_output_____" ], [ "# select only the values of the series with the local maxima\n\nelec_pday[peaks]", "_____no_output_____" ], [ "# capture the series with local maxima in a pandas dataframe\n# then reset index so that the month and day become part of the columns\n\n# finally, we need to add the year, to be able to reconstitute the date\n# from the existing time columns\n\ntmp = pd.DataFrame(elec_pday[peaks]).reset_index(drop=False)\ntmp['year'] = 2016\ntmp.head()", "_____no_output_____" ], [ "# reconstitute the datetime variable\n\ntmp['date'] = pd.to_datetime(tmp[['year', 'month', 'day']])\n\ntmp.head()", "_____no_output_____" ], [ "# calculate the distance, in days, between the local maxima\n# we do this utilizing the dataframe with only the local maxima\n\ntmp['peak_distance'] = tmp['date'].diff()\n\ntmp['peak_distance'] = tmp['peak_distance'].dt.days\n\ntmp.head()", "_____no_output_____" ], [ "# now we put all the steps together in a function\n\n# not in book, but useful information for readers, to\n# automate the calculation of peak distances across variables\n\ndef time_between_peaks(ser):\n\n # find local maxima\n peaks, _ = find_peaks(ser.values)\n\n # select the series values with local maxima only\n # transform the series into a dataframe with the month\n # and day index as columns\n tmp = pd.DataFrame(ser[peaks]).reset_index(drop=False)\n\n # add year to reconstitute date\n tmp['year'] = 2016\n\n # reconstitute date\n tmp['date'] = pd.to_datetime(tmp[['year', 'month', 'day']])\n\n # calculate difference in days between local maxima\n tmp['peak_distance'] = tmp['date'].diff()\n tmp['peak_distance'] 
= tmp['peak_distance'].dt.days\n\n # return difference in days between local maxima\n # that is a pandas series\n return tmp['peak_distance']", "_____no_output_____" ], [ "# return a series with a difference in days respect to the\n# previous local maxima for the time series with the\n# mean daily energy consumption by lights\n\ndistances = time_between_peaks(light_pday)\n\n# display first 10 values of the series\ndistances[0:10]", "_____no_output_____" ] ], [ [ "## To determine distance between local maxima and minima\n\nWe need to calculate both, and then concatenate the arrays, and use that to select the data", "_____no_output_____" ] ], [ [ "# determine the days of minimum electricity consumption \n# throughout the 5 months, that is the local minima\n\n# we use peak values but we turn the series upside down with the\n# reciprocal function\n\nvalleys, _ = find_peaks(1 / elec_pday.values, height=(-np.Inf, 1/60))\nvalleys", "_____no_output_____" ], [ "# compare the number of observations in the entire series\n# vs the number of local maxima, vs the number of local minima\n\nelec_pday.shape, elec_pday[peaks].shape, elec_pday[valleys].shape", "_____no_output_____" ], [ "# concatenate the indices that contain the local minima and maxima\n# and then sort its values\n\npeaksandvalleys = np.concatenate([peaks, valleys])\npeaksandvalleys.sort()\npeaksandvalleys", "_____no_output_____" ], [ "# now we use this index to select the data\n\nelec_pday[peaksandvalleys].shape", "_____no_output_____" ] ], [ [ "To determine the time elapsed between local maxima and minima, we need create a dataframe with those values executing:\n\n tmp = pd.DataFrame(elec_pday[peaksandvalleys]).reset_index(drop=False)\n\nand then, 1) add the year, 2) reconstitute the date, and 3) calculate the time between the local maxima and minima, as we have done in previous cells.", "_____no_output_____" ], [ "## There is more\n\nWe can determine the mean difference between events for various customers or entities.\n\nLet's work with the mock customer transactions data set as example.", "_____no_output_____" ] ], [ [ "import featuretools as ft\n\n# load data set from feature tools\ndata_dict = ft.demo.load_mock_customer()\n\ndata = data_dict[\"transactions\"].merge(\n data_dict[\"sessions\"]).merge(data_dict[\"customers\"])\n\ncols = ['customer_id',\n 'transaction_id',\n 'transaction_time',\n 'amount',\n ]\n\ndata = data[cols]\n\ndata.head()", "_____no_output_____" ], [ "# Let's first calculate the time since previous transaction for each transaction\n\n# sort the data by transaction date and time\ndata.sort_values(by=['transaction_time'], ascending=True, inplace=True)\n\n# calculate time since previous transaction in hours\ndata['time_since_previous'] = data['transaction_time'].diff()\ndata['time_since_previous'] = data['time_since_previous']/np.timedelta64(1,'h')", "_____no_output_____" ], [ "# calculate mean time between transactions per customer\n# all transactions occur every 1h and 5 min in the toy data set\n# so the result is a bit boring, but you get the yiest\n\ntmp = data.groupby('customer_id')['time_since_previous'].mean()\ntmp", "_____no_output_____" ], [ "# Now, let's calculate the time between local extrema\n\n# extract the hour of the transaction\n\ndata['hr'] = data['transaction_time'].dt.hour\n\ndata.head()", "_____no_output_____" ], [ "# now let's plot the local maxima for the time series with \n# the mean amount spent per hour ==>\n\n# one plot per customer\n\n# this code is intended to get the reader familiar with 
the data\n# and therefore facilitate the understanding of the recipe code\n\n\ndef find_and_plot_peaks(x, customer):\n\n # find local maxima and minima\n peaks, _ = find_peaks(x)\n valleys, _ = find_peaks(1/x)\n \n # plot the peaks and valleys\n plt.figure(figsize=(4,3))\n plt.plot(x)\n plt.plot(peaks, x[peaks], \"x\")\n plt.plot(valleys, x[valleys], \"x\", color='red')\n plt.title('Customer number {}'.format(customer))\n plt.show()", "_____no_output_____" ], [ "# make a plot per customer, with local minima and\n# maxima of amount spent per hr\n\nfor customer in data['customer_id'].unique():\n tmp = data[data['customer_id']==customer]\n tmp = tmp.groupby('hr')['amount'].mean()\n tmp.reset_index(drop=True, inplace=True)\n find_and_plot_peaks(tmp, customer)\n", "_____no_output_____" ], [ "# create a series of functions to find the local maxima and minima\n# put the arrays together, and then slice the original series into\n# those values, to finally calculate the time elapsed between them\n\n# these functions operate at a pandas series level\n# x is a pandas series\n\ndef find_no_peaks(x):\n # finds number of local maxima\n peaks, _ = find_peaks(x)\n return peaks\n\ndef find_no_valleys(x):\n # finds number of local minima\n valleys, _ = find_peaks(1/x)\n return valleys\n\ndef concatenate_pav(x):\n # concatenates the indeces of the peaks and valleys\n ids = np.concatenate([find_no_peaks(x), find_no_valleys(x)])\n ids.sort()\n return ids\n\ndef slice_and_measure(x):\n # selects the points with peaks and valleys in the series\n # and determines the hr difference between them.\n # finally, returns the mean distance between\n # all local maxima and minima\n ids = concatenate_pav(x)\n tmp = pd.DataFrame(x.iloc[ids]).reset_index(drop=False)\n t = tmp['hr'].diff()\n return t.mean(skipna=True)", "_____no_output_____" ], [ "# this pandas series, df, is the argument we need for the\n# precedent functions\n\n# we use the data of customer 3 as an example to test each\n# individual function\n\ndf = data[data['customer_id'] == 3]\ndf = df.groupby('hr')['amount'].mean()\ndf", "_____no_output_____" ], [ "# test function that finds number of peaks\npeaks = find_no_peaks(df)\npeaks", "_____no_output_____" ], [ "# test function that finds number of valleys\nvalleys = find_no_valleys(df)\nvalleys", "_____no_output_____" ], [ "# test concatenate function\n\nids = concatenate_pav(df)\nids", "_____no_output_____" ], [ "# test result of concatenate_pav when applied to the\n# entire dataset: that would be the indeces with max and min\n# transaction amount per customer\n\ndata.groupby(['customer_id', 'hr'])['amount'].mean().groupby('customer_id').apply(concatenate_pav)", "_____no_output_____" ], [ "# step by step the inner code of the function slide_and_measure()\n\ntmp = pd.DataFrame(df.iloc[ids]).reset_index(drop=False)\nt = tmp['hr'].diff()\nt", "_____no_output_____" ], [ "# output of slide_and_measure()\n\nt.mean(skipna=True)", "_____no_output_____" ], [ "# test slide_and_measure() on 1 customer\nslice_and_measure(df)", "_____no_output_____" ], [ "# apply slide_and_measure() to the entire data set\n\ndata.groupby(['customer_id', 'hr'])['amount'].mean().groupby(\n 'customer_id').apply(slice_and_measure)", "_____no_output_____" ] ], [ [ "Compare the distances returned by our function with the peak distances in the plots in cell **23**.\n\nCustomer 2 shows np.nan, because it only contains 1 local maxima, so it is not possible to calculate distances.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7629d16a7d1e4d7b1773106f1c7fa422a87b019
99,602
ipynb
Jupyter Notebook
talks/GeoDevPDX2018/04_feature-engineering-neighboring-facilities-batch.ipynb
nitz21/arcpy
36074b5d448c9cfdba166332e99100afb3390824
[ "Apache-2.0" ]
2
2020-11-23T23:06:04.000Z
2020-11-23T23:06:07.000Z
talks/GeoDevPDX2018/04_feature-engineering-neighboring-facilities-batch.ipynb
josemartinsgeo/arcgis-python-api
4c10bb1ce900060959829f7ac6c58d4d67037d56
[ "Apache-2.0" ]
null
null
null
talks/GeoDevPDX2018/04_feature-engineering-neighboring-facilities-batch.ipynb
josemartinsgeo/arcgis-python-api
4c10bb1ce900060959829f7ac6c58d4d67037d56
[ "Apache-2.0" ]
1
2020-06-06T21:21:18.000Z
2020-06-06T21:21:18.000Z
97.267578
41,768
0.715859
[ [ [ "<h1>**Table of Contents**<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Feature-engineering---quantifying-access-to-facilities---batch-mode\" data-toc-modified-id=\"Feature-engineering---quantifying-access-to-facilities---batch-mode-1\">Feature engineering - quantifying access to facilities - batch mode</a></span><ul class=\"toc-item\"><li><span><a href=\"#Read-the-shortlisted-properties\" data-toc-modified-id=\"Read-the-shortlisted-properties-1.1\">Read the shortlisted properties</a></span></li><li><span><a href=\"#Loop-through-each-property-and-build-the-neighborhood-facility-table\" data-toc-modified-id=\"Loop-through-each-property-and-build-the-neighborhood-facility-table-1.2\">Loop through each property and build the neighborhood facility table</a></span></li><li><span><a href=\"#Feature-engineer-with-access-to-amenities\" data-toc-modified-id=\"Feature-engineer-with-access-to-amenities-1.3\">Feature engineer with access to amenities</a></span><ul class=\"toc-item\"><li><span><a href=\"#Plot-the-distribution-of-facility-access\" data-toc-modified-id=\"Plot-the-distribution-of-facility-access-1.3.1\">Plot the distribution of facility access</a></span></li><li><span><a href=\"#Store-to-disk\" data-toc-modified-id=\"Store-to-disk-1.3.2\">Store to disk</a></span></li></ul></li></ul></li></ul></div>", "_____no_output_____" ], [ "# Feature engineering - quantifying access to facilities - batch mode\nThis notebook is similar to previous (04_feature-engineering-neighboring-facilities), except, this one runs for all shortlisted facilities and adds these as features.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\nfrom pprint import pprint\n%matplotlib inline\n\nfrom arcgis.gis import GIS\nfrom arcgis.geocoding import geocode, batch_geocode\nfrom arcgis.features import Feature, FeatureLayer, FeatureSet, GeoAccessor, GeoSeriesAccessor\nfrom arcgis.features import SpatialDataFrame\nfrom arcgis.geometry import Geometry, Point\nfrom arcgis.geometry.functions import buffer\nfrom arcgis.network import RouteLayer", "_____no_output_____" ] ], [ [ "Connect to GIS", "_____no_output_____" ] ], [ [ "gis = GIS(profile='')\nroute_service_url = gis.properties.helperServices.route.url\nroute_service = RouteLayer(route_service_url, gis=gis)", "_____no_output_____" ] ], [ [ "## Read the shortlisted properties", "_____no_output_____" ] ], [ [ "prop_list_df = pd.read_csv('resources/houses_for_sale_att_filtered.csv')\nprop_list_df.shape", "_____no_output_____" ], [ "prop_list_df = pd.DataFrame.spatial.from_xy(prop_list_df, 'LONGITUDE','LATITUDE')\ntype(prop_list_df)", "_____no_output_____" ] ], [ [ "## Loop through each property and build the neighborhood facility table", "_____no_output_____" ] ], [ [ "groceries_count = []\nrestaurants_count = []\nhospitals_count = []\ncoffee_count = []\nbars_count = []\ngas_count = []\nshops_service_count = []\ntravel_transport_count = []\nparks_count = []\neducation_count = []\nroute_length = []\nroute_duration = []\n\ndestination_address = '309 SW 6th Ave #600, Portland, OR 97204'", "_____no_output_____" ], [ "count=0\nfor index, prop in prop_list_df.iterrows():\n count+=1\n print(str(count), end=\": \")\n # geocode the property\n paddress = prop['ADDRESS'] + \", \" + prop['CITY'] + \", \" + prop['STATE']\n prop_geom_fset = geocode(paddress, as_featureset=True)\n \n print(prop['MLS'], end=\" : \")\n \n # create an envelope around each property\n prop_geom = 
prop_geom_fset.features[0]\n \n # create buffer of 5 miles\n prop_buffer = buffer([prop_geom.geometry], \n in_sr = 102100, buffer_sr=102100,\n distances=0.05, unit=9001)[0]\n\n prop_buffer_f = Feature(geometry=prop_buffer)\n prop_buffer_fset = FeatureSet([prop_buffer_f])\n \n # geocode for Groceries\n groceries = geocode('groceries', search_extent=prop_buffer.extent, \n max_locations=20, as_featureset=True)\n groceries_count.append(len(groceries.features))\n print('Groc', end=\" : \")\n \n # restaurants\n restaurants = geocode('restaurant', search_extent=prop_buffer.extent, max_locations=200)\n restaurants_count.append(len(restaurants))\n print('Rest', end=\" : \")\n \n # hospitals\n hospitals = geocode('hospital', search_extent=prop_buffer.extent, max_locations=50)\n hospitals_count.append(len(hospitals))\n print('Hosp', end =\" : \")\n \n # coffee shop\n coffees = geocode('coffee', search_extent=prop_buffer.extent, max_locations=50)\n coffee_count.append(len(coffees))\n print('Coffee', end=\" : \")\n \n # bars\n bars = geocode('bar', search_extent=prop_buffer.extent, max_locations=50)\n bars_count.append(len(bars))\n print('Bars', end=\" : \")\n \n # gas stations\n gas = geocode('gas station', search_extent=prop_buffer.extent, max_locations=50)\n gas_count.append(len(gas))\n print('Gas', end=\" : \")\n \n # shops\n shops_service = geocode(\"\",category='shops and service', \n search_extent=prop_buffer.extent, max_locations=50)\n shops_service_count.append(len(shops_service))\n print('Shops', end=\" : \")\n \n # travel & transport\n transport = geocode(\"\",category='travel and transport', \n search_extent=prop_buffer.extent, max_locations=50)\n travel_transport_count.append(len(transport))\n print(\"Travel\", end =\" : \")\n \n # parks\n parks = geocode(\"\",category='parks and outdoors', \n search_extent=prop_buffer.extent, max_locations=50)\n parks_count.append(len(parks))\n print('Parks', end=\" : \")\n \n # education\n education = geocode(\"\",category='education', search_extent=prop_buffer.extent, \n max_locations=50)\n education_count.append(len(education))\n print(\"Edu\", end=\" : \")\n \n # get route\n stops = [paddress, destination_address]\n stops_geocoded = batch_geocode(stops)\n\n stops_geocoded = [item['location'] for item in stops_geocoded]\n stops_geocoded2 = '{},{};{},{}'.format(stops_geocoded[0]['x'],stops_geocoded[0]['y'],\n stops_geocoded[1]['x'],stops_geocoded[1]['y'])\n\n route_result = route_service.solve(stops_geocoded2, return_routes=True, \n return_stops=False, return_directions=True,\n impedance_attribute_name='TravelTime',\n start_time=644511600000,\n return_barriers=False, return_polygon_barriers=False,\n return_polyline_barriers=False)\n route_length.append(route_result['directions'][0]['summary']['totalLength'])\n route_duration.append(route_result['directions'][0]['summary']['totalTime'])\n print(\"Route\")", "1: 18517652 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n2: 18465613 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n3: 18005102 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n4: 18216924 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n5: 18647164 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n6: 18229660 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n7: 18586790 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : 
Route\n8: 18314898 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n9: 18085278 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n10: 18406186 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n11: 18020972 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n12: 18283940 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n13: 18166839 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n14: 18390189 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n15: 18281614 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n16: 18159838 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n17: 18697929 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n18: 18184111 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n19: 18381383 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n20: 18036264 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n21: 18352241 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n22: 18415529 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n23: 18052500 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n24: 18615156 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n25: 18663618 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n26: 18524346 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n27: 18287995 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n28: 18496603 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n29: 18306005 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n30: 18017989 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n31: 18630734 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n32: 18362577 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n33: 18185462 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n34: 18525026 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n35: 18268485 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n36: 18077776 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n37: 18404645 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n38: 18543295 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n39: 18268007 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n40: 18565385 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n41: 18327093 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n42: 18300182 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n43: 18130222 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n44: 18390288 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : 
Route\n45: 18244519 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n46: 18533145 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n47: 18317832 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n48: 18625148 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n49: 18046552 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n50: 18613304 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n51: 18035240 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n52: 18170798 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n53: 18479918 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n54: 18134679 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n55: 18175979 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n56: 18489535 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n57: 18136667 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n58: 18330192 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n59: 18035177 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n60: 18117942 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n61: 18467170 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n62: 18021854 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n63: 18453150 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n64: 17228335 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n65: 18385412 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n66: 18303159 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n67: 18260962 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n68: 18077788 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n69: 1443106 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n70: 18363380 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n71: 18038013 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n72: 18592639 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n73: 18031108 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n74: 18529842 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n75: 18346063 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n76: 18058515 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n77: 18312442 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n78: 18062133 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n79: 18292112 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n80: 18021426 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n81: 18243995 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : 
Route\n82: 18225445 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n83: 18294835 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n84: 18036942 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n85: 18164206 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n86: 18112429 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n87: 18401204 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n88: 18185257 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n89: 18056844 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n90: 18158997 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n91: 18089346 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n92: 18432056 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n93: 18248238 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n94: 18042235 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n95: 18376736 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n96: 17468803 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n97: 18002685 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n98: 18609084 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n99: 18115705 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n100: 18197366 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n101: 18367807 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n102: 18289907 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n103: 1456193 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n104: 18047073 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n105: 18379344 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n106: 18487228 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n107: 18123653 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n108: 18310961 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n109: 18671543 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n110: 18384430 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n111: 18292179 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n112: 18545728 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n113: 18317933 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n114: 18131614 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n115: 18359207 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n116: 18188909 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n117: 18257295 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n118: 18454489 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel 
: Parks : Edu : Route\n119: 18644320 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n120: 18103930 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n121: 18213089 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n122: 18436725 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n123: 18217493 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n124: 18534351 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n125: 18391773 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n126: 17138243 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n127: 18580547 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n128: 18385584 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n129: 1516748 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n130: 18108097 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n131: 18343244 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n132: 18518261 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n133: 18496090 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n134: 18084407 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n135: 18622197 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n136: 18590953 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n137: 18548575 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n138: 18361209 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n139: 18190102 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n140: 18306032 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n141: 18546456 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n142: 17218625 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n143: 18360477 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n144: 18447320 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n145: 18127206 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n146: 18489822 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n147: 18218285 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n148: 18647011 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n149: 18193927 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n150: 18595550 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n151: 18317847 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n152: 18108604 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n153: 18410442 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n154: 17515218 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n155: 18578497 : Groc : Rest : Hosp : 
Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n156: 18293042 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n157: 18401105 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n158: 18095097 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n159: 18030852 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n160: 18299685 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n161: 18583483 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n162: 18171268 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n163: 18053604 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n164: 18622831 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n165: 18586722 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n166: 18467321 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n167: 18383920 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n168: 18599482 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n169: 18500611 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n170: 18345364 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n171: 18116411 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route\n" ] ], [ [ "## Feature engineer with access to amenities", "_____no_output_____" ] ], [ [ "prop_list_df['grocery_count'] = groceries_count\nprop_list_df['restaurant_count']= restaurants_count\nprop_list_df['hospitals_count']= hospitals_count\nprop_list_df['coffee_count']= coffee_count\nprop_list_df['bars_count']=bars_count\nprop_list_df['gas_count']=gas_count\nprop_list_df['shops_count']=shops_service_count\nprop_list_df['travel_count']=travel_transport_count\nprop_list_df['parks_count']=parks_count\nprop_list_df['edu_count']=education_count\nprop_list_df['commute_length']=route_length\nprop_list_df['commute_duration']=route_duration\nprop_list_df.head()", "_____no_output_____" ] ], [ [ "### Plot the distribution of facility access", "_____no_output_____" ] ], [ [ "prop_list_df.columns", "_____no_output_____" ], [ "facility_list = ['grocery_count', 'restaurant_count', 'hospitals_count', 'coffee_count',\n 'bars_count', 'gas_count', 'shops_count', 'travel_count', 'parks_count',\n 'edu_count', 'commute_length', 'commute_duration']\n\naxes = prop_list_df[facility_list].hist(bins=25, layout=(3,4), figsize=(15,10))", "_____no_output_____" ] ], [ [ "From the histograms above, most houses don't have very many bars in 5 miles around them. The commute length and duration appears to be tightly clustered around the lower end of the spectrum. Most houses have at least 1 hospital or medical center near them and a large number of parks, restaurants, educational institutions.", "_____no_output_____" ], [ "### Store to disk", "_____no_output_____" ] ], [ [ "prop_list_df.to_csv('resources/houses_facility_counts.csv')\nprop_list_df.spatial.to_featureclass('resources/shp/houses_facility_counts.shp')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e762a616ab4b83b191cf0b94a5497f33274edd6c
649,393
ipynb
Jupyter Notebook
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
aac03eaec3b769613b4692a99a0adc27564424b9
[ "Apache-2.0" ]
null
null
null
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
aac03eaec3b769613b4692a99a0adc27564424b9
[ "Apache-2.0" ]
null
null
null
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
aac03eaec3b769613b4692a99a0adc27564424b9
[ "Apache-2.0" ]
null
null
null
210.431951
215,562
0.830774
[ [ [ "<a href=\"https://colab.research.google.com/github/Ishitha2003/-fmml20211052/blob/main/module1_lab_2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# **The aim of this lab is to introduce DATA and FEATURES.**", "_____no_output_____" ], [ "Extracting features from data \nFMML Module 1, Lab 2 \nModule Coordinator : [email protected]", "_____no_output_____" ] ], [ [ "! pip install wikipedia\n\nimport wikipedia\nimport nltk\nfrom nltk.util import ngrams\nfrom collections import Counter\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport re\nimport unicodedata\nimport plotly.express as px\nimport pandas as pd", "Collecting wikipedia\n Downloading wikipedia-1.4.0.tar.gz (27 kB)\nRequirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.7/dist-packages (from wikipedia) (4.6.3)\nRequirement already satisfied: requests<3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from wikipedia) (2.23.0)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (2021.10.8)\nBuilding wheels for collected packages: wikipedia\n Building wheel for wikipedia (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for wikipedia: filename=wikipedia-1.4.0-py3-none-any.whl size=11695 sha256=1bd6bc02a88d2783cdab8fa979ba87dd77c590685818e4995e42f62ac8980708\n Stored in directory: /root/.cache/pip/wheels/15/93/6d/5b2c68b8a64c7a7a04947b4ed6d89fb557dcc6bc27d1d7f3ba\nSuccessfully built wikipedia\nInstalling collected packages: wikipedia\nSuccessfully installed wikipedia-1.4.0\n" ] ], [ [ "# **What are features?**\n\nfeatures are individual independent variables that act like a input to your system.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nfrom matplotlib import cm\nimport numpy as np\n\nfrom mpl_toolkits.mplot3d.axes3d import get_test_data\n\n \n# set up a figure twice as wide as it is tall\nfig = plt.figure(figsize=plt.figaspect(0.9))\n\n# =============\n# First subplot\n# =============\n# set up the axes for the first plot\nax = fig.add_subplot(1, 2, 1, projection='3d')\n\n# plot a 3D surface like in the example mplot3d/surface3d_demo\nX = np.arange(-5, 5, 0.25) # feature 1\nY = np.arange(-5, 5, 0.25) # feature 2\nX, Y = np.meshgrid(X, Y)\nR = np.sqrt(X**2 + Y**2)\nZ = np.sin(R) #output\nsurf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0.4, antialiased=False)\nax.set_zlim(-1.01, 1.01)\nfig.colorbar(surf, shrink=0.5, aspect=10)", "_____no_output_____" ] ], [ [ "# **Part 2: Features of text**\n\nHow do we apply machine learning on text? We can't directly use the text as input to our algorithms. We need to convert them to features. 
In this notebook, we will explore a simple way of converting text to features.\n\nLet us download a few documents off Wikipedia.", "_____no_output_____" ] ], [ [ "topic1 = 'Giraffe'\ntopic2 = 'Elephant'\nwikipedia.set_lang('en') \neng1 = wikipedia.page(topic1).content\neng2 = wikipedia.page(topic2).content\nwikipedia.set_lang('fr')\nfr1 = wikipedia.page(topic1).content\nfr2 = wikipedia.page(topic2).content\n", "_____no_output_____" ], [ "fr2", "_____no_output_____" ] ], [ [ "We need to clean this up a bit. Let us remove all the special characters and keep only 26 letters and space. Note that this will remove accented characters in French also. We are also removing all the numbers and spaces. So this is not an ideal solution.", "_____no_output_____" ] ], [ [ "def cleanup(text):\n text = text.lower() # make it lowercase\n text = re.sub('[^a-z]+', '', text) # only keep characters\n return text", "_____no_output_____" ], [ "print(eng1)", "The giraffe is a tall African mammal belonging to the genus Giraffa. Specifically, It is an even-toed ungulate. It is the tallest living terrestrial animal and the largest ruminant on Earth. Traditionally, giraffes were thought to be one species, Giraffa camelopardalis, with nine subspecies. Most recently, researchers proposed dividing giraffes into up to eight extant species due to new research into their mitochondrial and nuclear DNA, as well as morphological measurements. Seven other extinct species of Giraffa are known from the fossil record.\nThe giraffe's chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its spotted coat patterns. It is classified under the family Giraffidae, along with its closest extant relative, the okapi. Its scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannahs and woodlands. Their food source is leaves, fruits, and flowers of woody plants, primarily acacia species, which they browse at heights most other herbivores cannot reach.\nLions, leopards, spotted hyenas, and African wild dogs may prey upon giraffes. Giraffes live in herds of related females and their offspring, or bachelor herds of unrelated adult males, but are gregarious and may gather in large aggregations. Males establish social hierarchies through \"necking,” which are combat bouts where the neck is used as a weapon. Dominant males gain mating access to females, which bear the sole responsibility for raising the young.\nThe giraffe has intrigued various ancient and modern cultures for its peculiar appearance, and has often been featured in paintings, books, and cartoons. It is classified by the International Union for Conservation of Nature (IUCN) as vulnerable to extinction and has been extirpated from many parts of its former range. Giraffes are still found in numerous national parks and game reserves, but estimates as of 2016 indicate there are approximately 97,500 members of Giraffa in the wild. More than 1,600 were kept in zoos in 2010.\n\n\n== Etymology ==\nThe name \"giraffe\" has its earliest known origins in the Arabic word zarāfah (زرافة), perhaps borrowed from the animal's Somali name geri. The Arab name is translated as \"fast-walker\". In early Modern English the spellings jarraf and ziraph were used, probably directly from the Arabic, and in Middle English orafle and gyrfaunt, gerfaunt. The Italian form giraffa arose in the 1590s. 
The modern English form developed around 1600 from the French girafe.\"Camelopard\" is an archaic English name for the giraffe; it derives from the Ancient Greek καμηλοπάρδαλις (kamēlopárdalis), from κάμηλος (kámēlos), \"camel\", and πάρδαλις (párdalis), \"leopard\", referring to its camel-like shape and leopard-like colouration.\n\n\n== Taxonomy ==\nCarl Linnaeus originally classified living giraffes as one species in 1758. He gave it the binomial name Cervus camelopardalis. Morten Thrane Brünnich classified the genus Giraffa in 1762. The species name camelopardalis is from Latin.\n\n\n=== Evolution ===\nThe giraffe is one of only two living genera of the family Giraffidae in the order Artiodactyla, the other being the okapi. The family was once much more extensive, with over 10 fossil genera described. The elongation of the neck appears to have started early in the giraffe lineage. Comparisons between giraffes and their ancient relatives suggest vertebrae close to the skull lengthened earlier, followed by lengthening of vertebrae further down. One early giraffid ancestor was Canthumeryx which has been dated variously to have lived 25–20 million years ago (mya), 17–15 mya or 18–14.3 mya and whose deposits have been found in Libya. This animal was medium-sized, slender and antelope-like. Giraffokeryx appeared 15 mya on the Indian subcontinent and resembled an okapi or a small giraffe, and had a longer neck and similar ossicones. Giraffokeryx may have shared a clade with more massively built giraffids like Sivatherium and Bramatherium.Giraffids like Palaeotragus, Shansitherium and Samotherium appeared 14 mya and lived throughout Africa and Eurasia. These animals had bare ossicones and small cranial sinuses and were longer with broader skulls. Paleotragus resembled the okapi and may have been its ancestor. Others find that the okapi lineage diverged earlier, before Giraffokeryx. Samotherium was a particularly important transitional fossil in the giraffe lineage, as its cervical vertebrae were intermediate in length and structure between a modern giraffe and an okapi, and were more vertical than the okapi's. Bohlinia, which first appeared in southeastern Europe and lived 9–7 mya was likely a direct ancestor of the giraffe. Bohlinia closely resembled modern giraffes, having a long neck and legs and similar ossicones and dentition.\n\nBohlinia entered China and northern India in response to climate change. From there, the genus Giraffa evolved and, around 7 mya, entered Africa. Further climate changes caused the extinction of the Asian giraffes, while the African giraffes survived and radiated into several new species. Living giraffes appear to have arisen around 1 mya in eastern Africa during the Pleistocene. Some biologists suggest the modern giraffes descended from G. jumae; others find G. gracilis a more likely candidate. G. jumae was larger and more heavily built, while G. gracilis was smaller and more lightly built.The changes from extensive forests to more open habitats, which began 8 mya, are believed to be the main driver for the evolution of giraffes. During this time, tropical plants disappeared and were replaced by arid C4 plants, and a dry savannah emerged across eastern and northern Africa and western India. Some researchers have hypothesised that this new habitat coupled with a different diet, including acacia species, may have exposed giraffe ancestors to toxins that caused higher mutation rates and a higher rate of evolution. 
The coat patterns of modern giraffes may also have coincided with these habitat changes. Asian giraffes are hypothesised to have had more okapi-like colourations.The giraffe genome is around 2.9 billion base pairs in length compared to the 3.3 billion base pairs of the okapi. Of the proteins in giraffe and okapi genes, 19.4% are identical. The divergence of giraffe and okapi lineages dates to around 11.5 mya. A small group of regulatory genes in the giraffe appear to be responsible for the animal's stature and associated circulatory adaptations.\n\n\n=== Species and subspecies ===\n\nThe International Union for Conservation of Nature (IUCN) currently recognises only one species of giraffe with nine subspecies. During the 1900s, various taxonomies with two or three species were proposed. A 2007 study on the genetics of giraffes using mitochondrial DNA suggested at least six lineages could be recognised as species. A 2011 study using detailed analyses of the morphology of giraffes, and application of the phylogenetic species concept, described eight species of living giraffes. A 2016 study also concluded that living giraffes consist of multiple species. The researchers suggested the existence of four species, which have not exchanged genetic information between each other for 1 to 2 million years.A 2020 study showed that depending on the method chosen, different taxonomic hypotheses recognizing from two to six species can be considered for the genus Giraffa. That study also found that multi-species coalescent methods can lead to taxonomic over-splitting, as those methods delimit geographic structures rather than species. The three-species hypothesis, which recognises G. camelopardalis, G. giraffa, and G. tippelskirchi, is highly supported by phylogenetic analyses and also corroborated by most population genetic and multi-species coalescent analyses. A 2021 whole genome sequencing study suggests the existence of four distinct species and seven subspecies.The cladogram below shows the phylogenetic relationship between the four proposed species and seven subspecies based on the genome analysis. Note the eight lineages correspond to eight of the traditional subspecies in the one species hypothesis. The Rothschild giraffe is subsumed into G. camelopardalis camelopardalis.\n\nThe following table compares the different hypotheses for giraffe species. The description column shows the traditional nine subspecies in the one species hypothesis.\nThe first extinct species to be described was Giraffa sivalensis Falconer and Cautley 1843, a reevaluation of a vertebra that was initially described as a fossil of the living giraffe. While taxonomic opinion may be lacking on some names, the extinct species that have been published include:\nGiraffa gracilis\nGiraffa jumae\nGiraffa priscilla\nGiraffa pomeli\nGiraffa punjabiensis\nGiraffa pygmaea\nGiraffa sivalensis\nGiraffa stillei\n\n\n== Appearance and anatomy ==\n\nFully grown giraffes stand 4.3–5.7 m (14.1–18.7 ft) tall, with males taller than females. The average weight is 1,192 kg (2,628 lb) for an adult male and 828 kg (1,825 lb) for an adult female. 
Despite its long neck and legs, the giraffe's body is relatively short.: 66  The skin of a giraffe is mostly gray, or tan, and can reach a thickness of 20 mm (0.79 in).: 87  The 80–100 centimetres (31–39 in) long tail ends in a long, dark tuft of hair and is used as a defense against insects.: 94 The coat has dark blotches or patches, which can be orange, chestnut, brown, or nearly black, separated by light hair, usually white or cream coloured. Male giraffes become darker as they age. The coat pattern has been claimed to serve as camouflage in the light and shade patterns of savannah woodlands. When standing among trees and bushes, they are hard to see at even a few metres distance. However, adult giraffes move about to gain the best view of an approaching predator, relying on their size and ability to defend themselves rather than on camouflage, which may be more important for calves. Each individual giraffe has a unique coat pattern. Giraffe calves inherit some coat pattern traits from their mothers, and variation in some spot traits is correlated with neonatal survival. The skin underneath the blotches may serve as windows for thermoregulation, being sites for complex blood vessel systems and large sweat glands.The fur may give the animal chemical defense, as its parasite repellents give it a characteristic scent. At least 11 main aromatic chemicals are in the fur, although indole and 3-methylindole are responsible for most of the smell. Because the males have a stronger odour than the females, the odour may also have sexual function.\n\n\n=== Head ===\n\nBoth sexes have prominent horn-like structures called ossicones, formed from ossified cartilage, covered in skin and fused to the skull at the parietal bones. Being vascularised, the ossicones may have a role in thermoregulation, and are used in combat between males. Appearance is a reliable guide to the sex or age of a giraffe: the ossicones of females and young are thin and display tufts of hair on top, whereas those of adult males end in knobs and tend to be bald on top. Also, a median lump, which is more prominent in males, emerges at the front of the skull. Males develop calcium deposits that form bumps on their skulls as they age. Multiple sinuses lighten a giraffe's skull.: 103  However, as males age, their skulls become heavier and more club-like, helping them become more dominant in combat. The occipital condyles of the skull allow the animal to tilt its head straight up and grab food on the branches above with the tongue.: 103, 110 Located on both sides of the head, the giraffe's eyes give it good eyesight and a wide field of vision from its great height.: 85, 102  The eye is larger than in other ungulates, with a greater retinal surface area. Giraffes possibly see in colour: 85  and their senses of hearing and smell are sharp. The ears are movable: 95  and the nostrils are slit-shaped, which may be an adaptation against blowing sand.The giraffe's prehensile tongue is about 45 cm (18 in) long. It is black, perhaps to protect against sunburn and is useful for grasping foliage, and delicately removing leaves from branches.: 109–110  The giraffe's upper lip is prehensile and useful when foraging, and is covered in hair to protect against thorns. Papillae cover the tongue and the inside of the mouth. The upper jaw has a hard palate and lacks front teeth. 
The molars and premolars have a low-crowned, broad surface with an almost square cross-section.: 106 \n\n\n=== Legs, locomotion and posture ===\n\nA giraffe's front and back legs are about the same length. The radius and ulna of the front legs are articulated by the carpus, which, while structurally equivalent to the human wrist, functions as a knee. It appears that a suspensory ligament allows the lanky legs to support the animal's great weight. The hooves of large male giraffes reach a diameter of 31 cm × 23 cm (12.2 in × 9.1 in).: 98  The rear of each hoof is low, and the fetlock is close to the ground, allowing the foot to provide additional support for the animal's weight. Giraffes lack dewclaws and interdigital glands. The giraffe's pelvis, though relatively short, has an ilium that is outspread at the upper ends.A giraffe has only two gaits: walking and galloping. Walking is done by moving the legs on one side of the body, then doing the same on the other side. When galloping, the hind legs move around the front legs before the latter move forward, and the tail will curl up. The animal relies on the forward and backward motions of its head and neck to maintain balance and the counter momentum while galloping.: 327–29  The giraffe can reach a sprint speed of up to 60 km/h (37 mph), and can sustain 50 km/h (31 mph) for several kilometres. Giraffes would probably not be competent swimmers as their long legs would be highly cumbersome in the water, although they could possibly float. When swimming, the thorax would be weighed down by the front legs, making it difficult for the animal to move its neck and legs in harmony or keep its head above the water's surface.A giraffe rests by lying with its body on top of its folded legs.: 329  To lie down, the animal kneels on its front legs and then lowers the rest of its body. To get back up, it first gets on its front knees and shifts hindquarters onto its back feet. It then moves from kneeling to standing on its front legs and pulls the rest of its body upwards, swinging its head for balance.: 67  If the giraffe wants to bend down to drink, it either spreads its front legs or bends its knees. Studies in captivity found the giraffe sleeps intermittently around 4.6 hours per day, mostly at night. It usually sleeps lying down; however, standing sleeps have been recorded, particularly in older individuals. Intermittent short \"deep sleep\" phases while lying are characterised by the giraffe bending its neck backwards and resting its head on the hip or thigh, a position believed to indicate paradoxical sleep.\n\n\n=== Neck ===\nThe giraffe has an extremely elongated neck, which can be up to 2.4 m (7.9 ft) in length. Along the neck is a mane made of short, erect hairs. The neck typically rests at an angle of 50–60 degrees, though juveniles have straighter necks and rest at 70 degrees.: 94  The long neck results from a disproportionate lengthening of the cervical vertebrae, not from the addition of more vertebrae. Each cervical vertebra is over 28 cm (11 in) long.: 71  They comprise 52–54 per cent of the length of the giraffe's vertebral column, compared with the 27–33 percent typical of similar large ungulates, including the giraffe's closest living relative, the okapi. This elongation largely takes place after birth, perhaps because giraffe mothers would have a difficult time giving birth to young with the same neck proportions as adults. 
The giraffe's head and neck are held up by large muscles and a strengthened nuchal ligament, which are anchored by long dorsal spines on the anterior thoracic vertebrae, giving the animal a hump.\n\nThe giraffe's neck vertebrae have ball and socket joints.: 71  The point of articulation between the cervical and thoracic vertebrae of giraffes is shifted to lie between the first and second thoracic vertebrae (T1 and T2), unlike most other ruminants where the articulation is between the seventh cervical vertebra (C7) and T1. This allows C7 to contribute directly to increased neck length and has given rise to the suggestion that T1 is actually C8, and that giraffes have added an extra cervical vertebra. However, this proposition is not generally accepted, as T1 has other morphological features, such as an articulating rib, deemed diagnostic of thoracic vertebrae, and because exceptions to the mammalian limit of seven cervical vertebrae are generally characterised by increased neurological anomalies and maladies.There are several hypotheses regarding the evolutionary origin and maintenance of elongation in giraffe necks. Charles Darwin originally suggested the \"competing browsers hypothesis\", which has been challenged only recently. It suggests that competitive pressure from smaller browsers, like kudu, steenbok and impala, encouraged the elongation of the neck, as it enabled giraffes to reach food that competitors could not. This advantage is real, as giraffes can and do feed up to 4.5 m (15 ft) high, while even quite large competitors, such as kudu, can feed up to only about 2 m (6 ft 7 in) high. There is also research suggesting that browsing competition is intense at lower levels, and giraffes feed more efficiently (gaining more leaf biomass with each mouthful) high in the canopy. However, scientists disagree about just how much time giraffes spend feeding at levels beyond the reach of other browsers,\nand a 2010 study found that adult giraffes with longer necks actually suffered higher mortality rates under drought conditions than their shorter-necked counterparts. This study suggests that maintaining a longer neck requires more nutrients, which puts longer-necked giraffes at risk during a food shortage.Another theory, the sexual selection hypothesis, proposes the long necks evolved as a secondary sexual characteristic, giving males an advantage in \"necking\" contests (see below) to establish dominance and obtain access to sexually receptive females. In support of this theory, necks are longer and heavier for males than females of the same age, and males do not employ other forms of combat. However, one objection is it fails to explain why female giraffes also have long necks. It has also been proposed that the neck serves to give the animal greater vigilance.\n\n\n=== Internal systems ===\n\nIn mammals, the left recurrent laryngeal nerve is longer than the right; in the giraffe, it is over 30 cm (12 in) longer. These nerves are longer in the giraffe than in any other living animal; the left nerve is over 2 m (6 ft 7 in) long. Each nerve cell in this path begins in the brainstem and passes down the neck along the vagus nerve, then branches off into the recurrent laryngeal nerve which passes back up the neck to the larynx. Thus, these nerve cells have a length of nearly 5 m (16 ft) in the largest giraffes. Despite its long neck and large skull, the brain of the giraffe is typical for an ungulate. Evaporative heat loss in the nasal passages keep the giraffe's brain cool. 
The shape of the skeleton gives the giraffe a small lung volume relative to its mass. Its long neck gives it a large amount of dead space, in spite of its narrow windpipe. The giraffe also has an increased level of tidal volume so the ratio of dead space to tidal volume is similar to other mammals. The animal can still supply enough oxygen to its tissues, and it can increase its respiratory rate and oxygen diffusion when running.\n\nThe circulatory system of the giraffe has several adaptations for its great height. Its heart, which can weigh more than 11 kg (25 lb) and measures about 60 cm (2 ft) long, must generate approximately double the blood pressure required for a human to maintain blood flow to the brain. As such, the wall of the heart can be as thick as 7.5 cm (3.0 in). Giraffes have unusually high heart rates for their size, at 150 beats per minute.: 76  When the animal lowers its head, the blood rushes down fairly unopposed and a rete mirabile in the upper neck, with its large cross-sectional area, prevents excess blood flow to the brain. When it raises again, the blood vessels constrict and direct blood into the brain so the animal does not faint. The jugular veins contain several (most commonly seven) valves to prevent blood flowing back into the head from the inferior vena cava and right atrium while the head is lowered. Conversely, the blood vessels in the lower legs are under great pressure because of the weight of fluid pressing down on them. To solve this problem, the skin of the lower legs is thick and tight, preventing too much blood from pouring into them.Giraffes have oesophageal muscles that are unusually strong to allow regurgitation of food from the stomach up the neck and into the mouth for rumination.: 78  They have four chambered stomachs, as in all ruminants; the first chamber has adapted to their specialized diet. The intestines of an adult giraffe measure more than 70 m (230 ft) in length and have a relatively small ratio of small to large intestine. The liver of the giraffe is small and compact.: 76  A gallbladder is generally present during fetal life, but it may disappear before birth.\n\n\n== Behaviour and ecology ==\n\n\n=== Habitat and feeding ===\n\nGiraffes usually inhabit savannahs and open woodlands. They prefer Acacieae, Commiphora, Combretum and open Terminalia woodlands over denser environments like Brachystegia woodlands.: 322  The Angolan giraffe can be found in desert environments. Giraffes browse on the twigs of trees, preferring those of the subfamily Acacieae and the genera Commiphora and Terminalia, which are important sources of calcium and protein to sustain the giraffe's growth rate. They also feed on shrubs, grass and fruit.: 324  A giraffe eats around 34 kg (75 lb) of foliage daily. When stressed, giraffes may chew the bark off branches.: 325  Giraffes are also recorded to chew old bones.: 102 During the wet season, food is abundant and giraffes are more spread out, while during the dry season, they gather around the remaining evergreen trees and bushes. Mothers tend to feed in open areas, presumably to make it easier to detect predators, although this may reduce their feeding efficiency. As a ruminant, the giraffe first chews its food, then swallows it for processing and then visibly passes the half-digested cud up the neck and back into the mouth to chew again.: 78–79  The giraffe requires less food than many other herbivores because the foliage it eats has more concentrated nutrients and it has a more efficient digestive system. 
The animal's faeces come in the form of small pellets. When it has access to water, a giraffe drinks at intervals no longer than three days.Giraffes have a great effect on the trees that they feed on, delaying the growth of young trees for some years and giving \"waistlines\" to too tall trees. Feeding is at its highest during the first and last hours of daytime. Between these hours, giraffes mostly stand and ruminate. Rumination is the dominant activity during the night, when it is mostly done lying down.\n\n\n=== Social life ===\n\nGiraffes are usually found in groups that vary in size and composition according to ecological, anthropogenic, temporal, and social factors. Traditionally, the composition of these groups had been described as open and ever-changing. For research purposes, a \"group\" has been defined as \"a collection of individuals that are less than a kilometre apart and moving in the same general direction\". More recent studies have found that giraffes have long-term social associations and may form groups or pairs based on kinship, sex or other factors, and these groups regularly associate with other groups in larger communities or sub-communities within a fission–fusion society. Proximity to humans can disrupt social arrangements. Masai giraffes of Tanzania live in distinct social subpopulations that overlap spatially, but have different reproductive rates and calf survival rates.\nThe number of giraffes in a group can range from one up to 66 individuals. Giraffe groups tend to be sex-segregated although mixed-sex groups made of adult females and young males also occur. Female groups may be matrilineally related. Generally females are more selective than males in who they associate with regarding individuals of the same sex. Particularly stable giraffe groups are those made of mothers and their young, which can last weeks or months. Young males also form groups and will engage in playfights. However, as they get older, males become more solitary but may also associate in pairs or with female groups. Giraffes are not territorial, but they have home ranges that vary according to rainfall and proximity to human settlements. Male giraffes occasionally wander far from areas that they normally frequent.: 329 Early biologists suggested giraffes were mute and unable to produce air flow of sufficient velocity to vibrate their vocal folds. To the contrary; they have been recorded to communicate using snorts, sneezes, coughs, snores, hisses, bursts, moans, grunts, growls and flute-like sounds. During courtship, males emit loud coughs. Females call their young by bellowing. Calves will emit snorts, bleats, mooing and mewing sounds. Snorting and hissing in adults is associated with vigilance. During nighttime, giraffes appear to hum to each other above the infrasound range. The purpose is unclear. Dominant males display to other males with an erect posture; holding the chin and head high while walking stiffly and approaching them laterally. The less dominant show submissiveness by lowing the head and ears with the chin moved in and then jump and flee.\n\n\n=== Reproduction and parental care ===\n\nReproduction in giraffes is broadly polygamous: a few older males mate with the fertile females. Females can reproduce throughout the year and experience oestrus cycling approximately every 15 days. 
Female giraffes in oestrous are dispersed over space and time, so reproductive adult males adopt a strategy of roaming among female groups to seek mating opportunities, with periodic hormone-induced rutting behaviour approximately every two weeks. Males prefer young adult females over juveniles and older adults.Male giraffes assess female fertility by tasting the female's urine to detect oestrus, in a multi-step process known as the flehmen response. Once an oestrous female is detected, the male will attempt to court her. When courting, dominant males will keep subordinate ones at bay. A courting male may lick a female's tail, rest his head and neck on her body or nudge her with his ossicones. During copulation, the male stands on his hind legs with his head held up and his front legs resting on the female's sides.Giraffe gestation lasts 400–460 days, after which a single calf is normally born, although twins occur on rare occasions. The mother gives birth standing up. The calf emerges head and front legs first, having broken through the fetal membranes, and falls to the ground, severing the umbilical cord. A newborn giraffe is 1.7–2 m (5.6–6.6 ft) tall. Within a few hours of birth, the calf can run around and is almost indistinguishable from a one-week-old. However, for the first one to three weeks, it spends most of its time hiding; its coat pattern providing camouflage. The ossicones, which have lain flat while it was in the womb, become erect within a few days.\n\nMothers with calves will gather in nursery herds, moving or browsing together. Mothers in such a group may sometimes leave their calves with one female while they forage and drink elsewhere. This is known as a \"calving pool\". Adult males play almost no role in raising the young,: 337  although they appear to have friendly interactions. Calves are at risk of predation, and a mother giraffe will stand over her calf and kick at an approaching predator. Females watching calving pools will only alert their own young if they detect a disturbance, although the others will take notice and follow. Calves may be weaned at six to eight months old but can remain with their mothers for up to 14 months.: 49  Females become sexually mature when they are four years old, while males become mature at four or five years. Spermatogenesis in male giraffes begins at three to four years of age. Males must wait until they are at least seven years old to gain the opportunity to mate.\n\n\n=== Necking ===\n\nMale giraffes use their necks as weapons in combat, a behaviour known as \"necking\". Necking is used to establish dominance and males that win necking bouts have greater reproductive success. This behaviour occurs at low or high intensity. In low-intensity necking, the combatants rub and lean against each other. The male that can hold itself more erect wins the bout. In high-intensity necking, the combatants will spread their front legs and swing their necks at each other, attempting to land blows with their ossicones. The contestants will try to dodge each other's blows and then get ready to counter. The power of a blow depends on the weight of the skull and the arc of the swing. A necking duel can last more than half an hour, depending on how well matched the combatants are.: 331  Although most fights do not lead to serious injury, there have been records of broken jaws, broken necks, and even deaths.After a duel, it is common for two male giraffes to caress and court each other. 
Such interactions between males have been found to be more frequent than heterosexual coupling. In one study, up to 94 percent of observed mounting incidents took place between males. The proportion of same-sex activities varied from 30 to 75 percent. Only one percent of same-sex mounting incidents occurred between females.\n\n\n=== Mortality and health ===\n\nGiraffes have high adult survival probability, and an unusually long lifespan compared to other ruminants, up to 38 years. Because of their size, eyesight and powerful kicks, adult giraffes are usually not subject to predation, although lions may regularly prey on individuals up to 550 kg (1,210 lb). Giraffes are the most common food source for the big cats in Kruger National Park, comprising nearly a third of the meat consumed, although only a small portion of the giraffes were probably killed by predators, as a majority of the consumed giraffes appeared to be scavenged. Adult female survival is significantly correlated with gregariousness, the average number of other females she is seen associating with. Calves are much more vulnerable than adults and are also preyed on by leopards, spotted hyenas and wild dogs. A quarter to a half of giraffe calves reach adulthood. Calf survival varies according to the season of birth, with calves born during the dry season having higher survival rates.The local, seasonal presence of large herds of migratory wildebeests and zebras reduces predation pressure on giraffe calves and increases their survival probability. In turn, it has been suggested that other ungulates may benefit from associating with giraffes, as their height allows them to spot predators from further away. Zebras were found to glean information on predation risk from giraffe body language and spend less time scanning the environment when giraffes are present.Some parasites feed on giraffes. They are often hosts for ticks, especially in the area around the genitals, which have thinner skin than other areas. Tick species that commonly feed on giraffes are those of genera Hyalomma, Amblyomma and Rhipicephalus. Giraffes may rely on red-billed and yellow-billed oxpeckers to clean them of ticks and alert them to danger. Giraffes host numerous species of internal parasites and are susceptible to various diseases. They were victims of the (now eradicated) viral illness rinderpest. Giraffes can also suffer from a skin disorder, which comes in the form of wrinkles, lesions or raw fissures. As much as 79% of giraffes show signs of the disease in Ruaha National Park, but it did not cause mortality in Tarangire and is less prevalent in areas with fertile soils.\n\n\n== Relationship with humans ==\n\n\n=== Cultural significance ===\nWith its lanky build and spotted coat, the giraffe has been a source of fascination throughout human history, and its image is widespread in culture. It has been used to symbolise flexibility, far-sightedness, femininity, fragility, passivity, grace, beauty and the continent of Africa itself.: 7, 116 \n\nGiraffes were depicted in art throughout the African continent, including that of the Kiffians, Egyptians, and Kushites.: 45–47  The Kiffians were responsible for a life-size rock engraving of two giraffes, dated 8,000 years ago, that has been called the \"world's largest rock art petroglyph\".: 45  How the giraffe got its height has been the subject of various African folktales. The Tugen people of modern Kenya used the giraffe to depict their god Mda. 
The Egyptians gave the giraffe its own hieroglyph, named 'sr' in Old Egyptian and 'mmy' in later periods.: 49 Giraffes have a presence in modern Western culture. Salvador Dalí depicted them with burning manes in some of his surrealist paintings. Dali considered the giraffe to be a symbol of masculinity, and a flaming giraffe was meant to be a \"masculine cosmic apocalyptic monster\".: 123  Several children's books feature the giraffe, including David A. Ufer's The Giraffe Who Was Afraid of Heights, Giles Andreae's Giraffes Can't Dance and Roald Dahl's The Giraffe and the Pelly and Me. Giraffes have appeared in animated films, as minor characters in Disney's The Lion King and Dumbo, and in more prominent roles in The Wild and the Madagascar films. Sophie the Giraffe has been a popular teether since 1961. Another famous fictional giraffe is the Toys \"R\" Us mascot Geoffrey the Giraffe.: 127 The giraffe has also been used for some scientific experiments and discoveries. Scientists have looked at the properties of giraffe skin when developing suits for astronauts and fighter pilots: 76  because the people in these professions are in danger of passing out if blood rushes to their legs. Computer scientists have modeled the coat patterns of several subspecies using reaction–diffusion mechanisms. The constellation of Camelopardalis, introduced in the seventeenth century, depicts a giraffe.: 119–20  The Tswana people of Botswana traditionally see the constellation Crux as two giraffes—Acrux and Mimosa forming a male, and Gacrux and Delta Crucis forming the female.\n\n\n=== Captivity ===\nThe Egyptians kept giraffes as pets and shipped them around the Mediterranean.: 48–49  The giraffe was among the many animals collected and displayed by the Romans. The first one in Rome was brought in by Julius Caesar in 46 BC and exhibited to the public.: 52  With the fall of the Western Roman Empire, the housing of giraffes in Europe declined.: 54  During the Middle Ages, giraffes were known to Europeans through contact with the Arabs, who revered the giraffe for its peculiar appearance.Individual captive giraffes were given celebrity status throughout history. In 1414, a giraffe was shipped from Malindi to Bengal. It was then taken to China by explorer Zheng He and placed in a Ming dynasty zoo. The animal was a source of fascination for the Chinese people, who associated it with the mythical Qilin.: 56  The Medici giraffe was a giraffe presented to Lorenzo de' Medici in 1486. It caused a great stir on its arrival in Florence. Zarafa, another famous giraffe, was brought from Egypt to Paris in the early 19th century as a gift from Muhammad Ali of Egypt to Charles X of France. A sensation, the giraffe was the subject of numerous memorabilia or \"giraffanalia\".: 81 Giraffes have become popular attractions in modern zoos, though keeping them healthy is difficult as they require wide areas and high amounts of browse for food. Captive giraffes in North America and Europe appear to have a higher mortality rate than in the wild; causes of death include poor husbandry, nutrition and management decisions.: 153  Giraffes in zoos display stereotypical behaviours, the most common being the licking of non-food items.: 164  Zookeepers may offer various activities to stimulate giraffes, including training them to accept food from visitors.: 175  Stables for giraffes are built particularly high to accommodate their height.: 183 \n\n\n=== Exploitation ===\nGiraffes were probably common targets for hunters throughout Africa. 
Different parts of their bodies were used for different purposes. Their meat was used for food. The tail hairs served as flyswatters, bracelets, necklaces, and thread. Shields, sandals, and drums were made using the skin, and the strings of musical instruments were from the tendons. The smoke from burning giraffe skins was used by the medicine men of Buganda to treat nose bleeds. The Humr people of Kordofan consume the drink Umm Nyolokh, which is prepared from the liver and bone marrow of giraffes. Richard Rudgley hypothesised that Umm Nyolokh might contain DMT. The drink is said to cause hallucinations of giraffes, believed to be the giraffes' ghosts, by the Humr.\n\n\n=== Conservation status ===\nIn 2016, giraffes were assessed as Vulnerable from a conservation perspective by the IUCN. In 1985, it was estimated there were 155,000 giraffes in the wild. This declined to over 140,000 in 1999. Estimates as of 2016 indicate there are approximately 97,500 members of Giraffa in the wild. The Masai and reticulated subspecies are endangered, and the Rothschild subspecies is near threatened. The Nubian subspecies is critically endangered.\n\nThe primary causes for giraffe population declines are habitat loss and direct killing for bushmeat markets. Giraffes have been extirpated from much of their historic range, including Eritrea, Guinea, Mauritania and Senegal. They may also have disappeared from Angola, Mali, and Nigeria, but have been introduced to Rwanda and Eswatini. As of 2010, there were more than 1,600 in captivity at Species360-registered zoos. Habitat destruction has hurt the giraffe. In the Sahel, the need for firewood and grazing room for livestock has led to deforestation. Normally, giraffes can coexist with livestock, since they do not directly compete with them. In 2017, severe droughts in northern Kenya led to increased tensions over land and the killing of wildlife by herders, with giraffe populations being particularly hit.Protected areas like national parks provide important habitat and anti-poaching protection to giraffe populations. Community-based conservation efforts outside national parks are also effective at protecting giraffes and their habitats. Private game reserves have contributed to the preservation of giraffe populations in southern Africa. The giraffe is a protected species in most of its range. It is the national animal of Tanzania, and is protected by law, and unauthorised killing can result in imprisonment. The UN backed Convention of Migratory Species selected giraffes for protection in 2017. In 2019, giraffes were listed under Appendix II of the Convention on International Trade in Endangered Species (CITES), which means international trade including in parts/derivatives is regulated.Translocations are sometimes used to augment or re-establish diminished or extirpated populations, but these activities are risky and difficult to undertake using the best practices of extensive pre- and post-translocation studies and ensuring a viable founding population. Aerial survey is the most common method of monitoring giraffe population trends in the vast roadless tracts of African landscapes, but aerial methods are known to undercount giraffes. Ground-based survey methods are more accurate and can be used in conjunction with aerial surveys to make accurate estimates of population sizes and trends. 
The Giraffe Conservation Foundation has been criticized for alleged mistreatment of giraffes and giraffe scientists.\n\n\n== See also ==\nFauna of Africa\nGiraffe Centre\nGiraffe Manor - hotel in Nairobi with giraffes\n\n\n== References ==\n\n\n== External links ==\nGiraffe Conservation Foundation\n" ] ], [ [ "Instead of directly using characters as the features, to understand a text better, we may consider group of tokens i.e. ngrams as features.\nfor this example let us consider that each character is one word, and let us see how n-grams work.\n", "_____no_output_____" ], [ "# **nltk library provides many tools for text processing, please explore them.**", "_____no_output_____" ], [ "Now let us calculate the frequency of the character n-grams. N-grams are groups of characters of size n. A unigram is a single character and a bigram is a group of two characters and so on.\n\nLet us count the frequency of each character in a text and plot it in a histogram.", "_____no_output_____" ] ], [ [ "# convert a tuple of characters to a string\ndef tuple2string(tup):\n st = ''\n for ii in tup:\n st = st + ii\n return st\n\n# convert a tuple of tuples to a list of strings\ndef key2string(keys):\n return [tuple2string(i) for i in keys]\n\n# plot the histogram\ndef plothistogram(ngram):\n keys = key2string(ngram.keys()) \n values = list(ngram.values())\n \n # sort the keys in alphabetic order\n combined = zip(keys, values)\n zipped_sorted = sorted(combined, key=lambda x: x[0])\n keys, values = map(list, zip(*zipped_sorted))\n plt.bar(keys, values)", "_____no_output_____" ] ], [ [ "Let us compare the histograms of English pages and French pages. Can you spot a difference?", "_____no_output_____" ] ], [ [ "## we passed ngrams 'n' as 1 to get unigrams. Unigram is nothing but single token (in this case character).\n\nunigram_eng1 = Counter(ngrams(eng1,1))\nplothistogram(unigram_eng1)\nplt.title('English 1')\nplt.show()\nunigram_eng2 = Counter(ngrams(eng2,1))\nplothistogram(unigram_eng2)\nplt.title('English 2')\nplt.show()", "_____no_output_____" ], [ "unigram_fr1 = Counter(ngrams(fr1,1))\nplothistogram(unigram_eng1)\nplt.title('French 1')\nplt.show()\nunigram_fr2 = Counter(ngrams(fr2,1))\nplothistogram(unigram_fr2)\nplt.title('French 2')\nplt.show()", "_____no_output_____" ] ], [ [ "A good feature is one that helps in easy prediction and classification.\nfor ex : if you wish to differentiate between grapes and apples, size can be one of the useful features.", "_____no_output_____" ], [ "We can see that the unigrams for French and English are very similar. So this is not a good feature if we want to distinguish between English and French. 
Let us look at bigrams.", "_____no_output_____" ] ], [ [ "## Now instead of unigram, we will use bigrams as features, and see how useful bigrams are as features.\n\nbigram_eng1 = Counter(ngrams(eng1,2)) # bigrams\nplothistogram(bigram_eng1)\nplt.title('English 1')\nplt.show()\n\nbigram_eng2 = Counter(ngrams(eng2,2))\nplothistogram(bigram_eng2)\nplt.title('English 2')\nplt.show()\n\nbigram_fr1 = Counter(ngrams(fr1,2))\nplothistogram(bigram_eng1)\nplt.title('French 1')\nplt.show()\n\nbigram_fr2 = Counter(ngrams(fr2,2))\nplothistogram(bigram_fr2)\nplt.title('French 2')\nplt.show()", "_____no_output_____" ] ], [ [ "Another way to visualize bigrams is to use a 2-dimensional graph.", "_____no_output_____" ] ], [ [ "## lets have a lot at bigrams.\n\nbigram_eng1", "_____no_output_____" ], [ "def plotbihistogram(ngram):\n freq = np.zeros((26,26))\n for ii in range(26):\n for jj in range(26):\n freq[ii,jj] = ngram[(chr(ord('a')+ii), chr(ord('a')+jj))]\n plt.imshow(freq, cmap = 'jet')\n return freq", "_____no_output_____" ], [ "bieng1 = plotbihistogram(bigram_eng1)\nplt.show()\nbieng2 = plotbihistogram(bigram_eng2)", "_____no_output_____" ], [ "bifr1 = plotbihistogram(bigram_fr1)\nplt.show()\nbifr2 = plotbihistogram(bigram_fr2)", "_____no_output_____" ] ], [ [ "Let us look at the top 10 ngrams for each text.", "_____no_output_____" ] ], [ [ "from IPython.core.debugger import set_trace\n\ndef ind2tup(ind):\n ind = int(ind)\n i = int(ind/26)\n j = int(ind%26)\n return (chr(ord('a')+i), chr(ord('a')+j))\n\ndef ShowTopN(bifreq, n=10):\n f = bifreq.flatten()\n arg = np.argsort(-f)\n for ii in range(n):\n print(f'{ind2tup(arg[ii])} : {f[arg[ii]]}')\n", "_____no_output_____" ], [ "print('\\nEnglish 1:')\nShowTopN(bieng1)\nprint('\\nEnglish 2:')\nShowTopN(bieng2)\nprint('\\nFrench 1:')\nShowTopN(bifr1)\nprint('\\nFrench 2:')\nShowTopN(bifr2)", "\nEnglish 1:\n('t', 'h') : 714.0\n('h', 'e') : 705.0\n('i', 'n') : 577.0\n('e', 's') : 546.0\n('a', 'n') : 541.0\n('e', 'r') : 457.0\n('r', 'e') : 445.0\n('r', 'a') : 418.0\n('a', 'l') : 407.0\n('n', 'd') : 379.0\n\nEnglish 2:\n('a', 'n') : 1344.0\n('t', 'h') : 1271.0\n('h', 'e') : 1163.0\n('i', 'n') : 946.0\n('e', 'r') : 744.0\n('l', 'e') : 707.0\n('r', 'e') : 704.0\n('n', 'd') : 670.0\n('n', 't') : 642.0\n('h', 'a') : 632.0\n\nFrench 1:\n('e', 's') : 546.0\n('l', 'e') : 337.0\n('d', 'e') : 328.0\n('e', 'n') : 315.0\n('o', 'n') : 306.0\n('n', 't') : 275.0\n('r', 'e') : 268.0\n('r', 'a') : 216.0\n('a', 'n') : 202.0\n('o', 'u') : 193.0\n\nFrench 2:\n('e', 's') : 920.0\n('n', 't') : 773.0\n('d', 'e') : 623.0\n('e', 'n') : 598.0\n('a', 'n') : 538.0\n('l', 'e') : 511.0\n('o', 'n') : 479.0\n('r', 'e') : 455.0\n('u', 'r') : 295.0\n('t', 'i') : 280.0\n" ] ], [ [ "## **At times, we need to reduce the number of features. We will discuss this more in the upcoming sessions, but a small example has been discussed here. Instead of using each unique token (a word) as a feature, we reduced the number of features by using 1-gram and 2-gram of characters as features.**", "_____no_output_____" ], [ "We observe that the bigrams are similar across different topics but different across languages. Thus, the bigram frequency is a good feature for distinguishing languages, but not for distinguishing topics.\n\nThus, we were able to convert a many-dimensional input (the text) to 26 dimesions (unigrams) or 26*26 dimensions (bigrams).\n\nA few ways to explore:\n\nTry with different languages.\nThe topics we used are quite similar, wikipedia articles of 'elephant' and 'giraffe'. 
What happens if we use very different topics? What if we use text from another source than Wikipedia?\nHow can we use and visualize trigrams and higher n-grams?", "_____no_output_____" ], [ "Features of Images.\nImages in digital format are stored as numeric values, and hence we can use these values as features. for ex : a black and white (binary) image is stored as an array of 0 and 255 or 0 and 1.", "_____no_output_____" ], [ "# **Part 2: Written numbers**\n\nWe will use a subset of the MNIST dataset. Each input character is represented in a 28*28 array. Let us see if we can extract some simple features from these images which can help us distinguish between the digits.\n\nLoad the dataset:", "_____no_output_____" ] ], [ [ "from keras.datasets import mnist\n \n#loading the dataset\n(train_X, train_y), (test_X, test_y) = mnist.load_data()", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\n11501568/11490434 [==============================] - 0s 0us/step\n" ] ], [ [ "Extract a subset of the data for our experiment:", "_____no_output_____" ] ], [ [ "no1 = train_X[train_y==1,:,:] ## dataset corresponding to number = 1.\nno0 = train_X[train_y==0,:,:] ## dataset corresponding to number = 0.", "_____no_output_____" ] ], [ [ "Let us visualize a few images here", "_____no_output_____" ] ], [ [ "for ii in range(5):\n plt.subplot(1, 5, ii+1)\n plt.imshow(no1[ii,:,:])\nplt.show()\nfor ii in range(5):\n plt.subplot(1, 5, ii+1)\n plt.imshow(no0[ii,:,:])\nplt.show()", "_____no_output_____" ] ], [ [ "We can even use value of each pixel as a feature. But let us see how to derive other features.\n", "_____no_output_____" ], [ "\nNow, let us start with a simple feature: the sum of all pixels and see how good this feature is.", "_____no_output_____" ] ], [ [ "## sum of pixel values.\n\nsum1 = np.sum(no1>0, (1,2)) # threshold before adding up\nsum0 = np.sum(no0>0, (1,2))", "_____no_output_____" ] ], [ [ "Let us visualize how good this feature is: (X-axis is mean, y-axis is the digit)", "_____no_output_____" ] ], [ [ "plt.hist(sum1, alpha=0.7);\nplt.hist(sum0, alpha=0.7);", "_____no_output_____" ] ], [ [ "We can already see that this feature separates the two classes quite well.\n\nLet us look at another, more complicated feature. 
We will count the number black pixels that are surrounded on four sides by non-black pixels, or \"hole pixels\".", "_____no_output_____" ] ], [ [ "def cumArray(img):\n img2 = img.copy()\n for ii in range(1, img2.shape[1]):\n img2[ii,:] = img2[ii,:] + img2[ii-1,:] # for every row, add up all the rows above it.\n #print(img2)\n img2 = img2>0\n #print(img2)\n return img2\n\ndef getHolePixels(img):\n im1 = cumArray(img)\n im2 = np.rot90(cumArray(np.rot90(img)), 3) # rotate and cumulate it again for differnt direction\n im3 = np.rot90(cumArray(np.rot90(img, 2)), 2)\n im4 = np.rot90(cumArray(np.rot90(img, 3)), 1)\n hull = im1 & im2 & im3 & im4 # this will create a binary image with all the holes filled in.\n hole = hull & ~ (img>0) # remove the original digit to leave behind the holes\n return hole", "_____no_output_____" ] ], [ [ "Visualize a few:", "_____no_output_____" ] ], [ [ "imgs = [no1[456,:,:], no0[456,:,:]]\nfor img in imgs:\n plt.subplot(1,2,1)\n plt.imshow(getHolePixels(img))\n plt.subplot(1,2,2)\n plt.imshow(img)\n plt.show()", "_____no_output_____" ] ], [ [ "Now let us plot the number of hole pixels and see how this feature behaves", "_____no_output_____" ] ], [ [ "hole1 = np.array([getHolePixels(i).sum() for i in no1])\nhole0 = np.array([getHolePixels(i).sum() for i in no0])\n \nplt.hist(hole1, alpha=0.7);\nplt.hist(hole0, alpha=0.7);", "_____no_output_____" ] ], [ [ "This feature works even better to distinguish between one and zero.\n\nNow let us try the number of pixels in the 'hull' or the number with the holes filled in:", "_____no_output_____" ], [ "Let us try one more feature, where we look at the number of boundary pixels in each image.", "_____no_output_____" ] ], [ [ "def minus(a, b):\n return a & ~ b\n\ndef getBoundaryPixels(img):\n img = img.copy()>0 # binarize the image\n rshift = np.roll(img, 1, 1)\n lshift = np.roll(img, -1 ,1)\n ushift = np.roll(img, -1, 0)\n dshift = np.roll(img, 1, 0)\n boundary = minus(img, rshift) | minus(img, lshift) | minus(img, ushift) | minus(img, dshift)\n return boundary", "_____no_output_____" ], [ "imgs = [no1[456,:,:], no0[456,:,:]]\nfor img in imgs:\n plt.subplot(1,2,1)\n plt.imshow(getBoundaryPixels(img))\n plt.subplot(1,2,2)\n plt.imshow(img)\n plt.show()", "_____no_output_____" ], [ "bound1 = np.array([getBoundaryPixels(i).sum() for i in no1])\nbound0= np.array([getBoundaryPixels(i).sum() for i in no0])\n\nplt.hist(bound1, alpha=0.7);\nplt.hist(bound0, alpha=0.7);", "_____no_output_____" ] ], [ [ "\n\nWhat will happen if we plot two features together?", "_____no_output_____" ], [ "Feel free to explore the above graph with your mouse.\n\nWe have seen that we extracted four features from a 28*28 dimensional image.\n\nSome questions to explore:\n\nWhich is the best combination of features?\nHow would you test or visualize four or more features?\nCan you come up with your own features?\nWill these features work for different classes other than 0 and 1?\nWhat will happen if we take more that two classes at a time?", "_____no_output_____" ], [ "# **Features from CSV file**", "_____no_output_____" ] ], [ [ "import pandas as pd\n\ndf = pd.read_csv('/content/sample_data/california_housing_train.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "df = df.rename(columns={'oldName1': 'newName1', 'oldName2': 'newName2'})", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nfrom mpl_toolkits.mplot3d import 
Axes3D\n\n\nsns.set(style = \"darkgrid\")\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection = '3d')\n\nx = df['total_bedrooms'][:50]\ny = df['housing_median_age'][:50]\nz = df['median_house_value'][:50]\n\nax.set_xlabel(\"total_bedrooms\")\nax.set_ylabel(\"housing_median_age\")\nax.set_zlabel(\"median_house_value\")\n\nax.scatter(x, y, z)\n\nplt.show()", "_____no_output_____" ] ], [ [ "# **Task :**\n\n Download a CSV file from the internet, upload it to your google drive.\n Read the CSV file and plot graphs using different combination of features and write your analysis\n \n Ex : IRIS flower datasaet", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nfrom google.colab import drive\ndrive.mount('/content/drive')\niris = pd.read_csv('/content/drive/MyDrive/iris_csv.csv')\n", "Mounted at /content/drive\n" ], [ "iris.head()", "_____no_output_____" ], [ "iris.columns", "_____no_output_____" ], [ "iris_trimmed = iris[['sepallength', 'sepalwidth', 'petallength', 'petalwidth', 'class']]", "_____no_output_____" ], [ "import seaborn as sns\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "sns.pairplot(iris_trimmed, hue = 'class');\nplt.savefig('iris.png')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e762b4f150f5ada502d16e4fb1de8f5a4f4bf97a
1,657
ipynb
Jupyter Notebook
algorithms/111-Minimum-Depth-of-Binary-Tree.ipynb
DjangoPeng/leetcode-solutions
9aa2b911b0278e743448f04241828d33182c9d76
[ "Apache-2.0" ]
10
2019-03-23T15:15:55.000Z
2020-07-12T02:37:31.000Z
algorithms/111-Minimum-Depth-of-Binary-Tree.ipynb
DjangoPeng/leetcode-solutions
9aa2b911b0278e743448f04241828d33182c9d76
[ "Apache-2.0" ]
null
null
null
algorithms/111-Minimum-Depth-of-Binary-Tree.ipynb
DjangoPeng/leetcode-solutions
9aa2b911b0278e743448f04241828d33182c9d76
[ "Apache-2.0" ]
3
2019-06-21T12:13:23.000Z
2020-12-08T07:49:33.000Z
26.301587
92
0.429692
[ [ [ "# Definition for a binary tree node.\n# class TreeNode:\n# def __init__(self, x):\n# self.val = x\n# self.left = None\n# self.right = None\n\nclass Solution:\n def minDepth(self, root: TreeNode) -> int: \n if root is None:\n return 0\n \n def dfs(node: TreeNode, depth: int):\n # deeper than self.depth, return\n if depth > self.depth:\n return\n if node:\n # leaf node, return\n if node.left is None and node.right is None and self.depth > depth:\n self.depth = depth\n return\n dfs(node.left, depth+1)\n dfs(node.right, depth+1)\n \n return\n \n self.depth = 2**31\n dfs(root, 1)\n return self.depth", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
e762b8a380a5acd677c51670b09bfd16b5e6270b
13,673
ipynb
Jupyter Notebook
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
2327bef1d2a25aacdf4e39dccf2d2e77191a0f35
[ "Apache-2.0" ]
null
null
null
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
2327bef1d2a25aacdf4e39dccf2d2e77191a0f35
[ "Apache-2.0" ]
null
null
null
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
2327bef1d2a25aacdf4e39dccf2d2e77191a0f35
[ "Apache-2.0" ]
1
2022-01-02T14:58:38.000Z
2022-01-02T14:58:38.000Z
19.14986
238
0.490017
[ [ [ "# varibles and data types \n\nvaribles are container is a name is given to a memory allocated with program datatypes values and datatypes are the varibles kinds like int datatype, string datatype etc. ", "_____no_output_____" ], [ "# how python can identify \n\nvariable and data types . so basically identify if you write a= 30 it sees no double quotes(\") no decimal point (.) so its int literal which means by default int datatypes and rapidly identified as well as string, int ,float etc.", "_____no_output_____" ] ], [ [ "p=\"harry\"\na=348\nb=3434.334343\nprint(type(p))\nprint(type(a))\nprint(type(b))", "<class 'str'>\n<class 'int'>\n<class 'float'>\n" ] ], [ [ "# variable = are to store a value\n keyword = reserverd word in python ......................................\n identifier = class / function / variables Name", "_____no_output_____" ], [ "# EXAMPLE \nDEF, CLASS ARE THE RESERVED WORD IN PYTHON", "_____no_output_____" ], [ "# WHAT ARE DATA TYPES \n\nsome data types like \n\nint = -34,-3,-1,0,3,4,6,7.. are int data types", "_____no_output_____" ], [ "# float = decimal within a number is floating point number in float data types ", "_____no_output_____" ], [ "# strings\n\nstring is simply a text representation within a single, double quotes or trippple quotes for multiline strings.", "_____no_output_____" ], [ "# boolean \nboolean gives output only True or False recommendation in programming language. IF YOU DEAL WITH two value if it is false or true than you can use this boolean data types for your codes. example right below down.", "_____no_output_____" ] ], [ [ "if 89798>3:\n print(True)\nelse:\n print(False)\n ", "True\n" ] ], [ [ "# NONE \nis simply denoted for represent if you want to give none values then you can use it as a=None to show in code ", "_____no_output_____" ] ], [ [ "d=None\nprint(type(d))\n\n", "<class 'NoneType'>\n" ] ], [ [ "# what is type ?\n# python has class and objects we will discuss later about that.type is function which we call and gives the outputs of which class of variable present in name or variable you created . \n", "_____no_output_____" ], [ "# rule for creating variable names\n1. variable name contains names underscore and digits.\n2. A variable can start with alphabets or underscore.\n3. A variable can not start with digit.\n4. A Variable is case-sensitive mean a or A are both different aspect of variable .\n5. NO white space are allowed to be used in variable names.\n", "_____no_output_____" ], [ "# Operators in python\ncommon operators in python:\n\n1. arithimetic operators = +,-,*,/ etc.\n2. assignment operators :- =, += , -= , etc. \n3. camparision operators:- == , > , >=, <, <= ,!= , etc. \n4. 
logical operators :- and, or, not.\n", "_____no_output_____" ], [ "# arithmetic opertor.py", "_____no_output_____" ] ], [ [ "a =343\nb=38437498\n\nprint(\"sum of a+b\",a+b)", "_____no_output_____" ] ], [ [ "# assignment operator .py\n", "_____no_output_____" ], [ "# if you add 3 to a int variable just follow step using assignment operator\n", "_____no_output_____" ] ], [ [ "i=8\ni+=3\nprint(i)", "11\n" ], [ "i=34\ni-=34\nprint(i)\n", "0\n" ], [ "p=3\np*=4\nprint(p)", "12\n" ], [ "o=344\no/=34\nprint(o)", "10.117647058823529\n" ] ], [ [ "# camparision operator \ncamparision operator campare between two entities to which is True or False like boolean.", "_____no_output_____" ] ], [ [ "b=4>6\nprint(b)", "False\n" ], [ "b=34>33\nprint(b)", "True\n" ], [ "b=(34>=3)\nprint(b)", "True\n" ], [ "n=(3434==34343)\nprint(n)\n", "False\n" ], [ "p=24!=98\nprint(p)", "True\n" ] ], [ [ "# logical operator \n\nAND , OR and NOT are the most usable of all the time which is related to boolean algebra concept here. NOT is use only for one variable .\n\n", "_____no_output_____" ] ], [ [ "bool1=True\nbool2=False\n\nprint(\"the value of bool1 and bool2\",bool1 and bool2)\nprint(\"the value of bool1 or bool2\",bool1 or bool2)\nprint(\"the value of not bool2\", not bool2)\n", "the value of bool1 and bool2 False\nthe value of bool1 or bool2 True\nthe value of not bool2 True\n" ] ], [ [ "# type funcion and typecasting\n\ntype is used to find the data type of given variable in python.\nand typecasting is used to change one type to another datatypes like int variable to float variable.\n\n", "_____no_output_____" ], [ "# typecasting.py", "_____no_output_____" ] ], [ [ "a=43\na=float(a)\nprint(a)", "43.0\n" ] ], [ [ "# string to int literal\nint to string literal", "_____no_output_____" ], [ "# what is input function?\n\ninput function allows to you to take input values from the user through Keyboard as a string or int value under the string datatype etc.\n\n", "_____no_output_____" ], [ "# input function.py\n", "_____no_output_____" ], [ "a=input(\"enter your name\")\na=int(a)\nprint(a)\n", "_____no_output_____" ], [ "# PRACTICE SET", "_____no_output_____" ], [ "# add.py", "_____no_output_____" ] ], [ [ "# write a program to add two number ", "the sum is a+b 68\n" ] ], [ [ "a=34\nb=34\n\nprint(\"the sum is a+b\",a+b)", "_____no_output_____" ] ], [ [ "# write a program to find the remainder if a number is divisible by 2.", "22.5\n" ], [ "p=45\n\np/=2\nprint(p)\n", "the remainder when a is divisible by b is 0\n" ] ], [ [ "a= 45 \nb= 15\n\nprint(\"the remainder when a is divisible by b is\",a%b)\n", "_____no_output_____" ] ], [ [ "# check the type of a funtion using input funtion", "<class 'str'>\n" ] ], [ [ "a=input(\"enter a number \")\nprint(type(a))", "_____no_output_____" ] ], [ [ "# use camparision between two variable having a=34 and b =80 and is greator or not.", "a is greator than b is False\n" ], [ "a=34\nb=80\n\nprint(\"a is greator than b is \",a>b)\n", "_____no_output_____" ] ], [ [ "a=34\nb=80\n\naverage=(34+80)/2 # please keep number under bracket cause if you do wihtout a giving that than your coding may break the exact answer try this .\nprint(average)\n\n", "_____no_output_____" ] ], [ [ "# write a program find the average between two number .", "74.0\n" ] ], [ [ "p=34+80/2\nprint(p)\n", "_____no_output_____" ] ], [ [ "# 05 write a program to to calculate the square of a number entered by the user\n", "1058\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e762be8c4eeefe9a232305689637b31ae50445a2
128,697
ipynb
Jupyter Notebook
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
38e786e30ab833524eb28b66fe6942141428c444
[ "MIT" ]
null
null
null
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
38e786e30ab833524eb28b66fe6942141428c444
[ "MIT" ]
null
null
null
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
38e786e30ab833524eb28b66fe6942141428c444
[ "MIT" ]
1
2019-05-14T04:25:54.000Z
2019-05-14T04:25:54.000Z
202.353774
104,408
0.893952
[ [ [ "# Deep Convolutional GANs\n\nIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the [original paper here](https://arxiv.org/pdf/1511.06434.pdf).\n\nYou'll be training DCGAN on the [Street View House Numbers](http://ufldl.stanford.edu/housenumbers/) (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. \n\n![SVHN Examples](assets/SVHN_examples.png)\n\nSo, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what [you saw previously](https://github.com/udacity/deep-learning/tree/master/gan_mnist) are in the generator and discriminator, otherwise the rest of the implementation is the same.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nimport pickle as pkl\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.io import loadmat\nimport tensorflow as tf", "/home/paperspace/anaconda3/envs/tensorflow/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n" ], [ "!mkdir data", "mkdir: cannot create directory ‘data’: File exists\r\n" ] ], [ [ "## Getting the data\n\nHere you can download the SVHN dataset. Run the cell above and it'll download to your machine.", "_____no_output_____" ] ], [ [ "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\ndata_dir = 'data/'\n\nif not isdir(data_dir):\n raise Exception(\"Data directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(data_dir + \"train_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',\n data_dir + 'train_32x32.mat',\n pbar.hook)\n\nif not isfile(data_dir + \"test_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',\n data_dir + 'test_32x32.mat',\n pbar.hook)", "_____no_output_____" ] ], [ [ "These SVHN files are `.mat` files typically used with Matlab. However, we can load them in with `scipy.io.loadmat` which we imported above.", "_____no_output_____" ] ], [ [ "trainset = loadmat(data_dir + 'train_32x32.mat')\ntestset = loadmat(data_dir + 'test_32x32.mat')", "_____no_output_____" ] ], [ [ "Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). 
These are the real images we'll pass to the discriminator and what the generator will eventually fake.", "_____no_output_____" ] ], [ [ "idx = np.random.randint(0, trainset['X'].shape[3], size=36)\nfig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)\nfor ii, ax in zip(idx, axes.flatten()):\n ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\nplt.subplots_adjust(wspace=0, hspace=0)", "_____no_output_____" ] ], [ [ "Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.", "_____no_output_____" ] ], [ [ "def scale(x, feature_range=(-1, 1)):\n # scale to (0, 1)\n x = ((x - x.min())/(255 - x.min()))\n \n # scale to feature_range\n min, max = feature_range\n x = x * (max - min) + min\n return x", "_____no_output_____" ], [ "class Dataset:\n def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):\n split_idx = int(len(test['y'])*(1 - val_frac))\n self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]\n self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]\n self.train_x, self.train_y = train['X'], train['y']\n \n self.train_x = np.rollaxis(self.train_x, 3)\n self.valid_x = np.rollaxis(self.valid_x, 3)\n self.test_x = np.rollaxis(self.test_x, 3)\n \n if scale_func is None:\n self.scaler = scale\n else:\n self.scaler = scale_func\n self.shuffle = shuffle\n \n def batches(self, batch_size):\n if self.shuffle:\n idx = np.arange(len(dataset.train_x))\n np.random.shuffle(idx)\n self.train_x = self.train_x[idx]\n self.train_y = self.train_y[idx]\n \n n_batches = len(self.train_y)//batch_size\n for ii in range(0, len(self.train_y), batch_size):\n x = self.train_x[ii:ii+batch_size]\n y = self.train_y[ii:ii+batch_size]\n \n yield self.scaler(x), y", "_____no_output_____" ] ], [ [ "## Network Inputs\n\nHere, just creating some placeholders like normal.", "_____no_output_____" ] ], [ [ "def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z", "_____no_output_____" ] ], [ [ "## Generator\n\nHere you'll build the generator network. The input will be our noise vector `z` as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.\n\nWhat's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.\n\nYou keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. 
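For reference, the generator defined in the code cell below follows exactly this pattern: the input noise vector z goes through a dense layer, is reshaped to 4x4x512, and is then upsampled by transposed convolutions to 8x8x256 and 16x16x128 before the final 32x32x3 tanh output layer.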
Below is the architecture used in the original DCGAN paper:\n\n![DCGAN Generator](assets/dcgan.png)\n\nNote that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.", "_____no_output_____" ] ], [ [ "def generator(z, output_dim, reuse=False, alpha=0.2, training=True):\n with tf.variable_scope('generator', reuse=reuse):\n # First fully connected layer\n x1 = tf.layers.dense(z, 4*4*512)\n # Reshape it to start the convolutional stack\n x1 = tf.reshape(x1, (-1, 4, 4, 512))\n x1 = tf.layers.batch_normalization(x1, training=training)\n x1 = tf.maximum(alpha * x1, x1)\n # 4x4x512 now\n \n x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')\n x2 = tf.layers.batch_normalization(x2, training=training)\n x2 = tf.maximum(alpha * x2, x2)\n # 8x8x256 now\n \n x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')\n x3 = tf.layers.batch_normalization(x3, training=training)\n x3 = tf.maximum(alpha * x3, x3)\n # 16x16x128 now\n \n # Output layer\n logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')\n # 32x32x3 now\n \n out = tf.tanh(logits)\n \n return out", "_____no_output_____" ] ], [ [ "## Discriminator\n\nHere you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.\n\nYou'll also want to use batch normalization with `tf.layers.batch_normalization` on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. \n\nNote: in this project, your batch normalization layers will always use batch statistics. (That is, always set `training` to `True`.) That's because we are only interested in using the discriminator to help train the generator. 
However, if you wanted to use the discriminator for inference later, then you would need to set the `training` parameter appropriately.", "_____no_output_____" ] ], [ [ "def discriminator(x, reuse=False, alpha=0.2):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Input layer is 32x32x3\n x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')\n relu1 = tf.maximum(alpha * x1, x1)\n # 16x16x64\n \n x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')\n bn2 = tf.layers.batch_normalization(x2, training=True)\n relu2 = tf.maximum(alpha * bn2, bn2)\n # 8x8x128\n \n x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')\n bn3 = tf.layers.batch_normalization(x3, training=True)\n relu3 = tf.maximum(alpha * bn3, bn3)\n # 4x4x256\n\n # Flatten it\n flat = tf.reshape(relu3, (-1, 4*4*256))\n logits = tf.layers.dense(flat, 1)\n out = tf.sigmoid(logits)\n \n return out, logits", "_____no_output_____" ] ], [ [ "## Model Loss\n\nCalculating the loss like before, nothing new here.", "_____no_output_____" ] ], [ [ "def model_loss(input_real, input_z, output_dim, alpha=0.2):\n \"\"\"\n Get the loss for the discriminator and generator\n :param input_real: Images from the real dataset\n :param input_z: Z input\n :param out_channel_dim: The number of channels in the output image\n :return: A tuple of (discriminator loss, generator loss)\n \"\"\"\n g_model = generator(input_z, output_dim, alpha=alpha)\n d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)\n d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)\n\n d_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))\n d_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))\n g_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))\n\n d_loss = d_loss_real + d_loss_fake\n\n return d_loss, g_loss", "_____no_output_____" ] ], [ [ "## Optimizers\n\nNot much new here, but notice how the train operations are wrapped in a `with tf.control_dependencies` block so the batch normalization layers can update their population statistics.", "_____no_output_____" ] ], [ [ "def model_opt(d_loss, g_loss, learning_rate, beta1):\n \"\"\"\n Get optimization operations\n :param d_loss: Discriminator loss Tensor\n :param g_loss: Generator loss Tensor\n :param learning_rate: Learning Rate Placeholder\n :param beta1: The exponential decay rate for the 1st moment in the optimizer\n :return: A tuple of (discriminator training operation, generator training operation)\n \"\"\"\n # Get weights and bias to update\n t_vars = tf.trainable_variables()\n d_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n g_vars = [var for var in t_vars if var.name.startswith('generator')]\n\n # Optimize\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)\n g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)\n\n return d_train_opt, g_train_opt", "_____no_output_____" ] ], [ [ "## Building the model\n\nHere we can use the functions we defined about to build the model as a class. 
This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.", "_____no_output_____" ] ], [ [ "class GAN:\n def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):\n tf.reset_default_graph()\n \n self.input_real, self.input_z = model_inputs(real_size, z_size)\n \n self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,\n real_size[2], alpha=alpha)\n \n self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)", "_____no_output_____" ] ], [ [ "Here is a function for displaying generated images.", "_____no_output_____" ] ], [ [ "def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):\n fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, \n sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.axis('off')\n img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)\n ax.set_adjustable('box-forced')\n im = ax.imshow(img, aspect='equal')\n \n plt.subplots_adjust(wspace=0, hspace=0)\n return fig, axes", "_____no_output_____" ] ], [ [ "And another function we can use to train our network. Notice when we call `generator` to create the samples to display, we set `training` to `False`. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the `net.input_real` placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the `tf.control_dependencies` block we created in `model_opt`. ", "_____no_output_____" ] ], [ [ "def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):\n saver = tf.train.Saver()\n sample_z = np.random.uniform(-1, 1, size=(72, z_size))\n\n samples, losses = [], []\n steps = 0\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in dataset.batches(batch_size):\n steps += 1\n\n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n\n # Run optimizers\n _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})\n _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})\n\n if steps % print_every == 0:\n # At the end of each epoch, get the losses and print them out\n train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})\n train_loss_g = net.g_loss.eval({net.input_z: batch_z})\n\n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g))\n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n\n if steps % show_every == 0:\n gen_samples = sess.run(\n generator(net.input_z, 3, reuse=True, training=False),\n feed_dict={net.input_z: sample_z})\n samples.append(gen_samples)\n _ = view_samples(-1, samples, 6, 12, figsize=figsize)\n plt.show()\n\n saver.save(sess, './checkpoints/generator.ckpt')\n\n with open('samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n return losses, samples", "_____no_output_____" ] ], [ [ "## Hyperparameters\n\nGANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. 
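In this implementation the main knobs are the Adam learning rate and beta1 passed to model_opt, the leaky ReLU alpha, the size of the noise vector z_size, and the batch size; the next cell picks one reasonable combination of these.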
Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for them.", "_____no_output_____" ] ], [ [ "real_size = (32,32,3)\nz_size = 100\nlearning_rate = 0.0002\nbatch_size = 128\nepochs = 25\nalpha = 0.2\nbeta1 = 0.5\n\n# Create the network\nnet = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)", "_____no_output_____" ], [ "dataset = Dataset(trainset, testset)\n\nlosses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()", "_____no_output_____" ], [ "_ = view_samples(-1, samples, 6, 12, figsize=(10,5))", "_____no_output_____" ], [ "_ = view_samples(-1, samples, 6, 12, figsize=(10,5))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e762d5fe46dcfd7f50ad99f38f5e68ae2ec4a20a
4,524
ipynb
Jupyter Notebook
Final_Exam.ipynb
JohnLouie16/CPEN-21A-ECE-2-2
3e6f2423bfce2d67cd3dc31fbf656368d90962a0
[ "Apache-2.0" ]
null
null
null
Final_Exam.ipynb
JohnLouie16/CPEN-21A-ECE-2-2
3e6f2423bfce2d67cd3dc31fbf656368d90962a0
[ "Apache-2.0" ]
null
null
null
Final_Exam.ipynb
JohnLouie16/CPEN-21A-ECE-2-2
3e6f2423bfce2d67cd3dc31fbf656368d90962a0
[ "Apache-2.0" ]
null
null
null
24.586957
235
0.413793
[ [ [ "<a href=\"https://colab.research.google.com/github/JohnLouie16/CPEN-21A-ECE-2-2/blob/main/Final_Exam.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "#Final Exam", "_____no_output_____" ], [ "####Problem Statement 1", "_____no_output_____" ] ], [ [ "n=0\nfor i in range(0,10):\n a=int(input(\"Enter a number: \"))\n while a>5:\n print(\"The number you entered is greater than 5, please enter another number\")\n a=int(input(\"Enter a number: \"))\n n+=a\n\nprint(n)", "Enter a number: 8\nThe number you entered is greater than 5, please enter another number\nEnter a number: 1\nEnter a number: 2\nEnter a number: 3\nEnter a number: 4\nEnter a number: 5\nEnter a number: 0\nEnter a number: -1\nEnter a number: -6\nEnter a number: -2\nEnter a number: -1\n5\n" ] ], [ [ "####Problem Statement 2", "_____no_output_____" ] ], [ [ "n=1\nsum=0\nprint(\"Enter 5 numbers: \")\n\nwhile n<=5:\n number=int(input(\"\"))\n if n==1 or n==5:\n sum=sum+number\n n=n+1\nprint(\"The sum of first and last number entered is\", sum)", "Enter 5 numbers: \n5\n6\n7\n8\n9\nThe sum of first and last number entered is 14\n" ] ], [ [ "####Problem Statement 3", "_____no_output_____" ] ], [ [ "x = int(input(\"Enter Grade: \"))\n\nif x>=90:\n print(\"Grade = A\")\nelif x>=80 and x<90:\n print(\"Grade = B\")\nelif x>=70 and x<80:\n print(\"Grade = C\")\nelif x>=60 and x<70:\n print(\"Grade = D\")\nelse:\n print(\"Grade = F\")", "Enter Grade: 68\nGrade = D\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e762e6d8bf1b11031287bea6b6ed86f7c6a22ae1
46,434
ipynb
Jupyter Notebook
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
b05cffac6fe46c2a3cc77b556af262b1a5b7f8a0
[ "MIT" ]
2
2022-01-11T09:49:13.000Z
2022-02-20T15:30:12.000Z
2-Chapter-2/Python Basics.ipynb
Hardik1809-coder/Practical-Data-Science-with-Python
4eb557492974c7c3e587338102b5897b532564e6
[ "MIT" ]
1
2021-12-28T21:57:57.000Z
2021-12-28T21:57:57.000Z
2-Chapter-2/Python Basics.ipynb
MathMachado/Practical-Data-Science-with-Python
4eb557492974c7c3e587338102b5897b532564e6
[ "MIT" ]
null
null
null
16.660926
307
0.440625
[ [ [ "# single-line comments can be done with the pound character\n\"\"\"\nmulti-line\ncomments\ncan be done like this\n\"\"\"", "_____no_output_____" ] ], [ [ "## Numbers", "_____no_output_____" ] ], [ [ "type(2)", "_____no_output_____" ], [ "type(2.0)", "_____no_output_____" ], [ "2 + 2 # addition", "_____no_output_____" ], [ "2 - 2 # subtraction", "_____no_output_____" ], [ "2 * 2 # multiplication", "_____no_output_____" ], [ "2 / 2 # float division", "_____no_output_____" ], [ "2 // 2 # integer division", "_____no_output_____" ], [ "2 ** 3 # exponents", "_____no_output_____" ], [ "2 ** 0.5 # square root", "_____no_output_____" ], [ "5 % 2 # modulo operator (calculates the remainder)", "_____no_output_____" ], [ "int(2.0) # conversion to an integer", "_____no_output_____" ], [ "int(2.1) # this rounds down", "_____no_output_____" ], [ "round(2.11) # rounds to the nearest integer", "_____no_output_____" ], [ "round(2.11, ndigits=1) # round to the nearest 0.1", "_____no_output_____" ], [ "float(2) # conversion to float", "_____no_output_____" ], [ "import math\nmath.pi", "_____no_output_____" ], [ "math.lcm(2, 3, 5) # least common multiple", "_____no_output_____" ] ], [ [ "## Strings", "_____no_output_____" ] ], [ [ "'a string'", "_____no_output_____" ], [ "\"a string\"", "_____no_output_____" ], [ "print(\"\"\"\nmulti-\nline\nstring\n\"\"\")", "\nmulti-\nline\nstring\n\n" ], [ "'a' + 'string' # concatenate strings", "_____no_output_____" ], [ "'a' * 2 # repeat strings", "_____no_output_____" ], [ "str(2) # convert a number to a string", "_____no_output_____" ], [ "# raw string -- the last backslash must be escaped with another backslash if it is at the end\nr'C:\\Users\\Me\\A folder\\\\'\nr'C:\\Users\\Me\\A folder\\a file.txt'", "_____no_output_____" ] ], [ [ "### String indexing", "_____no_output_____" ] ], [ [ "'a string'[0] # first character of a string", "_____no_output_____" ], [ "'a string'[-1] # last character of a string", "_____no_output_____" ], [ "'a string'[0:4] # index a string to get first 4 characters", "_____no_output_____" ], [ "'a string'[:4] # index a string to get first 4 characters", "_____no_output_____" ], [ "'a string'[::2] # get every other letter", "_____no_output_____" ], [ "'a string'[::-1] # reverse the string", "_____no_output_____" ], [ "'a string'[:5:2] # every other letter in the first 5 characters", "_____no_output_____" ] ], [ [ "### Built-in string methods", "_____no_output_____" ] ], [ [ "'-'.join(['this', 'is', 'a', 'test'])", "_____no_output_____" ], [ "'this is a test'.split()", "_____no_output_____" ], [ "'\\t\\n - remove left'.lstrip() # remove whitespace on the left", "_____no_output_____" ], [ "'\\t\\n - remove left'.rstrip() # remove whitespace on the right", "_____no_output_____" ], [ "'testtest - remove left'.lstrip('test') # remove all instances of 'test' from the left of the sting", "_____no_output_____" ], [ "'testtest - remove left'.lstrip('tes') # remove all instances of 'tes' characters from the left of the sting", "_____no_output_____" ], [ "'testtest - remove left'.removeprefix('test') # remove one instance of 'test' from the left of the string", "_____no_output_____" ], [ "'testtest - remove left'.removesuffix('left')", "_____no_output_____" ], [ "f'string formatting {2 + 2}'", "_____no_output_____" ], [ "print('tabs\\tand\\nnewlines')", "tabs\tand\nnewlines\n" ], [ "print(r'tabs\\tand newlines\\n')", "tabs\\tand newlines\\n\n" ], [ "print('\\t\\n - tabs and newlines') # tab and newline at the beginning of a string, along with some 
spaces", "\t\n - tabs and newlines\n" ] ], [ [ "## Variables", "_____no_output_____" ] ], [ [ "books = 1", "_____no_output_____" ], [ "books # print out our variable", "_____no_output_____" ], [ "books = books + 1\nbooks", "_____no_output_____" ], [ "books += 1\nbooks", "_____no_output_____" ], [ "books -= 1\nbooks", "_____no_output_____" ], [ "books *= 2\nbooks", "_____no_output_____" ], [ "books /= 2\nbooks", "_____no_output_____" ], [ "books **= 2\nbooks", "_____no_output_____" ], [ "books %= 2\nbooks", "_____no_output_____" ], [ "# concatenate two string variables\na = 'string 1'\nb = 'another string'\na + b", "_____no_output_____" ], [ "# check variable type\ntype(a)", "_____no_output_____" ], [ "# don't do this!\n# type = 'test'\n# type(a) # if you try this, the type() function will no longer work", "_____no_output_____" ] ], [ [ "## Lists, Tuples, Sets, and Dictionaries", "_____no_output_____" ] ], [ [ "# a basic list\n[1, 2, 3]", "_____no_output_____" ], [ "# lists can contain different data types\n[1, 'a', 3]", "_____no_output_____" ], [ "# lists can contain other lists\n[1, [1, 2, 3], 3]", "_____no_output_____" ], [ "# join lists\n[1, 2, 3] + [4, 5]", "_____no_output_____" ], [ "# repeat a list\n[1, 2, 3] * 2", "_____no_output_____" ], [ "# get the length of a list\nlen([1, 2, 3])", "_____no_output_____" ], [ "# make a blank list and add the element '1' to it\na_list = []\na_list.append(1)\na_list", "_____no_output_____" ], [ "# sort in-place\na_list = [1, 3, 2]\na_list.sort()\na_list", "_____no_output_____" ], [ "# sort\na_list = [1, 3, 2]\nsorted(a_list)", "_____no_output_____" ], [ "# indexing: [start:stop:step]\na_list = [1, 2, 3, 4, 5]\na_list[0]", "_____no_output_____" ], [ "a_list[-1]", "_____no_output_____" ], [ "a_list[0:3]", "_____no_output_____" ], [ "a_list[:3]", "_____no_output_____" ], [ "a_list[::2]", "_____no_output_____" ], [ "a_list[0:3:2]", "_____no_output_____" ], [ "# reverse a list\na_list[::-1]", "_____no_output_____" ] ], [ [ "### Tuples", "_____no_output_____" ] ], [ [ "a_tuple = (2, 3)\na_tuple", "_____no_output_____" ], [ "tuple(a_list)", "_____no_output_____" ] ], [ [ "### Sets", "_____no_output_____" ] ], [ [ "set(a_list)", "_____no_output_____" ], [ "a_set = {1, 2, 3, 3}\na_set", "_____no_output_____" ], [ "set_1 = {1, 2, 3}\nset_2 = {2, 3, 4}\nset_1.union(set_2)", "_____no_output_____" ], [ "set_1 | set_2", "_____no_output_____" ], [ "set_1.difference(set_2)", "_____no_output_____" ], [ "# shorthand for different operator\nset_1 - set_2", "_____no_output_____" ] ], [ [ "### Dictionaries", "_____no_output_____" ] ], [ [ "a_dict = {'books': 1, 'magazines': 2, 'articles': 7}\na_dict", "_____no_output_____" ], [ "a_dict['books']", "_____no_output_____" ], [ "another_dict = {'movies': 4}\na_dict | another_dict", "_____no_output_____" ], [ "a_dict['shows'] = 12", "_____no_output_____" ], [ "a_dict", "_____no_output_____" ] ], [ [ "## Loops and Comprehensions", "_____no_output_____" ] ], [ [ "a_list = [1, 2, 3]\nfor element in a_list:\n print(element)", "1\n2\n3\n" ], [ "a_list = [1, 2, 3]\nfor index in range(len(a_list)):\n print(index)", "0\n1\n2\n" ] ], [ [ "This brings up the documentation for a function.", "_____no_output_____" ] ], [ [ "?range", "_____no_output_____" ], [ "a_list = [1, 2, 3]\nfor index, element in enumerate(a_list):\n print(index, element)", "0 1\n1 2\n2 3\n" ], [ "a_list = []\nfor i in range(3):\n a_list.append(i)\n\na_list", "_____no_output_____" ], [ "# a list comprehension\na_list = [i for i in range(3)]\na_list", 
"_____no_output_____" ], [ "a_dict = {'books': 1, 'magazines': 2, 'articles': 7}\nfor key, value in a_dict.items():\n print(f'{key}:{value}')", "books:1\nmagazines:2\narticles:7\n" ], [ "# a dictionary comprehension\na_dict = {i: i ** 2 for i in range(1, 4)}\na_dict", "_____no_output_____" ] ], [ [ "## Booleans and Conditionals", "_____no_output_____" ] ], [ [ "books_read = 11\nbooks_read > 10", "_____no_output_____" ], [ "none_var = None\nnone_var is None", "_____no_output_____" ], [ "books_read = 12\nif books_read < 10:\n print(\"You have only read a few books.\")\nelif books_read >= 12:\n print(\"You've read lots of books!\")\nelse:\n print(\"You've read 10 or 11 books.\")", "You've read lots of books!\n" ], [ "a = 'test'\ntype(a) is str", "_____no_output_____" ], [ "type(a) is not str", "_____no_output_____" ], [ "'st' in 'a string' # check for a substring in a string", "_____no_output_____" ], [ "a_set = {1, 2, 3}\n1 in a_set", "_____no_output_____" ], [ "a_list = [1, 2, 3]\n1 in a_list", "_____no_output_____" ], [ "a_dict = {1: 'val1', 2: 'val2', 3: 'val3'}\n1 in a_dict", "_____no_output_____" ], [ "if 1 in a_set:\n print('1 is in there')", "1 is in there\n" ], [ "condition = False\nif condition != False:\n print('not false')\nelif condition == False:\n print('is false')", "is false\n" ] ], [ [ "## Libraries and Imports", "_____no_output_____" ] ], [ [ "import time\ntime.time()", "_____no_output_____" ], [ "import time as t\nt.time()", "_____no_output_____" ], [ "import urllib.request\nurllib.request.urlopen('https://www.pypi.org')", "_____no_output_____" ], [ "from urllib.request import urlopen\nurlopen('https://www.pypi.org')", "_____no_output_____" ], [ "# importing a function from a subpackage of a library, and aliasing it\nfrom urllib.request import urlopen as uo\nuo('https://www.pypi.org')", "_____no_output_____" ] ], [ [ "## Functions", "_____no_output_____" ] ], [ [ "def test_function(doPrint, printAdd='more'):\n \"\"\"\n A demo function.\n \"\"\"\n if doPrint:\n print('test' + printAdd)\n return printAdd", "_____no_output_____" ], [ "value = test_function(True)\nprint(value)", "testmore\nmore\n" ], [ "# brings up documentation for sorted()\n?sorted", "_____no_output_____" ], [ "a_list = [2, 4, 1]\nsorted(a_list, reverse=True)", "_____no_output_____" ], [ "def test_function():\n \"\"\"\n A demo function.\n \"\"\"\n func_var = 'testing'\n print(func_var)", "_____no_output_____" ], [ "test_function()", "testing\n" ], [ "func_var", "_____no_output_____" ], [ "add10 = lambda x, y: x + y + 10\nadd10(10, 3)", "_____no_output_____" ] ], [ [ "## Classes", "_____no_output_____" ] ], [ [ "class testObject:\n def __init__(self, attr):\n self.test_attribute = attr\n \n \n def test_function(self):\n print('testing123')\n print(f'testing{self.test_attribute}')", "_____no_output_____" ], [ "to = testObject(123)\nto.test_attribute", "_____no_output_____" ], [ "to.test_function()", "testing123\ntesting123\n" ] ], [ [ "Here is another module from core Python.", "_____no_output_____" ] ], [ [ "import calendar\n# creates a new instance of a Calendar object\nc = calendar.Calendar()\ntype(c)", "_____no_output_____" ], [ "# an attribute\nc.firstweekday", "_____no_output_____" ], [ "# a method/function\nlist(c.iterweekdays())", "_____no_output_____" ] ], [ [ "## Multithreading and Multiprocessing", "_____no_output_____" ], [ "The multiprocessing and threading libraries are ways you will see many people recommend, but I prefer the concurrent.futures library myself. 
See the multiprocessing_demo.py file for more. Note that you should run the file like `python multiprocessing_demo.py`, and not from inside a Jupyter notebook or IPython.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ] ]
e762e852bf0088de3354dbad2fd42f07c91671f0
14,967
ipynb
Jupyter Notebook
Preprocessing/wordIndicators.ipynb
samp891216/Portafolio-SERGIO-MARIN
03da4fae7c4842c53cc778b504304ecae0569dfe
[ "MIT" ]
null
null
null
Preprocessing/wordIndicators.ipynb
samp891216/Portafolio-SERGIO-MARIN
03da4fae7c4842c53cc778b504304ecae0569dfe
[ "MIT" ]
null
null
null
Preprocessing/wordIndicators.ipynb
samp891216/Portafolio-SERGIO-MARIN
03da4fae7c4842c53cc778b504304ecae0569dfe
[ "MIT" ]
null
null
null
29.289628
169
0.411438
[ [ [ "# Cargamos librerías.\n\nimport pandas as pd\nimport os\nos.chdir('E:\\Backup Sergio IPROCOM\\Documentos\\SMARIN\\SAMP\\MaestriaAnalitica\\DataCamp\\Python Data Science Toolbox (Part 2)')", "_____no_output_____" ], [ "# Cargamos dataset WORLD INDICATORS que contiene índices de muchos paises del mundo.\n\ndf = pd.read_csv(\"world_ind_pop_data.csv\")", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ] ], [ [ "### Uso ed zip()", "_____no_output_____" ] ], [ [ "# Para practicar el uso de zip() vamos a extraer los nombres de las columnas del dataframe y una de las filas de la data.\n\ncolnames = list(df)\nrow_df = list(df.loc[1])", "_____no_output_____" ], [ "# Zip sirve para unir dos listas en forma de diccionario.\n\n# Creo el objeto zipped_list con la función zip()\nzipped_list = zip(colnames, row_df)\n\n# Imprimo el objeto zipped_list para conocer que me arroja dicho objeto.\nprint(zipped_list)\n\n# Convierto el ibjeto zipped_list wn un diccionario.\nrs_dict = dict(zipped_list)\n\n\n# Imprimo el diccionario.\nprint(rs_dict)\n", "<zip object at 0x000001F06A751A48>\n{'CountryName': 'Caribbean small states', 'CountryCode': 'CSS', 'Year': 1960, 'Total Population': 4190810.0, 'Urban population (% of total)': 31.5974898513652}\n" ], [ "# Escribiremos una función para el proceso anterior.\n\ndef list2dict(list1, list2):\n \n # Creo el objeto zipped_list con la función zip()\n zipped_list = zip(list1, list2)\n\n # Convierto el ibjeto zipped_list wn un diccionario.\n rs_dict = dict(zipped_list)\n\n # Retorna el diccionario.\n return rs_dict", "_____no_output_____" ], [ "rsx_dict = list2dict(colnames, row_df)\n\nprint(rsx_dict)", "{'CountryName': 'Caribbean small states', 'CountryCode': 'CSS', 'Year': 1960, 'Total Population': 4190810.0, 'Urban population (% of total)': 31.5974898513652}\n" ], [ "# Aplicamos el método tolist() que convierte el dataframe en una lista de listas\n\nlist_rows = df.values.tolist()\n\nlist_rows[0]", "_____no_output_____" ], [ "# Ahora llamaremos la función list_of_list para generar los diccionarios de cada fila del dataframe.\n\nlist_of_dicts = [list2dict(colnames,sublist) for sublist in list_rows]\n\nlist_of_dicts[100]", "_____no_output_____" ], [ "df = pd.DataFrame(list_of_dicts)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e762f0ab1cd4314d0668e963fb01abafd3c5faa6
11,192
ipynb
Jupyter Notebook
RAIS/kickoff.ipynb
dsfloripa/challenges
0abb6fc0bbf5c424d1fb4ef67e415e5c51c5ad16
[ "MIT" ]
1
2020-02-18T02:42:36.000Z
2020-02-18T02:42:36.000Z
RAIS/kickoff.ipynb
dsfloripa/challenges
0abb6fc0bbf5c424d1fb4ef67e415e5c51c5ad16
[ "MIT" ]
3
2019-11-30T04:50:23.000Z
2019-11-30T04:53:41.000Z
RAIS/kickoff.ipynb
dsfloripa/challenges
0abb6fc0bbf5c424d1fb4ef67e415e5c51c5ad16
[ "MIT" ]
null
null
null
25.552511
235
0.439242
[ [ [ "# Kickoff - CHALLENGE - RAIS\n\n**E**xploratory **D**ata **A**nalysis on RAIS Database - Florianópolis, SC - Brasil\n\n**Authors:**\n- Luis Felipe Pelison\n- Fernando Battisti\n- Ígor Yamamoto\n\n## Objective\n\nHow socialeconomic characteristics impacts how much you earn?", "_____no_output_____" ], [ "# Imports\nHere is where you declare the external dependencies required for running the notebook", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n# here you can import your libraries\n\npd.set_option('max_rows', 200)", "_____no_output_____" ] ], [ [ "# Open Data\n\nHere is where your data is loaded from different file formats (e.g.: .csv, .json, .parquet, .xlsx) into pandas data frames", "_____no_output_____" ] ], [ [ "df = pd.read_parquet('data/rais_floripa_2018.parquet')\nprint(df.shape)\ndf.head(2)", "(432486, 16)\n" ] ], [ [ "# Pre Processing\nThe real world is a mess. We need to do some manipulations in order to clean the data.", "_____no_output_____" ], [ "## Converting to the right types\n\nIn order to be eaiser or possible to operate, we need to assign the most appropriate type for each column ", "_____no_output_____" ] ], [ [ "CAT_FEATURES = ['CNAE 2.0 Subclasse', 'Escolaridade após 2005', 'Mês Admissão', 'Mês Desligamento', 'Motivo Desligamento', 'Município', 'Raça Cor', 'Sexo Trabalhador', 'Tamanho Estabelecimento', 'Tipo Defic', 'UF', 'CBO 2002']\n\nfor cat_feat in CAT_FEATURES:\n df[cat_feat] = df[cat_feat].astype('str')\n\ndf['Tempo Emprego'] = df['Tempo Emprego'].str.replace(',','.').astype('float')\n\ndf['Vl Remun Média Nom'] = df['Vl Remun Média Nom'].str.replace('.', '').str.replace(',','.').astype('float')", "_____no_output_____" ] ], [ [ "## Mapping categories\n\nSometimes, real categories are not so understanble, then we map to more readable ones", "_____no_output_____" ] ], [ [ "df['Tamanho Estabelecimento'].value_counts()", "_____no_output_____" ], [ "df['Tamanho Estabelecimento'] = (\n df['Tamanho Estabelecimento']\n .map(\n {\n '1': 'ZERO',\n '2': 'ATE_4',\n '3': 'DE_5_A_9',\n '4': 'DE_10_A_19',\n '5': 'DE_20_A_49',\n '6': 'DE_50_A_99',\n '7': 'DE_100_A_249',\n '8': 'DE_250_A_499',\n '9': 'DE_500_A_999',\n '10': '1000_OU_MAIS',\n '-1': 'IGNORADO',\n }\n )\n)", "_____no_output_____" ], [ "df['Tamanho Estabelecimento'].value_counts()", "_____no_output_____" ] ], [ [ "## Removing wrong categories\n\nSometimes there are wrong or meaningless categories. In those cases we need to treat this.", "_____no_output_____" ] ], [ [ "df['CBO 2002'] = df['CBO 2002'].apply(lambda x: 'Unknown' if x == '0000-1' else x)", "_____no_output_____" ] ], [ [ "# Analysis\n\nNow we can do our exploratory analysis. Be criative!", "_____no_output_____" ] ], [ [ "# All from Florianopolis\n\ndf['Município'].value_counts()", "_____no_output_____" ] ], [ [ "## Challenge 0. List the most popular occupations (CBO)", "_____no_output_____" ] ], [ [ "# Code here", "_____no_output_____" ] ], [ [ "# Main Challenge\n\nHere we will develop the answer for the main challenge described at the beginning.", "_____no_output_____" ] ], [ [ "# Code here", "_____no_output_____" ] ], [ [ "# Future Work\n\nIf your time has ended and you have another insights, you can list them here for future work\n\n- Insight 1\n- Insight 2", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e762f21d584996c3f26017feef91cb788ca9abe6
9,960
ipynb
Jupyter Notebook
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
9b73d55ee4b328883e0a2b0eb6809fb4941f8451
[ "Apache-2.0" ]
2
2021-05-17T08:02:03.000Z
2021-06-17T09:52:45.000Z
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
9b73d55ee4b328883e0a2b0eb6809fb4941f8451
[ "Apache-2.0" ]
2
2020-03-11T04:22:39.000Z
2020-03-12T18:26:11.000Z
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
9b73d55ee4b328883e0a2b0eb6809fb4941f8451
[ "Apache-2.0" ]
null
null
null
30.931677
118
0.593373
[ [ [ "!pip install autokeras\n!pip install git+https://github.com/keras-team/[email protected]\n", "_____no_output_____" ] ], [ [ "## A Simple Example\nThe first step is to prepare your data. Here we use the [IMDB\ndataset](https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification) as\nan example.\n", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom tensorflow.keras.datasets import imdb\n\n# Load the integer sequence the IMDB dataset with Keras.\nindex_offset = 3 # word index offset\n(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000,\n index_from=index_offset)\ny_train = y_train.reshape(-1, 1)\ny_test = y_test.reshape(-1, 1)\n# Prepare the dictionary of index to word.\nword_to_id = imdb.get_word_index()\nword_to_id = {k: (v + index_offset) for k, v in word_to_id.items()}\nword_to_id[\"<PAD>\"] = 0\nword_to_id[\"<START>\"] = 1\nword_to_id[\"<UNK>\"] = 2\nid_to_word = {value: key for key, value in word_to_id.items()}\n# Convert the word indices to words.\nx_train = list(map(lambda sentence: ' '.join(\n id_to_word[i] for i in sentence), x_train))\nx_test = list(map(lambda sentence: ' '.join(\n id_to_word[i] for i in sentence), x_test))\nx_train = np.array(x_train, dtype=np.str)\nx_test = np.array(x_test, dtype=np.str)\nprint(x_train.shape) # (25000,)\nprint(y_train.shape) # (25000, 1)\nprint(x_train[0][:50]) # <START> this film was just brilliant casting <UNK>\n", "_____no_output_____" ] ], [ [ "The second step is to run the [TextClassifier](/text_classifier).\n", "_____no_output_____" ] ], [ [ "import autokeras as ak\n\n# Initialize the text classifier.\nclf = ak.TextClassifier(\n overwrite=True,\n max_trials=1) # It tries 10 different models.\n# Feed the text classifier with training data.\nclf.fit(x_train, y_train, epochs=2)\n# Predict with the best model.\npredicted_y = clf.predict(x_test)\n# Evaluate the best model with testing data.\nprint(clf.evaluate(x_test, y_test))\n\n", "_____no_output_____" ] ], [ [ "## Validation Data\nBy default, AutoKeras use the last 20% of training data as validation data.\nAs shown in the example below, you can use `validation_split` to specify the percentage.\n", "_____no_output_____" ] ], [ [ "clf.fit(x_train,\n y_train,\n # Split the training data and use the last 15% as validation data.\n validation_split=0.15)\n", "_____no_output_____" ] ], [ [ "You can also use your own validation set\ninstead of splitting it from the training data with `validation_data`.\n", "_____no_output_____" ] ], [ [ "split = 5000\nx_val = x_train[split:]\ny_val = y_train[split:]\nx_train = x_train[:split]\ny_train = y_train[:split]\nclf.fit(x_train,\n y_train,\n epochs=2,\n # Use your own validation set.\n validation_data=(x_val, y_val))\n", "_____no_output_____" ] ], [ [ "## Customized Search Space\nFor advanced users, you may customize your search space by using\n[AutoModel](/auto_model/#automodel-class) instead of\n[TextClassifier](/text_classifier). You can configure the\n[TextBlock](/block/#textblock-class) for some high-level configurations, e.g., `vectorizer`\nfor the type of text vectorization method to use. You can use 'sequence', which uses\n[TextToInteSequence](/block/#texttointsequence-class) to convert the words to\nintegers and use [Embedding](/block/#embedding-class) for embedding the\ninteger sequences, or you can use 'ngram', which uses\n[TextToNgramVector](/block/#texttongramvector-class) to vectorize the\nsentences. 
You can also do not specify these arguments, which would leave the\ndifferent choices to be tuned automatically. See the following example for detail.\n", "_____no_output_____" ] ], [ [ "import autokeras as ak\n\ninput_node = ak.TextInput()\noutput_node = ak.TextBlock(vectorizer='ngram')(input_node)\noutput_node = ak.ClassificationHead()(output_node)\nclf = ak.AutoModel(\n inputs=input_node,\n outputs=output_node,\n overwrite=True,\n max_trials=1)\nclf.fit(x_train, y_train, epochs=2)\n", "_____no_output_____" ] ], [ [ "The usage of [AutoModel](/auto_model/#automodel-class) is similar to the\n[functional API](https://www.tensorflow.org/guide/keras/functional) of Keras.\nBasically, you are building a graph, whose edges are blocks and the nodes are intermediate outputs of blocks.\nTo add an edge from `input_node` to `output_node` with\n`output_node = ak.[some_block]([block_args])(input_node)`.\n\nYou can even also use more fine grained blocks to customize the search space even\nfurther. See the following example.\n", "_____no_output_____" ] ], [ [ "import autokeras as ak\n\ninput_node = ak.TextInput()\noutput_node = ak.TextToIntSequence()(input_node)\noutput_node = ak.Embedding()(output_node)\n# Use separable Conv layers in Keras.\noutput_node = ak.ConvBlock(separable=True)(output_node)\noutput_node = ak.ClassificationHead()(output_node)\nclf = ak.AutoModel(\n inputs=input_node,\n outputs=output_node,\n overwrite=True,\n max_trials=1)\nclf.fit(x_train, y_train, epochs=2)\n", "_____no_output_____" ] ], [ [ "## Data Format\nThe AutoKeras TextClassifier is quite flexible for the data format.\n\nFor the text, the input data should be one-dimensional \nFor the classification labels, AutoKeras accepts both plain labels, i.e. strings or\nintegers, and one-hot encoded encoded labels, i.e. vectors of 0s and 1s.\n\nWe also support using [tf.data.Dataset](\nhttps://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable) format for\nthe training data.\nThe labels have to be one-hot encoded for multi-class\nclassification to be wrapped into tensorflow Dataset.\nSince the IMDB dataset is binary classification, it should not be one-hot encoded.\n", "_____no_output_____" ] ], [ [ "import tensorflow as tf\ntrain_set = tf.data.Dataset.from_tensor_slices(((x_train, ), (y_train, ))).batch(32)\ntest_set = tf.data.Dataset.from_tensor_slices(((x_test, ), (y_test, ))).batch(32)\n\nclf = ak.TextClassifier(\n overwrite=True,\n max_trials=3)\n# Feed the tensorflow Dataset to the classifier.\nclf.fit(train_set, epochs=2)\n# Predict with the best model.\npredicted_y = clf.predict(test_set)\n# Evaluate the best model with testing data.\nprint(clf.evaluate(test_set))\n", "_____no_output_____" ] ], [ [ "## Reference\n[TextClassifier](/text_classifier),\n[AutoModel](/auto_model/#automodel-class),\n[TextBlock](/block/#textblock-class),\n[TextToInteSequence](/block/#texttointsequence-class),\n[Embedding](/block/#embedding-class),\n[TextToNgramVector](/block/#texttongramvector-class),\n[ConvBlock](/block/#convblock-class),\n[TextInput](/node/#textinput-class),\n[ClassificationHead](/block/#classificationhead-class).\n", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
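The AutoKeras cells above only show the `TextBlock(vectorizer='ngram')` variant, although the accompanying text notes that the arguments can be left unspecified so the tuner picks the vectorization method itself. A minimal sketch of that unspecified variant, assuming the same `x_train`/`y_train` prepared in the notebook and an arbitrary small trial budget:

```python
import autokeras as ak

# Sketch: leave TextBlock's arguments unset so the tuner is free to choose
# between the 'sequence' and 'ngram' vectorization strategies on its own.
input_node = ak.TextInput()
output_node = ak.TextBlock()(input_node)  # no vectorizer specified
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
    inputs=input_node,
    outputs=output_node,
    overwrite=True,
    max_trials=2)  # arbitrary budget for illustration
clf.fit(x_train, y_train, epochs=2)  # x_train/y_train from the cells above
```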
e762f985a140095fe134d37546ae2ebfc918b47f
15,365
ipynb
Jupyter Notebook
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
e7625a4d649e7d61bebcf193d956c5d73ee5e08b
[ "MIT" ]
2
2021-02-10T08:29:19.000Z
2021-04-29T14:33:31.000Z
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
e7625a4d649e7d61bebcf193d956c5d73ee5e08b
[ "MIT" ]
null
null
null
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
e7625a4d649e7d61bebcf193d956c5d73ee5e08b
[ "MIT" ]
null
null
null
36.068075
139
0.589326
[ [ [ "\" Import the libraries \" \n\nimport os\nimport sys \nimport math\nimport copy\n\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.neural_network import MLPRegressor\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler", "_____no_output_____" ], [ "\" Import the scripts of SD for Explaining and the supplementary scripts for neighbors generation\"\n\nabsFilePath = os.path.dirname(os.path.dirname(os.path.dirname(os.getcwd())))\nnewPath = os.path.join(absFilePath, 'SplitSD4X\\\\')\nsys.path.append(newPath)\n\nnewPath_supp = os.path.join(newPath, 'supplementary')\nsys.path.append(newPath_supp)\n\nfrom fill_missing_values import *\nfrom missing_values_table import *\nfrom subgroups_discovery import *\n\nfrom discretization import * \nfrom neighbors_generation import *\n", "_____no_output_____" ] ], [ [ "## Data Preparation ", "_____no_output_____" ] ], [ [ "\" Loading the dataset \"\ndatasets_path = os.path.join(absFilePath, 'Datasets\\\\')\nurl = datasets_path + 'data_bike_hour.csv'\ndf = pd.read_csv(url)\ndf = df.drop(['instant','dteday','casual','registered'],axis =1)\n\n\" Handling some data \"\ndf = df.drop(df[df.weathersit == 4].index)\ndf[df[\"weathersit\"] == 4]\n\n\" Decode Categorical Features \"\n\nweekday_mapper = {0 : 'Sun', \n 1 : 'Mon',\n 2 : 'Tue',\n 3 : 'Wed',\n 4 : 'Thu',\n 5 : 'Fri',\n 6 : 'Sat' }\nweekday_mapper_inv = dict(map(reversed, weekday_mapper.items()))\ndf['weekday'] = df['weekday'].replace(weekday_mapper)\n\n\nholiday_mapper = {0 : 'No_Holiday',\n 1 : 'Holiday'}\nholiday_mapper_inv = dict(map(reversed, holiday_mapper.items()))\ndf['holiday'] = df['holiday'].replace(holiday_mapper)\n\n\n\nworkingday_mapper = {0 : 'No_Working_Day',\n 1 : 'Working_Day'}\nworkingday_mapper_inv = dict(map(reversed, workingday_mapper.items()))\ndf['workingday'] = df['workingday'].replace(workingday_mapper)\n\n\n\nseason_mapper = {1 : 'Spring',\n 2 : 'Summer',\n 3 : 'Fall',\n 4 : 'Winter'}\nseason_mapper_inv = dict(map(reversed, season_mapper.items()))\ndf['season'] = df['season'].replace(season_mapper)\n\nwethersit_mapper = {1 : 'Good',\n 2 : 'Misty',\n 3 : 'Rain_Snow_Storm'}\nwethersit_mapper_inv = dict(map(reversed, wethersit_mapper.items()))\ndf['weathersit'] = df['weathersit'].replace(wethersit_mapper)\n\n\n\nmnth_mapper = {1 : 'Jan',\n 2 : 'Feb',\n 3 : 'Mar',\n 4 : 'Apr',\n 5 : 'May',\n 6 : 'Jun',\n 7 : 'Jul',\n 8 : 'Aug',\n 9 : 'Sep',\n 10 : 'Oct',\n 11 : 'Nov',\n 12 : 'Dec'}\nmnth_mapper_inv = dict(map(reversed, mnth_mapper.items()))\ndf['mnth'] = df['mnth'].replace(mnth_mapper)\n\n\nyr_mapper = {0 : '2011',\n 1 : '2012'}\nyr_mapper_inv = dict(map(reversed, yr_mapper.items()))\ndf['yr'] = df['yr'].replace(yr_mapper)\n\n# Numerical Features\ndf['temp'] = df['temp'] * (39 - (-8)) + (-8)\ndf['atemp'] = df['atemp'] * (50 - (16)) + (16)\ndf['windspeed'] = df['windspeed'] * 67\ndf['hum'] = df['hum']*100\n\n\" separate the data and the target \"\ndata_df = df.drop(columns=['cnt'])\ntarget_df = df['cnt']\n\n\" calculate the categorical features mask \"\ncategorical_feature_mask = (data_df.dtypes == object)\ncategorical_cols_names = data_df.columns[categorical_feature_mask].tolist()\nnumerical_cols_names = data_df.columns[~categorical_feature_mask].tolist()\n\n\" if no values missed we execute this code : \"\ndata_df = pd.concat([data_df[numerical_cols_names], data_df[categorical_cols_names]],axis = 1)\n\n\" Encoding categorical features\"\n\ndata_df['weekday'] = data_df['weekday'].replace(weekday_mapper_inv)\ndata_df['holiday'] = 
data_df['holiday'].replace(holiday_mapper_inv)\ndata_df['workingday'] = data_df['workingday'].replace(workingday_mapper_inv)\ndata_df['season'] = data_df['season'].replace(season_mapper_inv)\ndata_df['weathersit'] = data_df['weathersit'].replace(wethersit_mapper_inv)\ndata_df['mnth'] = data_df['mnth'].replace(mnth_mapper_inv)\ndata_df['yr'] = data_df['yr'].replace(yr_mapper_inv)\n\ndata_target_df = pd.concat([data_df, target_df], axis=1) ", "_____no_output_____" ], [ "\" generate the Test SET \"\nnb_test_instances = 1000 \ntest_df = data_target_df.sample(n=nb_test_instances)\ndata_test_df = test_df.drop(columns=['cnt'])\ntarget_test_df = test_df['cnt']\n\n\" generate the Training SET \"\ntrain_df = pd.concat([data_target_df,test_df]).drop_duplicates(keep=False)\ndata_train_df = train_df.drop(columns=['cnt'])\ntarget_train_df = train_df['cnt']\n\n\" Extract values of the test set to generate the neighbors\"\n\ndata_test = data_test_df.values\ntarget_test = target_test_df.values\n\nnumerical_cols = np.arange(0,len(numerical_cols_names)) \ncategorical_cols = np.arange(len(numerical_cols_names),data_df.shape[1])", "_____no_output_____" ] ], [ [ "## Neighbors Generation ", "_____no_output_____" ] ], [ [ "nb_neighbors = 20 \nlist_neigh = generate_all_neighbors(data_test,numerical_cols,categorical_cols,nb_neighbors)\n\n\" store all the neighbors together \"\nn = np.size(data_test,0)\nall_neighbors = list_neigh[0]\nfor i in range(1,n) :\n all_neighbors = np.concatenate((all_neighbors, list_neigh[i]), axis=0)\n \n\" One hot encoding \"\n\ndf_neigh = pd.DataFrame(data = all_neighbors,columns= numerical_cols_names + categorical_cols_names)\ndf_neigh[categorical_cols_names] = df_neigh[categorical_cols_names].astype(int,errors='ignore')\n\n\" Decode all the data neighbors to perform one hot encoding \"\ndf_neigh['weekday'] = df_neigh['weekday'].replace(weekday_mapper)\ndf_neigh['holiday'] = df_neigh['holiday'].replace(holiday_mapper)\ndf_neigh['workingday'] = df_neigh['workingday'].replace(workingday_mapper)\ndf_neigh['season'] = df_neigh['season'].replace(season_mapper)\ndf_neigh['weathersit'] = df_neigh['weathersit'].replace(wethersit_mapper)\ndf_neigh['mnth'] = df_neigh['mnth'].replace(mnth_mapper)\ndf_neigh['yr'] = df_neigh['yr'].replace(yr_mapper)\n\n\" One hot encoding \"\ndf_neigh = pd.get_dummies(df_neigh, prefix_sep='_', drop_first=True)\n\n\" Store the neighbors in a list\"\n\ndata_neigh = df_neigh.values\nn = np.size(data_test,0)\nlist_neigh = []\nj = 0\nfor i in range(0,n):\n list_neigh.append(data_neigh[j:(j+nb_neighbors),:])\n j += nb_neighbors", "_____no_output_____" ] ], [ [ "#### One hot encoding for the training and the test sets", "_____no_output_____" ] ], [ [ "data_train_df['weekday'] = data_train_df['weekday'].replace(weekday_mapper)\ndata_train_df['holiday'] = data_train_df['holiday'].replace(holiday_mapper)\ndata_train_df['workingday'] = data_train_df['workingday'].replace(workingday_mapper)\ndata_train_df['season'] = data_train_df['season'].replace(season_mapper)\ndata_train_df['weathersit'] = data_train_df['weathersit'].replace(wethersit_mapper)\ndata_train_df['mnth'] = data_train_df['mnth'].replace(mnth_mapper)\ndata_train_df['yr'] = data_train_df['yr'].replace(yr_mapper)\n\ndata_train_df = pd.get_dummies(data_train_df, prefix_sep='_', drop_first=True)\ndata_train = data_train_df.values\ntarget_train = target_train_df.values\n\ndata_test_df['weekday'] = data_test_df['weekday'].replace(weekday_mapper)\ndata_test_df['holiday'] = 
data_test_df['holiday'].replace(holiday_mapper)\ndata_test_df['workingday'] = data_test_df['workingday'].replace(workingday_mapper)\ndata_test_df['season'] = data_test_df['season'].replace(season_mapper)\ndata_test_df['weathersit'] = data_test_df['weathersit'].replace(wethersit_mapper)\ndata_test_df['mnth'] = data_test_df['mnth'].replace(mnth_mapper)\ndata_test_df['yr'] = data_test_df['yr'].replace(yr_mapper)\n\ndata_test_df = pd.get_dummies(data_test_df, prefix_sep='_', drop_first=True)\ndata_test = data_test_df.values\ntarget_test = target_test_df.values", "_____no_output_____" ] ], [ [ "## Training the MLP model", "_____no_output_____" ] ], [ [ "\" Sklearn MLP regressor \"\n\nmlp = make_pipeline(StandardScaler(),\n MLPRegressor(hidden_layer_sizes=(50, 50),\n tol=1e-2, \n max_iter=1000, \n random_state=0))\nmodel_nt = mlp.fit(data_train, target_train)\ntarget_pred_nt = model_nt.predict(data_test)", "_____no_output_____" ] ], [ [ "## Execution of Split Based Selection Form Algorithm : ", "_____no_output_____" ], [ "#### Discretization : Equal Frequency ", "_____no_output_____" ] ], [ [ "split_point = len(numerical_cols)\nnb_models = 100\nL_Subgroups_freq = []\n\nL_Subgroups_freq.append(SplitBasedSelectionForm_freq (data_test, target_test, nb_models, model_nt, list_neigh,split_point,4)[0])\nL_Subgroups_freq.append(SplitBasedSelectionForm_freq (data_test, target_test, nb_models, model_nt, list_neigh,split_point,5)[0])\nL_Subgroups_freq.append(SplitBasedSelectionForm_freq (data_test, target_test, nb_models, model_nt, list_neigh,split_point,6)[0])\nL_Subgroups_freq.append(SplitBasedSelectionForm_freq (data_test, target_test, nb_models, model_nt, list_neigh,split_point,7)[0])\nL_Subgroups_freq.append(SplitBasedSelectionForm_freq (data_test, target_test, nb_models, model_nt, list_neigh,split_point,8)[0])\nL_Subgroups_freq.append(SplitBasedSelectionForm_freq (data_test, target_test, nb_models, model_nt, list_neigh,split_point,9)[0])\nL_Subgroups_freq.append(SplitBasedSelectionForm_freq (data_test, target_test, nb_models, model_nt, list_neigh,split_point,10)[0])\n", "_____no_output_____" ] ], [ [ "#### Discretization : Equal Width ", "_____no_output_____" ] ], [ [ "L_Subgroups_width = []\n\nL_Subgroups_width.append(SplitBasedSelectionForm_width (data_test, target_test, nb_models, model_nt, list_neigh,split_point,4)[0])\nL_Subgroups_width.append(SplitBasedSelectionForm_width (data_test, target_test, nb_models, model_nt, list_neigh,split_point,5)[0])\nL_Subgroups_width.append(SplitBasedSelectionForm_width (data_test, target_test, nb_models, model_nt, list_neigh,split_point,6)[0])\nL_Subgroups_width.append(SplitBasedSelectionForm_width (data_test, target_test, nb_models, model_nt, list_neigh,split_point,7)[0])\nL_Subgroups_width.append(SplitBasedSelectionForm_width (data_test, target_test, nb_models, model_nt, list_neigh,split_point,8)[0])\nL_Subgroups_width.append(SplitBasedSelectionForm_width (data_test, target_test, nb_models, model_nt, list_neigh,split_point,9)[0])\nL_Subgroups_width.append(SplitBasedSelectionForm_width (data_test, target_test, nb_models, model_nt, list_neigh,split_point,10)[0])\n", "_____no_output_____" ], [ "\" Define the functions to save and load data \"\nimport pickle\ndef save_obj(obj, name):\n with open(name + '.pkl', 'wb') as f:\n pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)\n\ndef load_obj(name):\n with open(name + '.pkl', 'rb') as f:\n return pickle.load(f)", "_____no_output_____" ], [ "'SAVE THE DATA'\n\npath = './saved_data/'\nsave_obj(data_train, path + 
'data_train_d')\nsave_obj(target_train, path + 'target_train_d')\nsave_obj(data_test, path + 'data_test_d')\nsave_obj(target_test, path + 'target_test_d')\nsave_obj(list_neigh, path + 'list_neighbors_d')", "_____no_output_____" ], [ "'SAVE THE LIST OF THE SUBGROUPS'\nsave_obj(L_Subgroups_freq, path + 'l_list_subgroups_freq')\nsave_obj(L_Subgroups_width, path + 'l_list_subgroups_width')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
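The SplitSD4X notebook above calls `SplitBasedSelectionForm_freq` and `SplitBasedSelectionForm_width` from the project's own scripts, whose implementations are not included in this record. As a generic illustration only (pandas `qcut`/`cut`, not the project's code), the `_freq`/`_width` suffixes correspond to the two classic discretization schemes:

```python
import numpy as np
import pandas as pd

# Illustration of equal-frequency vs. equal-width discretization
# (not the SplitSD4X implementation).
values = pd.Series(np.random.default_rng(0).normal(size=1_000))

eq_freq = pd.qcut(values, q=5)     # equal-frequency: ~200 points per bin
eq_width = pd.cut(values, bins=5)  # equal-width: bins span equal value ranges

print(eq_freq.value_counts().sort_index())
print(eq_width.value_counts().sort_index())
```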
e762fa0f44fc5650a21ec9de20847d7bb7e74f32
1,094
ipynb
Jupyter Notebook
notebooks/1.0-deeptax_start.ipynb
charlos1204/deeptax
eecb0c11acc69baade9087b361d71f306ddaac7c
[ "MIT" ]
null
null
null
notebooks/1.0-deeptax_start.ipynb
charlos1204/deeptax
eecb0c11acc69baade9087b361d71f306ddaac7c
[ "MIT" ]
null
null
null
notebooks/1.0-deeptax_start.ipynb
charlos1204/deeptax
eecb0c11acc69baade9087b361d71f306ddaac7c
[ "MIT" ]
null
null
null
17.365079
57
0.521938
[ [ [ "# deeptax", "_____no_output_____" ], [ "Description: Deep learning taxonomic classification", "_____no_output_____" ] ], [ [ "import socket\nprint(socket.gethostname())", "_____no_output_____" ], [ "import sys\nsys.path.append('../deeptax')\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ] ]
e7630084906679362f136e084e6155627afc8317
966
ipynb
Jupyter Notebook
Untitled2.ipynb
atapin/Caricature-Your-Face
92fbf9156f0522bcc2592673c23a718e20b5114f
[ "MIT" ]
null
null
null
Untitled2.ipynb
atapin/Caricature-Your-Face
92fbf9156f0522bcc2592673c23a718e20b5114f
[ "MIT" ]
null
null
null
Untitled2.ipynb
atapin/Caricature-Your-Face
92fbf9156f0522bcc2592673c23a718e20b5114f
[ "MIT" ]
null
null
null
23
233
0.503106
[ [ [ "<a href=\"https://colab.research.google.com/github/atapin/Caricature-Your-Face/blob/main/Untitled2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
e7630323f3e2a884ee2ebbb3d6ac32d68ed86e70
591,084
ipynb
Jupyter Notebook
notebooks/.ipynb_checkpoints/historam_equalization-checkpoint.ipynb
jabae/CMontage
b431cf22ef8c8756f2e5d63ba707f17e97dc593f
[ "MIT" ]
1
2021-12-21T16:13:25.000Z
2021-12-21T16:13:25.000Z
notebooks/.ipynb_checkpoints/historam_equalization-checkpoint.ipynb
jabae/CMontage
b431cf22ef8c8756f2e5d63ba707f17e97dc593f
[ "MIT" ]
null
null
null
notebooks/.ipynb_checkpoints/historam_equalization-checkpoint.ipynb
jabae/CMontage
b431cf22ef8c8756f2e5d63ba707f17e97dc593f
[ "MIT" ]
null
null
null
886.182909
153,864
0.952829
[ [ [ "# Histogram equalization", "_____no_output_____" ] ], [ [ "import numpy as np\nimport tifffile as tif\nimport cv2\n\nimport matplotlib.pyplot as plt\n\n%matplotlib inline", "_____no_output_____" ], [ "def read_tif(fname):\n    \n    t = tif.imread(fname)\n    img = np.zeros(t.shape)\n    img[:,:] = tif.imread(fname)\n    \n    return img\n\n\ndef normalize(tile):\n    \n    vmin = tile.min(); vmax = tile.max()\n    new_tile = (tile-vmin)*255/(vmax-vmin)\n    \n    return new_tile.astype(\"uint8\")", "_____no_output_____" ], [ "fname = \"/data/research/se/celegans/dataset3/N2DA_1430-2/L1/tif/L1_s0028.tif\"\ntile = read_tif(fname)\ntile = normalize(tile)\n\nplt.figure()\nplt.imshow(tile, cmap=\"gray\", vmin=0, vmax=255)\nplt.axis(\"off\")\nplt.title(\"Original\", fontsize=15)\nplt.show()\n\ntile_histeq = cv2.equalizeHist(tile)\n\nplt.figure()\nplt.imshow(tile_histeq, cmap=\"gray\", vmin=0, vmax=255)\nplt.axis(\"off\")\nplt.title(\"Histogram equalization\", fontsize=15)\nplt.show()\n\nclahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))\ntile_clahe = clahe.apply(tile)\n\nplt.figure()\nplt.imshow(tile_clahe, cmap=\"gray\", vmin=0, vmax=255)\nplt.axis(\"off\")\nplt.title(\"CLAHE\", fontsize=15)\nplt.show()", "_____no_output_____" ], [ "fname = \"/data/research/se/celegans/dataset3/N2DA_1430-2/L1/tif/L1_s0015.tif\"\ntile = read_tif(fname)\ntile = normalize(tile)\n\nplt.figure()\nplt.imshow(tile, cmap=\"gray\", vmin=0, vmax=255)\nplt.axis(\"off\")\nplt.title(\"Original\", fontsize=15)\nplt.show()\n\ntile_histeq = cv2.equalizeHist(tile)\n\nplt.figure()\nplt.imshow(tile_histeq, cmap=\"gray\", vmin=0, vmax=255)\nplt.axis(\"off\")\nplt.title(\"Histogram equalization\", fontsize=15)\nplt.show()\n\nclahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))\ntile_clahe = clahe.apply(tile)\n\nplt.figure()\nplt.imshow(tile_clahe, cmap=\"gray\", vmin=0, vmax=255)\nplt.axis(\"off\")\nplt.title(\"CLAHE\", fontsize=15)\nplt.show()", "_____no_output_____" ], [ "from os import listdir, path\n\nd = \"/data/research/se/celegans/dataset3/N2DA_1430-2/test/tif/\"\nflist = listdir(d)\n\nvar_list = []\n\nfor i in range(len(flist)):\n    \n    fname = path.join(d, flist[i])\n    img = read_tif(fname)\n#     img = normalize(img)\n    \n    var_list.append(np.var(img))\n    print(i)", 
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\n101\n102\n103\n104\n105\n106\n107\n108\n109\n110\n111\n112\n113\n114\n115\n116\n117\n118\n119\n120\n121\n122\n123\n124\n125\n126\n127\n128\n129\n130\n131\n132\n133\n134\n135\n136\n137\n138\n139\n140\n141\n142\n143\n144\n145\n146\n147\n148\n149\n150\n151\n152\n153\n154\n155\n156\n157\n158\n159\n160\n161\n162\n163\n164\n165\n166\n167\n168\n169\n170\n171\n172\n173\n174\n175\n176\n177\n178\n179\n180\n181\n182\n183\n184\n185\n186\n187\n188\n189\n190\n191\n192\n193\n194\n195\n196\n197\n198\n199\n200\n201\n202\n203\n204\n205\n206\n207\n208\n209\n210\n211\n212\n213\n214\n215\n216\n217\n218\n219\n220\n221\n222\n223\n224\n225\n226\n227\n228\n229\n230\n231\n232\n233\n234\n235\n236\n237\n238\n239\n240\n241\n242\n243\n244\n245\n246\n247\n248\n249\n250\n251\n252\n253\n254\n255\n256\n257\n258\n259\n260\n261\n262\n263\n264\n265\n266\n267\n268\n269\n270\n271\n272\n273\n274\n275\n276\n277\n278\n279\n280\n281\n282\n283\n284\n285\n286\n287\n288\n289\n290\n291\n292\n293\n294\n295\n296\n297\n298\n299\n300\n301\n302\n303\n304\n305\n306\n307\n308\n309\n310\n311\n312\n313\n314\n315\n316\n317\n318\n319\n320\n321\n322\n323\n324\n325\n326\n327\n328\n329\n330\n331\n332\n333\n334\n335\n336\n337\n338\n339\n340\n341\n342\n343\n344\n345\n346\n347\n348\n349\n350\n351\n352\n353\n354\n355\n356\n357\n358\n359\n360\n361\n362\n363\n364\n365\n366\n367\n368\n369\n370\n371\n372\n373\n374\n375\n376\n377\n378\n379\n380\n381\n382\n383\n384\n385\n386\n387\n388\n389\n390\n391\n392\n393\n394\n395\n396\n" ], [ "plt.figure()\nplt.hist(var_list)\nplt.xlabel(\"Variance\", fontsize=12)\nplt.ylabel(\"Count\", fontsize=12)\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
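The histogram-equalization cells above display the original, globally equalized, and CLAHE tiles, but never plot the intensity histograms themselves. A small sketch that makes the redistribution visible, assuming the `tile` and `tile_histeq` uint8 arrays from those cells are in scope:

```python
import matplotlib.pyplot as plt

# Sketch: intensity distribution before vs. after global equalization.
# Assumes `tile` and `tile_histeq` from the cells above.
plt.figure()
plt.hist(tile.ravel(), bins=256, range=(0, 255), alpha=0.5, label="original")
plt.hist(tile_histeq.ravel(), bins=256, range=(0, 255), alpha=0.5, label="equalized")
plt.xlabel("Intensity")
plt.ylabel("Pixel count")
plt.legend()
plt.show()
```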
e7630987628d95c1e4ced2d08231b8e4e0e4de49
342,434
ipynb
Jupyter Notebook
Sheet06_Laura.ipynb
lauraflyra/MI_2git
10aa7d5fce20f45279f9a7b90bac35bf8386ab63
[ "MIT" ]
null
null
null
Sheet06_Laura.ipynb
lauraflyra/MI_2git
10aa7d5fce20f45279f9a7b90bac35bf8386ab63
[ "MIT" ]
null
null
null
Sheet06_Laura.ipynb
lauraflyra/MI_2git
10aa7d5fce20f45279f9a7b90bac35bf8386ab63
[ "MIT" ]
null
null
null
610.399287
68,332
0.947975
[ [ [ "# Maximizing nongaussianity\n__Group ALT: Andreea, Laura, Tien __", "_____no_output_____" ], [ "## Exercise H6.1: Kurtosis of Toy Data", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\nimport scipy.io as sio\nimport seaborn as sns", "_____no_output_____" ], [ "########### TIEN\n# https://gist.github.com/dkapitan/fcf45a97caaf48bc3d6be17b5f8b213c\nclass SeabornFig2Grid():\n\n def __init__(self, seaborngrid, fig, subplot_spec):\n self.fig = fig\n self.sg = seaborngrid\n self.subplot = subplot_spec\n if (isinstance(self.sg, sns.axisgrid.FacetGrid) or isinstance(self.sg, sns.axisgrid.PairGrid)):\n self._movegrid()\n elif isinstance(self.sg, sns.axisgrid.JointGrid):\n self._movejointgrid()\n self._finalize()\n\n def _movegrid(self):\n \"\"\" Move PairGrid or Facetgrid \"\"\"\n self._resize()\n n = self.sg.axes.shape[0]\n m = self.sg.axes.shape[1]\n self.subgrid = gridspec.GridSpecFromSubplotSpec(\n n, m, subplot_spec=self.subplot)\n for i in range(n):\n for j in range(m):\n self._moveaxes(self.sg.axes[i, j], self.subgrid[i, j])\n\n def _movejointgrid(self):\n \"\"\" Move Jointgrid \"\"\"\n h = self.sg.ax_joint.get_position().height\n h2 = self.sg.ax_marg_x.get_position().height\n r = int(np.round(h / h2))\n self._resize()\n self.subgrid = (gridspec.GridSpecFromSubplotSpec(r + 1, r + 1,\n subplot_spec=self.subplot))\n\n self._moveaxes(self.sg.ax_joint, self.subgrid[1:, :-1])\n self._moveaxes(self.sg.ax_marg_x, self.subgrid[0, :-1])\n self._moveaxes(self.sg.ax_marg_y, self.subgrid[1:, -1])\n\n def _moveaxes(self, ax, gs):\n # https://stackoverflow.com/a/46906599/4124317\n ax.remove()\n ax.figure = self.fig\n self.fig.axes.append(ax)\n self.fig.add_axes(ax)\n ax._subplotspec = gs\n ax.set_position(gs.get_position(self.fig))\n ax.set_subplotspec(gs)\n\n def _finalize(self):\n plt.close(self.sg.fig)\n self.fig.canvas.mpl_connect(\"resize_event\", self._resize)\n self.fig.canvas.draw()\n\n def _resize(self, evt=None):\n self.sg.fig.set_size_inches(self.fig.get_size_inches())\n \n \ndef plt_jointplot(normal, laplac, unifor, add_title = ''): # data must be of shape 2 x p\n fig = plt.figure(figsize=(13,4))\n gs = gridspec.GridSpec(1, 3)\n\n plt1 = sns.jointplot(x = normal[0], y = normal[1], kind = 'scatter')\n plt2 = sns.jointplot(x = laplac[0], y = laplac[1], kind = 'scatter')\n plt3 = sns.jointplot(x = unifor[0], y = unifor[1], kind = 'scatter')\n \n fig.suptitle(f'Normal, Laplacian and uniformly distributed {add_title}', fontsize = 13,y=1.05)\n \n mg0 = SeabornFig2Grid(plt1, fig, gs[0])\n mg1 = SeabornFig2Grid(plt2, fig, gs[1])\n mg2 = SeabornFig2Grid(plt3, fig, gs[2])\n \n gs.tight_layout(fig)\n plt.show()", "_____no_output_____" ], [ "def principal_components(centered_data):\n \"\"\"\n gets centered data were the columns are different components and rows are different observations\n return: centered data, covariance matrix, sorted eigenvalues and normalized eigenvectors IN COLUMNS \n in descending order\n \"\"\"\n\n cov_matrix = np.cov(centered_data)\n eigenval, eigenvec = np.linalg.eig(cov_matrix)\n \n idx = eigenval.argsort()[::-1]\n sorted_eigenvals = eigenval[idx]\n sorted_eigenvecs = eigenvec[idx,:]\n \n sorted_eigenvecs = sorted_eigenvecs/np.linalg.norm(sorted_eigenvecs,axis=0) # normalize eigenvectors\n return cov_matrix, sorted_eigenvals,sorted_eigenvecs", "_____no_output_____" ], [ "distrib = sio.loadmat(\"distrib.mat\")\ns_unifor = distrib[\"uniform\"]\ns_normal = distrib[\"normal\"]\ns_laplac = 
distrib[\"laplacian\"]\n\nN = 2\np = 10000", "_____no_output_____" ] ], [ [ "__(a) Apply the mixing matrix $\\mathbf{A}$ to the original sources $\\mathbf{s}$.__", "_____no_output_____" ] ], [ [ "A = np.array([[4,3],[2,1]])\nx_normal = A @ s_normal\nx_laplac = A @ s_laplac\nx_unifor = A @ s_unifor", "_____no_output_____" ] ], [ [ "__(b) Center the mixtures $\\mathbf{x}$ to zero mean.__", "_____no_output_____" ] ], [ [ "x_normal_cent = x_normal - np.mean(x_normal, axis = 1).reshape(2,1)\nx_laplac_cent = x_laplac - np.mean(x_laplac, axis = 1).reshape(2,1)\nx_unifor_cent = x_unifor - np.mean(x_unifor, axis = 1).reshape(2,1)", "_____no_output_____" ] ], [ [ "__(c) Decorrelate the mixtures from (b) by applying principal component analysis (PCA) on them\nand project them onto the PCs.__", "_____no_output_____" ] ], [ [ "_, eigvals_normal ,pca_normal = principal_components(x_normal_cent)\n_, eigvals_laplac ,pca_laplac = principal_components(x_laplac_cent)\n_, eigvals_unifor ,pca_unifor = principal_components(x_unifor_cent)", "_____no_output_____" ], [ "x_normal_decorr = pca_normal.T @ x_normal_cent\nx_laplac_decorr = pca_laplac.T @ x_laplac_cent\nx_unifor_decorr = pca_unifor.T @ x_unifor_cent\n\nlambda_inv_normal = np.diag(1/np.sqrt(eigvals_normal))\nlambda_inv_laplac = np.diag(1/np.sqrt(eigvals_laplac))\nlambda_inv_unifor = np.diag(1/np.sqrt(eigvals_unifor))\n\nx_normal_sphere = lambda_inv_normal @ x_normal_decorr\nx_laplac_sphere = lambda_inv_laplac @ x_laplac_decorr\nx_unifor_sphere = lambda_inv_unifor @ x_unifor_decorr", "_____no_output_____" ] ], [ [ "__(e) Rotate the whitened mixtures by different angles $\\theta$ and calculate the (excess) kurtosis empirically for each dimension in $\\mathbf{x}$.__", "_____no_output_____" ] ], [ [ "def rotate(theta):\n return np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])\n\ndef kurtosis(x, theta):\n R = rotate(theta)\n x_theta = R @ x\n kurt = np.mean(x_theta**4, axis =1) - 3\n return kurt", "_____no_output_____" ], [ "thetas = np.arange(0,2*np.pi+np.pi/50, np.pi/50)\n\nkurt_normal = np.zeros((len(thetas),2))\nkurt_laplac = np.zeros((len(thetas),2))\nkurt_unifor = np.zeros((len(thetas),2))\n\nfor i,theta in enumerate(thetas):\n kurt_normal[i] = kurtosis(x_normal_sphere, theta)\n kurt_laplac[i] = kurtosis(x_laplac_sphere, theta)\n kurt_unifor[i] = kurtosis(x_unifor_sphere, theta)\n", "_____no_output_____" ], [ "fig, axs = plt.subplots(1, 3, figsize = (13, 3), constrained_layout = True)\nkurts = [kurt_normal.T, kurt_laplac.T, kurt_unifor.T]\ntitles = ['normal', 'laplacian', 'uniformly']\n\nfor i, ax in enumerate(axs.flatten()):\n plt.sca(ax)\n plt.plot(thetas, kurts[i][0])\n plt.plot(thetas, kurts[i][1])\n plt.xlabel('theta')\n plt.ylabel('kurtosis')\n plt.title(f'Kurtosis values for {titles[i]} distributed $x$')\n \nfig.suptitle(r'Kurtosis values of each dimension in $\\mathbf{x}$ w.r.t. 
to $\\theta$', fontsize = 13)\nplt.show()", "_____no_output_____" ] ], [ [ "__(f) Find the minimum and maximum kurtosis value for the first dimension and rotate the data accordingly.__", "_____no_output_____" ] ], [ [ "theta_normal_min, theta_normal_max = thetas[np.argmin(kurt_normal.T[0])],thetas[np.argmax(kurt_normal.T[0])]\ntheta_laplac_min, theta_laplac_max = thetas[np.argmin(kurt_laplac.T[0])],thetas[np.argmax(kurt_laplac.T[0])]\ntheta_unifor_min, theta_unifor_max = thetas[np.argmin(kurt_unifor.T[0])],thetas[np.argmax(kurt_unifor.T[0])]", "_____no_output_____" ], [ "def rotate_min_max(data, theta_min, theta_max):\n R_min = rotate(theta_min)\n R_max = rotate(theta_max)\n x_theta_min = R_min @ data\n x_theta_max = R_max @ data\n return x_theta_min, x_theta_max", "_____no_output_____" ], [ "x_normal_rot_min, x_normal_rot_max = rotate_min_max(x_normal_sphere, theta_normal_min, theta_normal_max)\nx_laplac_rot_min, x_laplac_rot_max = rotate_min_max(x_laplac_sphere, theta_laplac_min, theta_laplac_max)\nx_unifor_rot_min, x_unifor_rot_max = rotate_min_max(x_unifor_sphere, theta_unifor_min, theta_unifor_max)", "_____no_output_____" ], [ "plt_jointplot(s_normal, s_laplac, s_unifor, 'original sources')", "_____no_output_____" ], [ "plt_jointplot(x_normal, x_laplac, x_unifor, 'mixtures')", "_____no_output_____" ], [ "plt_jointplot(x_normal_cent, x_laplac_cent, x_unifor_cent, 'centered mixtures')", "_____no_output_____" ], [ "plt_jointplot(x_normal_decorr, x_laplac_decorr, x_unifor_decorr, 'decorrelated mixtures')", "_____no_output_____" ], [ "plt_jointplot(x_normal_sphere, x_laplac_sphere, x_unifor_sphere, 'sphered mixtures')", "_____no_output_____" ], [ "plt_jointplot(x_normal_rot_min, x_laplac_rot_min, x_unifor_rot_min, r'sphered mixtures after rotation by $\\theta_{min}$ ')", "_____no_output_____" ], [ "plt_jointplot(x_normal_rot_max, x_laplac_rot_max, x_unifor_rot_max, r'sphered mixtures after rotation by $\\theta_{max}$ ')", "_____no_output_____" ], [ "print(f'Minimum kurtosis value of the normal distribution: {np.min(kurt_normal.T[0])}')\nprint(f'Maximum kurtosis value of the normal distribution: {np.max(kurt_normal.T[0])}')\nprint('')\n\nprint(f'Minimum kurtosis value of the Laplace distribution: {np.min(kurt_laplac.T[0])}')\nprint(f'Maximum kurtosis value of the Laplace distribution: {np.max(kurt_laplac.T[0])}')\nprint('')\n\nprint(f'Minimum kurtosis value of the uniform distribution: {np.min(kurt_unifor.T[0])}')\nprint(f'Maximum kurtosis value of the uniform distribution: {np.max(kurt_unifor.T[0])}')", "Minimum kurtosis value of the normal distribution: -0.07290092529895587\nMaximum kurtosis value of the normal distribution: 0.0013696690943216794\n\nMinimum kurtosis value of the Laplace distribution: 1.584103194281238\nMaximum kurtosis value of the Laplace distribution: 3.0186505751018835\n\nMinimum kurtosis value of the uniform distribution: -1.216561706053717\nMaximum kurtosis value of the uniform distribution: -0.581252500548497\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
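The `kurtosis` helper in the notebook above computes `np.mean(x_theta**4, axis=1) - 3`, which equals the excess kurtosis only because the mixtures were sphered first. For a zero-mean signal $y$ the general expression and its unit-variance special case are:

```latex
\mathrm{kurt}(y) = \mathbb{E}\left[y^{4}\right] - 3\left(\mathbb{E}\left[y^{2}\right]\right)^{2}
\;\xrightarrow{\;\mathbb{E}[y^{2}] \,=\, 1\;}\;
\mathbb{E}\left[y^{4}\right] - 3 .
```

This is why the sphering step precedes the rotation sweep: without unit variance, the simplified formula would mix scale effects into the non-Gaussianity measure.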
e7630a580f1e685c7c942e5d4c22a6ae7473bffa
52,119
ipynb
Jupyter Notebook
day1/tutorial.ipynb
grokkaine/biopycourse
cb8b554abb987e6f657c5e522c7e28ecbc9fb4d5
[ "CC0-1.0" ]
9
2017-05-16T06:07:22.000Z
2021-08-06T14:58:28.000Z
day1/tutorial.ipynb
grokkaine/biopycourse
cb8b554abb987e6f657c5e522c7e28ecbc9fb4d5
[ "CC0-1.0" ]
null
null
null
day1/tutorial.ipynb
grokkaine/biopycourse
cb8b554abb987e6f657c5e522c7e28ecbc9fb4d5
[ "CC0-1.0" ]
18
2017-05-16T07:25:08.000Z
2021-04-22T19:22:53.000Z
24.377456
831
0.485293
[ [ [ "# Python tutorial\n\n- [Basics](#Basics): Math, Variables, Functions, Control flow, Modules\n- [Data representation](#Data-representation): String, Tuple, List, Set, Dictionary, Objects and Classes\n- [Standard library modules](#Standard-library-modules): script arguments, file operations, timing, processes, forks, multiprocessing", "_____no_output_____" ], [ "- Beginners in programming have a hard time knowing where to begin and are overwhelmed.\n- These first steps are not very methodical. Programming language classes take time, start with basic concepts and gradually improve and expand on them. \n- Universities offer more relaxed programming classes for students, but this is a different type of learning. Students have time for many things, for the rest of us working people there are two or perhaps three steps to learning a new language or a new library, and we learn by doing. The [tutorial](https://docs.python.org/3/tutorial/) is a simple exposure that takes you through some key aspects, next comes the programming guide (or any decent book) where concepts are explained in greater detail and at last there is the [reference library](https://docs.python.org/3/library/index.html), where every detail is supposed to be documented in concise format. Good programmers learn to read the tutorial, read some key aspects from the guide that make their library stand out and only check the reference guide when needed.\n- No matter how good I would become at teaching basic Python in one or two hours, it fails to compare with reading the default tutorial, which would take much longer time than we have at our disposal. So please understand that there is a trade-off between time and quality. As we learn by doing I would like to invite you to check whenever possible the documentation for Python and for the libraries we are using.\n- If you want to a certain standard module to be discussed in more depth, mention it in Hackmd/Slack!", "_____no_output_____" ], [ "# Basics", "_____no_output_____" ], [ "### Variables and comments\n\nVariable vs type. 'Native' datatypes. Console output.", "_____no_output_____" ] ], [ [ "# This is a line comment.\n\"\"\"\nA multi-line\ncomment.\n\"\"\"\na = None #Just declared an empty object\nprint(a)\na = 1\nprint(a)\na = 'abc'\nprint(a)\nb = 3\nc = [1, 2, 3]\na = [a, 2, b, 1., 1.2e-5, True] #This is a list.\nprint(a)", "None\n1\nabc\n['abc', 2, 3, 1.0, 1.2e-05, True]\n" ], [ "## Python is a dynamic language\na = 1\nprint(type(a))\nprint(a)\na = \"spam\"\nprint(type(a))\nprint(a)", "<class 'int'>\n1\n<class 'str'>\nspam\n" ], [ "a = 1\na\nb = 'abc'\nprint(b)\n#b", "abc\n" ] ], [ [ "Now let us switch the values of two variables.", "_____no_output_____" ] ], [ [ "print(a, b, c)\nt = c\nc = b\nb = t\nprint(a, b, c)", "['abc', 2, 3, 1.0, 1.2e-05, True] 3 [1, 2, 3]\n['abc', 2, 3, 1.0, 1.2e-05, True] [1, 2, 3] 3\n" ] ], [ [ "### Math operations\n\n#### Arithmetic", "_____no_output_____" ] ], [ [ "a = 2\nb = 1\nb = a*(5 + b) + 1/0.5\nprint(b)\nd = 1/a\nprint(d)", "14.0\n0.5\n" ] ], [ [ "#### Logical operations:", "_____no_output_____" ] ], [ [ "a = True\nb = 3\nprint(b == 5)\nprint(a == False)\nprint(b < 6 and not a)\nprint(b < 6 or not a)\nprint(b < 6 and (not a or not b == 3))", "False\nFalse\nFalse\nTrue\nFalse\n" ], [ "print(False and True)", "False\n" ], [ "True == 1", "_____no_output_____" ] ], [ [ "### Functions\n\nFunctions are a great way to separate code into readable chunks. 
The exact size and number of functions needed to solve a problem will affect readability.\n\nNew concepts: indentation, namespaces, global and local scope, default parameters, passing arguments by value or by reference is meaningless in Python, what are mutable and imutable types?", "_____no_output_____" ] ], [ [ "## Indentation and function declaration, parameters of a function\ndef operation(a, b):\n c = 2*(5 + b) + 1/0.5\n a = 1\n return a, c\n\na = None\nmu = 2\noperation(mu, 1)\na, op = operation(a, 1)\nprint(a, op)", "1 14.0\n" ], [ "# Function scope, program workflow\ndef f(a):\n print(\"inside the scope of f():\")\n a = 4\n print(\"a =\", a)\n return a\na = 1\nprint(\"f is called\")\nf(a)\nprint(\"outside the scope of f, a=\", a)\nprint(\"also outside the scope of f, f returns\", f(a))", "f is called\ninside the scope of f():\na = 4\noutside the scope of f, a= 1\ninside the scope of f():\na = 4\nalso outside the scope of f, f returns 4\n" ], [ "## Defining default parameters for a function\ndef f2(a, b=1):\n return a + b\n\nprint(f2(5))\nprint(f2(5, b=2))", "6\n7\n" ], [ "## Globals. Never use them!\ng = 0\n\ndef f1():\n # Comment bellow to spot the diference\n global g # Needed to modify global copy of g\n g = 1\n\ndef f2():\n print(\"f2:\",g)\n\nprint(g)\nf1()\nprint(g)\nf2()", "0\n1\nf2: 1\n" ] ], [ [ "Task:\n- Define three functions, f, g and h. Call g and h from inside f. Run f on some value v.\n- You can also have functions that are defined inside the namespace of another function. Try it!", "_____no_output_____" ], [ "#### Data types\n\nEverything is an object in Python, and every object has an ID (or identity), a type, and a value. This means that whenever you assign an expression to a variable, you're not actually copying the value into a memory location denoted by that variable, but you're merely giving a name to the memory location where the value actually exists.\n\n- Once created, the ID of an object never changes. It is a unique identifier for it, and it is used behind the scenes by Python to retrieve the object when we want to use it.\n- The type also never changes. The type tells what operations are supported by the object and the possible values that can be assigned to it.\n- The value can either change or not. 
If it can, the object is said to be mutable, while when it cannot, the object is said to be immutable.\n", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(url= \"../img/mutability.png\", width=400, height=400)", "_____no_output_____" ], [ "i = 43\nprint(id(i))\nprint(type(i))\nprint(i)\n\ni = 42\nprint(id(i))\nprint(type(i))\nprint(i)", "140734864860896\n<class 'int'>\n43\n140734864860864\n<class 'int'>\n42\n" ], [ "i = 43\nprint(id(i))\nprint(type(i))\nprint(i)\n\ni = i + 1\nprint(id(i))\nprint(type(i))\nprint(i)", "140734997571296\n<class 'int'>\n43\n140734997571328\n<class 'int'>\n44\n" ], [ "# assignments reference the same object as i\ni = 43\nprint(id(i))\nprint(type(i))\nprint(i)\n\nj = i\nprint(id(j))\nprint(type(j))\nprint(j)", "140734864860896\n<class 'int'>\n43\n140734864860896\n<class 'int'>\n43\n" ], [ "# Task: will j also change?\ni = 5", "_____no_output_____" ], [ "# Strings of characters are also immutable, x did not changed its value\nx = 'foo'\ny = x\nprint(x, y) # foo\ny += 'bar'\nprint(x, y) # foo", "foo foo\nfoo foobar\n" ], [ "# lists are mutable\nx = [1, 2, 3]\nprint(x)\nprint(id(x))\nprint(type(x))\nx.pop()\n#x = [1, 2, 3]\nprint(x)\nprint(id(x))", "[1, 2, 3]\n1799133954760\n<class 'list'>\n[1, 2]\n1799133954760\n" ] ], [ [ "Question:\n- Why weren't all data types made mutable only, or immutable only?\n\nBelow, if ints would have been mutable, you would expect both variables to be updated. But you normally want variables pointing to ints to be independent.", "_____no_output_____" ] ], [ [ "a = 5\nb = a\na += 5\nprint(a, b)", "10 5\n" ], [ "## A list however is mutable datatype in Python\nx = [1, 2, 3]\ny = x\nprint(x, y) # [1, 2, 3]\ny += [3, 2, 1]\nprint(x, y) # [1, 2, 3, 3, 2, 1]", "[1, 2, 3] [1, 2, 3]\n[1, 2, 3, 3, 2, 1] [1, 2, 3, 3, 2, 1]\n" ], [ "## String mutable? No\ndef func(val):\n val += 'bar'\n return val\n\nx = 'foo'\nprint(x) # foo\nprint(func(x))\nprint(x) # foo", "foo\nfoobar\nfoo\n" ], [ "## List mutable? Yes.\ndef func(val):\n val += [3, 2, 1]\n return val\n\nx = [1, 2, 3]\nprint(x) # [1, 2, 3]\nprint(func(x))\nprint(x) # [1, 2, 3, 3, 2, 1]", "[1, 2, 3]\n[1, 2, 3, 3, 2, 1]\n[1, 2, 3, 3, 2, 1]\n" ] ], [ [ "**Control flow**\n\nThere are two major types of programming languages, procedural and functional. Python is mostly procedural, with very simple functional elements. Procedural languages typicaly have very strong control flow specifications. Programmers spend time specifying how a program should run. In functional languages the time is spent defining the program while how to run it is left to the computer. 
Scala is the most used functional language in Bioinformatics.", "_____no_output_____" ] ], [ [ "# for loops\nfor b in [1, 2, 3]:\n print(b)", "1\n2\n3\n" ], [ "# while, break and continue\nb = 0\nwhile b < 10:\n b += 1\n a = 2\n if b%a == 0:\n #break\n continue\n print(b)\n\n# Now do the same, but using the for loop", "1\n3\n5\n7\n9\n" ], [ "## if else: use different logical operators and see if it makes sense\na = 1\nif a == 3:\n print('3')\nelif a == 4:\n print('4')\nelse:\n print('something else..')", "something else..\n" ], [ "## error handling - use sparingly!\n## python culture: better to apologise than to verify!\ndef divide(x, y):\n \"\"\"catches an exception\"\"\"\n try:\n result = x / y\n except ZeroDivisionError:\n print(\"division by zero!\")\n #raise ZeroDivisionError\n #pass\n else:\n print(\"result is\", result)\n finally:\n print(\"executing finally code block..\")\ndivide(1,0)", "division by zero!\nexecuting finally code block..\n" ] ], [ [ "# Python modules\n\n```\nimport xls\n\"How can you simply import Excel !?!\"\n```\n\n- How Python is structured:\n\nPackages are the way code libraries are distributed. Libraries contain one or several modules. Each module can contain object classes, functions and submodules.\n\n- Object introspection.\n\nIt happens often that some Python code that you require is not well documented. To understand how to use the code one can interogate any object during runtime. Aditionally the code is always located somewhere on your computer.\n", "_____no_output_____" ] ], [ [ "import math\nprint(dir())\nprint(dir(math))\nprint(help(math.log))\na = 3\nprint(type(a))\nimport numpy\nprint(numpy.__version__)\nimport os\nprint(os.getcwd())", "['In', 'Out', '_', '_1', '_7', '__', '___', '__builtin__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', '_dh', '_i', '_i1', '_i10', '_i11', '_i12', '_i13', '_i2', '_i3', '_i4', '_i5', '_i6', '_i7', '_i8', '_i9', '_ih', '_ii', '_iii', '_oh', '_sh', 'a', 'b', 'c', 'd', 'exit', 'func', 'get_ipython', 'math', 'quit', 't', 'x']\n['__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign', 'cos', 'cosh', 'degrees', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'gcd', 'hypot', 'inf', 'isclose', 'isfinite', 'isinf', 'isnan', 'ldexp', 'lgamma', 'log', 'log10', 'log1p', 'log2', 'modf', 'nan', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'tau', 'trunc']\nHelp on built-in function log in module math:\n\nlog(...)\n log(x[, base])\n \n Return the logarithm of x to the given base.\n If the base not specified, returns the natural logarithm (base e) of x.\n\nNone\n<class 'int'>\n1.12.1\n/home/sergiu/data/work/course/short/github/biopycourse/day1\n" ] ], [ [ "**Task:**\n\n- Compute the distance between 2D points.\n- `d(p1, p2)=sqrt((x1-x2)**2+(y1-y2)**2), where pi(xi,yi)`\n- Define a module containing a function that computes the euclidian distance. 
Use the Spyder code editor and save the module on your filesystem.\n- Import that module into a new code cell bellow.\n- Make the module location available to Jupyter.", "_____no_output_____" ] ], [ [ "\"\"\"\n%run full(relative)path/distance.py\n\nor\nos.setcwd(path)\n\n\"\"\"\nimport distance\nprint(distance.euclidian(1, 2, 4.5 , 6))\n\nfrom distance import euclidian\nprint(euclidian(1, 2, 4.5 , 6))\n\nimport distance as d\nprint(d.euclidian(1, 2, 4.5 , 6))\n", "5.315072906367325\n5.315072906367325\n5.315072906367325\n" ], [ "import sys\nprint(sys.path)\nsys.path.append('/my/custom/path')\nprint(sys.path)", "['', '/home/sergiu/programs/miniconda3/envs/lts/lib/python36.zip', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/lib-dynload', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/site-packages', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/site-packages/IPython/extensions', '/home/sergiu/.ipython']\n['', '/home/sergiu/programs/miniconda3/envs/lts/lib/python36.zip', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/lib-dynload', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/site-packages', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/site-packages/IPython/extensions', '/home/sergiu/.ipython', '/my/custom/path']\n" ] ], [ [ "## Data representation\n\n\n### Strings", "_____no_output_____" ] ], [ [ "#String declarations\nstatement = \"Gene IDs are great. My favorite gene ID is\"\nname = \"At5G001024\"\nstatement = statement + \" \" + name\nprint(statement)\n\nstatement2 = 'Genes names \\n \\'are great. My favorite gene name is ' + 'Afldtjahd'\nstatement3 = \"\"\"\nGene IDs are great.\nMy favorite genes are {} and {}.\"\"\".format(name, 'ksdyfngusy')\n\nprint(statement2)\nprint(statement3)\nprint('.\\n'.join(statement.split(\". \")))\nprint (statement.split(\". \"))", "Gene IDs are great. My favorite gene ID is At5G001024\nGenes names \n 'are great. My favorite gene name is Afldtjahd\n\nGene IDs are great.\nMy favorite genes are At5G001024 and ksdyfngusy.\nGene IDs are great.\nMy favorite gene ID is At5G001024\n['Gene IDs are great', 'My favorite gene ID is At5G001024']\n" ], [ "#String methods\nname = \"At5G001024\"\nprint(name.lower())\nprint(name.index('G00'))\nprint(name.rstrip('402'))\nprint(name.strip('Add34'))", "at5g001024\n3\nAt5G001\nt5G00102\n" ], [ "#Splits, joins\nstatement = \"Gene IDs are great. My favorite gene ID is At5G001024\"\nwords = statement.split()\nprint(\"Splitting a string:\", words)\nprint(\"Joining into a string:\", \"\\t \".join(words))\nimport random\nrandom.shuffle(words)\nprint(\"Fun:\", \" \".join(words))", "Splitting a string: ['Gene', 'IDs', 'are', 'great.', 'My', 'favorite', 'gene', 'ID', 'is', 'At5G001024']\nJoining into a string: Gene\t IDs\t are\t great.\t My\t favorite\t gene\t ID\t is\t At5G001024\nFun: great. favorite are IDs gene Gene At5G001024 My is ID\n" ], [ "#Strings are immutable lists of characters!\nprint(statement)\nprint(statement[0:5] + \" blabla \" + statement[-10:-5])", "Gene IDs are great. 
My favorite gene ID is At5G001024\nGene blabla At5G0\n" ] ], [ [ "### Tuples\n\nA few pros for tuples:\n- Tuples are faster than lists\n- Tuples can be keys to dictionaires (they are immutable types)", "_____no_output_____" ] ], [ [ "#a tupple is an immutable list\na = (1, \"spam\", 5)\n#a.append(\"eggs\")\n\nprint(a[1])\nb = (1, \"one\")\nc = (a, b, 3)\nprint(c)\n\n#unpacking a collection into positional arguments\ndef sum(a, b):\n return a + b\nvalues = (5, 2)\ns = sum(*values)\nprint(s)", "spam\n((1, 'spam', 5), (1, 'one'), 3)\n7\n" ] ], [ [ "## Lists\n", "_____no_output_____" ] ], [ [ "a = [1,\"one\",(2,\"two\")]\nprint(a[0])\nprint(a)\na.append(3)\nprint(a)\nb = a + a[:2]\nprint(b)", "1\n[1, 'one', (2, 'two')]\n[1, 'one', (2, 'two'), 3]\n[1, 'one', (2, 'two'), 3, 1, 'one']\n" ], [ "## slicing and indexing\nprint(b[2:5])\ndel a[-1]\nprint(a)\nprint(a.index(\"one\"))\nprint(len(a))", "[(2, 'two'), 3, 1]\n[1, 'one', (2, 'two')]\n1\n3\n" ], [ "## not just list size but list elements too are scoping free! (list is mutable)\ndef f(a, b):\n a[1] = \"changed\"\n b = [1,2]\n return\na = [(2, 'two'), 3, 1]\nb = [2, \"two\"]\nf(a, b)\nprint(a, b)", "[(2, 'two'), 'changed', 1] [2, 'two']\n" ], [ "## matrix\nmatrix = [\n [1, 2, 3, 4],\n [5, 6, 7, 8],\n [9, 10, 11, 12]]\n\nprint(matrix)\nprint(matrix[0][1])\nprint(list(range(2,10,3)))\nfor x in range(len(matrix)):\n for y in range(len(matrix[x])):\n print(x,y, matrix[x][y])", "[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]\n2\n[2, 5, 8]\n0 0 1\n0 1 2\n0 2 3\n0 3 4\n1 0 5\n1 1 6\n1 2 7\n1 3 8\n2 0 9\n2 1 10\n2 2 11\n2 3 12\n" ], [ "## ranges\nr = range(0, 5)\nfor i in r: print(\"step\", i)", "step 0\nstep 1\nstep 2\nstep 3\nstep 4\n" ], [ "## list comprehensions\ndef f(i):\n return 2*i\na = [2*i for i in range(10)]\na = [f(i) for i in range(10)]\nprint(a)\nb = [str(e) for e in a[4:] if e%3==0]\nprint(b)", "[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]\n['12', '18']\n" ], [ "## sorting a list of tupples\na = [(str(i), str(j)) for i in a for j in range(3)]\nprint(a)\na.sort(key=lambda tup: tup[1])\na.sort(key=lambda tup: len(tup[1]), reverse = True)\nprint(a)", "[('s', '0'), ('s', '1'), ('s', '2'), ('p', '0'), ('p', '1'), ('p', '2'), ('a', '0'), ('a', '1'), ('a', '2'), ('m', '0'), ('m', '1'), ('m', '2')]\n[('s', '0'), ('p', '0'), ('a', '0'), ('m', '0'), ('s', '1'), ('p', '1'), ('a', '1'), ('m', '1'), ('s', '2'), ('p', '2'), ('a', '2'), ('m', '2')]\n" ], [ "#zipping and enumerating\n\ny = zip('abc', 'def')\nprint(list(y)) # y is a generator\nprint(list(y)) # second cast to list, content is empty!\n\nprint(list(zip(['one', 'two', 'three'], [1, 2, 3])))\n\nx = [1, 2, 3]\ny = [4, 5, 6]\nzipped = zip(x, y)\n#print(type(zipped))\nprint(zipped)\n\nx2, y2 = zip(*zipped)\nprint (x == list(x2) and y == list(y2))\nprint (x2, y2)\n\nalist = ['a1', 'a2', 'a3']\nfor i, e in enumerate(alist): print (i, e) #this is called a one liner\nfor i in range(len(alist)):\n print(i, alist[i])\nprint(list(range(len(alist))))", "[('a', 'd'), ('b', 'e'), ('c', 'f')]\n[]\n[('one', 1), ('two', 2), ('three', 3)]\n<zip object at 0x7f56500f4fc8>\nTrue\n(1, 2, 3) (4, 5, 6)\n0 a1\n1 a2\n2 a3\n0 a1\n1 a2\n2 a3\n[0, 1, 2]\n" ], [ "# mapping\ndef f(a):\n return max(a)\n\na = [1, 2, 3, 4, 5]\nb = [2, 2, 9, 0, 9]\nprint(list(map(lambda x: max(x), zip(a, b))))\nprint(list(map(lambda x: f(x), zip(a, b))))\n#print(list(zip(a, b)))\n", "[2, 2, 9, 4, 9]\n[2, 2, 9, 4, 9]\n" ], [ "# deep and shallow copies on mutable objects or collections of mutable objects\nlst1 = ['a','b',['ab','ba']]\nlst2 = lst1 #this is 
a shallow copy of the entire list\nlst2[0]='e'\nprint(lst1)\n\nlst1 = ['a','b',['ab','ba']]\nlst2 = lst1[:] #this is a shallow copy of each element\nlst2[0] = 'e'\nlst2[2][1] = 'd'\nprint(lst1)\n\nfrom copy import deepcopy\nlst1 = ['a','b',['ab','ba']]\nlst2 = deepcopy(lst1) #this is a deep copy\nlst2[2][1] = \"d\"\nlst2[0] = \"c\";\nprint(lst2)\nprint(lst1)", "['e', 'b', ['ab', 'ba']]\n['a', 'b', ['ab', 'd']]\n['c', 'b', ['ab', 'd']]\n['a', 'b', ['ab', 'ba']]\n" ] ], [ [ "### Sets\n\nSets have no order and cannot include identical elements. Use them when the position of elements is not relevant. Finding elements is faster than in a list. Also set operations are more straightforward. A frozen set has a hash value.\n\n#### Task:\n- Find on the Internet the official reference documentation for the Python sets", "_____no_output_____" ] ], [ [ "# set vs. frozenset\ns = set()\n#s = frozenset()\ns.add(1)\ns = s | set([2,\"three\"])\ns |= set([2,\"three\"])\ns.add(2)\ns.remove(1)\nprint(s)\nprint(\"three\" in s)", "{2, 'three'}\nTrue\n" ], [ "s1 = set(range(10))\ns2 = set(range(5,15))\ns3 = s1 & s2\nprint(s1, s2, s3)\ns3 = s1 - s2\nprint(s1, s2, s3)\nprint(s3 <= s1)\ns3 = s1 ^ s2\nprint(s1, s2, s3)", "{0, 1, 2, 3, 4, 5, 6, 7, 8, 9} {5, 6, 7, 8, 9, 10, 11, 12, 13, 14} {8, 9, 5, 6, 7}\n{0, 1, 2, 3, 4, 5, 6, 7, 8, 9} {5, 6, 7, 8, 9, 10, 11, 12, 13, 14} {0, 1, 2, 3, 4}\nTrue\n{0, 1, 2, 3, 4, 5, 6, 7, 8, 9} {5, 6, 7, 8, 9, 10, 11, 12, 13, 14} {0, 1, 2, 3, 4, 10, 11, 12, 13, 14}\n" ] ], [ [ "### Dictionary\n\n- considered one of the most elegant data structure in Python\n- A set of key: value pairs.\n- Keys must be hashable elements, values can be any Python datatype.\n- The keys of the dictionary are hashable i.e. the are generated by hashing function which generates unique result for each unique value supplied to the hash function. This makes a dictionary value retrieval by key much faster than if using a list!\nTODO: !timeit, sha() function", "_____no_output_____" ] ], [ [ "d = {'geneid9': 100, 'geneid8': 90, 'geneid7': 80, 'geneid6': 70, 'geneid5': 60, 'geneid4': 50}\nd", "_____no_output_____" ], [ "d = {}\nd['geneid10'] = 110\nd", "_____no_output_____" ], [ "#Creation: dict(list)\ngenes = ['geneid1', 'geneid2', 'geneid3']\nvalues = [20, 30, 40]\nd = dict(zip(genes, values))\nprint(d)", "{'geneid1': 20, 'geneid3': 40, 'geneid2': 30}\n" ], [ "#Creation: dictionary comprehensions\nd2 = { 'geneid'+str(i):10*(i+1) for i in range(4, 10) }\nprint(d2)\n\n#Keys and values\nprint(d2.keys())\nprint(d2.values())\nfor k in d2.keys(): print(k, d2[k])\n", "{'geneid4': 50, 'geneid5': 60, 'geneid6': 70, 'geneid7': 80, 'geneid8': 90, 'geneid9': 100}\ndict_keys(['geneid4', 'geneid5', 'geneid6', 'geneid7', 'geneid8', 'geneid9'])\ndict_values([50, 60, 70, 80, 90, 100])\ngeneid4 50\ngeneid5 60\ngeneid6 70\ngeneid7 80\ngeneid8 90\ngeneid9 100\n" ] ], [ [ "#### Task:\n\nFind the dictionary key corresponding to a certain value. Why is Python not offering a native method for this?", "_____no_output_____" ] ], [ [ "d = {'geneid9': 100, 'geneid8': 90, 'geneid7': 90, 'geneid6': 70, 'geneid5': 60, 'geneid4': 50}\n\ndef getkey(value):\n ks = set()\n # .. your code here\n return ks\n\nprint(getkey(90))", "set()\n" ] ], [ [ "### Objects and Classes\n\nEverything is an object in Python and every variable is a reference to an object. References map the adress in memory where an object lies. However this is kept hidden in Python. 
C was famous for not cleaning up automatically the adress space after alocating memory for its data structures. This was causing memory leaks that makes some programs gain more and more RAM space. Modern languages cleanup dynamically after the scope of a variable ended, something called \"garbage collecting\". However this is afecting their speed of computation.\n\nNew concepts:\n- Instantiation, Fields, Methods, Decomposition into classes, Inheritance", "_____no_output_____" ] ], [ [ "class Dog(object):\n \n def __init__(self, name):\n self.name = name\n return\n \n def bark_if_called(self, call):\n if call[:-1]==self.name:\n print(\"Woof Woof!\")\n else:\n print(\"*sniffs..\")\n return\n \n def get_ball(self):\n print(self.name + \" brings back ball\")\n\nd = Dog(\"Buffy\")\nprint(d.name, \"was created from Ether!\") #name is an attribute\n\nd.bark_if_called(\"Bambi!\") #bark_if_called is a method\n#dog.bark_if_called(\"Buffy!\")\n", "Buffy was created from Ether!\n*sniffs..\n" ], [ "class PitBull(Dog):\n \n def get_ball(self):\n super(PitBull, self).get_ball()\n print(\"*hates you\")\n return\n \n def chew_boots(self):\n print(\"*drools\")\n return\n\nd2 = PitBull(\"Georgie\")\n\nd2.bark_if_called(\"Loopie!\")\nd2.bark_if_called(\"Georgie!\")\n\nd2.chew_boots()\n#d.chew_boots()\n\nd2.get_ball()\nprint(d2.name)", "*sniffs..\nWoof Woof!\n*drools\nGeorgie brings back ball\n*hates you\nGeorgie\n" ] ], [ [ "### Decorators", "_____no_output_____" ] ], [ [ "from time import sleep\n\n\ndef sleep_decorator(function):\n\n \"\"\"\n Limits how fast the function is\n called.\n \"\"\"\n\n def wrapper(*args, **kwargs):\n sleep(2)\n return function(*args, **kwargs)\n return wrapper\n\n\n@sleep_decorator\ndef print_number(num):\n return num\n\nprint(print_number(222))\n\nfor num in range(1, 6):\n print(print_number(num))", "222\n1\n2\n3\n4\n5\n" ] ], [ [ "## Standard library modules", "_____no_output_____" ], [ "https://docs.python.org/3/library/\n\n- sys - system-specific parameters and functions\n- os - operating system interface\n- shutil - shell utilities\n- math - mathematical functions and constants\n- random - pseudorandom number generator\n- timeit - time it\n- format - number and text formating\n- zlib - file archiving\n- ... etc ...\n\nReccomendation: Take time to explore the [Python module of the week](https://pymotw.com/3/). It is a very good way to learn why Python comes \"with batteries included\".", "_____no_output_____" ], [ "### The sys module. Command line arguments.", "_____no_output_____" ] ], [ [ "import sys\nprint(sys.argv)\nsys.exit()\n\n##getopt, sys.exit()\n##getopt.getopt(args, options[, long_options])\n# import getopt\n# try:\n# opts, args = getopt.getopt(sys.argv[1:],\"hi:o:\",[\"ifile=\",\"ofile=\"])\n# except getopt.GetoptError:\n# print 'test.py -i <inputfile> -o <outputfile>'\n# sys.exit(2)\n# for opt, arg in opts:\n# if opt == '-h':\n# print 'test.py -i <inputfile> -o <outputfile>'\n# sys.exit()\n# elif opt in (\"-i\", \"--ifile\"):\n# inputfile = arg\n# elif opt in (\"-o\", \"--ofile\"):\n# outputfile = arg\n# print inputfile, outputfile\n\n", "['/home/sergiun/programs/anaconda3/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py', '-f', '/run/user/1000/jupyter/kernel-13c582f7-e031-4ca3-8c2e-ec3cc87d2d2c.json']\n" ] ], [ [ "#### Task:\n - Create a second script that contains command line arguments and imports the distance module above. 
If an -n 8 is provided in the arguments, it must generate 8 random points and compute a matrix of all pair distances.", "_____no_output_____" ], [ "### os module: File operations\n\nThe working directory, file IO, copy, rename and delete\n", "_____no_output_____" ] ], [ [ "import os\nprint(os.getcwd())\n#os.chdir(newpath)\nos.system('mkdir testdir')\n\nf = open('testfile.txt','wt')\nf.write('One line of text\\n')\nf.write('Another line of text\\n')\nf.close()\n\nimport shutil\n#shutil.copy('testfile.txt', 'testdir/')\nshutil.copyfile('testfile.txt', 'testdir/testfile1.txt')\nshutil.copyfile('testfile.txt', 'testdir/testfile2.txt')\n\nwith open('testdir/testfile1.txt','rt') as f:\n for l in f: print(l)\n\nfor fn in os.listdir(\"testdir/\"):\n print(fn)\n #fpath = os.path.join(dirpath,filename)\n os.rename('testdir/'+fn, 'testdir/file'+fn[-5]+'.txt')\n\nimport glob\nprint (glob.glob('testdir/*'))\n\nos.remove('testdir/file2.txt')\n#os.rmdir('testdir')\n#shutil.rmtree(path)", "/home/sergiun/projects/work/course\nOne line of text\n\nAnother line of text\n\ntestfile2.txt\ntestfile1.txt\n['testdir/file1.txt', 'testdir/file2.txt']\n" ] ], [ [ "#### Task:\n- Add a function to save the random vectors and the generated matrix into a file.", "_____no_output_____" ], [ "### Timing", "_____no_output_____" ] ], [ [ "from datetime import datetime\nstartTime = datetime.now()\n\nn = 10**8\nfor i in range(n):\n continue\n\nprint datetime.now() - startTime", "0:00:06.661880\n" ] ], [ [ "### Processes\n\nLaunching a process, Paralellization: shared resources, clusters, clouds", "_____no_output_____" ] ], [ [ "import os\n\n#print os.system('/path/yourshellscript.sh args')\nsubprocess.run([\"ls\", \"-l\", \"/dev/null\"], stdout=subprocess.PIPE)\nsubprocess.run(\"exit 1\", shell=True, check=True)\n\nfrom subprocess import call\ncall([\"ls\", \"-l\"])\n", "0\n" ], [ "\nargs = ['/path/yourshellscript.sh', '-arg1', 'value1', '-arg2', 'value2']\np = Popen(args, shell=True, bufsize=bufsize, \n stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)\np.wait()\n(child_stdin, child_stdout, child_stderr) = (p.stdin, p.stdout, p.stderr)\n", "_____no_output_____" ], [ "# def child():\n# print 'A new child ', os.getpid( )\n# os._exit(0) \n\n# def parent():\n# while True:\n# newpid = os.fork()\n# if newpid == 0:\n# child()\n# else:\n# pids = (os.getpid(), newpid)\n# print \"parent: %d, child: %d\" % pids\n# if raw_input( ) == 'q': break\n\n# parent()", "_____no_output_____" ] ], [ [ "How to do the equivalent of shell piping in Python? This is the basic step of an automated pipeline.\n\n`cat test.txt | grep something`\n\n**Task**:\n- Test this!\n- Uncomment `p1.stdout.close()`. Why is it not working?\n- What are signals? Read about SIGPIPE.", "_____no_output_____" ] ], [ [ "\np1 = Popen([\"cat\", \"test.txt\"], stdout=PIPE)\np2 = Popen([\"grep\", \"something\"], stdin=p1.stdout, stdout=PIPE)\np1.stdout.close()\noutput = p2.communicate()[0]", "_____no_output_____" ] ], [ [ "Questions:\n- What are the Python's native datatypes? Have a look at the Python online documentation for each datatype.\n- How many data types does Python have?\n- Python is a \"dynamic\" language. What does it mean?\n- Python is an \"interpreted\" language. What does it mean?\n- Which data strutures are mutable and which are immutable. When does this matters?\n- What is \"hash\" and how does it influences set and dictionary operations?\n- What are the most important Python libraries for you? 
Read through Anaconda's collection of libraries and check out some of them.", "_____no_output_____" ], [ "Task. Explain why this happens:", "_____no_output_____" ] ], [ [ "def run(l=[]):\n l.append(len(l))\n return l\n\nprint(run())\nprint(run())\nprint(run())", "[0]\n[0, 1]\n[0, 1, 2]\n" ] ], [ [ "Task.\n\ndic = {'a':[1,2,3], 'b':[4,5,6,7]}\n\nUsing list comprehension, return:\n\n[1, 2, 3, 4, 5, 6, 7]\n['a', 'a', 'a', 'b', 'b', 'b', 'b']", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
e7631c8525ac6621dd0e8339543f0b2d2d48e9c3
23,785
ipynb
Jupyter Notebook
Data science assig#2.ipynb
shafiqshah99/python-data-science
64aaf1b03765befc40faeec6e7ec2d1ec0c1d349
[ "Apache-2.0" ]
null
null
null
Data science assig#2.ipynb
shafiqshah99/python-data-science
64aaf1b03765befc40faeec6e7ec2d1ec0c1d349
[ "Apache-2.0" ]
null
null
null
Data science assig#2.ipynb
shafiqshah99/python-data-science
64aaf1b03765befc40faeec6e7ec2d1ec0c1d349
[ "Apache-2.0" ]
null
null
null
32.806897
92
0.312802
[ [ [ "import pandas as pd", "_____no_output_____" ], [ "data=pd.read_csv('Pokemon.csv')\nprint(data)", " # Name Type 1 Type 2 Total HP Attack Defense \\\n0 1 Bulbasaur Grass Poison 318 45 49 49 \n1 2 Ivysaur Grass Poison 405 60 62 63 \n2 3 Venusaur Grass Poison 525 80 82 83 \n3 3 VenusaurMega Venusaur Grass Poison 625 80 100 123 \n4 4 Charmander Fire NaN 309 39 52 43 \n.. ... ... ... ... ... .. ... ... \n795 719 Diancie Rock Fairy 600 50 100 150 \n796 719 DiancieMega Diancie Rock Fairy 700 50 160 110 \n797 720 HoopaHoopa Confined Psychic Ghost 600 80 110 60 \n798 720 HoopaHoopa Unbound Psychic Dark 680 80 160 60 \n799 721 Volcanion Fire Water 600 80 110 120 \n\n Sp. Atk Sp. Def Speed Generation Legendary \n0 65 65 45 1 False \n1 80 80 60 1 False \n2 100 100 80 1 False \n3 122 120 80 1 False \n4 60 50 65 1 False \n.. ... ... ... ... ... \n795 100 150 50 6 True \n796 160 110 110 6 True \n797 150 130 70 6 True \n798 170 130 80 6 True \n799 130 90 70 6 True \n\n[800 rows x 13 columns]\n" ], [ "data[\"Speed\"].mean()", "_____no_output_____" ], [ "Speed_mean=data[\"Speed\"].mean()\ndef set_Speed(value):\n if value < Speed_mean:\n return \"Speed Low\"\n else:\n return \"Speed high\"", "_____no_output_____" ], [ "data[\"Speed_high_Low\"]=data[\"Speed\"].apply(set_Speed)\ndata", "_____no_output_____" ], [ "data[\"HP\"].mean()", "_____no_output_____" ], [ "HP_mean=data[\"HP\"].mean()\ndef set_HP(val):\n if val < HP_mean:\n return \"HP Low\"\n else:\n return \"HP high\"", "_____no_output_____" ], [ "data[\"HP_high_Low\"]=data[\"HP\"].apply(set_HP)\ndata", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7632c4dd5c3862f50544b47f63c1a39f3445b19
19,767
ipynb
Jupyter Notebook
data/.ipynb_checkpoints/FPL-understat-checkpoint.ipynb
kyleschan/fpl
e41c320a5f272a4c013728768593da7bf86575c4
[ "MIT" ]
null
null
null
data/.ipynb_checkpoints/FPL-understat-checkpoint.ipynb
kyleschan/fpl
e41c320a5f272a4c013728768593da7bf86575c4
[ "MIT" ]
null
null
null
data/.ipynb_checkpoints/FPL-understat-checkpoint.ipynb
kyleschan/fpl
e41c320a5f272a4c013728768593da7bf86575c4
[ "MIT" ]
null
null
null
34.198962
149
0.418779
[ [ [ "import asyncio\nimport json\ntry:\n import cPickle as pickle\nexcept:\n import pickle\nimport pandas as pd\nimport aiohttp\nimport requests\n\nfrom understat import Understat", "_____no_output_____" ], [ "def update_id_map():\n \n # FPL team codes\n LEI = 9\n LEE = 10\n \n with open('id_map', 'rb') as input_file:\n id_map = pickle.load(input_file)\n new_df = fpl_df[~fpl_df['id'].isin(id_map.keys())]\n new_u_df = df[~df['id'].isin(id_map.values())]\n \n if new_u_df.shape[0] > 0 and new_df.shape[0] == new_u_df.shape[0]:\n lei_lee_swap = dict(enumerate([LEE, LEI], LEI)) # Mapping to swap LEE AND LEI\n new_df['team'] = new_df['team'].map(lei_lee_swap)\n new_df.sort_values(['team', 'first_name', 'second_name'], inplace=True)\n new_u_df.sort_values(['team_title', 'player_name'], inplace=True)\n new_ids = {k:v for k, v in zip(new_df['id'], new_u_df['id'])}\n new_id_map = {**id_map, **new_ids}\n with open('id_map', 'wb') as output_file:\n pickle.dump(new_id_map, output_file)", "_____no_output_____" ], [ "async def main():\n async with aiohttp.ClientSession() as session:\n understat = Understat(session)\n players = await understat.get_league_players(\n \"epl\", 2020,\n )\n return json.dumps(players)", "_____no_output_____" ], [ "df = pd.read_json(await main())\n\nfpl_data = requests.get(url='https://fantasy.premierleague.com/api/bootstrap-static/').json()\nfpl_df = pd.DataFrame(fpl_data['elements'])\nfpl_df = fpl_df[['first_name', 'second_name', 'web_name', 'element_type', \n 'ep_next', 'ep_this', 'event_points', 'form', 'id',\n 'now_cost', 'points_per_game', 'selected_by_percent',\n 'team', 'total_points', 'value_form', 'value_season',\n 'minutes', 'goals_scored', 'assists', 'clean_sheets',\n 'goals_conceded', 'yellow_cards', 'red_cards','saves',\n 'bonus', 'bps', 'influence', 'creativity', 'threat',\n 'ict_index']]\nfpl_df = fpl_df[fpl_df['minutes'] > 0]", "_____no_output_____" ], [ "update_id_map()\nwith open('id_map', 'rb') as file:\n id_map = pickle.load(file)\nfpl_df = fpl_df.assign(understat_id=fpl_df['id'].map(id_map))\ncombined_df = df.join(fpl_df.set_index('understat_id'), on='id', rsuffix='_fpl')\ncombined_df.columns", "<ipython-input-2-1c4e4a3f2aa7>:14: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n new_df['team'] = new_df['team'].map(lei_lee_swap)\n<ipython-input-2-1c4e4a3f2aa7>:15: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n new_df.sort_values(['team', 'first_name', 'second_name'], inplace=True)\n<ipython-input-2-1c4e4a3f2aa7>:16: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n new_u_df.sort_values(['team_title', 'player_name'], inplace=True)\n" ], [ "combined_df = combined_df[['id', 'id_fpl', 'player_name', 'web_name', 'team_title',\n 'team', 'element_type', 'games', 'minutes', 'goals_scored',\n 'xG', 'assists_fpl', 'xA', 'shots', 'key_passes', 'npg',\n 'npxG', 'xGChain', 'xGBuildup', 'ep_next',\n 'ep_this', 'event_points', 'form', 'now_cost',\n 
'points_per_game', 'selected_by_percent', 'total_points',\n 'value_form', 'value_season', 'clean_sheets',\n 'yellow_cards_fpl', 'red_cards_fpl', 'saves', 'bonus',\n 'bps', 'influence', 'creativity', 'threat', 'ict_index']]", "_____no_output_____" ], [ "current_column_names = combined_df.columns\nnew_column_names = ['player_id', 'player_id_fpl', 'player_name', 'web_name',\n 'Team', 'team_id', 'Position', 'Games', 'Minutes',\n 'Goals', 'Expected Goals', 'Assists', 'Expected Assists',\n 'Shots', 'Key Passes', 'Non-Penalty Goals',\n 'Expected Goals (Non-Penalty)', 'Expected Goals (Chain)',\n 'Expected Goals (Build-Up)', 'Expected Points (Next GW)',\n 'Expected Points (This GW)', 'Event Points', 'Form', 'Cost',\n 'PPG', 'Selected By (%)', 'Total Points', 'Value (Form)',\n 'Value (Season)', 'Clean Sheets', 'Yellow Cards', 'Red Cards',\n 'Saves', 'Bonus', 'BPS', 'Influence', 'Creativity', 'Threat',\n 'ICT Index']\ncolumn_map = dict(zip(current_column_names, new_column_names))\ncombined_df = combined_df.rename(columns=column_map)", "_____no_output_____" ], [ "combined_df[combined_df['Team'].str.contains(',')]", "_____no_output_____" ], [ "combined_df.to_csv('combined_df.csv', index=False)", "_____no_output_____" ], [ "combined_df.loc[:, 'Games':].columns", "_____no_output_____" ], [ "combined_df.loc[:, 'web_name': 'position']", "_____no_output_____" ], [ "update_id_map()", "None\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e763324bba58e5b0b8eafdef9c5e27194229d912
73,132
ipynb
Jupyter Notebook
nbs/mdx.ipynb
outerbounds/nbdoc
b642d0f684c28594e601898ab6cbd12af3b9d7c1
[ "Apache-2.0" ]
15
2022-02-15T09:38:07.000Z
2022-03-26T09:12:02.000Z
nbs/mdx.ipynb
outerbounds/nbdoc
b642d0f684c28594e601898ab6cbd12af3b9d7c1
[ "Apache-2.0" ]
10
2022-03-16T23:20:54.000Z
2022-03-30T14:58:15.000Z
nbs/mdx.ipynb
outerbounds/nbdoc
b642d0f684c28594e601898ab6cbd12af3b9d7c1
[ "Apache-2.0" ]
null
null
null
32.809332
363
0.539723
[ [ [ "#default_exp mdx", "_____no_output_____" ] ], [ [ "# Preprocessors For MDX\n\n> Custom preprocessors that help convert notebook content into MDX\n\nThis module defines [nbconvert.Custom Preprocessors](https://nbconvert.readthedocs.io/en/latest/nbconvert_library.html#Custom-Preprocessors) that facilitate transforming notebook content into MDX, which is a variation of markdown.", "_____no_output_____" ], [ "## Cell Tag Cheatsheet\n\nThese preprocessors allow you to make special comments to enable/disable them. Here is a list of all special comments:\n\nAll comments start with `#meta` or `#cell_meta`, which are both aliases for the same thing. For brevity, we will use `#meta` in this cheatsheet.\n\n### Black code formatting\n\n`#meta:tag=black` will apply black code formatting.\n\n### Show/Hide Cells\n\n1. Remvoe entire cells: `#meta:tag=remove_cell` or `#meta:tag=hide`\n2. Remove output: `#meta:tag=remove_output` or `#meta:tag=remove_output` or `#meta:tag=hide_outputs` or `#meta:tag=hide_output`\n3. Remove input: same as above, except `input` instead of `output`.\n\n\n### Selecting Metaflow Steps\n\nYou can selectively show meataflow steps in the output logs:\n\n1. Show one step: `#meta:show_steps=<step_name>`\n2. Show multiple steps: `#meta:show_steps=<step1_name>,<step2_name>`", "_____no_output_____" ] ], [ [ "# export\nfrom nbconvert.preprocessors import Preprocessor\nfrom nbconvert import MarkdownExporter\nfrom nbconvert.preprocessors import TagRemovePreprocessor\nfrom nbdev.imports import get_config\nfrom traitlets.config import Config\nfrom pathlib import Path\nimport re, uuid\nfrom fastcore.basics import AttrDict\nfrom nbdoc.media import ImagePath, ImageSave, HTMLEscape\nfrom black import format_str, Mode", "_____no_output_____" ], [ "#hide\nfrom nbdev.export import read_nb\nfrom nbconvert import NotebookExporter\nfrom nbdoc.test_utils import run_preprocessor, show_plain_md\nfrom nbdoc.run import _gen_nb\nimport json\n\n__file__ = str(get_config().path(\"lib_path\")/'preproc.py')", "_____no_output_____" ], [ "#export\n_re_meta= r'^\\s*#(?:cell_meta|meta):\\S+\\s*[\\n\\r]'", "_____no_output_____" ] ], [ [ "## Injecting Metadata Into Cells -", "_____no_output_____" ] ], [ [ "#export\nclass InjectMeta(Preprocessor):\n \"\"\"\n Allows you to inject metadata into a cell for further preprocessing with a comment.\n \"\"\"\n pattern = r'(^\\s*#(?:cell_meta|meta):)(\\S+)(\\s*[\\n\\r])'\n \n def preprocess_cell(self, cell, resources, index):\n if cell.cell_type == 'code' and re.search(_re_meta, cell.source, flags=re.MULTILINE):\n cell_meta = re.findall(self.pattern, cell.source, re.MULTILINE)\n d = cell.metadata.get('nbdoc', {})\n for _, m, _ in cell_meta:\n if '=' in m:\n k,v = m.split('=')\n d[k] = v\n else: print(f\"Warning cell_meta:{m} does not have '=' will be ignored.\")\n cell.metadata['nbdoc'] = d\n return cell, resources", "_____no_output_____" ] ], [ [ "To inject metadata make a comment in a cell with the following pattern: `#cell_meta:{key=value}`. 
Note that `#meta` is an alias for `#cell_meta`\n\nFor example, consider the following code:", "_____no_output_____" ] ], [ [ "\n_test_file = 'test_files/hello_world.ipynb'\nfirst_cell = read_nb(_test_file)['cells'][0]\nprint(first_cell['source'])", "#meta:show_steps=start,train\nprint('hello world')\n" ] ], [ [ "At the moment, this cell has no metadata:", "_____no_output_____" ] ], [ [ "print(first_cell['metadata'])", "{}\n" ] ], [ [ "However, after we process this notebook with `InjectMeta`, the appropriate metadata will be injected:", "_____no_output_____" ] ], [ [ "c = Config()\nc.NotebookExporter.preprocessors = [InjectMeta]\nexp = NotebookExporter(config=c)\ncells, _ = exp.from_filename(_test_file)\nfirst_cell = json.loads(cells)['cells'][0]\n\nassert first_cell['metadata'] == {'nbdoc': {'show_steps': 'start,train'}}\nfirst_cell['metadata']", "_____no_output_____" ] ], [ [ "## Strip Ansi Characters From Output -", "_____no_output_____" ] ], [ [ "#export\n_re_ansi_escape = re.compile(r'\\x1B(?:[@-Z\\\\-_]|\\[[0-?]*[ -/]*[@-~])')\n\nclass StripAnsi(Preprocessor):\n \"\"\"Strip Ansi Characters.\"\"\"\n \n def preprocess_cell(self, cell, resources, index):\n for o in cell.get('outputs', []):\n if o.get('name') and o.name == 'stdout': \n o['text'] = _re_ansi_escape.sub('', o.text)\n return cell, resources", "_____no_output_____" ] ], [ [ "Gets rid of colors that are streamed from standard out, which can interfere with static site generators:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([StripAnsi], 'test_files/run_flow.ipynb')\nassert not _re_ansi_escape.findall(c)", "_____no_output_____" ], [ "# export\ndef _get_cell_id(id_length=36):\n \"generate random id for artifical notebook cell\"\n return uuid.uuid4().hex[:id_length]\n\ndef _get_md_cell(content=\"<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->\"):\n \"generate markdown cell with content\"\n cell = AttrDict({'cell_type': 'markdown',\n 'id': f'{_get_cell_id()}',\n 'metadata': {},\n 'source': f'{content}'})\n return cell", "_____no_output_____" ] ], [ [ "## Insert Warning Into Markdown -", "_____no_output_____" ] ], [ [ "# export\nclass InsertWarning(Preprocessor):\n \"\"\"Insert Autogenerated Warning Into Notebook after the first cell.\"\"\"\n def preprocess(self, nb, resources):\n nb.cells = nb.cells[:1] + [_get_md_cell()] + nb.cells[1:]\n return nb, resources", "_____no_output_____" ] ], [ [ "This preprocessor inserts a warning in the markdown destination that the file is autogenerated. This warning is inserted in the second cell so we do not interfere with front matter.", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([InsertWarning], 'test_files/hello_world.ipynb', display_results=True)\nassert \"<!-- WARNING: THIS FILE WAS AUTOGENERATED!\" in c", "```python\n#meta:show_steps=start,train\nprint('hello world')\n```\n\n<CodeOutputBlock lang=\"python\">\n\n```\nhello world\n```\n\n</CodeOutputBlock>\n\n<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. 
-->\n\n\n```python\n\n```\n\n" ] ], [ [ "## Remove Empty Code Cells -", "_____no_output_____" ] ], [ [ "# export\ndef _emptyCodeCell(cell):\n \"Return True if cell is an empty Code Cell.\"\n if cell['cell_type'] == 'code':\n if not cell.source or not cell.source.strip(): return True\n else: return False\n\n\nclass RmEmptyCode(Preprocessor):\n \"\"\"Remove empty code cells.\"\"\"\n def preprocess(self, nb, resources):\n new_cells = [c for c in nb.cells if not _emptyCodeCell(c)]\n nb.cells = new_cells\n return nb, resources", "_____no_output_____" ] ], [ [ "Notice how this notebook has an empty code cell at the end:", "_____no_output_____" ] ], [ [ "show_plain_md('test_files/hello_world.ipynb')", "```python\n#meta:show_steps=start,train\nprint('hello world')\n```\n\n hello world\n\n\n\n```python\n\n```\n\n" ] ], [ [ "With `RmEmptyCode` these empty code cells are stripped from the markdown:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([RmEmptyCode], 'test_files/hello_world.ipynb', display_results=True)\nassert len(re.findall('```python',c)) == 1", "```python\n#meta:show_steps=start,train\nprint('hello world')\n```\n\n<CodeOutputBlock lang=\"python\">\n\n```\nhello world\n```\n\n</CodeOutputBlock>\n\n" ] ], [ [ "## Truncate Metaflow Output -", "_____no_output_____" ] ], [ [ "#export\nclass MetaflowTruncate(Preprocessor):\n \"\"\"Remove the preamble and timestamp from Metaflow output.\"\"\"\n _re_pre = re.compile(r'([\\s\\S]*Metaflow[\\s\\S]*Validating[\\s\\S]+The graph[\\s\\S]+)(\\n[\\s\\S]+Workflow starting[\\s\\S]+)')\n _re_time = re.compile('\\d{4}-\\d{2}-\\d{2}\\s\\d{2}\\:\\d{2}\\:\\d{2}.\\d{3}')\n \n def preprocess_cell(self, cell, resources, index):\n if re.search('\\s*python.+run.*', cell.source) and 'outputs' in cell:\n for o in cell.outputs:\n if o.name == 'stdout':\n o['text'] = self._re_time.sub('', self._re_pre.sub(r'\\2', o.text)).strip()\n return cell, resources", "_____no_output_____" ] ], [ [ "When you run a metaflow Flow, you are presented with a fair amount of boilerpalte before the job starts running that is not necesary to show in the documentation:", "_____no_output_____" ] ], [ [ "show_plain_md('test_files/run_flow.ipynb')", "```python\n#meta:show_steps=start\n!python myflow.py run\n```\n\n \u001b[35m\u001b[1mMetaflow 2.5.3\u001b[0m\u001b[35m\u001b[22m executing \u001b[0m\u001b[31m\u001b[1mMyFlow\u001b[0m\u001b[35m\u001b[22m\u001b[0m\u001b[35m\u001b[22m for \u001b[0m\u001b[31m\u001b[1muser:hamel\u001b[0m\u001b[35m\u001b[22m\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[35m\u001b[22mValidating your flow...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[32m\u001b[1m The graph looks good!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n \u001b[35m\u001b[22mRunning pylint...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[32m\u001b[1m Pylint is happy!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n \u001b[35m2022-03-14 17:28:44.983 \u001b[0m\u001b[1mWorkflow starting (run-id 1647304124981100):\u001b[0m\n \u001b[35m2022-03-14 17:28:44.990 \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-03-14 17:28:45.630 \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[22mthis is the start\u001b[0m\n \u001b[35m2022-03-14 17:28:45.704 \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m2022-03-14 17:28:45.710 \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[1mTask is 
starting.\u001b[0m\n \u001b[35m2022-03-14 17:28:46.348 \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[22mthis is the end\u001b[0m\n \u001b[35m2022-03-14 17:28:46.422 \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m2022-03-14 17:28:46.423 \u001b[0m\u001b[1mDone!\u001b[0m\n \u001b[0m\n\n" ] ], [ [ "We don't need to see the beginning part that validates the graph, and we don't need the time-stamps either. We can remove these with the `MetaflowTruncate` preprocessor:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([MetaflowTruncate], 'test_files/run_flow.ipynb', display_results=True)\nassert 'Validating your flow...' not in c", "```python\n#meta:show_steps=start\n!python myflow.py run\n```\n\n<CodeOutputBlock lang=\"python\">\n\n```\n\u001b[35m \u001b[0m\u001b[1mWorkflow starting (run-id 1647304124981100):\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[22mthis is the start\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[22mthis is the end\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m \u001b[0m\u001b[1mDone!\u001b[0m\n \u001b[0m\n```\n\n</CodeOutputBlock>\n\n" ] ], [ [ "## Turn Metadata into Cell Tags -", "_____no_output_____" ] ], [ [ "#export\nclass UpdateTags(Preprocessor):\n \"\"\"\n Create cell tags based upon comment `#cell_meta:tags=<tag>`\n \"\"\"\n \n def preprocess_cell(self, cell, resources, index):\n root = cell.metadata.get('nbdoc', {})\n tags = root.get('tags', root.get('tag')) # allow the singular also\n if tags: cell.metadata['tags'] = cell.metadata.get('tags', []) + tags.split(',')\n return cell, resources", "_____no_output_____" ] ], [ [ "Consider this python notebook prior to processing. The comments can be used configure the visibility of cells. \n\n- `#cell_meta:tags=remove_output` will just remove the output\n- `#cell_meta:tags=remove_input` will just remove the input\n- `#cell_meta:tags=remove_cell` will remove both the input and output\n\nNote that you can use `#cell_meta:tag` or `#cell_meta:tags` as they are both aliases for the same thing. 
Here is a notebook before preprocessing:", "_____no_output_____" ] ], [ [ "show_plain_md('test_files/visibility.ipynb')", "# Configuring Cell Visibility\n\n#### Cell with the comment `#cell_meta:tag=remove_output`\n\n\n```\n#cell_meta:tag=remove_output\nprint('the output is removed, so you can only see the print statement.')\n```\n\n the output is removed, so you can only see the print statement.\n\n\n#### Cell with the comment `#cell_meta:tag=remove_input`\n\n\n```\n#cell_meta:tag=remove_input\nprint('hello, you cannot see the code that created me.')\n```\n\n hello, you cannot see the code that created me.\n\n\n#### Cell with the comment `#cell_meta:tag=remove_cell`\n\n\n```\n#cell_meta:tag=remove_cell\nprint('you will not be able to see this cell at all')\n```\n\n you will not be able to see this cell at all\n\n\n\n```\n#cell_meta:tags=remove_input,remove_output\nprint('you will not be able to see this cell at all either')\n```\n\n you will not be able to see this cell at all either\n\n\n" ] ], [ [ "`UpdateTags` is meant to be used with `InjectMeta` and `TagRemovePreprocessor` to configure the visibility of cells in rendered docs. Here you can see what the notebook looks like after pre-processing:", "_____no_output_____" ] ], [ [ "# Configure an exporter from scratch\n_test_file = 'test_files/visibility.ipynb'\nc = Config()\nc.TagRemovePreprocessor.remove_cell_tags = (\"remove_cell\",)\nc.TagRemovePreprocessor.remove_all_outputs_tags = ('remove_output',)\nc.TagRemovePreprocessor.remove_input_tags = ('remove_input',)\nc.MarkdownExporter.preprocessors = [InjectMeta, UpdateTags, TagRemovePreprocessor]\nexp = MarkdownExporter(config=c)\nresult = exp.from_filename(_test_file)[0]\n\n# show the results\nassert 'you will not be able to see this cell at all either' not in result\nprint(result)", "# Configuring Cell Visibility\n\n#### Cell with the comment `#cell_meta:tag=remove_output`\n\n\n```\n#cell_meta:tag=remove_output\nprint('the output is removed, so you can only see the print statement.')\n```\n\n#### Cell with the comment `#cell_meta:tag=remove_input`\n\n hello, you cannot see the code that created me.\n\n\n#### Cell with the comment `#cell_meta:tag=remove_cell`\n\n" ] ], [ [ "## Selecting Metaflow Steps In Output -", "_____no_output_____" ] ], [ [ "#export\nclass MetaflowSelectSteps(Preprocessor):\n \"\"\"\n Hide Metaflow steps in output based on cell metadata.\n \"\"\"\n re_step = r'.*\\d+/{0}/\\d+\\s\\(pid\\s\\d+\\).*'\n \n def preprocess_cell(self, cell, resources, index):\n root = cell.metadata.get('nbdoc', {})\n steps = root.get('show_steps', root.get('show_step'))\n if re.search('\\s*python.+run.*', cell.source) and 'outputs' in cell and steps:\n for o in cell.outputs:\n if o.name == 'stdout':\n final_steps = []\n for s in steps.split(','):\n found_steps = re.compile(self.re_step.format(s)).findall(o['text'])\n if found_steps: \n final_steps += found_steps + ['...']\n o['text'] = '\\n'.join(['...'] + final_steps)\n return cell, resources", "_____no_output_____" ] ], [ [ "`MetaflowSelectSteps` is meant to be used with `InjectMeta` to only show specific steps in the output logs from Metaflow. 
\n\nFor example, if you want to only show the `start` and `train` steps in your flow, you would annotate your cell with the following pattern: `#cell_meta:show_steps=<step_name>`\n\nNote that `show_step` and `show_steps` are aliases for convenience, so you don't need to worry about the `s` at the end.\n\nIn the below example, `#cell_meta:show_steps=start,train` shows the `start` and `train` steps, whereas `#cell_meta:show_steps=train` only shows the `train` step:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([InjectMeta, MetaflowSelectSteps], \n 'test_files/run_flow_showstep.ipynb', \n display_results=True)\nassert 'end' not in c", "```\n#cell_meta:show_steps=start,train\n!python myflow.py run\n```\n\n<CodeOutputBlock lang=\"\">\n\n```\n...\n \u001b[35m2022-02-15 14:01:14.810 \u001b[0m\u001b[32m[1644962474801237/start/1 (pid 46758)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-02-15 14:01:15.433 \u001b[0m\u001b[32m[1644962474801237/start/1 (pid 46758)] \u001b[0m\u001b[22mthis is the start\u001b[0m\n \u001b[35m2022-02-15 14:01:15.500 \u001b[0m\u001b[32m[1644962474801237/start/1 (pid 46758)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n ...\n \u001b[35m2022-02-15 14:01:15.507 \u001b[0m\u001b[32m[1644962474801237/train/2 (pid 46763)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-02-15 14:01:16.123 \u001b[0m\u001b[32m[1644962474801237/train/2 (pid 46763)] \u001b[0m\u001b[22mthe train step\u001b[0m\n \u001b[35m2022-02-15 14:01:16.188 \u001b[0m\u001b[32m[1644962474801237/train/2 (pid 46763)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n ...\n```\n\n</CodeOutputBlock>\n\n\n```\n#cell_meta:show_steps=train\n!python myflow.py run\n```\n\n<CodeOutputBlock lang=\"\">\n\n```\n...\n \u001b[35m2022-02-15 14:01:18.924 \u001b[0m\u001b[32m[1644962478210532/train/2 (pid 46783)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-02-15 14:01:19.566 \u001b[0m\u001b[32m[1644962478210532/train/2 (pid 46783)] \u001b[0m\u001b[22mthe train step\u001b[0m\n \u001b[35m2022-02-15 14:01:19.632 \u001b[0m\u001b[32m[1644962478210532/train/2 (pid 46783)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n ...\n```\n\n</CodeOutputBlock>\n\n" ] ], [ [ "## Hide Specific Lines of Output With Keywords -", "_____no_output_____" ] ], [ [ "#export\nclass FilterOutput(Preprocessor):\n \"\"\"\n Hide Output Based on Keywords.\n \"\"\"\n def preprocess_cell(self, cell, resources, index):\n root = cell.metadata.get('nbdoc', {})\n words = root.get('filter_words', root.get('filter_word'))\n if 'outputs' in cell and words:\n _re = f\"^(?!.*({'|'.join(words.split(','))}))\"\n for o in cell.outputs:\n if o.name == 'stdout':\n filtered_lines = [l for l in o['text'].splitlines() if re.findall(_re, l)]\n o['text'] = '\\n'.join(filtered_lines)\n return cell, resources", "_____no_output_____" ] ], [ [ "If we want to exclude output with certain keywords, we can use the `#meta:filter_words` comment. 
For example, if we wanted to ignore all output that contains the text `FutureWarning` or `MultiIndex` we can use the comment:\n\n`#meta:filter_words=FutureWarning,MultiIndex`\n\nConsider this output below:", "_____no_output_____" ] ], [ [ "show_plain_md('test_files/strip_out.ipynb')", "```python\n#meta:filter_words=FutureWarning,MultiIndex\n#meta:show_steps=end\n!python serialize_xgb_dmatrix.py run\n```\n\n /Users/hamel/opt/anaconda3/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.\n from pandas import MultiIndex, Int64Index\n \u001b[35m\u001b[1mMetaflow 2.5.3\u001b[0m\u001b[35m\u001b[22m executing \u001b[0m\u001b[31m\u001b[1mSerializeXGBDataFlow\u001b[0m\u001b[35m\u001b[22m\u001b[0m\u001b[35m\u001b[22m for \u001b[0m\u001b[31m\u001b[1muser:hamel\u001b[0m\u001b[35m\u001b[22m\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[35m\u001b[22mValidating your flow...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[32m\u001b[1m The graph looks good!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n \u001b[35m\u001b[22mRunning pylint...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[32m\u001b[1m Pylint is happy!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n \u001b[35m2022-03-30 07:04:02.315 \u001b[0m\u001b[1mWorkflow starting (run-id 1648649042312116):\u001b[0m\n \u001b[35m2022-03-30 07:04:02.322 \u001b[0m\u001b[32m[1648649042312116/start/1 (pid 2459)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-03-30 07:04:03.122 \u001b[0m\u001b[32m[1648649042312116/start/1 (pid 2459)] \u001b[0m\u001b[22m/Users/hamel/opt/anaconda3/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.\u001b[0m\n \u001b[35m2022-03-30 07:04:03.508 \u001b[0m\u001b[32m[1648649042312116/start/1 (pid 2459)] \u001b[0m\u001b[22mfrom pandas import MultiIndex, Int64Index\u001b[0m\n \u001b[35m2022-03-30 07:04:03.510 \u001b[0m\u001b[32m[1648649042312116/start/1 (pid 2459)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m2022-03-30 07:04:03.517 \u001b[0m\u001b[32m[1648649042312116/end/2 (pid 2462)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-03-30 07:04:04.315 \u001b[0m\u001b[32m[1648649042312116/end/2 (pid 2462)] \u001b[0m\u001b[22m/Users/hamel/opt/anaconda3/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. 
Use pandas.Index with the appropriate dtype instead.\u001b[0m\n \u001b[35m2022-03-30 07:04:04.563 \u001b[0m\u001b[32m[1648649042312116/end/2 (pid 2462)] \u001b[0m\u001b[22mthere are 5 rows in the data.\u001b[0m\n \u001b[35m2022-03-30 07:04:04.705 \u001b[0m\u001b[32m[1648649042312116/end/2 (pid 2462)] \u001b[0m\u001b[22mfrom pandas import MultiIndex, Int64Index\u001b[0m\n \u001b[35m2022-03-30 07:04:04.707 \u001b[0m\u001b[32m[1648649042312116/end/2 (pid 2462)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m2022-03-30 07:04:04.707 \u001b[0m\u001b[1mDone!\u001b[0m\n \u001b[0m\n\n" ] ], [ [ "Notice how the lines containing the terms `FutureWarning` or `MultiIndex` are stripped out:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([InjectMeta, FilterOutput], \n 'test_files/strip_out.ipynb', \n display_results=True)\nassert 'FutureWarning:' not in c and 'from pandas import MultiIndex, Int64Index' not in c", "```python\n#meta:filter_words=FutureWarning,MultiIndex\n#meta:show_steps=end\n!python serialize_xgb_dmatrix.py run\n```\n\n<CodeOutputBlock lang=\"python\">\n\n```\n\u001b[35m\u001b[1mMetaflow 2.5.3\u001b[0m\u001b[35m\u001b[22m executing \u001b[0m\u001b[31m\u001b[1mSerializeXGBDataFlow\u001b[0m\u001b[35m\u001b[22m\u001b[0m\u001b[35m\u001b[22m for \u001b[0m\u001b[31m\u001b[1muser:hamel\u001b[0m\u001b[35m\u001b[22m\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[35m\u001b[22mValidating your flow...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[32m\u001b[1m The graph looks good!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n \u001b[35m\u001b[22mRunning pylint...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[32m\u001b[1m Pylint is happy!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n \u001b[35m2022-03-30 07:04:02.315 \u001b[0m\u001b[1mWorkflow starting (run-id 1648649042312116):\u001b[0m\n \u001b[35m2022-03-30 07:04:02.322 \u001b[0m\u001b[32m[1648649042312116/start/1 (pid 2459)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-03-30 07:04:03.510 \u001b[0m\u001b[32m[1648649042312116/start/1 (pid 2459)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m2022-03-30 07:04:03.517 \u001b[0m\u001b[32m[1648649042312116/end/2 (pid 2462)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-03-30 07:04:04.563 \u001b[0m\u001b[32m[1648649042312116/end/2 (pid 2462)] \u001b[0m\u001b[22mthere are 5 rows in the data.\u001b[0m\n \u001b[35m2022-03-30 07:04:04.707 \u001b[0m\u001b[32m[1648649042312116/end/2 (pid 2462)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m2022-03-30 07:04:04.707 \u001b[0m\u001b[1mDone!\u001b[0m\n \u001b[0m\n```\n\n</CodeOutputBlock>\n\n" ] ], [ [ "## Hide Specific Lines of Code -", "_____no_output_____" ] ], [ [ "#export\nclass HideInputLines(Preprocessor):\n \"\"\"\n Hide lines of code in code cells with the comment `#meta_hide_line` at the end of a line of code.\n \"\"\"\n tok = '#meta_hide_line'\n \n def preprocess_cell(self, cell, resources, index):\n if cell.cell_type == 'code':\n if self.tok in cell.source:\n cell.source = '\\n'.join([c for c in cell.source.split('\\n') if not c.strip().endswith(self.tok)])\n return cell, resources", "_____no_output_____" ] ], [ [ "You can use the special comment `#meta_hide_line` to hide a specific line of code in a code cell. 
This is what the code looks like before:", "_____no_output_____" ] ], [ [ "show_plain_md('test_files/hide_lines.ipynb')", "```python\ndef show():\n a = 2\n b = 3 #meta_hide_line\n```\n\n" ] ], [ [ "and after:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([InjectMeta, HideInputLines], \n 'test_files/hide_lines.ipynb', \n display_results=True)", "```python\ndef show():\n a = 2\n```\n\n" ], [ "#hide \n_res = \"\"\"```python\ndef show():\n a = 2\n```\"\"\"\nassert _res in c", "_____no_output_____" ] ], [ [ "## Handle Scripts With `%%writefile` -", "_____no_output_____" ] ], [ [ "#export\nclass WriteTitle(Preprocessor):\n \"\"\"Modify the code-fence with the filename upon %%writefile cell magic.\"\"\"\n pattern = r'(^[\\S\\s]*%%writefile\\s)(\\S+)\\n'\n \n def preprocess_cell(self, cell, resources, index):\n m = re.match(self.pattern, cell.source)\n if m: \n filename = m.group(2)\n ext = filename.split('.')[-1]\n cell.metadata.magics_language = f'{ext} title=\"{filename}\"'\n cell.metadata.script = True\n cell.metadata.file_ext = ext\n cell.metadata.filename = filename\n cell.outputs = []\n return cell, resources", "_____no_output_____" ] ], [ [ "`WriteTitle` creates the proper code-fence with a title in the situation where the `%%writefile` magic is used.\n\nFor example, here are contents before pre-processing:", "_____no_output_____" ] ], [ [ "show_plain_md('test_files/writefile.ipynb')", "A test notebook\n\n\n```python\n%%writefile myflow.py\nfrom metaflow import FlowSpec, step\n\nclass MyFlow(FlowSpec):\n \n @step\n def start(self):\n print('this is the start')\n self.next(self.train)\n \n @step\n def train(self):\n print('the train step')\n self.next(self.end)\n \n @step\n def end(self):\n print('this is the end')\n\nif __name__ == '__main__':\n MyFlow()\n```\n\n Overwriting myflow.py\n\n\n\n```python\n%%writefile hello.txt\n\nHello World\n```\n\n Overwriting hello.txt\n\n\n" ] ], [ [ "When we use `WriteTitle`, you will see the code-fence will change appropriately:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([WriteTitle], 'test_files/writefile.ipynb', display_results=True)\nassert '```py title=\"myflow.py\"' in c and '```txt title=\"hello.txt\"' in c", "A test notebook\n\n\n```py title=\"myflow.py\"\n%%writefile myflow.py\nfrom metaflow import FlowSpec, step\n\nclass MyFlow(FlowSpec):\n \n @step\n def start(self):\n print('this is the start')\n self.next(self.train)\n \n @step\n def train(self):\n print('the train step')\n self.next(self.end)\n \n @step\n def end(self):\n print('this is the end')\n\nif __name__ == '__main__':\n MyFlow()\n```\n\n\n```txt title=\"hello.txt\"\n%%writefile hello.txt\n\nHello World\n```\n\n" ] ], [ [ "## Clean Flags and Magics -", "_____no_output_____" ] ], [ [ "#export\n_tst_flags = get_config()['tst_flags'].split('|')\n\nclass CleanFlags(Preprocessor):\n \"\"\"A preprocessor to remove Flags\"\"\"\n patterns = [re.compile(r'^#\\s*{0}\\s*'.format(f), re.MULTILINE) for f in _tst_flags]\n \n def preprocess_cell(self, cell, resources, index):\n if cell.cell_type == 'code':\n for p in self.patterns:\n cell.source = p.sub('', cell.source).strip()\n return cell, resources", "_____no_output_____" ], [ "c, _ = run_preprocessor([CleanFlags], _gen_nb())\nassert '#notest' not in c", "_____no_output_____" ], [ "#export\nclass CleanMagics(Preprocessor):\n \"\"\"A preprocessor to remove cell magic commands and #cell_meta: comments\"\"\"\n pattern = re.compile(r'(^\\s*(%%|%).+?[\\n\\r])|({0})'.format(_re_meta), re.MULTILINE)\n \n def 
preprocess_cell(self, cell, resources, index):\n if cell.cell_type == 'code': \n cell.source = self.pattern.sub('', cell.source).strip()\n return cell, resources", "_____no_output_____" ] ], [ [ "`CleanMagics` strips magic cell commands `%%` so they do not appear in rendered markdown files:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([WriteTitle, CleanMagics], 'test_files/writefile.ipynb', display_results=True)\nassert '%%' not in c", "A test notebook\n\n\n```py title=\"myflow.py\"\nfrom metaflow import FlowSpec, step\n\nclass MyFlow(FlowSpec):\n \n @step\n def start(self):\n print('this is the start')\n self.next(self.train)\n \n @step\n def train(self):\n print('the train step')\n self.next(self.end)\n \n @step\n def end(self):\n print('this is the end')\n\nif __name__ == '__main__':\n MyFlow()\n```\n\n\n```txt title=\"hello.txt\"\nHello World\n```\n\n" ] ], [ [ "Here is how `CleanMagics` Works on the file with the Metaflow log outputs from earlier, we can see that the `#cell_meta` comments are gone:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([InjectMeta, MetaflowSelectSteps, CleanMagics], \n 'test_files/run_flow_showstep.ipynb', display_results=True)", "```\n!python myflow.py run\n```\n\n<CodeOutputBlock lang=\"\">\n\n```\n...\n \u001b[35m2022-02-15 14:01:14.810 \u001b[0m\u001b[32m[1644962474801237/start/1 (pid 46758)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-02-15 14:01:15.433 \u001b[0m\u001b[32m[1644962474801237/start/1 (pid 46758)] \u001b[0m\u001b[22mthis is the start\u001b[0m\n \u001b[35m2022-02-15 14:01:15.500 \u001b[0m\u001b[32m[1644962474801237/start/1 (pid 46758)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n ...\n \u001b[35m2022-02-15 14:01:15.507 \u001b[0m\u001b[32m[1644962474801237/train/2 (pid 46763)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-02-15 14:01:16.123 \u001b[0m\u001b[32m[1644962474801237/train/2 (pid 46763)] \u001b[0m\u001b[22mthe train step\u001b[0m\n \u001b[35m2022-02-15 14:01:16.188 \u001b[0m\u001b[32m[1644962474801237/train/2 (pid 46763)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n ...\n```\n\n</CodeOutputBlock>\n\n\n```\n!python myflow.py run\n```\n\n<CodeOutputBlock lang=\"\">\n\n```\n...\n \u001b[35m2022-02-15 14:01:18.924 \u001b[0m\u001b[32m[1644962478210532/train/2 (pid 46783)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-02-15 14:01:19.566 \u001b[0m\u001b[32m[1644962478210532/train/2 (pid 46783)] \u001b[0m\u001b[22mthe train step\u001b[0m\n \u001b[35m2022-02-15 14:01:19.632 \u001b[0m\u001b[32m[1644962478210532/train/2 (pid 46783)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n ...\n```\n\n</CodeOutputBlock>\n\n" ], [ "#hide\nc, _ = run_preprocessor([WriteTitle, CleanMagics], 'test_files/hello_world.ipynb')\nassert '#cell_meta' not in c", "_____no_output_____" ] ], [ [ "## Formatting Code With Black -", "_____no_output_____" ] ], [ [ "#export\nblack_mode = Mode()\n\nclass Black(Preprocessor):\n \"\"\"Format code that has a cell tag `black`\"\"\"\n def preprocess_cell(self, cell, resources, index):\n tags = cell.metadata.get('tags', [])\n if cell.cell_type == 'code' and 'black' in tags:\n cell.source = format_str(src_contents=cell.source, mode=black_mode).strip()\n return cell, resources", "_____no_output_____" ] ], [ [ "`Black` is a preprocessor that will format cells that have the cell tag `black` with [Python black](https://github.com/psf/black) code formatting. 
You can apply tags via the notebook interface or with a comment `meta:tag=black`.", "_____no_output_____" ], [ "This is how cell formatting looks before [black](https://github.com/psf/black) formatting:", "_____no_output_____" ] ], [ [ "show_plain_md('test_files/black.ipynb')", "Format with black\n\n\n```python\n#meta:tag=black\nj = [1,\n 2,\n 3\n]\n```\n\n\n```python\n%%writefile black_test.py\n#meta:tag=black\n\n\ndef very_important_function(template: str, *variables, file: os.PathLike, engine: str, header: bool = True, debug: bool = False):\n \"\"\"Applies `variables` to the `template` and writes to `file`.\"\"\"\n with open(file, 'w') as f:\n pass\n```\n\n" ] ], [ [ "After black is applied, the code looks like this:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([InjectMeta, UpdateTags, CleanMagics, Black], 'test_files/black.ipynb', display_results=True)\nassert '[1, 2, 3]' in c\nassert 'very_important_function(\\n template: str,' in c", "Format with black\n\n\n```python\nj = [1, 2, 3]\n```\n\n\n```python\ndef very_important_function(\n template: str,\n *variables,\n file: os.PathLike,\n engine: str,\n header: bool = True,\n debug: bool = False\n):\n \"\"\"Applies `variables` to the `template` and writes to `file`.\"\"\"\n with open(file, \"w\") as f:\n pass\n```\n\n" ] ], [ [ "## Show File Contents -", "_____no_output_____" ] ], [ [ "#export\nclass CatFiles(Preprocessor):\n \"\"\"Cat arbitrary files with %cat\"\"\"\n pattern = '^\\s*!'\n \n def preprocess_cell(self, cell, resources, index):\n if cell.cell_type == 'code' and re.search(self.pattern, cell.source):\n cell.metadata.magics_language = 'bash'\n cell.source = re.sub(self.pattern, '', cell.source).strip()\n return cell, resources", "_____no_output_____" ] ], [ [ "## Format Shell Commands -", "_____no_output_____" ] ], [ [ "#export\nclass BashIdentify(Preprocessor):\n \"\"\"A preprocessor to identify bash commands and mark them appropriately\"\"\"\n pattern = re.compile('^\\s*!', flags=re.MULTILINE)\n \n def preprocess_cell(self, cell, resources, index):\n if cell.cell_type == 'code' and self.pattern.search(cell.source):\n cell.metadata.magics_language = 'bash'\n cell.source = self.pattern.sub('', cell.source).strip()\n return cell, resources", "_____no_output_____" ] ], [ [ "When we issue a shell command in a notebook with `!`, we need to change the code-fence from `python` to `bash` and remove the `!`:", "_____no_output_____" ] ], [ [ "c, _ = run_preprocessor([MetaflowTruncate, CleanMagics, BashIdentify], 'test_files/run_flow.ipynb', display_results=True)\nassert \"```bash\" in c and '!python' not in c", "```bash\npython myflow.py run\n```\n\n<CodeOutputBlock lang=\"bash\">\n\n```\n\u001b[35m \u001b[0m\u001b[1mWorkflow starting (run-id 1647304124981100):\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[22mthis is the start\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/start/1 (pid 41951)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[22mthis is the end\u001b[0m\n \u001b[35m \u001b[0m\u001b[32m[1647304124981100/end/2 (pid 41954)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m \u001b[0m\u001b[1mDone!\u001b[0m\n 
\u001b[0m\n```\n\n</CodeOutputBlock>\n\n" ] ], [ [ "## Remove `ShowDoc` Input Cells -", "_____no_output_____" ] ], [ [ "#export\n_re_showdoc = re.compile(r'^ShowDoc', re.MULTILINE)\n\n\ndef _isShowDoc(cell):\n \"Return True if cell contains ShowDoc.\"\n if cell['cell_type'] == 'code':\n if _re_showdoc.search(cell.source): return True\n else: return False\n\n\nclass CleanShowDoc(Preprocessor):\n \"\"\"Ensure that ShowDoc output gets cleaned in the associated notebook.\"\"\"\n _re_html = re.compile(r'<HTMLRemove>.*</HTMLRemove>', re.DOTALL)\n \n def preprocess_cell(self, cell, resources, index):\n \"Convert cell to a raw cell with just the stripped portion of the output.\"\n if _isShowDoc(cell):\n all_outs = [o['data'] for o in cell.outputs if 'data' in o]\n html_outs = [o['text/html'] for o in all_outs if 'text/html' in o]\n if len(html_outs) != 1:\n return cell, resources\n cleaned_html = self._re_html.sub('', html_outs[0])\n cell = AttrDict({'cell_type':'raw', 'id':cell.id, 'metadata':cell.metadata, 'source':cleaned_html})\n \n return cell, resources", "_____no_output_____" ], [ "_result, _ = run_preprocessor([CleanShowDoc], 'test_files/doc.ipynb')\nassert '<HTMLRemove>' not in _result\nprint(_result)", "```python\nfrom fastcore.all import test_eq\nfrom nbdoc.showdoc import ShowDoc\n```\n\n\n<DocSection type=\"function\" name=\"test_eq\" module=\"fastcore.test\" link=\"https://github.com/fastcore/tree/masterhttps://github.com/fastai/fastcore/tree/master/fastcore/test.py#L34\">\n<SigArgSection>\n<SigArg name=\"a\" /><SigArg name=\"b\" />\n</SigArgSection>\n<Description summary=\"`test` that `a==b`\" />\n\n</DocSection>\n\n\n" ] ], [ [ "## Composing Preprocessors Into A Pipeline\n\nLets see how you can compose all of these preprocessors together to process notebooks appropriately:", "_____no_output_____" ] ], [ [ "#export\ndef get_mdx_exporter(template_file='ob.tpl'):\n \"\"\"A mdx notebook exporter which composes many pre-processors together.\"\"\"\n c = Config()\n c.TagRemovePreprocessor.remove_cell_tags = (\"remove_cell\", \"hide\")\n c.TagRemovePreprocessor.remove_all_outputs_tags = (\"remove_output\", \"remove_outputs\", \"hide_output\", \"hide_outputs\")\n c.TagRemovePreprocessor.remove_input_tags = ('remove_input', 'remove_inputs', \"hide_input\", \"hide_inputs\")\n pp = [InjectMeta, WriteTitle, CleanMagics, BashIdentify, MetaflowTruncate,\n MetaflowSelectSteps, UpdateTags, InsertWarning, TagRemovePreprocessor, CleanFlags, CleanShowDoc, RmEmptyCode, \n StripAnsi, HideInputLines, FilterOutput, Black, ImageSave, ImagePath, HTMLEscape]\n c.MarkdownExporter.preprocessors = pp\n tmp_dir = Path(__file__).parent/'templates/'\n tmp_file = tmp_dir/f\"{template_file}\"\n if not tmp_file.exists(): raise ValueError(f\"{tmp_file} does not exist in {tmp_dir}\")\n c.MarkdownExporter.template_file = str(tmp_file)\n return MarkdownExporter(config=c)", "_____no_output_____" ] ], [ [ "`get_mdx_exporter` combines all of the previous preprocessors, along with the built in `TagRemovePreprocessor` to allow for hiding cell inputs/outputs based on cell tags. Here is an example of markdown generated from a notebook with the default preprocessing:", "_____no_output_____" ] ], [ [ "show_plain_md('test_files/example_input.ipynb')", "---\ntitle: my hello page title\ndescription: my hello page description\nhide_table_of_contents: true\n---\n## This is a test notebook\n\nThis is a shell command:\n\n\n```python\n! 
echo hello\n```\n\n hello\n\n\nWe are writing a python script to disk:\n\n\n```python\n%%writefile myflow.py\n\nfrom metaflow import FlowSpec, step\n\nclass MyFlow(FlowSpec):\n \n @step\n def start(self):\n print('this is the start')\n self.next(self.end)\n \n @step\n def end(self):\n print('this is the end')\n\nif __name__ == '__main__':\n MyFlow()\n```\n\n Overwriting myflow.py\n\n\nAnother shell command where we run a flow:\n\n\n```python\n#cell_meta:show_steps=start\n! python myflow.py run\n```\n\n \u001b[35m\u001b[1mMetaflow 2.5.3\u001b[0m\u001b[35m\u001b[22m executing \u001b[0m\u001b[31m\u001b[1mMyFlow\u001b[0m\u001b[35m\u001b[22m\u001b[0m\u001b[35m\u001b[22m for \u001b[0m\u001b[31m\u001b[1muser:hamel\u001b[0m\u001b[35m\u001b[22m\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[35m\u001b[22mValidating your flow...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[32m\u001b[1m The graph looks good!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n \u001b[35m\u001b[22mRunning pylint...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n \u001b[32m\u001b[1m Pylint is happy!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n \u001b[35m2022-03-10 22:52:37.069 \u001b[0m\u001b[1mWorkflow starting (run-id 1646981557065941):\u001b[0m\n \u001b[35m2022-03-10 22:52:37.077 \u001b[0m\u001b[32m[1646981557065941/start/1 (pid 54733)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-03-10 22:52:37.752 \u001b[0m\u001b[32m[1646981557065941/start/1 (pid 54733)] \u001b[0m\u001b[22mthis is the start\u001b[0m\n \u001b[35m2022-03-10 22:52:37.841 \u001b[0m\u001b[32m[1646981557065941/start/1 (pid 54733)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m2022-03-10 22:52:37.849 \u001b[0m\u001b[32m[1646981557065941/end/2 (pid 54736)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n \u001b[35m2022-03-10 22:52:38.519 \u001b[0m\u001b[32m[1646981557065941/end/2 (pid 54736)] \u001b[0m\u001b[22mthis is the end\u001b[0m\n \u001b[35m2022-03-10 22:52:38.604 \u001b[0m\u001b[32m[1646981557065941/end/2 (pid 54736)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n \u001b[35m2022-03-10 22:52:38.604 \u001b[0m\u001b[1mDone!\u001b[0m\n \u001b[0m\n\nThis is a normal python cell:\n\n\n```python\na = 2\na\n```\n\n\n\n\n 2\n\n\n\nThe next cell has a cell tag of `remove_input`, so you should only see the output of the cell:\n\n\n```python\n#meta:tag=remove_input\nprint('hello, you should not see the print statement that produced me')\n```\n\n hello, you should not see the print statement that produced me\n\n\nPandas DataFrame:\n\n\n```python\nimport pandas as pd\npd.read_csv('https://github.com/outerbounds/.data/raw/main/hospital_readmission.csv').head(3).iloc[:, :3]\n```\n\n\n\n\n<div>\n<style scoped>\n .dataframe tbody tr th:only-of-type {\n vertical-align: middle;\n }\n\n .dataframe tbody tr th {\n vertical-align: top;\n }\n\n .dataframe thead th {\n text-align: right;\n }\n</style>\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>time_in_hospital</th>\n <th>num_lab_procedures</th>\n <th>num_procedures</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>14</td>\n <td>41</td>\n <td>0</td>\n </tr>\n <tr>\n <th>1</th>\n <td>2</td>\n <td>30</td>\n <td>0</td>\n </tr>\n <tr>\n <th>2</th>\n <td>5</td>\n <td>66</td>\n <td>0</td>\n </tr>\n </tbody>\n</table>\n</div>\n\n\n\nA matplotlib plot:\n\n\n```python\nfrom matplotlib import pyplot as plt\nplt.plot(range(20), range(20))\nplt.plot(range(10), range(10))\nplt.show()\n```\n\n\n 
\n![png](output_15_0.png)\n \n\n\n" ] ], [ [ "Here is the same notebook, but with all of the preprocessors that we defined in this module. Additionally, we hide the input of the last cell which prints `hello, you should not see the print statement...` by using the built in `TagRemovePreprocessor`:", "_____no_output_____" ] ], [ [ "exp = get_mdx_exporter()\nprint(exp.from_filename('test_files/example_input.ipynb')[0])", "---\ntitle: my hello page title\ndescription: my hello page description\nhide_table_of_contents: true\n---\n\n\n<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->\n\n## This is a test notebook\n\nThis is a shell command:\n\n\n```bash\necho hello\n```\n\n<CodeOutputBlock lang=\"bash\">\n\n```\nhello\n```\n\n</CodeOutputBlock>\n\nWe are writing a python script to disk:\n\n\n```py title=\"myflow.py\"\nfrom metaflow import FlowSpec, step\n\nclass MyFlow(FlowSpec):\n \n @step\n def start(self):\n print('this is the start')\n self.next(self.end)\n \n @step\n def end(self):\n print('this is the end')\n\nif __name__ == '__main__':\n MyFlow()\n```\n\nAnother shell command where we run a flow:\n\n\n```bash\npython myflow.py run\n```\n\n<CodeOutputBlock lang=\"bash\">\n\n```\n...\n [1646981557065941/start/1 (pid 54733)] Task is starting.\n [1646981557065941/start/1 (pid 54733)] this is the start\n [1646981557065941/start/1 (pid 54733)] Task finished successfully.\n ...\n```\n\n</CodeOutputBlock>\n\nThis is a normal python cell:\n\n\n```python\na = 2\na\n```\n\n<CodeOutputBlock lang=\"python\">\n\n```\n2\n```\n\n</CodeOutputBlock>\n\nThe next cell has a cell tag of `remove_input`, so you should only see the output of the cell:\n\n<CodeOutputBlock lang=\"python\">\n\n```\nhello, you should not see the print statement that produced me\n```\n\n</CodeOutputBlock>\n\nPandas DataFrame:\n\n\n```python\nimport pandas as pd\npd.read_csv('https://github.com/outerbounds/.data/raw/main/hospital_readmission.csv').head(3).iloc[:, :3]\n```\n \n<HTMLOutputBlock >\n\n\n\n\n```html\n<div>\n<style scoped>\n .dataframe tbody tr th:only-of-type {\n vertical-align: middle;\n }\n\n .dataframe tbody tr th {\n vertical-align: top;\n }\n\n .dataframe thead th {\n text-align: right;\n }\n</style>\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>time_in_hospital</th>\n <th>num_lab_procedures</th>\n <th>num_procedures</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>14</td>\n <td>41</td>\n <td>0</td>\n </tr>\n <tr>\n <th>1</th>\n <td>2</td>\n <td>30</td>\n <td>0</td>\n </tr>\n <tr>\n <th>2</th>\n <td>5</td>\n <td>66</td>\n <td>0</td>\n </tr>\n </tbody>\n</table>\n</div>\n```\n\n\n\n</HTMLOutputBlock>\n\nA matplotlib plot:\n\n\n```python\nfrom matplotlib import pyplot as plt\nplt.plot(range(20), range(20))\nplt.plot(range(10), range(10))\nplt.show()\n```\n\n<CodeOutputBlock lang=\"python\">\n\n```\n![png](_example_input_files/output_15_0.png)\n```\n\n</CodeOutputBlock>\n\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7635ed0015bdb681a269755b4ce993a7bcc0766
26,306
ipynb
Jupyter Notebook
7.20.ipynb
ML00001/Python
95c1e0c1ecbf0742699bdd07d04f62e3b33be437
[ "Apache-2.0" ]
1
2019-07-30T08:30:06.000Z
2019-07-30T08:30:06.000Z
7.20.ipynb
ML00001/Python
95c1e0c1ecbf0742699bdd07d04f62e3b33be437
[ "Apache-2.0" ]
null
null
null
7.20.ipynb
ML00001/Python
95c1e0c1ecbf0742699bdd07d04f62e3b33be437
[ "Apache-2.0" ]
null
null
null
22.502994
1,394
0.438987
[ [ [ "# 函数\n\n- 函数可以用来定义可重复代码,组织和简化\n- 一般来说一个函数在实际开发中为一个小功能\n- 一个类为一个大功能\n- 同样函数的长度不要超过一屏", "_____no_output_____" ], [ "Python中的所有函数实际上都是有返回值(return None),\n\n如果你没有设置return,那么Python将不显示None.\n\n如果你设置return,那么将返回出return这个值.", "_____no_output_____" ] ], [ [ "def HJN():\n print('Hello')\n return 1000", "_____no_output_____" ], [ "b=HJN()\nprint(b)", "Hello\n1000\n" ], [ "HJN", "_____no_output_____" ], [ "def panduan(number):\n if number % 2 == 0:\n print('O')\n else:\n print('J')", "_____no_output_____" ], [ "panduan(number=1)", "J\n" ], [ "panduan(2)", "O\n" ] ], [ [ "## 定义一个函数\n\ndef function_name(list of parameters):\n \n do something\n![](../Photo/69.png)\n- 以前使用的random 或者range 或者print.. 其实都是函数或者类", "_____no_output_____" ], [ "函数的参数如果有默认值的情况,当你调用该函数的时候:\n可以不给予参数值,那么就会走该参数的默认值\n否则的话,就走你给予的参数值.", "_____no_output_____" ] ], [ [ "import random", "_____no_output_____" ], [ "def hahah():\n n = random.randint(0,5)\n while 1:\n N = eval(input('>>'))\n if n == N:\n print('smart')\n break\n elif n < N:\n print('太小了')\n elif n > N:\n print('太大了')\n", "_____no_output_____" ] ], [ [ "## 调用一个函数\n- functionName()\n- \"()\" 就代表调用", "_____no_output_____" ] ], [ [ "def H():\n print('hahaha')", "_____no_output_____" ], [ "def B():\n H()", "_____no_output_____" ], [ "B()", "hahaha\n" ], [ "def A(f):\n f()", "_____no_output_____" ], [ "A(B)", "hahaha\n" ] ], [ [ "![](../Photo/70.png)", "_____no_output_____" ], [ "## 带返回值和不带返回值的函数\n- return 返回的内容\n- return 返回多个值\n- 一般情况下,在多个函数协同完成一个功能的时候,那么将会有返回值", "_____no_output_____" ], [ "![](../Photo/71.png)\n\n- 当然也可以自定义返回None", "_____no_output_____" ], [ "## EP:\n![](../Photo/72.png)", "_____no_output_____" ] ], [ [ "def main():\n print(min(min(5,6),(51,6)))\ndef min(n1,n2):\n a = n1\n if n2 < a:\n a = n2", "_____no_output_____" ], [ "main()", "_____no_output_____" ] ], [ [ "## 类型和关键字参数\n- 普通参数\n- 多个参数\n- 默认值参数\n- 不定长参数", "_____no_output_____" ], [ "## 普通参数", "_____no_output_____" ], [ "## 多个参数", "_____no_output_____" ], [ "## 默认值参数", "_____no_output_____" ], [ "## 强制命名", "_____no_output_____" ] ], [ [ "def U(str_):\n xiaoxie = 0\n for i in str_:\n ASCII = ord(i)\n if 97<=ASCII<=122:\n xiaoxie +=1\n elif xxxx:\n daxie += 1\n elif xxxx:\n shuzi += 1\n return xiaoxie,daxie,shuzi", "_____no_output_____" ], [ "U('HJi12')", "H\nJ\ni\n1\n2\n" ] ], [ [ "## 不定长参数\n- \\*args\n> - 不定长,来多少装多少,不装也是可以的\n - 返回的数据类型是元组\n - args 名字是可以修改的,只是我们约定俗成的是args\n- \\**kwargs \n> - 返回的字典\n - 输入的一定要是表达式(键值对)\n- name,\\*args,name2,\\**kwargs 使用参数名", "_____no_output_____" ] ], [ [ "def TT(a,b)", "_____no_output_____" ], [ "def TT(*args,**kwargs):\n print(kwargs)\n print(args)\nTT(1,2,3,4,6,a=100,b=1000)", "{'a': 100, 'b': 1000}\n(1, 2, 3, 4, 6)\n" ], [ "{'key':'value'}", "()\n" ], [ "TT(1,2,4,5,7,8,9,)", "(1, 2, 4, 5, 7, 8, 9)\n" ], [ "def B(name1,nam3):\n pass", "_____no_output_____" ], [ "B(name1=100,2)", "_____no_output_____" ], [ "def sum_(*args,A='sum'):\n \n res = 0\n count = 0\n for i in args:\n res +=i\n count += 1\n if A == \"sum\":\n return res\n elif A == \"mean\":\n mean = res / count\n return res,mean\n else:\n print(A,'还未开放')\n \n ", "_____no_output_____" ], [ "sum_(-1,0,1,4,A='var')", "var 还未开放\n" ], [ "'aHbK134'.__iter__", "_____no_output_____" ], [ "b = 'asdkjfh'\nfor i in b :\n print(i)", "a\ns\nd\nk\nj\nf\nh\n" ], [ "2,5\n2 + 22 + 222 + 2222 + 22222", "_____no_output_____" ] ], [ [ "## 变量的作用域\n- 局部变量 local\n- 全局变量 global\n- globals 函数返回一个全局变量的字典,包括所有导入的变量\n- locals() 函数会以字典类型返回当前位置的全部局部变量。", "_____no_output_____" ] ], [ [ "a = 1000\nb = 10\ndef Y():\n global a,b\n a += 100\n 
print(a)\nY()", "1100\n" ], [ "def YY(a1):\n a1 += 100\n print(a1)\nYY(a)\nprint(a)", "1200\n1100\n" ] ], [ [ "## 注意:\n- global :在进行赋值操作的时候需要声明\n- 官方解释:This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.\n- ![](../Photo/73.png)", "_____no_output_____" ], [ "# Homework\n- 1\n![](../Photo/74.png)", "_____no_output_____" ] ], [ [ "def getPentagonalNumber(a):\n Num=0\n for i in range(1,a+1):\n Num_1=i*(3*i-1)/2\n print('%5.0f'%Num_1,end=' ')\n Num+=1\n if Num%10==0:\n print()\n\n\ngetPentagonalNumber(100)", " 1 5 12 22 35 51 70 92 117 145 \n 176 210 247 287 330 376 425 477 532 590 \n 651 715 782 852 925 1001 1080 1162 1247 1335 \n 1426 1520 1617 1717 1820 1926 2035 2147 2262 2380 \n 2501 2625 2752 2882 3015 3151 3290 3432 3577 3725 \n 3876 4030 4187 4347 4510 4676 4845 5017 5192 5370 \n 5551 5735 5922 6112 6305 6501 6700 6902 7107 7315 \n 7526 7740 7957 8177 8400 8626 8855 9087 9322 9560 \n 9801 10045 10292 10542 10795 11051 11310 11572 11837 12105 \n12376 12650 12927 13207 13490 13776 14065 14357 14652 14950 \n" ] ], [ [ "- 2 \n![](../Photo/75.png)", "_____no_output_____" ] ], [ [ "def sumDigits(n):\n import math\n a=n\n Num=0\n while a!=0:\n b=a%10\n a=math.floor(a/10)\n Num+=b\n return Num\nprint(sumDigits(336))", "12\n" ] ], [ [ "- 3\n![](../Photo/76.png)", "_____no_output_____" ] ], [ [ "def sortnum(num1,num2,num3):\n a=[num1,num2,num3]\n a.sort()\n print(a)\ndef start():\n num1,num2,num3=map(float,input().split(','))\n sortnum(num1,num2,num3)\nstart()", "25,2.6,98\n[2.6, 25.0, 98.0]\n" ] ], [ [ "- 4\n![](../Photo/77.png)", "_____no_output_____" ] ], [ [ "def futureIn(money,rate,year):\n for i in range(1,year*12+1):\n money=float(1+rate/100/12)*money\n if i%12==0:\n print('%9d %10.2f'%((i/12),money))\n\ndef start():\n money,rate,year=map(int,input().split(','))\n futureIn(money,rate,year)\nstart()", "1000,3,30\n 1 1030.42\n 2 1061.76\n 3 1094.05\n 4 1127.33\n 5 1161.62\n 6 1196.95\n 7 1233.35\n 8 1270.87\n 9 1309.52\n 10 1349.35\n 11 1390.40\n 12 1432.69\n 13 1476.26\n 14 1521.16\n 15 1567.43\n 16 1615.11\n 17 1664.23\n 18 1714.85\n 19 1767.01\n 20 1820.75\n 21 1876.14\n 22 1933.20\n 23 1992.00\n 24 2052.59\n 25 2115.02\n 26 2179.35\n 27 2245.64\n 28 2313.94\n 29 2384.32\n 30 2456.84\n" ] ], [ [ "- 5\n![](../Photo/78.png)", "_____no_output_____" ] ], [ [ "def printChars(ch1, ch2,numberPerLine):\n # print(ord(ch2))\n # print(ord(ch1))\n num=ord(ch2)-ord(ch1)+1\n num_2=int(numberPerLine)\n # print(num_2)\n num_1=0\n for i in range(ord(ch1),ord(ch2)+1):\n print(chr(i),end=\" \")\n num_1+=1\n if num_1%num_2==0:\n print()\ndef start():\n ch1, ch2,numberPerLine=map(str,input().split(','))\n printChars(ch1, ch2,numberPerLine)\n\nstart()\n\n", "a,o,3\na b c \nd e f \ng h i \nj k l \nm n o \n" ] ], [ [ "- 6\n![](../Photo/79.png)", "_____no_output_____" ] ], [ [ "def neumber(year):\n \n for i in range(11):\n if year%4==0:\n print('%20d年 %20.2f天'%(year,366))\n if year%4!=0:\n print('%20d年 %20.2f天'%(year,365))\n year+=1\nneumber(2010)", " 2010年 365.00天\n 2011年 365.00天\n 2012年 366.00天\n 2013年 365.00天\n 2014年 365.00天\n 2015年 365.00天\n 2016年 366.00天\n 2017年 365.00天\n 2018年 365.00天\n 2019年 365.00天\n 2020年 366.00天\n" ] ], [ [ "- 7\n![](../Photo/80.png)", "_____no_output_____" ] ], [ [ "#俩点之间距离\nimport math\ndef distance(x1,y1,x2,y2):\n a=(x1-x2)**2\n b=(y1-y2)**2\n c=math.sqrt(a+b)\n print(c)\ndef start():\n x1,y1,x2,y2=map(float,input().split(','))\n distance(x1,y1,x2,y2)\nstart()", 
"9,11,12,15\n5.0\n" ] ], [ [ "- 8\n![](../Photo/81.png)", "_____no_output_____" ] ], [ [ "def Mersenne_prime(P):\n print('P值 梅森素数(P^2-1)')\n for i in range(1,P+1):\n Number1=2**i-1\n Number2=0\n for j in range(2,Number1):\n if Number1%j==0:\n Number2+=1\n if Number2==0:\n print('%3d %10d'%(i,Number1))\ndef start():\n P=int(input())\n Mersenne_prime(P)\nstart()", "31\nP值 梅森素数(P^2-1)\n 1 1\n 2 3\n 3 7\n 5 31\n 7 127\n 13 8191\n 17 131071\n 19 524287\n" ] ], [ [ "- 9\n![](../Photo/82.png)\n![](../Photo/83.png)", "_____no_output_____" ] ], [ [ "def time():\n import time\n localtime = time.asctime(time.localtime(time.time()))\n print(localtime)\ntime()", "Fri Aug 2 16:00:20 2019\n" ] ], [ [ "- 10\n![](../Photo/84.png)", "_____no_output_____" ] ], [ [ "def Roll_dice():\n import random\n dice1=random.randint(1,6)\n dice2=random.randint(1,6)\n print('Dice1:%d Dice2:%d'%(dice1,dice2))\n sum = dice1+dice2\n return sum\ndef Judge():\n sum=Roll_dice()\n print('点数:%d'%sum)\n a=(2,3,12)\n b=(7,11)\n c=(4,5,6,8,9,10)\n if sum in a:\n print('你输了')\n if sum in b:\n print('你赢了')\n if sum in c:\n Judge1(sum)\ndef Judge1(sum):\n sum_1=sum\n while True:\n sum=Roll_dice()\n print('点数:%d'%sum)\n if sum==sum_1:\n print('你赢了')\n break\n if sum==7:\n print('你输了')\n break\ndef start():\n Judge()\nstart()", "Dice1:4 Dice2:5\n点数:9\nDice1:3 Dice2:1\n点数:4\nDice1:6 Dice2:1\n点数:7\n你输了\n" ] ], [ [ "- 11 \n### 去网上寻找如何用Python代码发送邮件", "_____no_output_____" ] ], [ [ "\ndef send_first_email():\n \"\"\"\n #发送一个纯文本邮件\n \"\"\"\n # 使用email模块 构造邮件\n from email.mime.text import MIMEText\n # 第一个参数,邮件正文\n # 第二个参数,_subtype='plain', Content-Type:text/plain,这个应该是一种规范吧?\n # 第三个参数 utf-8编码保证兼容\n msg = MIMEText(\"你害怕大雨吗?\",\"plain\",\"utf-8\")\n # 添加信息防止失败,如果收件人 发件人 标题出现乱码情况, from email.header import Header等方法进行转码\n msg['From'] = \"Tsun<[email protected]>\" # 收件人看到的发件人信息\n msg['To'] = \"[email protected]>\" #\n from email.header import Header\n msg['Subject'] = \"人生苦短,这是标题\"\n\n # 发送email\n import smtplib\n # 邮件SMTP服务器,默认25端口\n server = smtplib.SMTP(\"smtp.163.com\",25)\n server.set_debuglevel(1) # 这句话,可以打印出和SMTP交互的所有信息\n # 发送人的邮件和密码\n server.login(\"[email protected]\", \"123456malong\")\n # 发件人,收件人列表,msg.as_string()把MIMEText对象变成str\n server.sendmail(\"[email protected]\", [\"[email protected]\"], msg.as_string())\n print(\"发送成功了\")\n server.quit()\n print(\"退出\")\nsend_first_email()\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7636281fd11d5fa38395bf0ad9d1b436a1f5313
115,068
ipynb
Jupyter Notebook
chapter3_NN/linear-regression-gradient-descend.ipynb
kumi123/pytorch-learning
29f5b4d53f4e72b95b3fab979b1bc496ef23674c
[ "MIT" ]
null
null
null
chapter3_NN/linear-regression-gradient-descend.ipynb
kumi123/pytorch-learning
29f5b4d53f4e72b95b3fab979b1bc496ef23674c
[ "MIT" ]
null
null
null
chapter3_NN/linear-regression-gradient-descend.ipynb
kumi123/pytorch-learning
29f5b4d53f4e72b95b3fab979b1bc496ef23674c
[ "MIT" ]
null
null
null
103.292639
16,216
0.866792
[ [ [ "# 线性模型和梯度下降\n这是神经网络的第一课,我们会学习一个非常简单的模型,线性回归,同时也会学习一个优化算法-梯度下降法,对这个模型进行优化。线性回归是监督学习里面一个非常简单的模型,同时梯度下降也是深度学习中应用最广的优化算法,我们将从这里开始我们的深度学习之旅", "_____no_output_____" ], [ "\n", "_____no_output_____" ], [ "## 一元线性回归\n一元线性模型非常简单,假设我们有变量 $x_i$ 和目标 $y_i$,每个 i 对应于一个数据点,希望建立一个模型\n\n$$\n\\hat{y}_i = w x_i + b\n$$\n\n$\\hat{y}_i$ 是我们预测的结果,希望通过 $\\hat{y}_i$ 来拟合目标 $y_i$,通俗来讲就是找到这个函数拟合 $y_i$ 使得误差最小,即最小化\n\n$$\n\\frac{1}{n} \\sum_{i=1}^n(\\hat{y}_i - y_i)^2\n$$", "_____no_output_____" ], [ "那么如何最小化这个误差呢?\n\n这里需要用到**梯度下降**,这是我们接触到的第一个优化算法,非常简单,但是却非常强大,在深度学习中被大量使用,所以让我们从简单的例子出发了解梯度下降法的原理", "_____no_output_____" ], [ "## 梯度下降法\n在梯度下降法中,我们首先要明确梯度的概念,随后我们再了解如何使用梯度进行下降。", "_____no_output_____" ], [ "### 梯度\n梯度在数学上就是导数,如果是一个多元函数,那么梯度就是偏导数。比如一个函数f(x, y),那么 f 的梯度就是 \n\n$$\n(\\frac{\\partial f}{\\partial x},\\ \\frac{\\partial f}{\\partial y})\n$$\n\n可以称为 grad f(x, y) 或者 $\\nabla f(x, y)$。具体某一点 $(x_0,\\ y_0)$ 的梯度就是 $\\nabla f(x_0,\\ y_0)$。\n\n下面这个图片是 $f(x) = x^2$ 这个函数在 x=1 处的梯度\n\n![](https://ws3.sinaimg.cn/large/006tNc79ly1fmarbuh2j3j30ba0b80sy.jpg)", "_____no_output_____" ], [ "梯度有什么意义呢?从几何意义来讲,一个点的梯度值是这个函数变化最快的地方,具体来说,对于函数 f(x, y),在点 $(x_0, y_0)$ 处,沿着梯度 $\\nabla f(x_0,\\ y_0)$ 的方向,函数增加最快,也就是说沿着梯度的方向,我们能够更快地找到函数的极大值点,或者反过来沿着梯度的反方向,我们能够更快地找到函数的最小值点。", "_____no_output_____" ], [ "### 梯度下降法\n有了对梯度的理解,我们就能了解梯度下降发的原理了。上面我们需要最小化这个误差,也就是需要找到这个误差的最小值点,那么沿着梯度的反方向我们就能够找到这个最小值点。\n\n我们可以来看一个直观的解释。比如我们在一座大山上的某处位置,由于我们不知道怎么下山,于是决定走一步算一步,也就是在每走到一个位置的时候,求解当前位置的梯度,沿着梯度的负方向,也就是当前最陡峭的位置向下走一步,然后继续求解当前位置梯度,向这一步所在位置沿着最陡峭最易下山的位置走一步。这样一步步的走下去,一直走到觉得我们已经到了山脚。当然这样走下去,有可能我们不能走到山脚,而是到了某一个局部的山峰低处。\n\n类比我们的问题,就是沿着梯度的反方向,我们不断改变 w 和 b 的值,最终找到一组最好的 w 和 b 使得误差最小。\n\n在更新的时候,我们需要决定每次更新的幅度,比如在下山的例子中,我们需要每次往下走的那一步的长度,这个长度称为学习率,用 $\\eta$ 表示,这个学习率非常重要,不同的学习率都会导致不同的结果,学习率太小会导致下降非常缓慢,学习率太大又会导致跳动非常明显,可以看看下面的例子\n\n![](https://ws2.sinaimg.cn/large/006tNc79ly1fmgn23lnzjg30980gogso.gif)\n\n可以看到上面的学习率较为合适,而下面的学习率太大,就会导致不断跳动\n\n最后我们的更新公式就是\n\n$$\nw := w - \\eta \\frac{\\partial f(w,\\ b)}{\\partial w} \\\\\nb := b - \\eta \\frac{\\partial f(w,\\ b)}{\\partial b}\n$$\n\n通过不断地迭代更新,最终我们能够找到一组最优的 w 和 b,这就是梯度下降法的原理。\n\n最后可以通过这张图形象地说明一下这个方法\n\n![](https://ws3.sinaimg.cn/large/006tNc79ly1fmarxsltfqj30gx091gn4.jpg)", "_____no_output_____" ], [ "\n", "_____no_output_____" ], [ "上面是原理部分,下面通过一个例子来进一步学习线性模型", "_____no_output_____" ] ], [ [ "import torch\nimport numpy as np\nfrom torch.autograd import Variable\n\ntorch.manual_seed(2017)", "_____no_output_____" ], [ "# 读入数据 x 和 y\nx_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],\n [9.779], [6.182], [7.59], [2.167], [7.042],\n [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)\n\ny_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],\n [3.366], [2.596], [2.53], [1.221], [2.827],\n [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)", "_____no_output_____" ], [ "# 画出图像\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(x_train, y_train, 'go')", "_____no_output_____" ], [ "# 转换成 Tensor,,因为内部必须的要使用tensor学习\nx_train = torch.from_numpy(x_train)\ny_train = torch.from_numpy(y_train)\n\n# 定义参数 w 和 b 必须的variable格式\nw = Variable(torch.randn(1), requires_grad=True) # 随机初始化,注意需要这个求导是true\nb = Variable(torch.zeros(1), requires_grad=True) # 使用 0 进行初始化", "_____no_output_____" ], [ "# 构建线性回归模型\nx_train = Variable(x_train)#转化成这个variable格式\ny_train = Variable(y_train)\n\ndef linear_model(x):\n return x * w + b", "_____no_output_____" ], [ "y_ = linear_model(x_train)#具体预测值", "_____no_output_____" ] ], [ [ "经过上面的步骤我们就定义好了模型,在进行参数更新之前,我们可以先看看模型的输出结果长什么样", "_____no_output_____" ] ], [ [ 
"plt.plot(x_train.data.numpy(), y_train.data.numpy(), 'bo', label='real')#实际的\nplt.plot(x_train.data.numpy(), y_.data.numpy(), 'ro', label='estimated')#实际的预测\nplt.legend()", "_____no_output_____" ] ], [ [ "**思考:红色的点表示预测值,似乎排列成一条直线,请思考一下这些点是否在一条直线上?**", "_____no_output_____" ], [ "这个时候需要计算我们的误差函数,也就是\n\n$$\n\\frac{1}{n} \\sum_{i=1}^n(\\hat{y}_i - y_i)^2\n$$", "_____no_output_____" ] ], [ [ "# 计算误差\ndef get_loss(y_, y):\n return torch.mean((y_ - y_train) ** 2)\n\nloss = get_loss(y_, y_train)", "_____no_output_____" ], [ "# 打印一下看看 loss 的大小\nprint(loss)", "tensor(153.3520, grad_fn=<MeanBackward1>)\n" ] ], [ [ "定义好了误差函数,接下来我们需要计算 w 和 b 的梯度了,这时得益于 PyTorch 的自动求导,我们不需要手动去算梯度,有兴趣的同学可以手动计算一下,w 和 b 的梯度分别是\n\n$$\n\\frac{\\partial}{\\partial w} = \\frac{2}{n} \\sum_{i=1}^n x_i(w x_i + b - y_i) \\\\\n\\frac{\\partial}{\\partial b} = \\frac{2}{n} \\sum_{i=1}^n (w x_i + b - y_i)\n$$", "_____no_output_____" ] ], [ [ "# 自动求导\nloss.backward()", "_____no_output_____" ], [ "# 查看 w 和 b 的梯度\nprint(w.grad)\nprint(b.grad)", "tensor([161.0043])\ntensor([22.8730])\n" ], [ "# 更新一次参数\nw.data = w.data - 1e-2 * w.grad.data\nb.data = b.data - 1e-2 * b.grad.data", "_____no_output_____" ] ], [ [ "更新完成参数之后,我们再一次看看模型输出的结果", "_____no_output_____" ] ], [ [ "y_ = linear_model(x_train)\nplt.plot(x_train.data.numpy(), y_train.data.numpy(), 'bo', label='real')\nplt.plot(x_train.data.numpy(), y_.data.numpy(), 'ro', label='estimated')\nplt.legend()", "_____no_output_____" ] ], [ [ "从上面的例子可以看到,更新之后红色的线跑到了蓝色的线下面,没有特别好的拟合蓝色的真实值,所以我们需要在进行几次更新", "_____no_output_____" ] ], [ [ "for e in range(100): # 10 次更新\n y_ = linear_model(x_train)\n loss = get_loss(y_, y_train)\n \n w.grad.zero_() # 记得归零梯度\n b.grad.zero_() # 记得归零梯度\n loss.backward()#这时候w.grad.data 和b.grad.data 是可以内存之中,要清零提前好吧,否则不是梯度下降的情况\n w.data = w.data - 1e-2 * w.grad.data # 更新 w\n b.data = b.data - 1e-2 * b.grad.data # 更新 b \n ", "_____no_output_____" ], [ "y_ = linear_model(x_train)#这里y_是按照实际的预测值来进行的\nplt.plot(x_train.data.numpy(), y_train.data.numpy(), 'bo', label='real')\nplt.plot(x_train.data.numpy(), y_.data.numpy(), 'ro', label='estimated')\nplt.legend()", "_____no_output_____" ] ], [ [ "经过 10 次更新,我们发现红色的预测结果已经比较好的拟合了蓝色的真实值。\n\n现在你已经学会了你的第一个机器学习模型了,再接再厉,完成下面的小练习。", "_____no_output_____" ], [ "**小练习:**\n\n重启 notebook 运行上面的线性回归模型,但是改变训练次数以及不同的学习率进行尝试得到不同的结果", "_____no_output_____" ], [ "## 多项式回归模型", "_____no_output_____" ], [ "下面我们更进一步,讲一讲多项式回归。什么是多项式回归呢?非常简单,根据上面的线性回归模型\n\n$$\n\\hat{y} = w x + b\n$$\n\n这里是关于 x 的一个一次多项式,这个模型比较简单,没有办法拟合比较复杂的模型,所以我们可以使用更高次的模型,比如\n\n$$\n\\hat{y} = w_0 + w_1 x + w_2 x^2 + w_3 x^3 + \\cdots\n$$\n\n这样就能够拟合更加复杂的模型,这就是多项式模型,这里使用了 x 的更高次,同理还有多元回归模型,形式也是一样的,只是出了使用 x,还是更多的变量,比如 y、z 等等,同时他们的 loss 函数和简单的线性回归模型是一致的。", "_____no_output_____" ], [ "\n", "_____no_output_____" ], [ "首先我们可以先定义一个需要拟合的目标函数,这个函数是个三次的多项式", "_____no_output_____" ] ], [ [ "# 定义一个多变量函数\nw_target = np.array([0.5, 3, 2.4]) # 定义参数 目标参数\nb_target = np.array([0.9]) # 定义参数\n\nf_des = 'y = {:.2f} + {:.2f} * x + {:.2f} * x^2 + {:.2f} * x^3'.format(\n b_target[0], w_target[0], w_target[1], w_target[2]) # 打印出函数的式子\n\nprint(f_des)", "y = 0.90 + 0.50 * x + 3.00 * x^2 + 2.40 * x^3\n" ] ], [ [ "我们可以先画出这个多项式的图像", "_____no_output_____" ] ], [ [ "# 画出这个函数的曲线\nx_sample = np.arange(-3, 3.1, 0.1)\ny_sample = b_target[0] + w_target[0] * x_sample + w_target[1] * x_sample ** 2 + w_target[2] * x_sample ** 3\n\nplt.plot(x_sample, y_sample, label='real curve')\nplt.legend()", "_____no_output_____" ] ], [ [ "接着我们可以构建数据集,需要 x 和 y,同时是一个三次多项式,所以我们取了 $x,\\ x^2, x^3$", "_____no_output_____" ], [ "<span 
class=\"mark\">下边的数据分析不是很懂</span>", "_____no_output_____" ] ], [ [ "# 构建数据 x 和 y\n# x 是一个如下矩阵 [x, x^2, x^3]\n# y 是函数的结果 [y]\n\nx_train = np.stack([x_sample ** i for i in range(1, 4)], axis=1)\nx_train = torch.from_numpy(x_train).float() # 转换成 float tensor\n\ny_train = torch.from_numpy(y_sample).float().unsqueeze(1) # 转化成 float tensor ", "_____no_output_____" ] ], [ [ "接着我们可以定义需要优化的参数,就是前面这个函数里面的 $w_i$", "_____no_output_____" ] ], [ [ "# 定义要估计参数和模型,参数必须是variable\nw = Variable(torch.randn(3, 1), requires_grad=True)\nb = Variable(torch.zeros(1), requires_grad=True)\n# 将 x 和 y 转换成 Variable #数据也要用variable\nx_train = Variable(x_train)\ny_train = Variable(y_train)\ndef multi_linear(x):\n return torch.mm(x, w) + b", "_____no_output_____" ] ], [ [ "我们可以画出没有更新之前的模型和真实的模型之间的对比", "_____no_output_____" ] ], [ [ "# 画出更新之前的模型\ny_pred = multi_linear(x_train)\n\nplt.plot(x_train.data.numpy()[:, 0], y_pred.data.numpy(), label='fitting curve', color='r')\nplt.plot(x_train.data.numpy()[:, 0], y_sample, label='real curve', color='b')\nplt.legend()", "_____no_output_____" ] ], [ [ "可以发现,这两条曲线之间存在差异,我们计算一下他们之间的误差", "_____no_output_____" ] ], [ [ "# 计算误差,这里的误差和一元的线性模型的误差是相同的,前面已经定义过了 get_loss\nloss = get_loss(y_pred, y_train)\nprint(loss)", "tensor(756.3784, grad_fn=<MeanBackward1>)\n" ], [ "# 自动求导\nloss.backward()", "_____no_output_____" ], [ "# 查看一下 w 和 b 的梯度 \nprint(w.grad)\nprint(b.grad)", "tensor([[ -88.0635],\n [ -50.3721],\n [-574.2020]])\ntensor([-9.8301])\n" ], [ "# 更新一下参数 其实直接用那个优化器还是 easy的\nw.data = w.data - 0.001 * w.grad.data\nb.data = b.data - 0.001 * b.grad.data", "_____no_output_____" ], [ "# 画出更新一次之后的模型\ny_pred = multi_linear(x_train)\n\nplt.plot(x_train.data.numpy()[:, 0], y_pred.data.numpy(), label='fitting curve', color='r')\nplt.plot(x_train.data.numpy()[:, 0], y_sample, label='real curve', color='b')\nplt.legend()", "_____no_output_____" ] ], [ [ "因为只更新了一次,所以两条曲线之间的差异仍然存在,我们进行 100 次迭代", "_____no_output_____" ], [ "<span class=\"mark\">这里先清零在进行梯度下降,因为梯度下降之后在清零就不对了,梯度编程零0了</span>", "_____no_output_____" ] ], [ [ "# 进行 100 次参数更新\nfor e in range(100):\n y_pred = multi_linear(x_train)\n loss = get_loss(y_pred, y_train)\n #这里先清零在进行梯度下降\n w.grad.data.zero_()\n b.grad.data.zero_()\n loss.backward()\n \n # 更新参数\n w.data = w.data - 0.001 * w.grad.data\n b.data = b.data - 0.001 * b.grad.data\n ", "_____no_output_____" ] ], [ [ "可以看到更新完成之后 loss 已经非常小了,我们画出更新之后的曲线对比", "_____no_output_____" ] ], [ [ "# 画出更新之后的结果\ny_pred = multi_linear(x_train)\n\nplt.plot(x_train.data.numpy()[:, 0], y_pred.data.numpy(), label='fitting curve', color='r')\nplt.plot(x_train.data.numpy()[:, 0], y_sample, label='real curve', color='b')\nplt.legend()", "_____no_output_____" ] ], [ [ "可以看到,经过 100 次更新之后,可以看到拟合的线和真实的线已经完全重合了", "_____no_output_____" ], [ "**小练习:上面的例子是一个三次的多项式,尝试使用二次的多项式去拟合它,看看最后能做到多好**\n\n**提示:参数 `w = torch.randn(2, 1)`,同时重新构建 x 数据集**", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e763864a1ca826d73333be6ff1272e12a5689bcf
13,495
ipynb
Jupyter Notebook
examples/PreProcess.ipynb
ppmdatix/rtdl
a01ecd9ae6b673f4e82e51f804ffd7031c7350a0
[ "Apache-2.0" ]
null
null
null
examples/PreProcess.ipynb
ppmdatix/rtdl
a01ecd9ae6b673f4e82e51f804ffd7031c7350a0
[ "Apache-2.0" ]
null
null
null
examples/PreProcess.ipynb
ppmdatix/rtdl
a01ecd9ae6b673f4e82e51f804ffd7031c7350a0
[ "Apache-2.0" ]
null
null
null
33.486352
1,118
0.567766
[ [ [ "# Requirements:\n# !pip install rtdl\n# !pip install libzero==0.0.4", "_____no_output_____" ], [ "from typing import Any, Dict\n\nimport numpy as np\nimport rtdl\nimport scipy.special\nimport sklearn.datasets\nimport sklearn.metrics\nimport sklearn.model_selection\nimport sklearn.preprocessing\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport zero\nimport pandas as pd", "_____no_output_____" ], [ "device = torch.device('cpu')\n# Docs: https://yura52.github.io/zero/0.0.4/reference/api/zero.improve_reproducibility.html\n# zero.improve_reproducibility(seed=123456)", "_____no_output_____" ] ], [ [ "### Data", "_____no_output_____" ] ], [ [ "from sklearn.datasets import fetch_kddcup99 \ndata = fetch_kddcup99()\n\nfrom pprint import pprint\nprint(list(data.target_names))", "['labels']\n" ], [ "df = pd.DataFrame(data.data, columns=data.feature_names)\ndf[\"labels\"] = data.target\ndf.to_csv(\"data/KDD99/fetch_kddcup99.csv\", index=False)", "_____no_output_____" ], [ "path = \"data/Forest_Cover/train_centered.csv\"\n\ntarget = \"Cover_Type\"\ndf = pd.read_csv(path)", "_____no_output_____" ], [ "df[\"target\"] = df[target]# df.income\ndf = df.drop(target, axis=1)", "_____no_output_____" ], [ "strCol = []\nfor column in df.columns:\n print(column)\n if type(df[column][0]) is str:\n strCol.append(column)\n", "duration\nprotocol_type\nservice\nflag\nsrc_bytes\ndst_bytes\nland\nwrong_fragment\nurgent\nhot\nnum_failed_logins\nlogged_in\nnum_compromised\nroot_shell\nsu_attempted\nnum_root\nnum_file_creations\nnum_shells\nnum_access_files\nnum_outbound_cmds\nis_host_login\nis_guest_login\ncount\nsrv_count\nserror_rate\nsrv_serror_rate\nrerror_rate\nsrv_rerror_rate\nsame_srv_rate\ndiff_srv_rate\nsrv_diff_host_rate\ndst_host_count\ndst_host_srv_count\ndst_host_same_srv_rate\ndst_host_diff_srv_rate\ndst_host_same_src_port_rate\ndst_host_srv_diff_host_rate\ndst_host_serror_rate\ndst_host_srv_serror_rate\ndst_host_rerror_rate\ndst_host_srv_rerror_rate\ntarget\n" ], [ "strCol = [\"Soil_Type\", \"Wilderness_Area\"]\n\nstrCol = [\"protocol_type\", \"service\", \"flag\", \"land\"]\n\n", "_____no_output_____" ], [ "# df = pd.DataFrame(data.data.tolist(), columns=data.feature_names)\noldNames = df.columns# data.feature_names", "_____no_output_____" ], [ "# df[\"target\"] = data.target\n# df[\"target\"] = df.income", "_____no_output_____" ], [ "output = df.target.values\nlabels = set(output)\nprint('The different type of output labels are:',labels)\nprint('='*125)\nprint('No. of different output labels are:', len(labels))", "The different type of output labels are: {b'multihop.', b'buffer_overflow.', b'teardrop.', b'ipsweep.', b'satan.', b'pod.', b'portsweep.', b'loadmodule.', b'nmap.', b'back.', b'imap.', b'perl.', b'normal.', b'neptune.', b'guess_passwd.', b'warezmaster.', b'smurf.', b'rootkit.', b'land.', b'phf.', b'warezclient.', b'ftp_write.', b'spy.'}\n=============================================================================================================================\nNo. 
of different output labels are: 23\n" ], [ "def onhotencode(_df, _col):\n _values = set(_df[_col].values)\n for v in _values:\n _df[v] = _df[_col].apply(lambda x : float(x == v) )\n return _df", "_____no_output_____" ], [ "for c in df.columns:\n if (not c in strCol) and (c != \"target\"):\n df = df.drop(c, axis=1)", "_____no_output_____" ], [ "for col in df.columns:\n if col != \"target\":\n df = onhotencode(df, col)\n df = df.drop(col, axis=1)", "_____no_output_____" ], [ "df[\"target\"] = df[\"target\"].apply(lambda x: str(x))", "_____no_output_____" ], [ "df.to_csv(\"data/KDD99/training_processed.csv\", index=False)", "_____no_output_____" ], [ "# df.to_csv(\"data/Forest_Cover/training_processed.csv\")", "_____no_output_____" ], [ "set(list(df.target))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7638b9f76d8995b26aaa0e162060f9d42d16449
29,645
ipynb
Jupyter Notebook
lec12.ipynb
variah-hauge/IA241
482a36fc1398d0ccd2ebc568fd7345f66c812c19
[ "MIT" ]
null
null
null
lec12.ipynb
variah-hauge/IA241
482a36fc1398d0ccd2ebc568fd7345f66c812c19
[ "MIT" ]
null
null
null
lec12.ipynb
variah-hauge/IA241
482a36fc1398d0ccd2ebc568fd7345f66c812c19
[ "MIT" ]
null
null
null
26.731289
155
0.343262
[ [ [ "import pandas ", "_____no_output_____" ], [ "df = pandas.read_excel('s3://hauge-ia241-2021spring/Diamonds.xls')", "_____no_output_____" ], [ "df[:10]", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "df['PRICE']", "_____no_output_____" ], [ "df[1:5]", "_____no_output_____" ], [ "df.loc[df['PRICE']>1500]", "_____no_output_____" ], [ "df['COLOR'].value_counts()", "_____no_output_____" ], [ "df['COLOR'].count()", "_____no_output_____" ], [ "df['PRICE'].sem()", "_____no_output_____" ], [ "df.groupby('COLOR').std()", "_____no_output_____" ], [ "df[:5]", "_____no_output_____" ], [ "df['unit_price'] = df['PRICE']/df['WEIGHT']\ndf[:5]", "_____no_output_____" ], [ "df['unit_price'].mean()", "_____no_output_____" ], [ "from scipy import stats ", "_____no_output_____" ], [ "result = stats.linregress( df['WEIGHT'], df['PRICE'] )\nprint('Slope is {}'.format(result.slope))\nprint('Intercept is {}'.format(result.intercept))\nprint('R Square is {}'.format(result.rvalue * result.rvalue))\nprint('P value is {}'.format(result.pvalue))", "Slope is 11598.884012882309\nIntercept is -2298.3576018937993\nR Square is 0.8925083858672289\nP value is 3.0448096265906994e-150\n" ], [ "print('the price of a diamond with the weight of {} is ${}'.format(0.9,0.9*result.slope+result.intercept))", "the price of a diamond with the weight of 0.9 is $8140.638009700279\n" ], [ "!pip install textblob", "Collecting textblob\n Downloading textblob-0.15.3-py2.py3-none-any.whl (636 kB)\n\u001b[K |████████████████████████████████| 636 kB 15.3 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: nltk>=3.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from textblob) (3.4.4)\nRequirement already satisfied: six in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from nltk>=3.1->textblob) (1.15.0)\nInstalling collected packages: textblob\nSuccessfully installed textblob-0.15.3\n" ], [ "from textblob import TextBlob\n\nresult = TextBlob('I hate dog')", "_____no_output_____" ], [ "print('The polarity is {}'.format(result.sentiment.polarity))\nprint('The subjectivity is {}'.format(result.sentiment.subjectivity))", "The polarity is -0.8\nThe subjectivity is 0.9\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7638de98370e63d1fc24d540967a7b626e2746a
265,245
ipynb
Jupyter Notebook
exercise/build_vocab.ipynb
aisolab/first_project_nlp
87df06020548140d7d4fde1d7ba382d2c8ace010
[ "MIT" ]
8
2019-10-14T13:52:27.000Z
2021-06-17T06:39:55.000Z
exercise/build_vocab.ipynb
aisolab/first_project_nlp
87df06020548140d7d4fde1d7ba382d2c8ace010
[ "MIT" ]
null
null
null
exercise/build_vocab.ipynb
aisolab/first_project_nlp
87df06020548140d7d4fde1d7ba382d2c8ace010
[ "MIT" ]
4
2019-12-04T07:01:04.000Z
2020-11-10T01:04:47.000Z
348.547963
122,969
0.52545
[ [ [ "# build vocab\n본 노트에서는 자연어처리 관련 딥러닝 모형학습을 위한 필수적인 전처리중 하나인 training corpus에 존재하는 token들의 집합인 **vocabulary**를 만들어 봅니다. vocabulary 구성 자체는 미리 제공하는 `model.utils` module에 있는 `Vocab` class를 활용해봅니다. `model.utils` module에 있는 `Tokenizer` class, `PadSequence` class를 같이 활용하여, 효율적인 전처리를 어떻게 할 수 있는 지 확인합니다.", "_____no_output_____" ], [ "### Setup", "_____no_output_____" ] ], [ [ "import sys\nimport pickle\nimport itertools\nimport pandas as pd\nfrom pathlib import Path\nfrom pprint import pprint\nfrom typing import List\nfrom collections import Counter\nfrom model.utils import Vocab, Tokenizer, PadSequence", "_____no_output_____" ] ], [ [ "### Load dataset", "_____no_output_____" ] ], [ [ "data_dir = Path.cwd() / 'data'\nlist_of_dataset = list(data_dir.iterdir())\npprint(list_of_dataset)", "[PosixPath('/root/Documents/archive/strnlp/exercise/data/.DS_Store'),\n PosixPath('/root/Documents/archive/strnlp/exercise/data/morphs_eojeol.pkl'),\n PosixPath('/root/Documents/archive/strnlp/exercise/data/train.txt'),\n PosixPath('/root/Documents/archive/strnlp/exercise/data/morphs_vec.pkl'),\n PosixPath('/root/Documents/archive/strnlp/exercise/data/validation.txt'),\n PosixPath('/root/Documents/archive/strnlp/exercise/data/tokenizer.pkl'),\n PosixPath('/root/Documents/archive/strnlp/exercise/data/test.txt'),\n PosixPath('/root/Documents/archive/strnlp/exercise/data/vocab.pkl')]\n" ], [ "tr_dataset = pd.read_csv(list_of_dataset[2], sep='\\t')\ntr_dataset.head()", "_____no_output_____" ] ], [ [ "### Split training corpus and count each tokens\n앞선 `eda.ipynb` 노트에서 활용한 `Mecab` class의 instance의 멤버함수인 `morphs` 이용, **training corpus를 sequence of tokens의 형태로 변환하고, training corpus에서 token의 출현빈도를 계산합니다.**", "_____no_output_____" ] ], [ [ "# 문장을 어절기준으로 보는 split_fn을 작성\ndef split_eojeol(s: str) -> List[str]:\n return s.split(' ')", "_____no_output_____" ], [ "training_corpus = tr_dataset['document'].apply(lambda sen: split_eojeol(sen)).tolist()\npprint(training_corpus[:5])", "[['애들',\n '욕하지마라',\n '지들은',\n '뭐',\n '그렇게',\n '잘났나?',\n '솔까',\n '거기',\n '나오는',\n '귀여운',\n '애들이',\n '당신들보다',\n '훨',\n '낮다.'],\n ['여전히', '반복되고', '있는', '80년대', '한국', '멜로', '영화의', '유치함.'],\n ['쉐임리스', '스티브와', '피오나가', '손오공', '부르마로', 'ㅋㅋㅋ'],\n ['0점은', '없나요?...'],\n ['제발', '시즌2', 'ㅜㅜ']]\n" ], [ "count_tokens = Counter(itertools.chain.from_iterable(training_corpus))\nprint(len(count_tokens))", "299101\n" ] ], [ [ "### Build vocab\n`min_freq`를 설정하고, training corpus에서 출현빈도가 `min_freq` 미만인 token을 제외하여, `model.utils` module에 있는 `Vocab` class를 이용하여 Vocabulary를 구축합니다. `min_freq`보다 낮은 token들은 `unknown`으로 처리됩니다. 아래의 순서로 진행합니다.\n\n1. `list_of_tokens`을 만듭니다. `list_of_tokens`은 `min_freq` 이상 출현한 token들을 모아놓은 `list`입니다.\n2. `Vocab` class의 instance인 `vocab`을 만듭니다. `list_of_tokens`를 parameter로 전달받습니다.\n\nps. 
tutorial에 사용되는 논문은 pretrained word vector를 사용하는 데, 원활한 진행을 위해서 제가 미리 준비해놓은 pretrained wordvector를 사용하도록 하겠습니다.", "_____no_output_____" ] ], [ [ "min_freq = 10", "_____no_output_____" ], [ "list_of_tokens = [token for token in count_tokens.keys() if count_tokens.get(token) >= min_freq]", "_____no_output_____" ], [ "list_of_tokens = sorted(list_of_tokens)\nprint(len(list_of_tokens))", "9394\n" ], [ "vocab = Vocab(list_of_tokens=list_of_tokens, bos_token=None, eos_token=None, unknown_token_idx=0)", "_____no_output_____" ], [ "with open('data/morphs_eojeol.pkl', mode='wb') as io:\n# pickle.dump(array, io)", "_____no_output_____" ], [ "# tutorial 진행을 위해서 위에서 생성한 vocabulary의 token들의 embedding vector를 가져옵니다.\nwith open('data/morphs_eojeol.pkl', mode='rb') as io:\n morphs_eojeol = pickle.load(io)\nprint(morphs_eojeol)", "[[ 0. 0. 0. ... 0. 0. 0. ]\n [ 0. 0. 0. ... 0. 0. 0. ]\n [-0.25759 -0.014235 -1.2187 ... -0.097028 0.48018 0.28739 ]\n ...\n [-0.13822 0.23503 -0.29733 ... -0.29198 0.18319 -0.10823 ]\n [-0.028508 0.47924 0.0081392 ... -0.27024 0.17462 0.28043 ]\n [-0.11004 0.12825 -0.10196 ... -0.14461 -0.085906 0.18802 ]]\n" ], [ "vocab.embedding = morphs_eojeol", "_____no_output_____" ], [ "vocab.to_indices('40대')", "_____no_output_____" ], [ "len(vocab)", "_____no_output_____" ], [ "vocab.embedding.shape", "_____no_output_____" ] ], [ [ "### How to use `Vocab`\n`list_of_tokens`로 생성한 `Vocab` class의 instance인 `vocab`은 아래와 같은 멤버들을 가지고 있습니다.", "_____no_output_____" ] ], [ [ "help(Vocab)", "Help on class Vocab in module model.utils:\n\nclass Vocab(builtins.object)\n | Methods defined here:\n | \n | __init__(self, list_of_tokens=None, padding_token='<pad>', unknown_token='<unk>', bos_token='<bos>', eos_token='<eos>', reserved_tokens=None, unknown_token_idx=0)\n | Initialize self. 
See help(type(self)) for accurate signature.\n | \n | __len__(self)\n | \n | to_indices(self, tokens:Union[str, List[str]]) -> Union[int, List[int]]\n | \n | to_tokens(self, indices:Union[int, List[int]]) -> Union[str, List[str]]\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | bos_token\n | \n | embedding\n | \n | eos_token\n | \n | idx_to_token\n | \n | padding_token\n | \n | token_to_idx\n | \n | unknown_token\n\n" ], [ "# padding_token, unknown_token, eos_token, bos_token\nprint(vocab.padding_token)\nprint(vocab.unknown_token)\nprint(vocab.eos_token)\nprint(vocab.bos_token)", "<pad>\n<unk>\nNone\nNone\n" ], [ "# token_to_idx\nprint(vocab.token_to_idx)", "{'<unk>': 0, '<pad>': 1, '!': 2, '!!': 3, '!!!': 4, '!!!!': 5, '\"\"': 6, '\"\"\"': 7, '\"이': 8, '&': 9, \"'\": 10, '(10자': 11, ')': 12, '+': 13, ',': 14, ',,': 15, ',,,': 16, '-': 17, '--': 18, '->': 19, '-_': 20, '-_-': 21, '-_-;': 22, '-_-;;': 23, '.': 24, '..': 25, '...': 26, '....': 27, '.....': 28, '......': 29, '/': 30, '//': 31, '0': 32, '007': 33, '0개는': 34, '0점': 35, '0점도': 36, '0점은': 37, '0점을': 38, '0점이': 39, '1': 40, '1,': 41, '1,2': 42, '1.': 43, '10': 44, '10,': 45, '100%': 46, '100배': 47, '100점': 48, '10개': 49, '10년': 50, '10년도': 51, '10년만에': 52, '10년이': 53, '10년전': 54, '10년전에': 55, '10대': 56, '10번': 57, '10번도': 58, '10분': 59, '10자': 60, '10점': 61, '10점!': 62, '10점.': 63, '10점도': 64, '10점만점에': 65, '10점은': 66, '10점을': 67, '10점이': 68, '10점이다.': 69, '10점주는': 70, '10점준': 71, '10점준다': 72, '10점준다.': 73, '10점줌': 74, '10점짜리': 75, '11점을': 76, '12세': 77, '15세': 78, '18': 79, '19금': 80, '1개': 81, '1개도': 82, '1도': 83, '1등': 84, '1시간': 85, '1위': 86, '1은': 87, '1을': 88, '1이': 89, '1인': 90, '1인칭': 91, '1점': 92, '1점.': 93, '1점도': 94, '1점도아깝다': 95, '1점은': 96, '1점을': 97, '1점이': 98, '1점주기도': 99, '1점주는': 100, '1점준다': 101, '1점줌': 102, '1점짜리': 103, '1편': 104, '1편과': 105, '1편도': 106, '1편보다': 107, '1편보단': 108, '1편에': 109, '1편에서': 110, '1편은': 111, '1편을': 112, '1편의': 113, '1편이': 114, '1회부터': 115, '2': 116, '2%': 117, '2,': 118, '2.22': 119, '2000년대': 120, '2003년': 121, '2004년': 122, '2012': 123, '2012년': 124, '2014년': 125, '2014년에': 126, '2015년': 127, '20년': 128, '20년이': 129, '20년전': 130, '20대': 131, '20분': 132, '20세기': 133, '21세기': 134, '21세기에': 135, '2가': 136, '2는': 137, '2도': 138, '2류': 139, '2를': 140, '2배속으로': 141, '2번': 142, '2번째': 143, '2시간': 144, '2시간동안': 145, '2시간을': 146, '2시간이': 147, '2점': 148, '2점도': 149, '2점부터': 150, '2점은': 151, '2점을': 152, '2점준다': 153, '2탄': 154, '2편': 155, '2편도': 156, '2편에': 157, '2편은': 158, '2편을': 159, '2편이': 160, '3': 161, '3,': 162, '30년': 163, '30년전': 164, '30대': 165, '30분': 166, '3D': 167, '3D로': 168, '3개': 169, '3는': 170, '3대': 171, '3류': 172, '3번': 173, '3점': 174, '3점.': 175, '3점도': 176, '3편': 177, '3편은': 178, '3편이': 179, '4': 180, '4,': 181, '4.44': 182, '40대': 183, '4점': 184, '4점이': 185, '5': 186, '5개': 187, '5번': 188, '5분': 189, '5점': 190, '5점은': 191, '60년대': 192, '6살': 193, '6점': 194, '6점대': 195, '70년대': 196, '7점': 197, '7점대': 198, '7점대가': 199, '7점은': 200, '7점이': 201, '80년대': 202, '8점': 203, '8점대': 204, '8점대는': 205, '8점은': 206, '9': 207, '90년대': 208, '90분': 209, '9점': 210, '9점.': 211, '9점대': 212, '9점대는': 213, '9점은': 214, '9점을': 215, ':': 216, ':)': 217, ';': 218, ';;': 219, ';;;': 220, ';;;;': 221, '=': 222, '?': 223, '??': 224, '???': 225, 'A': 226, 'A급': 227, 'Bad': 228, 'Best': 229, 
'B급': 230, 'B급도': 231, 'B급영화': 232, 'CG': 233, 'CGV에서': 234, 'CG가': 235, 'CG는': 236, 'CG도': 237, 'CG로': 238, 'CG에': 239, 'CG와': 240, 'C급': 241, 'DVD': 242, 'DVD로': 243, 'EBS': 244, 'EBS에서': 245, 'Good': 246, 'I': 247, 'KBS': 248, 'M창': 249, 'M창있네': 250, 'No': 251, 'OCN에서': 252, 'OO': 253, 'OOO': 254, 'OOOO': 255, 'OOO기': 256, 'OOO기영화': 257, 'OO같은': 258, 'OST': 259, 'OST가': 260, 'OST는': 261, 'OST도': 262, 'SF': 263, 'TV': 264, 'TV로': 265, 'TV에서': 266, 'The': 267, 'Very': 268, 'You': 269, '[': 270, '[4점]': 271, ']': 272, '^': 273, '^-^': 274, '^^': 275, '^^*': 276, '^^;': 277, '^_^': 278, 'a': 279, 'and': 280, 'b': 281, 'bad': 282, 'bb': 283, 'be': 284, 'best': 285, 'boring': 286, 'but': 287, 'b급': 288, 'cg': 289, 'cg가': 290, 'cg는': 291, 'cg도': 292, 'c급': 293, 'dvd': 294, 'dvd로': 295, 'ebs에서': 296, 'for': 297, 'good': 298, 'good!': 299, 'i': 300, 'in': 301, 'is': 302, 'it': 303, 'love': 304, 'me': 305, 'movie': 306, 'my': 307, 'no': 308, 'not': 309, 'ocn에서': 310, 'of': 311, 'ost': 312, 'ost가': 313, 'ost는': 314, 'ost도': 315, 'ost만': 316, 'sf': 317, 'so': 318, 'the': 319, 'this': 320, 'time': 321, 'to': 322, 'tv': 323, 'tv로': 324, 'tv에서': 325, 'very': 326, 'vs': 327, 'you': 328, '~': 329, '~~': 330, '~~~': 331, '~~~~~~': 332, '~~~~~~~': 333, '↓': 334, '♡': 335, '♥': 336, 'ㄱ': 337, 'ㄱ-': 338, 'ㄱㄱ': 339, 'ㄴㄴ': 340, 'ㄷ': 341, 'ㄷㄷ': 342, 'ㄷㄷㄷ': 343, 'ㄹㅇ': 344, 'ㅁㅊ': 345, 'ㅂ': 346, 'ㅄ': 347, 'ㅅ': 348, 'ㅅㄱ': 349, 'ㅅㅂ': 350, 'ㅆㄹㄱ': 351, 'ㅇ': 352, 'ㅇㅇ': 353, 'ㅇㅇㅇ': 354, 'ㅈㄴ': 355, 'ㅉ': 356, 'ㅉㅉ': 357, 'ㅉㅉㅉ': 358, 'ㅋ': 359, 'ㅋㅋ': 360, 'ㅋㅋㅋ': 361, 'ㅋㅋㅋㅋ': 362, 'ㅋㅋㅋㅋㅋ': 363, 'ㅋㅋㅋㅋㅋㅋ': 364, 'ㅋㅋㅋㅋㅋㅋㅋ': 365, 'ㅋㅋㅋㅋㅋㅋㅋㅋ': 366, 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋ': 367, 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ': 368, 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ': 369, 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ': 370, 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ': 371, 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ': 372, 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ': 373, 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ': 374, 'ㅎ': 375, 'ㅎㄷㄷ': 376, 'ㅎㅎ': 377, 'ㅎㅎㅎ': 378, 'ㅎㅎㅎㅎ': 379, 'ㅗ': 380, 'ㅜ': 381, 'ㅜ.ㅜ': 382, 'ㅜㅜ': 383, 'ㅜㅜㅜ': 384, 'ㅜㅠ': 385, 'ㅠ': 386, 'ㅠ.ㅠ': 387, 'ㅠ_ㅠ': 388, 'ㅠㅜ': 389, 'ㅠㅠ': 390, 'ㅠㅠㅠ': 391, 'ㅠㅠㅠㅠ': 392, 'ㅡ': 393, 'ㅡ,.ㅡ': 394, 'ㅡ.ㅡ': 395, 'ㅡㅜ': 396, 'ㅡㅡ': 397, 'ㅡㅡ;': 398, 'ㅡㅡ;;': 399, 'ㅡㅡㅋ': 400, '가': 401, '가게': 402, '가고': 403, '가까운': 404, '가까이': 405, '가끔': 406, '가끔씩': 407, '가난한': 408, '가네': 409, '가는': 410, '가는게': 411, '가는데': 412, '가는줄': 413, '가능성을': 414, '가능성이': 415, '가능한': 416, '가득': 417, '가득찬': 418, '가득한': 419, '가만히': 420, '가면': 421, '가면갈수록': 422, '가문의': 423, '가벼운': 424, '가볍게': 425, '가볍고': 426, '가볍지': 427, '가서': 428, '가수': 429, '가슴': 430, '가슴속에': 431, '가슴아프고': 432, '가슴아픈': 433, '가슴에': 434, '가슴으로': 435, '가슴을': 436, '가슴이': 437, '가슴찡한': 438, '가시지': 439, '가야': 440, '가운데': 441, '가자': 442, '가장': 443, '가장한': 444, '가정': 445, '가져다': 446, '가족': 447, '가족,': 448, '가족과': 449, '가족끼리': 450, '가족들': 451, '가족들과': 452, '가족들이': 453, '가족애를': 454, '가족에': 455, '가족영화': 456, '가족영화로': 457, '가족은': 458, '가족을': 459, '가족의': 460, '가족이': 461, '가족이랑': 462, '가지': 463, '가지게': 464, '가지고': 465, '가지는': 466, '가진': 467, '가질': 468, '가짜': 469, '가치': 470, '가치가': 471, '가치는': 472, '가치도': 473, '가치를': 474, '가치있는': 475, '가히': 476, '각': 477, '각각': 478, '각각의': 479, '각본': 480, '각본,': 481, '각본.': 482, '각본과': 483, '각본을': 484, '각본이': 485, '각자': 486, '각자의': 487, '간': 488, '간간히': 489, '간다': 490, '간다.': 491, '간단한': 492, '간만에': 493, '간신히': 494, '간에': 495, '간의': 496, '간절히': 497, '간직하고': 498, '갇힌': 499, '갈': 500, '갈등': 501, '갈등과': 502, '갈등이': 503, '갈리는': 504, '갈수록': 505, '갈피를': 506, '감': 507, '감각이': 508, '감각적인': 509, '감독': 510, '감독,': 511, '감독.': 512, '감독..': 513, '감독과': 514, '감독님': 515, '감독님의': 516, '감독님이': 517, '감독도': 518, '감독들': 519, '감독아': 520, '감독에': 521, '감독에게': 522, 
'감독은': 523, '감독을': 524, '감독의': 525, '감독이': 526, '감독이나': 527, '감독이랑': 528, '감독한테': 529, '감동': 530, '감동!': 531, '감동,': 532, '감동.': 533, '감동..': 534, '감동...': 535, '감동~': 536, '감동과': 537, '감동까지': 538, '감동도': 539, '감동도없고': 540, '감동도있고': 541, '감동에': 542, '감동으로': 543, '감동은': 544, '감동을': 545, '감동의': 546, '감동이': 547, '감동이나': 548, '감동이다': 549, '감동입니다.': 550, '감동있게': 551, '감동있고': 552, '감동있는': 553, '감동적': 554, '감동적으로': 555, '감동적이고': 556, '감동적이네요': 557, '감동적이네요.': 558, '감동적이다': 559, '감동적이다.': 560, '감동적이었다': 561, '감동적이었다.': 562, '감동적이었습니다.': 563, '감동적이에요': 564, '감동적인': 565, '감동적임': 566, '감동적입니다': 567, '감동적입니다.': 568, '감명': 569, '감명깊게': 570, '감명깊은': 571, '감명을': 572, '감사드립니다.': 573, '감사하게': 574, '감사하고': 575, '감사합니다': 576, '감사합니다.': 577, '감상': 578, '감상한': 579, '감성': 580, '감성에': 581, '감성을': 582, '감성의': 583, '감성이': 584, '감성적인': 585, '감성팔이': 586, '감안하고': 587, '감안하면': 588, '감안해도': 589, '감이': 590, '감정': 591, '감정선을': 592, '감정에': 593, '감정은': 594, '감정을': 595, '감정의': 596, '감정이': 597, '감정이입이': 598, '감춰진': 599, '감탄': 600, '감회가': 601, '감흥도': 602, '감흥이': 603, '감히': 604, '갑': 605, '갑니다': 606, '갑니다.': 607, '갑자기': 608, '갑작스런': 609, '갓': 610, '갔는데': 611, '갔다': 612, '갔다.': 613, '갔다가': 614, '갔던': 615, '강동원': 616, '강력': 617, '강력한': 618, '강렬하게': 619, '강렬한': 620, '강아지': 621, '강요하는': 622, '강지환': 623, '강추': 624, '강추!': 625, '강추!!': 626, '강추!!!': 627, '강추.': 628, '강추~': 629, '강추합니다': 630, '강추합니다.': 631, '강하게': 632, '강한': 633, '강해서': 634, '강혜정': 635, '갖게': 636, '갖고': 637, '갖는': 638, '갖다': 639, '갖추고': 640, '갖춘': 641, '같고': 642, '같기도': 643, '같네': 644, '같네요': 645, '같네요.': 646, '같다': 647, '같다!': 648, '같다.': 649, '같다..': 650, '같다...': 651, '같다는': 652, '같습니다': 653, '같습니다.': 654, '같아': 655, '같아서': 656, '같아요': 657, '같아요.': 658, '같아요~': 659, '같았다.': 660, '같은': 661, '같은데': 662, '같은데.': 663, '같은데..': 664, '같은데...': 665, '같음': 666, '같음.': 667, '같음..': 668, '같이': 669, '같잖은': 670, '같지': 671, '같지도': 672, '같지만': 673, '개': 674, '개가': 675, '개같은': 676, '개그': 677, '개그가': 678, '개그는': 679, '개그맨': 680, '개그맨들': 681, '개꿀잼': 682, '개나': 683, '개나소나': 684, '개노잼': 685, '개는': 686, '개막장': 687, '개봉': 688, '개봉당시': 689, '개봉을': 690, '개봉하고': 691, '개봉하는': 692, '개봉하지': 693, '개봉한': 694, '개봉할': 695, '개봉했을때': 696, '개뿔': 697, '개뿔.': 698, '개성이': 699, '개성있는': 700, '개연성': 701, '개연성과': 702, '개연성도': 703, '개연성없는': 704, '개연성은': 705, '개연성이': 706, '개인': 707, '개인의': 708, '개인적': 709, '개인적으로': 710, '개인적으로는': 711, '개인적으론': 712, '개인적인': 713, '개잼': 714, '개콘': 715, '개판': 716, '개판.': 717, '객관적으로': 718, '갠적으로': 719, '걍': 720, '거': 721, '거기': 722, '거기다': 723, '거기다가': 724, '거기서': 725, '거기에': 726, '거꾸로': 727, '거냐?': 728, '거다.': 729, '거대한': 730, '거듭할수록': 731, '거라': 732, '거라는': 733, '거랑': 734, '거리가': 735, '거리는': 736, '거부감이': 737, '거북한': 738, '거슬린다.': 739, '거야': 740, '거의': 741, '거의다': 742, '거장': 743, '거장의': 744, '거지': 745, '거지?': 746, '거지같은': 747, '거짓말': 748, '거짓말을': 749, '거친': 750, '거침없이': 751, '거품': 752, '거품이': 753, '건': 754, '건가?': 755, '건데': 756, '건지': 757, '건지..': 758, '건지...': 759, '건진': 760, '건질': 761, '건질게': 762, '걷는': 763, '걸': 764, '걸까?': 765, '걸로': 766, '걸린': 767, '걸작': 768, '걸작!': 769, '걸작.': 770, '걸작을': 771, '걸작이': 772, '걸작이다': 773, '걸작이다.': 774, '검은': 775, '겁나': 776, '겁나게': 777, '겁니다.': 778, '것': 779, '것!': 780, '것,': 781, '것.': 782, '것..': 783, '것...': 784, '것과': 785, '것도': 786, '것들': 787, '것들은': 788, '것들을': 789, '것들이': 790, '것만': 791, '것만으로도': 792, '것보다': 793, '것에': 794, '것으로': 795, '것은': 796, '것을': 797, '것의': 798, '것이': 799, '것이다': 800, '것이다.': 801, '것인가': 802, '것인가?': 803, '것인지': 804, '것입니다.': 805, '것처럼': 806, '겉만': 807, '겉멋만': 808, '게': 809, '게다가': 810, '게이': 811, '게이는': 812, 
'게임': 813, '게임을': 814, '게임의': 815, '겨우': 816, '겨울에': 817, '겨울왕국': 818, '겪는': 819, '견자단': 820, '견자단이': 821, '결과가': 822, '결과는': 823, '결과를': 824, '결국': 825, '결국엔': 826, '결국은': 827, '결론': 828, '결론은': 829, '결론이': 830, '결말': 831, '결말,': 832, '결말.': 833, '결말..': 834, '결말...': 835, '결말까지': 836, '결말도': 837, '결말로': 838, '결말만': 839, '결말에': 840, '결말은': 841, '결말을': 842, '결말의': 843, '결말이': 844, '결정적으로': 845, '결코': 846, '결혼': 847, '결혼하고': 848, '경악을': 849, '경의를': 850, '경이로운': 851, '경찰': 852, '경찰은': 853, '경찰을': 854, '경찰이': 855, '경쾌한': 856, '경험을': 857, '경험이': 858, '곁에': 859, '계기가': 860, '계기로': 861, '계속': 862, '계속해서': 863, '고': 864, '고뇌와': 865, '고등학교': 866, '고등학교때': 867, '고딩때': 868, '고로': 869, '고르는': 870, '고마운': 871, '고맙습니다': 872, '고민을': 873, '고생': 874, '고생한': 875, '고스란히': 876, '고양이': 877, '고양이가': 878, '고인의': 879, '고작': 880, '고전': 881, '고질라': 882, '고통과': 883, '고통을': 884, '곧': 885, '골': 886, '골때리는': 887, '골라서': 888, '곳곳에': 889, '곳에': 890, '곳에서': 891, '곳이': 892, '공간': 893, '공간에서': 894, '공감': 895, '공감가는': 896, '공감대': 897, '공감도': 898, '공감되고': 899, '공감되는': 900, '공감되지': 901, '공감은': 902, '공감을': 903, '공감이': 904, '공감하고': 905, '공감하기': 906, '공감할': 907, '공부': 908, '공유': 909, '공존하는': 910, '공중파': 911, '공짜로': 912, '공포': 913, '공포가': 914, '공포는': 915, '공포도': 916, '공포를': 917, '공포물': 918, '공포에': 919, '공포영화': 920, '공포영화.': 921, '공포영화가': 922, '공포영화는': 923, '공포영화라고': 924, '공포영화를': 925, '공포영화의': 926, '공포와': 927, '공포의': 928, '공허한': 929, '과': 930, '과감한': 931, '과거': 932, '과거가': 933, '과거를': 934, '과거에': 935, '과거와': 936, '과거의': 937, '과도한': 938, '과언이': 939, '과연': 940, '과장된': 941, '과정은': 942, '과정을': 943, '과정이': 944, '과하게': 945, '과한': 946, '관객': 947, '관객들': 948, '관객들에게': 949, '관객들을': 950, '관객들의': 951, '관객들이': 952, '관객에게': 953, '관객은': 954, '관객을': 955, '관객의': 956, '관객이': 957, '관계': 958, '관계가': 959, '관계를': 960, '관계에': 961, '관계의': 962, '관람': 963, '관람객': 964, '관련': 965, '관심': 966, '관심을': 967, '관심이': 968, '관점에서': 969, '관점이': 970, '관한': 971, '광고': 972, '광고를': 973, '광기를': 974, '괜찮게': 975, '괜찮고': 976, '괜찮네요': 977, '괜찮다': 978, '괜찮다.': 979, '괜찮다고': 980, '괜찮아서': 981, '괜찮았고': 982, '괜찮았는데': 983, '괜찮았다': 984, '괜찮았다.': 985, '괜찮았던': 986, '괜찮았음': 987, '괜찮았음.': 988, '괜찮은': 989, '괜찮은데': 990, '괜찮을': 991, '괜찮음': 992, '괜찮음.': 993, '괜찮지만': 994, '괜히': 995, '괴물': 996, '괴물이': 997, '굉장한': 998, '굉장히': 999, '교과서': 1000, '교훈': 1001, '교훈과': 1002, '교훈도': 1003, '교훈은': 1004, '교훈을': 1005, '교훈이': 1006, '교훈적인': 1007, '구려': 1008, '구리고': 1009, '구리다': 1010, '구분이': 1011, '구석이': 1012, '구성': 1013, '구성,': 1014, '구성.': 1015, '구성과': 1016, '구성도': 1017, '구성은': 1018, '구성을': 1019, '구성이': 1020, '구지': 1021, '구하기': 1022, '구할': 1023, '구해서': 1024, '구혜선': 1025, '국내': 1026, '국내에서': 1027, '국민': 1028, '국민이': 1029, '국산': 1030, '국어책': 1031, '군': 1032, '군대': 1033, '군대를': 1034, '군대에서': 1035, '군더더기': 1036, '군인': 1037, '굳': 1038, '굳!': 1039, '굳굳': 1040, '굳이': 1041, '굿': 1042, '굿!': 1043, '굿!!': 1044, '굿,': 1045, '굿.': 1046, '굿~': 1047, '굿굿': 1048, '굿굿굿': 1049, '굿바이': 1050, '궁금하게': 1051, '궁금하다': 1052, '궁금하다.': 1053, '궁금해서': 1054, '권상우': 1055, '권하고': 1056, '귀가': 1057, '귀를': 1058, '귀신': 1059, '귀신은': 1060, '귀신이': 1061, '귀에': 1062, '귀여운': 1063, '귀여움': 1064, '귀여워': 1065, '귀여워서': 1066, '귀여워요': 1067, '귀엽고': 1068, '귀엽다': 1069, '귀찮아서': 1070, '그': 1071, '그가': 1072, '그거': 1073, '그건': 1074, '그걸': 1075, '그걸로': 1076, '그것': 1077, '그것도': 1078, '그것은': 1079, '그것을': 1080, '그것이': 1081, '그게': 1082, '그나마': 1083, '그나저나': 1084, '그냥': 1085, '그냥..': 1086, '그냥...': 1087, '그냥저냥': 1088, '그녀': 1089, '그녀가': 1090, '그녀는': 1091, '그녀들의': 1092, '그녀를': 1093, '그녀의': 1094, '그놈의': 1095, '그는': 1096, '그다지': 1097, '그다지...': 1098, '그닥': 1099, 
'그닥..': 1100, '그닥...': 1101, '그당시': 1102, '그대로': 1103, '그대로의': 1104, '그동안': 1105, '그들': 1106, '그들만의': 1107, '그들에게': 1108, '그들은': 1109, '그들을': 1110, '그들의': 1111, '그들이': 1112, '그때': 1113, '그때는': 1114, '그때의': 1115, '그땐': 1116, '그래': 1117, '그래,': 1118, '그래도': 1119, '그래두': 1120, '그래서': 1121, '그래픽': 1122, '그래픽도': 1123, '그래픽은': 1124, '그래픽이': 1125, '그러고': 1126, '그러나': 1127, '그러니': 1128, '그러니까': 1129, '그러면': 1130, '그러면서': 1131, '그러지': 1132, '그럭저럭': 1133, '그런': 1134, '그런가': 1135, '그런가?': 1136, '그런거': 1137, '그런건': 1138, '그런걸': 1139, '그런게': 1140, '그런대로': 1141, '그런데': 1142, '그런지': 1143, '그럴': 1144, '그럴듯하게': 1145, '그럴듯한': 1146, '그럴싸하게': 1147, '그럼': 1148, '그럼에도': 1149, '그렇게': 1150, '그렇고': 1151, '그렇고..': 1152, '그렇다': 1153, '그렇다.': 1154, '그렇다고': 1155, '그렇다쳐도': 1156, '그렇다치고': 1157, '그렇지': 1158, '그렇지만': 1159, '그려낸': 1160, '그려냈다.': 1161, '그려진': 1162, '그로테스크한': 1163, '그를': 1164, '그리': 1165, '그리고': 1166, '그리고,': 1167, '그리운': 1168, '그리워': 1169, '그린': 1170, '그림': 1171, '그림도': 1172, '그림이': 1173, '그림체가': 1174, '그림체도': 1175, '그립다': 1176, '그립다.': 1177, '그만': 1178, '그만좀': 1179, '그만큼': 1180, '그보다': 1181, '그시절': 1182, '그야말로': 1183, '그에': 1184, '그와': 1185, '그외': 1186, '그은': 1187, '그의': 1188, '그이상': 1189, '그이상도': 1190, '그이하도': 1191, '그자체': 1192, '그저': 1193, '그저그런': 1194, '그정도로': 1195, '그중': 1196, '그지': 1197, '그지같은': 1198, '그치만': 1199, '그토록': 1200, '극': 1201, '극과': 1202, '극단적인': 1203, '극본': 1204, '극으로': 1205, '극을': 1206, '극의': 1207, '극장': 1208, '극장가서': 1209, '극장서': 1210, '극장에': 1211, '극장에서': 1212, '극장판': 1213, '극장판은': 1214, '극장판을': 1215, '극장판이': 1216, '극적인': 1217, '극중': 1218, '극치': 1219, '극치.': 1220, '극치를': 1221, '극한의': 1222, '극혐': 1223, '극히': 1224, '근': 1225, '근데': 1226, '근래': 1227, '근래에': 1228, '글': 1229, '글고': 1230, '글구': 1231, '글쎄': 1232, '글쎄.': 1233, '글쎄..': 1234, '글쎄...': 1235, '글을': 1236, '금방': 1237, '금치': 1238, '급': 1239, '급이': 1240, '급하게': 1241, '기': 1242, '기가': 1243, '기괴한': 1244, '기다리고': 1245, '기다리는': 1246, '기대': 1247, '기대가': 1248, '기대는': 1249, '기대도': 1250, '기대되는': 1251, '기대된다': 1252, '기대된다.': 1253, '기대됨': 1254, '기대됩니다': 1255, '기대됩니다.': 1256, '기대를': 1257, '기대보다': 1258, '기대안하고': 1259, '기대안하고봤는데': 1260, '기대안했는데': 1261, '기대없이': 1262, '기대에': 1263, '기대이상': 1264, '기대이상으로': 1265, '기대이하': 1266, '기대하게': 1267, '기대하고': 1268, '기대하고봤는데': 1269, '기대하는': 1270, '기대하며': 1271, '기대하면서': 1272, '기대하지': 1273, '기대한': 1274, '기대합니다': 1275, '기대했는데': 1276, '기대했다가': 1277, '기대했던': 1278, '기대했지만': 1279, '기독교': 1280, '기막힌': 1281, '기묘한': 1282, '기발하고': 1283, '기발한': 1284, '기본': 1285, '기본적으로': 1286, '기본적인': 1287, '기분': 1288, '기분.': 1289, '기분..': 1290, '기분나쁜': 1291, '기분만': 1292, '기분을': 1293, '기분이': 1294, '기분좋게': 1295, '기분좋은': 1296, '기사': 1297, '기억': 1298, '기억.': 1299, '기억나는': 1300, '기억나는건': 1301, '기억난다': 1302, '기억난다.': 1303, '기억도': 1304, '기억될': 1305, '기억속에': 1306, '기억에': 1307, '기억에서': 1308, '기억은': 1309, '기억을': 1310, '기억이': 1311, '기억이..': 1312, '기억이...': 1313, '기억하고': 1314, '기억하는': 1315, '기자': 1316, '기존': 1317, '기존에': 1318, '기존의': 1319, '기준으로': 1320, '기준이': 1321, '기타': 1322, '기타노': 1323, '기회가': 1324, '기회를': 1325, '긴': 1326, '긴박감': 1327, '긴박감이': 1328, '긴장감': 1329, '긴장감,': 1330, '긴장감과': 1331, '긴장감도': 1332, '긴장감은': 1333, '긴장감을': 1334, '긴장감이': 1335, '긴장을': 1336, '길': 1337, '길게': 1338, '길고': 1339, '길다': 1340, '길다.': 1341, '길어서': 1342, '길을': 1343, '길이': 1344, '김강우': 1345, '김구라': 1346, '김기덕': 1347, '김래원': 1348, '김민종': 1349, '김수로': 1350, '김수현': 1351, '김태희': 1352, '김혜수': 1353, '김희선': 1354, '깊게': 1355, '깊고': 1356, '깊다.': 1357, '깊은': 1358, '깊이': 1359, '깊이가': 1360, '깊이있는': 1361, '까는': 1362, '까지': 1363, '까지는': 1364, '깔끔하게': 1365, '깔끔하고': 1366, '깔끔한': 1367, '깜놀': 
1368, '깜짝': 1369, '깨고': 1370, '깨닫게': 1371, '깨닫고': 1372, '깨알같은': 1373, '꺼버렸다.': 1374, '껏다': 1375, '껐다': 1376, '껐다.': 1377, '꼬마': 1378, '꼬박꼬박': 1379, '꼭': 1380, '꼭보세요': 1381, '꼭봐라': 1382, '꼴': 1383, '꼽는': 1384, '꼽으라면': 1385, '꼽을': 1386, '꼽히는': 1387, '꽃보다': 1388, '꽉': 1389, '꽝': 1390, '꽤': 1391, '꽤나': 1392, '꾀': 1393, '꾸역꾸역': 1394, '꾸준히': 1395, '꾹': 1396, '꿀잼': 1397, '꿈': 1398, '꿈과': 1399, '꿈꾸는': 1400, '꿈에': 1401, '꿈을': 1402, '꿈이': 1403, '끄고': 1404, '끄는': 1405, '끊기는': 1406, '끊이지': 1407, '끊임없는': 1408, '끊임없이': 1409, '끌고': 1410, '끌리는': 1411, '끌어': 1412, '끌지': 1413, '끔': 1414, '끔찍한': 1415, '끝': 1416, '끝.': 1417, '끝까지': 1418, '끝나고': 1419, '끝나고도': 1420, '끝나는': 1421, '끝나면': 1422, '끝나서': 1423, '끝나지': 1424, '끝난': 1425, '끝난다.': 1426, '끝날': 1427, '끝날때': 1428, '끝날때까지': 1429, '끝남': 1430, '끝내': 1431, '끝내주게': 1432, '끝내주는': 1433, '끝도': 1434, '끝없는': 1435, '끝에': 1436, '끝에서': 1437, '끝으로': 1438, '끝은': 1439, '끝을': 1440, '끝이': 1441, '끝판왕': 1442, '끼고': 1443, '끼워': 1444, '나': 1445, '나가고': 1446, '나가는': 1447, '나같은': 1448, '나게': 1449, '나고': 1450, '나까지': 1451, '나네': 1452, '나네요': 1453, '나네요.': 1454, '나는': 1455, '나니': 1456, '나도': 1457, '나도모르게': 1458, '나두': 1459, '나라': 1460, '나라가': 1461, '나라는': 1462, '나라를': 1463, '나라면': 1464, '나라에': 1465, '나라의': 1466, '나랑': 1467, '나레이션': 1468, '나로써는': 1469, '나루토': 1470, '나를': 1471, '나름': 1472, '나름대로': 1473, '나름의': 1474, '나만': 1475, '나머지': 1476, '나머지는': 1477, '나머진': 1478, '나면': 1479, '나발이고': 1480, '나쁘게': 1481, '나쁘지': 1482, '나쁘지는': 1483, '나쁘진': 1484, '나쁜': 1485, '나서': 1486, '나서도': 1487, '나약한': 1488, '나에게': 1489, '나에게는': 1490, '나에게도': 1491, '나에겐': 1492, '나오게': 1493, '나오고': 1494, '나오기': 1495, '나오긴': 1496, '나오길': 1497, '나오길래': 1498, '나오네': 1499, '나오네요': 1500, '나오네요.': 1501, '나오는': 1502, '나오는거': 1503, '나오는건': 1504, '나오는게': 1505, '나오는데': 1506, '나오는지': 1507, '나오니': 1508, '나오니까': 1509, '나오다니': 1510, '나오던': 1511, '나오면': 1512, '나오면서': 1513, '나오지': 1514, '나오지도': 1515, '나오지만': 1516, '나온': 1517, '나온거': 1518, '나온건': 1519, '나온게': 1520, '나온다': 1521, '나온다.': 1522, '나온다고': 1523, '나온다는': 1524, '나올': 1525, '나올까': 1526, '나올때': 1527, '나올때마다': 1528, '나올법한': 1529, '나올수': 1530, '나옴': 1531, '나옴.': 1532, '나와': 1533, '나와는': 1534, '나와도': 1535, '나와서': 1536, '나와야': 1537, '나왔는데': 1538, '나왔는지': 1539, '나왔다': 1540, '나왔다.': 1541, '나왔다고': 1542, '나왔다는': 1543, '나왔다면': 1544, '나왔던': 1545, '나왔으면': 1546, '나왔을': 1547, '나왔음': 1548, '나은': 1549, '나은듯': 1550, '나을': 1551, '나을듯': 1552, '나음': 1553, '나음.': 1554, '나의': 1555, '나이': 1556, '나이가': 1557, '나이를': 1558, '나이먹고': 1559, '나이에': 1560, '나이트': 1561, '나중에': 1562, '나중에는': 1563, '나중엔': 1564, '나지': 1565, '나참': 1566, '나처럼': 1567, '나타나서': 1568, '나타난': 1569, '나타낸': 1570, '나한테': 1571, '나한테는': 1572, '나혼자': 1573, '나홀로': 1574, '낚시': 1575, '낚여서': 1576, '낚였네': 1577, '낚였다': 1578, '낚였다.': 1579, '낚이지': 1580, '낚인': 1581, '난': 1582, '난다': 1583, '난다.': 1584, '난무하는': 1585, '난생': 1586, '난잡한': 1587, '난해한': 1588, '날': 1589, '날로': 1590, '날리고': 1591, '날이': 1592, '남': 1593, '남.': 1594, '남고': 1595, '남기고': 1596, '남기는': 1597, '남긴': 1598, '남긴다.': 1599, '남네요': 1600, '남네요.': 1601, '남녀': 1602, '남녀노소': 1603, '남녀의': 1604, '남는': 1605, '남는건': 1606, '남는게': 1607, '남는다': 1608, '남는다.': 1609, '남는다..': 1610, '남들': 1611, '남들이': 1612, '남매의': 1613, '남아': 1614, '남아있는': 1615, '남았다.': 1616, '남았던': 1617, '남은': 1618, '남을': 1619, '남의': 1620, '남이': 1621, '남자': 1622, '남자가': 1623, '남자는': 1624, '남자들': 1625, '남자들은': 1626, '남자들의': 1627, '남자들이': 1628, '남자라면': 1629, '남자랑': 1630, '남자를': 1631, '남자와': 1632, '남자의': 1633, '남자인': 1634, '남자주인공': 1635, '남자주인공의': 1636, '남자주인공이': 1637, '남자지만': 1638, '남주': 1639, '남주가': 1640, '남주는': 1641, '남주의': 1642, '남주인공': 
1643, '남지': 1644, '남편': 1645, '남편이': 1646, '납니다.': 1647, '납득이': 1648, '낫겠다': 1649, '낫겠다.': 1650, '낫다': 1651, '낫다.': 1652, '낫다고': 1653, '낫지': 1654, '났다': 1655, '났다.': 1656, '낭비': 1657, '낭비.': 1658, '낭비한': 1659, '낮게': 1660, '낮네': 1661, '낮네..': 1662, '낮네요': 1663, '낮네요.': 1664, '낮다': 1665, '낮다.': 1666, '낮아': 1667, '낮아서': 1668, '낮은': 1669, '낮은거': 1670, '낮은게': 1671, '낮은지': 1672, '낮지': 1673, '낮지?': 1674, '낯선': 1675, '낳은': 1676, '내': 1677, '내가': 1678, '내가본': 1679, '내게': 1680, '내게는': 1681, '내겐': 1682, '내고': 1683, '내내': 1684, '내는': 1685, '내돈': 1686, '내리는': 1687, '내면': 1688, '내면을': 1689, '내면의': 1690, '내생애': 1691, '내생에': 1692, '내생의': 1693, '내세운': 1694, '내시간': 1695, '내용': 1696, '내용,': 1697, '내용.': 1698, '내용과': 1699, '내용까지': 1700, '내용도': 1701, '내용도없고': 1702, '내용들이': 1703, '내용만': 1704, '내용없고': 1705, '내용없는': 1706, '내용에': 1707, '내용으로': 1708, '내용은': 1709, '내용을': 1710, '내용의': 1711, '내용이': 1712, '내용이나': 1713, '내용이라': 1714, '내용이지만': 1715, '내용인지': 1716, '내용인지도': 1717, '내용전개': 1718, '내용전개가': 1719, '내인생': 1720, '내인생의': 1721, '내일': 1722, '내일이': 1723, '내취향은': 1724, '내평생': 1725, '낸': 1726, '냄새가': 1727, '냉혹한': 1728, '너': 1729, '너는': 1730, '너도': 1731, '너를': 1732, '너무': 1733, '너무나': 1734, '너무나도': 1735, '너무너무': 1736, '너무도': 1737, '너무재밌어요': 1738, '너무좋아요': 1739, '너무하네': 1740, '너무하다': 1741, '너의': 1742, '넋을': 1743, '넌': 1744, '널': 1745, '넘': 1746, '넘게': 1747, '넘나드는': 1748, '넘넘': 1749, '넘는': 1750, '넘어': 1751, '넘어가는': 1752, '넘어서': 1753, '넘어선': 1754, '넘은': 1755, '넘치게': 1756, '넘치고': 1757, '넘치는': 1758, '넣고': 1759, '넣어': 1760, '넣어서': 1761, '넣은': 1762, '네': 1763, '네가': 1764, '네이버': 1765, '네이버는': 1766, '네이버에': 1767, '네이버에서': 1768, '네이버평점': 1769, '네티즌': 1770, '년': 1771, '년이': 1772, '노': 1773, '노골적으로': 1774, '노골적인': 1775, '노는': 1776, '노답': 1777, '노래': 1778, '노래가': 1779, '노래는': 1780, '노래도': 1781, '노래를': 1782, '노래만': 1783, '노래와': 1784, '노력은': 1785, '노력이': 1786, '노력한': 1787, '노무현': 1788, '노잼': 1789, '노잼.': 1790, '노출': 1791, '노출도': 1792, '노출은': 1793, '노출이': 1794, '녹아있는': 1795, '놀라게': 1796, '놀라고': 1797, '놀라서': 1798, '놀라운': 1799, '놀라울': 1800, '놀란': 1801, '놀랍고': 1802, '놀랍다': 1803, '놀랍다.': 1804, '놀랐다': 1805, '놀랐다.': 1806, '놈': 1807, '놈들': 1808, '놈들은': 1809, '놈들이': 1810, '놈은': 1811, '놈의': 1812, '놈이': 1813, '높게': 1814, '높고': 1815, '높길래': 1816, '높네요.': 1817, '높다': 1818, '높다.': 1819, '높아': 1820, '높아서': 1821, '높은': 1822, '높은거': 1823, '높은지': 1824, '높이': 1825, '높지': 1826, '놓고': 1827, '놓은': 1828, '놓을': 1829, '놓치면': 1830, '놓치지': 1831, '놓친': 1832, '놓칠': 1833, '뇌가': 1834, '뇌리에': 1835, '누가': 1836, '누가봐도': 1837, '누구': 1838, '누구나': 1839, '누구냐': 1840, '누구도': 1841, '누구든': 1842, '누구를': 1843, '누구보다': 1844, '누구에게나': 1845, '누구인지': 1846, '누군가': 1847, '누군가가': 1848, '누군가를': 1849, '누군가에겐': 1850, '누군지': 1851, '누나': 1852, '누님': 1853, '누워서': 1854, '눈': 1855, '눈과': 1856, '눈도': 1857, '눈뜨고': 1858, '눈만': 1859, '눈물': 1860, '눈물과': 1861, '눈물나게': 1862, '눈물나는': 1863, '눈물도': 1864, '눈물은': 1865, '눈물을': 1866, '눈물이': 1867, '눈부신': 1868, '눈빛': 1869, '눈빛은': 1870, '눈빛이': 1871, '눈에': 1872, '눈으로': 1873, '눈은': 1874, '눈을': 1875, '눈이': 1876, '눈치': 1877, '느껴야': 1878, '느껴져': 1879, '느껴져서': 1880, '느껴졌다': 1881, '느껴졌다.': 1882, '느껴지는': 1883, '느껴지지': 1884, '느껴진': 1885, '느껴진다': 1886, '느껴진다.': 1887, '느껴질': 1888, '느껴짐': 1889, '느꼈다': 1890, '느꼈다.': 1891, '느꼈던': 1892, '느꼈습니다.': 1893, '느끼게': 1894, '느끼고': 1895, '느끼는': 1896, '느끼는게': 1897, '느끼며': 1898, '느끼지': 1899, '느낀': 1900, '느낀건': 1901, '느낀게': 1902, '느낀다': 1903, '느낀다.': 1904, '느낄': 1905, '느낄수': 1906, '느낄수있는': 1907, '느낌': 1908, '느낌!': 1909, '느낌,': 1910, '느낌.': 1911, '느낌..': 1912, '느낌...': 1913, '느낌?': 1914, '느낌과': 1915, '느낌도': 1916, '느낌으로': 1917, '느낌은': 
1918, '느낌을': 1919, '느낌의': 1920, '느낌이': 1921, '느낌이다': 1922, '느낌이다.': 1923, '느리고': 1924, '느와르': 1925, '느와르의': 1926, '는': 1927, '늘': 1928, '늘어지는': 1929, '늙은': 1930, '능가하는': 1931, '능력을': 1932, '능력이': 1933, '늦게': 1934, '늦은': 1935, '니': 1936, '니가': 1937, '니네': 1938, '니들': 1939, '니들은': 1940, '니들이': 1941, '니미': 1942, '니콜': 1943, '니콜라스': 1944, '님': 1945, '님들': 1946, '다': 1947, '다가': 1948, '다가오는': 1949, '다같이': 1950, '다녀온': 1951, '다는': 1952, '다니는': 1953, '다니엘': 1954, '다들': 1955, '다루고': 1956, '다루는': 1957, '다룬': 1958, '다르게': 1959, '다르겠지만': 1960, '다르고': 1961, '다르다': 1962, '다르다.': 1963, '다르지': 1964, '다르지만': 1965, '다른': 1966, '다른거': 1967, '다른건': 1968, '다른걸': 1969, '다른게': 1970, '다른영화': 1971, '다를게': 1972, '다름': 1973, '다만': 1974, '다문화': 1975, '다보고': 1976, '다봤는데': 1977, '다세포소녀': 1978, '다소': 1979, '다시': 1980, '다시금': 1981, '다시는': 1982, '다시보게': 1983, '다시보고': 1984, '다시보고싶은': 1985, '다시보기': 1986, '다시보기로': 1987, '다시보니': 1988, '다시보니까': 1989, '다시보면': 1990, '다시봐도': 1991, '다시봤는데': 1992, '다시한번': 1993, '다신': 1994, '다양한': 1995, '다운': 1996, '다운로드': 1997, '다운받아': 1998, '다운받아서': 1999, '다음': 2000, '다음에': 2001, '다음엔': 2002, '다음으로': 2003, '다음편이': 2004, '다이하드': 2005, '다좋은데': 2006, '다짜고짜': 2007, '다큐': 2008, '다큐.': 2009, '다큐가': 2010, '다큐는': 2011, '다큐로': 2012, '다큐를': 2013, '다큐멘터리': 2014, '다큐멘터리가': 2015, '다큐의': 2016, '다크': 2017, '다크나이트': 2018, '다행': 2019, '다행이다': 2020, '다행이다.': 2021, '닥치고': 2022, '단': 2023, '단순': 2024, '단순하고': 2025, '단순한': 2026, '단순히': 2027, '단어': 2028, '단어가': 2029, '단연': 2030, '단연코': 2031, '단점은': 2032, '단지': 2033, '단체로': 2034, '단편': 2035, '달고': 2036, '달달하고': 2037, '달달한': 2038, '달라': 2039, '달라서': 2040, '달리': 2041, '달리는': 2042, '달린': 2043, '달콤한': 2044, '닮은': 2045, '담겨': 2046, '담겨있는': 2047, '담겨져': 2048, '담고': 2049, '담긴': 2050, '담담하게': 2051, '담담한': 2052, '담배': 2053, '담백하게': 2054, '담백한': 2055, '담아낸': 2056, '담은': 2057, '답게': 2058, '답답': 2059, '답답하게': 2060, '답답하고': 2061, '답답하다': 2062, '답답하다.': 2063, '답답한': 2064, '답답해': 2065, '답답해서': 2066, '답을': 2067, '답이': 2068, '답지': 2069, '당당히': 2070, '당대': 2071, '당시': 2072, '당시에': 2073, '당시에는': 2074, '당시에도': 2075, '당시엔': 2076, '당시의': 2077, '당신': 2078, '당신들이': 2079, '당신은': 2080, '당신을': 2081, '당신의': 2082, '당신이': 2083, '당연': 2084, '당연한': 2085, '당연히': 2086, '당장': 2087, '당최': 2088, '당췌': 2089, '당하고': 2090, '당하는': 2091, '당한': 2092, '당할': 2093, '대': 2094, '대놓고': 2095, '대단': 2096, '대단하고': 2097, '대단하다': 2098, '대단하다.': 2099, '대단하다...': 2100, '대단하다고': 2101, '대단한': 2102, '대단합니다': 2103, '대단히': 2104, '대략': 2105, '대로': 2106, '대박': 2107, '대박!': 2108, '대박!!': 2109, '대박.': 2110, '대박...': 2111, '대박이다': 2112, '대박인': 2113, '대박임': 2114, '대본': 2115, '대본을': 2116, '대부': 2117, '대부분': 2118, '대부분의': 2119, '대사': 2120, '대사,': 2121, '대사가': 2122, '대사는': 2123, '대사도': 2124, '대사들이': 2125, '대사로': 2126, '대사를': 2127, '대사만': 2128, '대사에': 2129, '대사와': 2130, '대상이': 2131, '대신': 2132, '대작': 2133, '대작은': 2134, '대작을': 2135, '대체': 2136, '대체로': 2137, '대체적으로': 2138, '대충': 2139, '대통령': 2140, '대표': 2141, '대표작': 2142, '대표적인': 2143, '대하여': 2144, '대학': 2145, '대학교': 2146, '대학생': 2147, '대한': 2148, '대한민국': 2149, '대한민국에서': 2150, '대한민국을': 2151, '대한민국의': 2152, '대해': 2153, '대해서': 2154, '대해서도': 2155, '대화가': 2156, '댓글': 2157, '댓글도': 2158, '댓글보고': 2159, '댓글에': 2160, '댓글을': 2161, '댓글이': 2162, '더': 2163, '더더욱': 2164, '더러운': 2165, '더러워': 2166, '더럽게': 2167, '더럽고': 2168, '더럽다': 2169, '더불어': 2170, '더빙': 2171, '더빙도': 2172, '더빙으로': 2173, '더빙은': 2174, '더빙을': 2175, '더빙이': 2176, '더욱': 2177, '더욱더': 2178, '더이상': 2179, '더한': 2180, '덕분에': 2181, '덕에': 2182, '던지는': 2183, '덜': 2184, '덤으로': 2185, '데': 2186, '데려다': 2187, '데려다가': 2188, '데리고': 2189, '데이': 2190, '데이빗': 
2191, '덴젤': 2192, '도': 2193, '도는': 2194, '도대체': 2195, '도대체가': 2196, '도데체': 2197, '도라에몽': 2198, '도무지': 2199, '도움이': 2200, '도저히': 2201, '도중에': 2202, '도통': 2203, '독립': 2204, '독립영화': 2205, '독일': 2206, '독특하게': 2207, '독특하고': 2208, '독특한': 2209, '돈': 2210, '돈과': 2211, '돈낭비': 2212, '돈내고': 2213, '돈도': 2214, '돈만': 2215, '돈아까운': 2216, '돈아까움': 2217, '돈아까워': 2218, '돈아깝고': 2219, '돈아깝다': 2220, '돈에': 2221, '돈으로': 2222, '돈은': 2223, '돈을': 2224, '돈이': 2225, '돈주고': 2226, '돋는': 2227, '돋보였던': 2228, '돋보이는': 2229, '돋보인': 2230, '돋보인다': 2231, '돋보인다.': 2232, '돋보임.': 2233, '돌려': 2234, '돌리고': 2235, '돌리다가': 2236, '돌아': 2237, '돌아가고': 2238, '돌아가는': 2239, '돌아보게': 2240, '돌아온': 2241, '동네': 2242, '동물': 2243, '동물의': 2244, '동생': 2245, '동생이': 2246, '동생이랑': 2247, '동성애': 2248, '동시에': 2249, '동심을': 2250, '동안': 2251, '동영상': 2252, '동화': 2253, '동화같은': 2254, '동화를': 2255, '됐는데': 2256, '됐다.': 2257, '되게': 2258, '되고': 2259, '되기': 2260, '되길': 2261, '되나?': 2262, '되네요': 2263, '되네요.': 2264, '되는': 2265, '되는거': 2266, '되는데': 2267, '되는지': 2268, '되니': 2269, '되도': 2270, '되도않는': 2271, '되돌아': 2272, '되돌아보게': 2273, '되려': 2274, '되면': 2275, '되버린': 2276, '되서': 2277, '되야': 2278, '되어': 2279, '되어서': 2280, '되어야': 2281, '되었는데': 2282, '되었다': 2283, '되었다.': 2284, '되었던': 2285, '되었습니다.': 2286, '되었으면': 2287, '되었지만': 2288, '되지': 2289, '되질': 2290, '된': 2291, '된거': 2292, '된게': 2293, '된다': 2294, '된다.': 2295, '된다고': 2296, '된다는': 2297, '된다면': 2298, '될': 2299, '될것': 2300, '될듯': 2301, '될수': 2302, '될지': 2303, '됨': 2304, '됨.': 2305, '됩니다': 2306, '됩니다.': 2307, '됬는데': 2308, '두': 2309, '두고': 2310, '두고두고': 2311, '두근두근': 2312, '두배우의': 2313, '두번': 2314, '두번이나': 2315, '두번째': 2316, '두분': 2317, '두사람의': 2318, '두시간': 2319, '둔': 2320, '둘': 2321, '둘다': 2322, '둘을': 2323, '둘의': 2324, '둘이': 2325, '둘째': 2326, '둘째치고': 2327, '뒤': 2328, '뒤늦게': 2329, '뒤로': 2330, '뒤로갈수록': 2331, '뒤에': 2332, '뒤죽박죽': 2333, '뒤통수': 2334, '드': 2335, '드네요': 2336, '드네요.': 2337, '드는': 2338, '드니로': 2339, '드니로의': 2340, '드디어': 2341, '드라마': 2342, '드라마!': 2343, '드라마!!': 2344, '드라마!!!': 2345, '드라마,': 2346, '드라마.': 2347, '드라마..': 2348, '드라마...': 2349, '드라마~': 2350, '드라마가': 2351, '드라마같은': 2352, '드라마나': 2353, '드라마네요': 2354, '드라마는': 2355, '드라마다': 2356, '드라마다.': 2357, '드라마도': 2358, '드라마라고': 2359, '드라마로': 2360, '드라마를': 2361, '드라마보다': 2362, '드라마에': 2363, '드라마에서': 2364, '드라마였다.': 2365, '드라마와': 2366, '드라마의': 2367, '드라마인데': 2368, '드라마임': 2369, '드라마입니다.': 2370, '드라마중': 2371, '드라마중에': 2372, '드라마지만': 2373, '드러나는': 2374, '드러난': 2375, '드럽게': 2376, '드릅게': 2377, '드리고': 2378, '드림': 2379, '드림웍스': 2380, '드립니다': 2381, '드립니다.': 2382, '드문': 2383, '든': 2384, '든다': 2385, '든다.': 2386, '듣고': 2387, '듣기': 2388, '듣는': 2389, '듣보잡': 2390, '들': 2391, '들게': 2392, '들고': 2393, '들어': 2394, '들어가': 2395, '들어가는': 2396, '들어가서': 2397, '들어간': 2398, '들어도': 2399, '들어서': 2400, '들었다': 2401, '들었다.': 2402, '들었던': 2403, '들었습니다.': 2404, '들었지만': 2405, '들여서': 2406, '들으면': 2407, '들을': 2408, '들이': 2409, '들지': 2410, '들지만': 2411, '듬': 2412, '듭니다.': 2413, '듯': 2414, '듯!': 2415, '듯.': 2416, '듯..': 2417, '듯...': 2418, '듯이': 2419, '듯한': 2420, '등': 2421, '등등': 2422, '등에': 2423, '등을': 2424, '등의': 2425, '등이': 2426, '등장': 2427, '등장인물': 2428, '등장인물들의': 2429, '등장인물들이': 2430, '등장인물이': 2431, '등장하는': 2432, '디게': 2433, '디워': 2434, '디즈니': 2435, '디즈니의': 2436, '디카프리오': 2437, '디테일': 2438, '디테일이': 2439, '디테일한': 2440, '따듯한': 2441, '따뜻하게': 2442, '따뜻하고': 2443, '따뜻하다.': 2444, '따뜻한': 2445, '따뜻함이': 2446, '따뜻해': 2447, '따뜻해지고': 2448, '따뜻해지는': 2449, '따라': 2450, '따라가지': 2451, '따라서': 2452, '따라하기': 2453, '따라한': 2454, '따로': 2455, '따른': 2456, '따분한': 2457, '따스한': 2458, '따위': 2459, '따윈': 2460, '따지면': 2461, '딱': 2462, 
'딱봐도': 2463, '딱히': 2464, '딴': 2465, '딴건': 2466, '딸': 2467, '딸과': 2468, '딸을': 2469, '딸의': 2470, '딸이': 2471, '땀을': 2472, '땀이': 2473, '때': 2474, '때,': 2475, '때가': 2476, '때는': 2477, '때도': 2478, '때려': 2479, '때론': 2480, '때리고': 2481, '때마다': 2482, '때매': 2483, '때문': 2484, '때문에': 2485, '때문이다.': 2486, '때묻지': 2487, '때부터': 2488, '때의': 2489, '땐': 2490, '땜에': 2491, '떄문에': 2492, '떠나': 2493, '떠나는': 2494, '떠나서': 2495, '떠나지': 2496, '떠오르는': 2497, '떠올리게': 2498, '떨어져': 2499, '떨어져서': 2500, '떨어지고': 2501, '떨어지는': 2502, '떨어지지만': 2503, '떨어진': 2504, '떨어진다': 2505, '떨어진다.': 2506, '떨어짐': 2507, '떨어짐.': 2508, '뗄': 2509, '또': 2510, '또는': 2511, '또다른': 2512, '또다시': 2513, '또라이': 2514, '또보고': 2515, '또봐도': 2516, '또하나의': 2517, '또한': 2518, '또한번': 2519, '똑같은': 2520, '똑같이': 2521, '똑바로': 2522, '똥': 2523, '똥같은': 2524, '똥을': 2525, '뚜렷한': 2526, '뚝뚝': 2527, '뛰어': 2528, '뛰어나고': 2529, '뛰어난': 2530, '뛰어넘는': 2531, '뛰어넘은': 2532, '뜨거운': 2533, '뜨고': 2534, '뜬': 2535, '뜬금없는': 2536, '뜬금없이': 2537, '뜻이': 2538, '라고': 2539, '라는': 2540, '라스트': 2541, '라이언': 2542, '란': 2543, '랑': 2544, '러닝': 2545, '러닝타임': 2546, '러닝타임이': 2547, '러브': 2548, '러브라인': 2549, '러셀': 2550, '러시아': 2551, '런닝맨': 2552, '런닝타임이': 2553, '레알': 2554, '레옹': 2555, '레이': 2556, '레이싱': 2557, '레이첼': 2558, '레전드': 2559, '로': 2560, '로그인': 2561, '로그인하게': 2562, '로맨스': 2563, '로맨스가': 2564, '로맨스는': 2565, '로맨스도': 2566, '로맨스로': 2567, '로맨스를': 2568, '로맨스영화': 2569, '로맨틱': 2570, '로맨틱코미디': 2571, '로맨틱한': 2572, '로버트': 2573, '로봇': 2574, '로빈': 2575, '뤽': 2576, '류승범': 2577, '류의': 2578, '를': 2579, '리': 2580, '리메이크': 2581, '리메이크가': 2582, '리뷰': 2583, '리뷰를': 2584, '리암': 2585, '리얼': 2586, '리얼리티가': 2587, '리얼리티를': 2588, '리얼하게': 2589, '리얼한': 2590, '리즈': 2591, '링': 2592, '마구': 2593, '마냥': 2594, '마누라': 2595, '마니': 2596, '마다': 2597, '마디로': 2598, '마라': 2599, '마라.': 2600, '마무리': 2601, '마무리.': 2602, '마무리가': 2603, '마무리도': 2604, '마블': 2605, '마세요': 2606, '마세요.': 2607, '마시고': 2608, '마시길': 2609, '마시길..': 2610, '마음': 2611, '마음도': 2612, '마음속': 2613, '마음속에': 2614, '마음에': 2615, '마음으로': 2616, '마음은': 2617, '마음을': 2618, '마음의': 2619, '마음이': 2620, '마이': 2621, '마이너스': 2622, '마이너스가': 2623, '마이너스는': 2624, '마이클': 2625, '마저': 2626, '마지막': 2627, '마지막까지': 2628, '마지막에': 2629, '마지막에는': 2630, '마지막엔': 2631, '마지막으로': 2632, '마지막은': 2633, '마지막을': 2634, '마지막의': 2635, '마지막이': 2636, '마지막장면': 2637, '마지막장면에서': 2638, '마지막장면은': 2639, '마지막장면이': 2640, '마지막회': 2641, '마치': 2642, '마틴': 2643, '막': 2644, '막상': 2645, '막장': 2646, '막장도': 2647, '막장드라마': 2648, '막장에': 2649, '막장으로': 2650, '막장의': 2651, '막장이': 2652, '막판': 2653, '막판에': 2654, '만': 2655, '만나': 2656, '만나고': 2657, '만나는': 2658, '만나면': 2659, '만나서': 2660, '만난': 2661, '만날': 2662, '만남': 2663, '만드나': 2664, '만드냐': 2665, '만드냐?': 2666, '만드네': 2667, '만드는': 2668, '만드는게': 2669, '만드는데': 2670, '만드는지': 2671, '만든': 2672, '만든거': 2673, '만든건지': 2674, '만든것': 2675, '만든다': 2676, '만든다.': 2677, '만든다는': 2678, '만든영화': 2679, '만들': 2680, '만들거면': 2681, '만들고': 2682, '만들기': 2683, '만들기도': 2684, '만들다': 2685, '만들다니': 2686, '만들다니...': 2687, '만들려고': 2688, '만들려면': 2689, '만들면': 2690, '만들수': 2691, '만들어': 2692, '만들어내는': 2693, '만들어낸': 2694, '만들어도': 2695, '만들어라': 2696, '만들어라.': 2697, '만들어버린': 2698, '만들어서': 2699, '만들어야': 2700, '만들어주는': 2701, '만들어주세요': 2702, '만들어준': 2703, '만들어진': 2704, '만들었나': 2705, '만들었나?': 2706, '만들었냐': 2707, '만들었냐?': 2708, '만들었네': 2709, '만들었는데': 2710, '만들었는지': 2711, '만들었다': 2712, '만들었다.': 2713, '만들었다고': 2714, '만들었다면': 2715, '만들었던': 2716, '만들었으면': 2717, '만들었을까': 2718, '만들었을까?': 2719, '만들지': 2720, '만듬': 2721, '만약': 2722, '만에': 2723, '만으로': 2724, '만으로도': 2725, '만점': 2726, '만점에': 2727, '만점을': 2728, '만점짜리': 2729, '만족': 2730, '만족하고': 2731, '만큼': 
2732, '만큼은': 2733, '만큼의': 2734, '만큼이나': 2735, '만한': 2736, '만화': 2737, '만화가': 2738, '만화같은': 2739, '만화는': 2740, '만화도': 2741, '만화로': 2742, '만화를': 2743, '만화영화': 2744, '만화의': 2745, '만화책': 2746, '만화책을': 2747, '많고': 2748, '많네': 2749, '많다': 2750, '많다.': 2751, '많아': 2752, '많아서': 2753, '많았다.': 2754, '많았던': 2755, '많으면': 2756, '많은': 2757, '많은걸': 2758, '많은것을': 2759, '많은데': 2760, '많음': 2761, '많이': 2762, '많지': 2763, '많지만': 2764, '말': 2765, '말고': 2766, '말고는': 2767, '말곤': 2768, '말그대로': 2769, '말대로': 2770, '말도': 2771, '말도안되는': 2772, '말로': 2773, '말로는': 2774, '말론': 2775, '말만': 2776, '말밖에': 2777, '말씀': 2778, '말아': 2779, '말아라': 2780, '말아먹은': 2781, '말아야': 2782, '말았다': 2783, '말았다.': 2784, '말았어야': 2785, '말에': 2786, '말은': 2787, '말을': 2788, '말이': 2789, '말이다.': 2790, '말이야': 2791, '말이필요없는': 2792, '말이필요없다': 2793, '말이필요없음': 2794, '말인가?': 2795, '말자': 2796, '말자.': 2797, '말지': 2798, '말처럼': 2799, '말투': 2800, '말투가': 2801, '말하고': 2802, '말하고자': 2803, '말하기': 2804, '말하는': 2805, '말하려고': 2806, '말하려는': 2807, '말하면': 2808, '말하지': 2809, '말한다': 2810, '말할': 2811, '말할것도': 2812, '말할수': 2813, '말해서': 2814, '말해주는': 2815, '맑고': 2816, '맑은': 2817, '맘': 2818, '맘에': 2819, '맘에든다': 2820, '맘이': 2821, '맛': 2822, '맛도': 2823, '맛없는': 2824, '맛에': 2825, '맛은': 2826, '맛을': 2827, '맛이': 2828, '맛있는': 2829, '망가진': 2830, '망작': 2831, '망작.': 2832, '망쳐버린': 2833, '망쳤다.': 2834, '망치는': 2835, '망친': 2836, '망한': 2837, '망할': 2838, '망함': 2839, '망했는지': 2840, '맞게': 2841, '맞고': 2842, '맞나?': 2843, '맞는': 2844, '맞다': 2845, '맞다.': 2846, '맞먹는': 2847, '맞아': 2848, '맞은': 2849, '맞지': 2850, '맞지만': 2851, '맞추기': 2852, '맞춘': 2853, '맞춰': 2854, '맞춰서': 2855, '맡은': 2856, '매': 2857, '매끄럽지': 2858, '매년': 2859, '매력': 2860, '매력.': 2861, '매력과': 2862, '매력도': 2863, '매력에': 2864, '매력은': 2865, '매력을': 2866, '매력의': 2867, '매력이': 2868, '매력있고': 2869, '매력있는': 2870, '매력적': 2871, '매력적으로': 2872, '매력적이고': 2873, '매력적이다': 2874, '매력적이다.': 2875, '매력적인': 2876, '매미': 2877, '매미OO': 2878, '매미OO있네': 2879, '매번': 2880, '매우': 2881, '매일': 2882, '매주': 2883, '매혹적인': 2884, '매회': 2885, '맨': 2886, '맨날': 2887, '머': 2888, '머가': 2889, '머냐': 2890, '머리': 2891, '머리가': 2892, '머리는': 2893, '머리를': 2894, '머리속에': 2895, '머리에': 2896, '머릿속에': 2897, '머릿속을': 2898, '머야': 2899, '머여': 2900, '먹고': 2901, '먹는': 2902, '먹먹하게': 2903, '먹먹하고': 2904, '먹먹하다.': 2905, '먹먹한': 2906, '먹먹해지는': 2907, '먹은': 2908, '먹칠을': 2909, '먼': 2910, '먼가': 2911, '먼저': 2912, '먼지': 2913, '멀리': 2914, '멀쩡한': 2915, '멋있게': 2916, '멋있고': 2917, '멋있는': 2918, '멋있다': 2919, '멋있다.': 2920, '멋있어서': 2921, '멋있어요': 2922, '멋있음': 2923, '멋져': 2924, '멋져요': 2925, '멋졌다.': 2926, '멋지게': 2927, '멋지고': 2928, '멋지다': 2929, '멋지다.': 2930, '멋진': 2931, '멋진영화': 2932, '멋짐': 2933, '멍청하게': 2934, '멍청하고': 2935, '멍청한': 2936, '메마른': 2937, '메세지': 2938, '메세지가': 2939, '메세지는': 2940, '메세지도': 2941, '메세지를': 2942, '메시지': 2943, '메시지가': 2944, '메시지는': 2945, '메시지도': 2946, '메시지를': 2947, '멘탈': 2948, '멜로': 2949, '멜로영화': 2950, '멜로의': 2951, '면도': 2952, '면에서': 2953, '면을': 2954, '면이': 2955, '명': 2956, '명대사': 2957, '명배우': 2958, '명배우들의': 2959, '명백한': 2960, '명복을': 2961, '명불허전': 2962, '명성에': 2963, '명성을': 2964, '명연기': 2965, '명작': 2966, '명작!': 2967, '명작!!': 2968, '명작.': 2969, '명작..': 2970, '명작...': 2971, '명작도': 2972, '명작으로': 2973, '명작은': 2974, '명작을': 2975, '명작이': 2976, '명작이네요': 2977, '명작이다': 2978, '명작이다.': 2979, '명작이다..': 2980, '명작이라': 2981, '명작이라고': 2982, '명작이라는': 2983, '명작인데': 2984, '명작임': 2985, '명작입니다': 2986, '명작입니다.': 2987, '명작중에': 2988, '명작중의': 2989, '명장면': 2990, '명품': 2991, '명화': 2992, '몇': 2993, '몇개': 2994, '몇년': 2995, '몇년만에': 2996, '몇년이': 2997, '몇년전에': 2998, '몇몇': 2999, '몇번': 3000, '몇번을': 3001, '몇번을봐도': 3002, '몇번이고': 3003, '몇번이나': 3004, 
'몇안되는': 3005, '모': 3006, '모냐': 3007, '모두': 3008, '모두가': 3009, '모두다': 3010, '모두들': 3011, '모두를': 3012, '모두에게': 3013, '모두의': 3014, '모든': 3015, '모든걸': 3016, '모든것을': 3017, '모든것이': 3018, '모든게': 3019, '모든면에서': 3020, '모르게': 3021, '모르겟다': 3022, '모르겠고': 3023, '모르겠네': 3024, '모르겠네요': 3025, '모르겠네요.': 3026, '모르겠는': 3027, '모르겠는데': 3028, '모르겠다': 3029, '모르겠다.': 3030, '모르겠다..': 3031, '모르겠다...': 3032, '모르겠어요.': 3033, '모르겠으나': 3034, '모르겠음': 3035, '모르겠음.': 3036, '모르겠지만': 3037, '모르고': 3038, '모르나': 3039, '모르는': 3040, '모르면': 3041, '모르면서': 3042, '모르지만': 3043, '모른다': 3044, '모른다.': 3045, '모를': 3046, '모습': 3047, '모습.': 3048, '모습과': 3049, '모습도': 3050, '모습만': 3051, '모습에': 3052, '모습에서': 3053, '모습으로': 3054, '모습은': 3055, '모습을': 3056, '모습이': 3057, '모아놓고': 3058, '모아서': 3059, '모야': 3060, '모여': 3061, '모여서': 3062, '모자라': 3063, '모자란': 3064, '모조리': 3065, '모처럼': 3066, '모티브로': 3067, '모험': 3068, '모호한': 3069, '목소리': 3070, '목소리가': 3071, '목소리는': 3072, '목소리도': 3073, '목숨걸고': 3074, '목숨을': 3075, '목을': 3076, '목적을': 3077, '목적이': 3078, '몬가': 3079, '몰라': 3080, '몰라도': 3081, '몰라서': 3082, '몰랐는데': 3083, '몰랐다': 3084, '몰랐다.': 3085, '몰랐던': 3086, '몰래': 3087, '몰입': 3088, '몰입감': 3089, '몰입감도': 3090, '몰입감이': 3091, '몰입도': 3092, '몰입도가': 3093, '몰입도는': 3094, '몰입도도': 3095, '몰입도를': 3096, '몰입을': 3097, '몰입이': 3098, '몰입하게': 3099, '몰입하기': 3100, '몰입하면서': 3101, '몰입할': 3102, '몰입해서': 3103, '몸': 3104, '몸매': 3105, '몸매가': 3106, '몸매는': 3107, '몸에': 3108, '몸을': 3109, '몸이': 3110, '못': 3111, '못된': 3112, '못된다.': 3113, '못미치는': 3114, '못보겠다': 3115, '못보겠다.': 3116, '못보고': 3117, '못보는': 3118, '못본': 3119, '못본게': 3120, '못봐주겠다.': 3121, '못생긴': 3122, '못지': 3123, '못하게': 3124, '못하겠다': 3125, '못하겠다.': 3126, '못하고': 3127, '못하고,': 3128, '못하네': 3129, '못하는': 3130, '못하다': 3131, '못하다.': 3132, '못하다는': 3133, '못하면': 3134, '못하지만': 3135, '못한': 3136, '못한게': 3137, '못한다': 3138, '못한다.': 3139, '못한다는': 3140, '못할': 3141, '못함': 3142, '못함.': 3143, '못해': 3144, '못해도': 3145, '못해서': 3146, '못했다': 3147, '못했다.': 3148, '못했던': 3149, '못했지만': 3150, '몽환적인': 3151, '묘사': 3152, '묘사가': 3153, '묘사한': 3154, '묘하게': 3155, '묘한': 3156, '무간도': 3157, '무거운': 3158, '무고한': 3159, '무난한': 3160, '무너지는': 3161, '무대': 3162, '무려': 3163, '무료로': 3164, '무리가': 3165, '무비': 3166, '무비.': 3167, '무비의': 3168, '무서운': 3169, '무서운거': 3170, '무서울': 3171, '무서움': 3172, '무서워': 3173, '무서워서': 3174, '무섭게': 3175, '무섭고': 3176, '무섭긴': 3177, '무섭다': 3178, '무섭다.': 3179, '무섭다고': 3180, '무섭지': 3181, '무섭지도': 3182, '무섭지도않고': 3183, '무술': 3184, '무슨': 3185, '무슨..': 3186, '무슨...': 3187, '무슨내용인지': 3188, '무슨말이': 3189, '무슨생각으로': 3190, '무시하고': 3191, '무시하는': 3192, '무시한': 3193, '무심코': 3194, '무언가': 3195, '무언가가': 3196, '무언가를': 3197, '무얼': 3198, '무엇보다': 3199, '무엇보다도': 3200, '무엇을': 3201, '무엇이': 3202, '무엇인가': 3203, '무엇인지': 3204, '무엇하나': 3205, '무의미한': 3206, '무작정': 3207, '무조건': 3208, '무지': 3209, '무지하게': 3210, '무척': 3211, '무척이나': 3212, '무튼': 3213, '무한한': 3214, '무협': 3215, '묵직한': 3216, '문득': 3217, '문제': 3218, '문제.': 3219, '문제가': 3220, '문제는': 3221, '문제를': 3222, '문화': 3223, '문화를': 3224, '문화의': 3225, '문화적': 3226, '묻어나는': 3227, '물': 3228, '물론': 3229, '물론이고': 3230, '물씬': 3231, '물에': 3232, '물체가': 3233, '뭉클한': 3234, '뭐': 3235, '뭐,': 3236, '뭐.': 3237, '뭐..': 3238, '뭐...': 3239, '뭐?': 3240, '뭐가': 3241, '뭐고': 3242, '뭐냐': 3243, '뭐냐.': 3244, '뭐냐..': 3245, '뭐냐?': 3246, '뭐니': 3247, '뭐든': 3248, '뭐든지': 3249, '뭐라': 3250, '뭐라고': 3251, '뭐랄까': 3252, '뭐야': 3253, '뭐야.': 3254, '뭐야..': 3255, '뭐야?': 3256, '뭐여': 3257, '뭐이리': 3258, '뭐임': 3259, '뭐임?': 3260, '뭐죠?': 3261, '뭐지': 3262, '뭐지..': 3263, '뭐지?': 3264, '뭐하나': 3265, '뭐하냐': 3266, '뭐하는': 3267, '뭐하러': 3268, '뭐하자는': 3269, '뭔': 3270, '뭔가': 3271, '뭔가가': 3272, '뭔가를': 3273, 
'뭔내용인지': 3274, '뭔데': 3275, '뭔지': 3276, '뭔지..': 3277, '뭔지도': 3278, '뭘': 3279, '뭘까': 3280, '뭘까?': 3281, '뭡니까': 3282, '뭣도': 3283, '뭥미': 3284, '뭥미?': 3285, '뮤지컬': 3286, '미': 3287, '미개한': 3288, '미국': 3289, '미국식': 3290, '미국에': 3291, '미국에서': 3292, '미국은': 3293, '미국의': 3294, '미국이': 3295, '미국판': 3296, '미드': 3297, '미래는': 3298, '미래를': 3299, '미래의': 3300, '미리': 3301, '미모': 3302, '미모는': 3303, '미모만': 3304, '미모와': 3305, '미묘한': 3306, '미소': 3307, '미소가': 3308, '미소를': 3309, '미스': 3310, '미스터': 3311, '미스테리': 3312, '미안하다': 3313, '미안하지만': 3314, '미야자키': 3315, '미쳤다': 3316, '미치게': 3317, '미치겠다': 3318, '미치는': 3319, '미치도록': 3320, '미치지': 3321, '미친': 3322, '미친듯이': 3323, '미화': 3324, '미화시킨': 3325, '미화하는': 3326, '민망한': 3327, '민폐': 3328, '믿고': 3329, '믿고보는': 3330, '믿기': 3331, '믿기지': 3332, '믿기지가': 3333, '믿는': 3334, '믿어지지': 3335, '믿을': 3336, '믿을게': 3337, '믿음이': 3338, '믿지': 3339, '밀라': 3340, '밀려오는': 3341, '밋밋하고': 3342, '밋밋한': 3343, '및': 3344, '밑도': 3345, '밑에': 3346, '밑에분': 3347, '바가': 3348, '바꾸는': 3349, '바꿔라': 3350, '바꿔서': 3351, '바뀌고': 3352, '바뀌는': 3353, '바라는': 3354, '바라보는': 3355, '바라본': 3356, '바란다': 3357, '바란다.': 3358, '바람': 3359, '바람에': 3360, '바랍니다': 3361, '바랍니다.': 3362, '바로': 3363, '바를': 3364, '바보': 3365, '바보가': 3366, '바보같은': 3367, '바보로': 3368, '바치는': 3369, '바탕으로': 3370, '박보영': 3371, '박수를': 3372, '박중훈': 3373, '박진감': 3374, '박찬욱': 3375, '박평식': 3376, '박힌': 3377, '밖에': 3378, '반': 3379, '반개': 3380, '반개도': 3381, '반담': 3382, '반도': 3383, '반드시': 3384, '반복되는': 3385, '반복해서': 3386, '반성해라': 3387, '반응이': 3388, '반의': 3389, '반전': 3390, '반전,': 3391, '반전.': 3392, '반전과': 3393, '반전까지': 3394, '반전도': 3395, '반전에': 3396, '반전으로': 3397, '반전은': 3398, '반전을': 3399, '반전의': 3400, '반전이': 3401, '반전이라고': 3402, '반지의': 3403, '받고': 3404, '받는': 3405, '받아': 3406, '받아서': 3407, '받아야': 3408, '받았다.': 3409, '받았을': 3410, '받은': 3411, '받을': 3412, '받을만한': 3413, '발': 3414, '발견': 3415, '발견한': 3416, '발랄한': 3417, '발로': 3418, '발상이': 3419, '발연기': 3420, '발연기,': 3421, '발연기가': 3422, '발연기는': 3423, '발연기에': 3424, '발연기와': 3425, '발음': 3426, '발음도': 3427, '발전을': 3428, '발전이': 3429, '밝고': 3430, '밝은': 3431, '밤': 3432, '밤에': 3433, '밥': 3434, '밥을': 3435, '방금': 3436, '방법': 3437, '방법을': 3438, '방법이': 3439, '방송': 3440, '방송에': 3441, '방송을': 3442, '방식으로': 3443, '방식이': 3444, '방향을': 3445, '방황하는': 3446, '배': 3447, '배가': 3448, '배경': 3449, '배경,': 3450, '배경과': 3451, '배경도': 3452, '배경만': 3453, '배경에': 3454, '배경으로': 3455, '배경은': 3456, '배경을': 3457, '배경음악': 3458, '배경음악도': 3459, '배경음악이': 3460, '배경의': 3461, '배경이': 3462, '배급사': 3463, '배급사가': 3464, '배꼽': 3465, '배슬기': 3466, '배역': 3467, '배역을': 3468, '배역이': 3469, '배우': 3470, '배우,': 3471, '배우가': 3472, '배우고': 3473, '배우나': 3474, '배우는': 3475, '배우다.': 3476, '배우도': 3477, '배우들': 3478, '배우들과': 3479, '배우들도': 3480, '배우들로': 3481, '배우들만': 3482, '배우들에': 3483, '배우들은': 3484, '배우들을': 3485, '배우들의': 3486, '배우들이': 3487, '배우로': 3488, '배우를': 3489, '배우만': 3490, '배우분들': 3491, '배우에': 3492, '배우에게': 3493, '배우와': 3494, '배우의': 3495, '배운': 3496, '배울': 3497, '배트맨': 3498, '백배': 3499, '백인': 3500, '뱀파이어': 3501, '버금가는': 3502, '버려': 3503, '버렸다': 3504, '버렸다.': 3505, '버리고': 3506, '버리는': 3507, '버린': 3508, '버무려진': 3509, '버전': 3510, '번': 3511, '번을': 3512, '벌써': 3513, '벌어지는': 3514, '범인': 3515, '범인은': 3516, '범인이': 3517, '범죄': 3518, '범죄를': 3519, '범죄자': 3520, '법.': 3521, '법을': 3522, '법정': 3523, '법한': 3524, '벗어나지': 3525, '베낀': 3526, '베드신': 3527, '베드신이': 3528, '베리': 3529, '베스트': 3530, '베트남': 3531, '벤': 3532, '변태': 3533, '변하는': 3534, '변하지': 3535, '변화가': 3536, '별': 3537, '별거': 3538, '별다른': 3539, '별로': 3540, '별로.': 3541, '별로..': 3542, '별로...': 3543, '별로....': 3544, '별로고': 3545, '별로네요': 3546, '별로다': 3547, '별로다.': 3548, 
'별로다..': 3549, '별로였는데': 3550, '별로였다.': 3551, '별로였던': 3552, '별로였음': 3553, '별로인': 3554, '별로임': 3555, '별로임.': 3556, '별로지만': 3557, '별론데': 3558, '별루': 3559, '별루다': 3560, '별반': 3561, '별반개도': 3562, '별생각없이': 3563, '별을': 3564, '별이': 3565, '별점': 3566, '별점은': 3567, '별점을': 3568, '별점이': 3569, '별하나': 3570, '별하나도': 3571, '별한개도': 3572, '병': 3573, '병맛': 3574, '병맛.': 3575, '병맛같은': 3576, '병입니다': 3577, '보게': 3578, '보게되는': 3579, '보게된': 3580, '보게됬는데': 3581, '보겠다': 3582, '보겠다.': 3583, '보고': 3584, '보고,': 3585, '보고나니': 3586, '보고나면': 3587, '보고나서': 3588, '보고나서도': 3589, '보고난': 3590, '보고도': 3591, '보고서': 3592, '보고싶네요': 3593, '보고싶다': 3594, '보고싶다.': 3595, '보고싶어': 3596, '보고싶어서': 3597, '보고싶어요': 3598, '보고싶은': 3599, '보고싶은데': 3600, '보고싶지': 3601, '보고있는': 3602, '보고있는데': 3603, '보구': 3604, '보기': 3605, '보기가': 3606, '보기는': 3607, '보기도': 3608, '보기드문': 3609, '보기에': 3610, '보기에는': 3611, '보기에도': 3612, '보기엔': 3613, '보기전에': 3614, '보기좋은': 3615, '보긴': 3616, '보길': 3617, '보나': 3618, '보내는': 3619, '보낸다.': 3620, '보느니': 3621, '보느라': 3622, '보는': 3623, '보는거': 3624, '보는건': 3625, '보는건지': 3626, '보는것': 3627, '보는게': 3628, '보는내내': 3629, '보는데': 3630, '보는동안': 3631, '보는듯한': 3632, '보는줄': 3633, '보니': 3634, '보니까': 3635, '보니깐': 3636, '보다': 3637, '보다.': 3638, '보다가': 3639, '보다는': 3640, '보다니': 3641, '보다도': 3642, '보다보니': 3643, '보다보다': 3644, '보다보면': 3645, '보단': 3646, '보더라도': 3647, '보던': 3648, '보라': 3649, '보라고': 3650, '보라는': 3651, '보러': 3652, '보려고': 3653, '보려는': 3654, '보려면': 3655, '보며': 3656, '보면': 3657, '보면볼수록': 3658, '보면서': 3659, '보면서도': 3660, '보삼': 3661, '보석같은': 3662, '보세요': 3663, '보세요!': 3664, '보세요.': 3665, '보세요..': 3666, '보세요~': 3667, '보셈': 3668, '보셨으면': 3669, '보소': 3670, '보시고': 3671, '보시길': 3672, '보시길.': 3673, '보시길..': 3674, '보시는': 3675, '보시면': 3676, '보아': 3677, '보아도': 3678, '보아야': 3679, '보았는데': 3680, '보았다': 3681, '보았다.': 3682, '보았던': 3683, '보았습니다': 3684, '보았습니다.': 3685, '보았지만': 3686, '보여': 3687, '보여서': 3688, '보여주고': 3689, '보여주는': 3690, '보여주는데': 3691, '보여주지': 3692, '보여준': 3693, '보여준다': 3694, '보여준다.': 3695, '보여줄': 3696, '보여줌.': 3697, '보여줬는데': 3698, '보여줬다.': 3699, '보여줬던': 3700, '보여줬으면': 3701, '보여지는': 3702, '보이고': 3703, '보이네': 3704, '보이는': 3705, '보이려고': 3706, '보이지': 3707, '보이지만': 3708, '보인다': 3709, '보인다.': 3710, '보일': 3711, '보임': 3712, '보자': 3713, '보자.': 3714, '보자고': 3715, '보자마자': 3716, '보지': 3717, '보지도': 3718, '보지마': 3719, '보지마라': 3720, '보지마라.': 3721, '보지마세요': 3722, '보지마세요.': 3723, '보지마셈': 3724, '보지말고': 3725, '보진': 3726, '보통': 3727, '보통의': 3728, '복수': 3729, '복수가': 3730, '복수는': 3731, '복수를': 3732, '복수의': 3733, '복잡하고': 3734, '복잡한': 3735, '본': 3736, '본거': 3737, '본건': 3738, '본건데': 3739, '본것': 3740, '본것중에': 3741, '본게': 3742, '본격': 3743, '본다': 3744, '본다.': 3745, '본다고': 3746, '본다는': 3747, '본다면': 3748, '본듯': 3749, '본듯한': 3750, '본방': 3751, '본방사수': 3752, '본사람들은': 3753, '본사람이': 3754, '본영화': 3755, '본영화인데': 3756, '본영화중': 3757, '본영화중에': 3758, '본인': 3759, '본인은': 3760, '본인의': 3761, '본인이': 3762, '본적이': 3763, '본지': 3764, '본질을': 3765, '본후': 3766, '볼': 3767, '볼거': 3768, '볼거리': 3769, '볼거리가': 3770, '볼거리는': 3771, '볼거리도': 3772, '볼건': 3773, '볼것': 3774, '볼것도': 3775, '볼게': 3776, '볼까': 3777, '볼때': 3778, '볼때는': 3779, '볼때마다': 3780, '볼땐': 3781, '볼라고': 3782, '볼려고': 3783, '볼만': 3784, '볼만은': 3785, '볼만하고': 3786, '볼만하네요': 3787, '볼만하다': 3788, '볼만하다.': 3789, '볼만하던데': 3790, '볼만한': 3791, '볼만한데': 3792, '볼만함': 3793, '볼만함.': 3794, '볼만합니다': 3795, '볼만합니다.': 3796, '볼만해요': 3797, '볼만했는데': 3798, '볼만했다': 3799, '볼만했다.': 3800, '볼만했음': 3801, '볼수': 3802, '볼수가': 3803, '볼수록': 3804, '볼수없는': 3805, '볼수있는': 3806, '볼수있어서': 3807, '볼시간에': 3808, '볼줄': 3809, '봄': 3810, '봄.': 3811, '봄...': 3812, '봅니다': 3813, '봅니다.': 3814, '봐': 
3815, '봐도': 3816, '봐도봐도': 3817, '봐라': 3818, '봐라.': 3819, '봐서': 3820, '봐서는': 3821, '봐야': 3822, '봐야겠다': 3823, '봐야겠다.': 3824, '봐야될': 3825, '봐야지': 3826, '봐야하는': 3827, '봐야한다': 3828, '봐야한다.': 3829, '봐야할': 3830, '봐줄': 3831, '봐줄만': 3832, '봣는데': 3833, '봤고': 3834, '봤나': 3835, '봤네': 3836, '봤네요': 3837, '봤네요.': 3838, '봤는대': 3839, '봤는데': 3840, '봤는데,': 3841, '봤는데.': 3842, '봤는데..': 3843, '봤는데...': 3844, '봤는데....': 3845, '봤는데도': 3846, '봤는지': 3847, '봤다': 3848, '봤다!': 3849, '봤다.': 3850, '봤다..': 3851, '봤다...': 3852, '봤다가': 3853, '봤다고': 3854, '봤다는': 3855, '봤다면': 3856, '봤더니': 3857, '봤던': 3858, '봤던건데': 3859, '봤습니다': 3860, '봤습니다.': 3861, '봤습니다^^': 3862, '봤습니다~': 3863, '봤어도': 3864, '봤어야': 3865, '봤어요': 3866, '봤어요!': 3867, '봤어요!!': 3868, '봤어요.': 3869, '봤어요..': 3870, '봤어요^^': 3871, '봤어요~': 3872, '봤었는데': 3873, '봤었다.': 3874, '봤었던': 3875, '봤으니': 3876, '봤으면': 3877, '봤을': 3878, '봤을까': 3879, '봤을까?': 3880, '봤을때': 3881, '봤을때는': 3882, '봤을때도': 3883, '봤을땐': 3884, '봤음': 3885, '봤음.': 3886, '봤음..': 3887, '봤지만': 3888, '봤지만,': 3889, '부끄러운': 3890, '부끄럽게': 3891, '부담없이': 3892, '부드러운': 3893, '부디': 3894, '부럽다': 3895, '부르는': 3896, '부른': 3897, '부를': 3898, '부모': 3899, '부모가': 3900, '부모님': 3901, '부부': 3902, '부분': 3903, '부분도': 3904, '부분들이': 3905, '부분만': 3906, '부분에': 3907, '부분에서': 3908, '부분은': 3909, '부분을': 3910, '부분이': 3911, '부산': 3912, '부수고': 3913, '부실한': 3914, '부자연스러운': 3915, '부족': 3916, '부족.': 3917, '부족하고': 3918, '부족하다': 3919, '부족하다.': 3920, '부족하지만': 3921, '부족한': 3922, '부족함': 3923, '부족함이': 3924, '부족해': 3925, '부터': 3926, '부터가': 3927, '부활': 3928, '북한': 3929, '북한을': 3930, '북한의': 3931, '분': 3932, '분노가': 3933, '분노를': 3934, '분노의': 3935, '분들': 3936, '분들께': 3937, '분들도': 3938, '분들에게': 3939, '분들은': 3940, '분들의': 3941, '분들이': 3942, '분량': 3943, '분명': 3944, '분명하다.': 3945, '분명히': 3946, '분위기': 3947, '분위기,': 3948, '분위기가': 3949, '분위기나': 3950, '분위기는': 3951, '분위기도': 3952, '분위기를': 3953, '분위기만': 3954, '분위기에': 3955, '분위기와': 3956, '분위기의': 3957, '분은': 3958, '분이': 3959, '분이라면': 3960, '불': 3961, '불가능한': 3962, '불구하고': 3963, '불끄고': 3964, '불러': 3965, '불륜': 3966, '불륜에': 3967, '불륜은': 3968, '불륜을': 3969, '불면증': 3970, '불멸의': 3971, '불쌍': 3972, '불쌍하게': 3973, '불쌍하고': 3974, '불쌍하다': 3975, '불쌍하다.': 3976, '불쌍한': 3977, '불쌍함': 3978, '불쌍해': 3979, '불쌍해서': 3980, '불쾌한': 3981, '불편하고': 3982, '불편하다.': 3983, '불편한': 3984, '불필요한': 3985, '불후의': 3986, '붉은': 3987, '브라이언': 3988, '브래드': 3989, '브루스': 3990, '블랙': 3991, '블랙코미디': 3992, '블레이드': 3993, '블록버스터': 3994, '비': 3995, '비가': 3996, '비교': 3997, '비교가': 3998, '비교도': 3999, '비교적': 4000, '비교하면': 4001, '비교할': 4002, '비교해도': 4003, '비교해서': 4004, '비극': 4005, '비극을': 4006, '비극이': 4007, '비극적인': 4008, '비디오': 4009, '비디오로': 4010, '비디오용': 4011, '비로소': 4012, '비록': 4013, '비롯한': 4014, '비밀': 4015, '비밀을': 4016, '비슷하게': 4017, '비슷한': 4018, '비슷해서': 4019, '비싼': 4020, '비오는': 4021, '비운의': 4022, '비주얼': 4023, '비주얼도': 4024, '비주얼은': 4025, '비중이': 4026, '비쥬얼': 4027, '비참한': 4028, '비추': 4029, '비추.': 4030, '비하면': 4031, '비해': 4032, '비해서': 4033, '비행기': 4034, '비현실적이고': 4035, '비현실적인': 4036, '비호감': 4037, '빈': 4038, '빈약한': 4039, '빌려': 4040, '빌려서': 4041, '빌린': 4042, '빌어먹을': 4043, '빕니다': 4044, '빕니다.': 4045, '빚어낸': 4046, '빛': 4047, '빛나는': 4048, '빛을': 4049, '빛이': 4050, '빠르게': 4051, '빠르고': 4052, '빠르기.': 4053, '빠른': 4054, '빠져': 4055, '빠져드는': 4056, '빠져들게': 4057, '빠져서': 4058, '빠지게': 4059, '빠지고': 4060, '빠지는': 4061, '빠지지': 4062, '빠진': 4063, '빠질': 4064, '빠짐없이': 4065, '빡쳐서': 4066, '빨갱이': 4067, '빨리': 4068, '빵': 4069, '빵빵': 4070, '빵점': 4071, '빵터짐': 4072, '빼고': 4073, '빼고는': 4074, '빼곤': 4075, '빼면': 4076, '뻔': 4077, '뻔뻔한': 4078, '뻔하고': 4079, '뻔하다': 4080, '뻔하다.': 4081, '뻔하디': 4082, '뻔하지': 4083, '뻔하지만': 4084, 
'뻔한': 4085, '뻔한스토리': 4086, '뻔함': 4087, '뻔해': 4088, '뻔해서': 4089, '뻔히': 4090, '뿐': 4091, '뿐,': 4092, '뿐.': 4093, '뿐..': 4094, '뿐...': 4095, '뿐만': 4096, '뿐이다': 4097, '뿐이다.': 4098, '사건': 4099, '사건을': 4100, '사건의': 4101, '사건이': 4102, '사고': 4103, '사극': 4104, '사극을': 4105, '사기': 4106, '사기꾼': 4107, '사는': 4108, '사는게': 4109, '사다코': 4110, '사라지고': 4111, '사라진': 4112, '사람': 4113, '사람.': 4114, '사람과': 4115, '사람도': 4116, '사람들': 4117, '사람들도': 4118, '사람들만': 4119, '사람들에게': 4120, '사람들에겐': 4121, '사람들은': 4122, '사람들을': 4123, '사람들의': 4124, '사람들이': 4125, '사람마다': 4126, '사람만': 4127, '사람사는': 4128, '사람에': 4129, '사람에게': 4130, '사람으로': 4131, '사람으로서': 4132, '사람으로써': 4133, '사람은': 4134, '사람을': 4135, '사람의': 4136, '사람이': 4137, '사람이라면': 4138, '사람이면': 4139, '사람한테': 4140, '사랑': 4141, '사랑,': 4142, '사랑.': 4143, '사랑..': 4144, '사랑...': 4145, '사랑?': 4146, '사랑과': 4147, '사랑도': 4148, '사랑스러운': 4149, '사랑스런': 4150, '사랑스럽게': 4151, '사랑스럽고': 4152, '사랑스럽다': 4153, '사랑스럽다.': 4154, '사랑에': 4155, '사랑으로': 4156, '사랑은': 4157, '사랑을': 4158, '사랑의': 4159, '사랑이': 4160, '사랑이라는': 4161, '사랑이란': 4162, '사랑이야기': 4163, '사랑이야기.': 4164, '사랑하게': 4165, '사랑하고': 4166, '사랑하는': 4167, '사랑한다': 4168, '사랑한다면': 4169, '사랑할': 4170, '사랑함': 4171, '사랑합니다': 4172, '사랑합니다.': 4173, '사랑해': 4174, '사랑해요': 4175, '사랑했던': 4176, '사뭇': 4177, '사상': 4178, '사서': 4179, '사소한': 4180, '사실': 4181, '사실에': 4182, '사실은': 4183, '사실을': 4184, '사실이': 4185, '사실적으로': 4186, '사실적이고': 4187, '사실적인': 4188, '사운드': 4189, '사이': 4190, '사이에': 4191, '사이에서': 4192, '사이의': 4193, '사이코': 4194, '사진': 4195, '사투리': 4196, '사회': 4197, '사회가': 4198, '사회를': 4199, '사회에': 4200, '사회의': 4201, '사회적': 4202, '산만하고': 4203, '산만한': 4204, '산으로': 4205, '살': 4206, '살고': 4207, '살기': 4208, '살다': 4209, '살다가': 4210, '살다살다': 4211, '살렸다': 4212, '살렸다.': 4213, '살리지': 4214, '살린': 4215, '살릴': 4216, '살면서': 4217, '살아': 4218, '살아가는': 4219, '살아갈': 4220, '살아남은': 4221, '살아서': 4222, '살아야': 4223, '살아있는': 4224, '살인': 4225, '살인을': 4226, '살지': 4227, '살짝': 4228, '삶': 4229, '삶.': 4230, '삶과': 4231, '삶에': 4232, '삶에서': 4233, '삶은': 4234, '삶을': 4235, '삶의': 4236, '삶이': 4237, '삼가': 4238, '삼류': 4239, '삼류영화': 4240, '삼천포로': 4241, '상': 4242, '상관없는': 4243, '상관없이': 4244, '상당한': 4245, '상당히': 4246, '상상': 4247, '상상도': 4248, '상상력과': 4249, '상상력에': 4250, '상상력을': 4251, '상상력이': 4252, '상상을': 4253, '상상이': 4254, '상어': 4255, '상어가': 4256, '상업영화': 4257, '상영': 4258, '상영관이': 4259, '상처가': 4260, '상처를': 4261, '상큼한': 4262, '상태에서': 4263, '상투적인': 4264, '상황': 4265, '상황과': 4266, '상황에': 4267, '상황에서': 4268, '상황을': 4269, '상황이': 4270, '새': 4271, '새끼': 4272, '새끼들': 4273, '새로': 4274, '새로운': 4275, '새록새록': 4276, '새롭게': 4277, '새롭고': 4278, '새벽': 4279, '새벽에': 4280, '새삼': 4281, '색감': 4282, '색감도': 4283, '색감이': 4284, '색깔이': 4285, '색다르고': 4286, '색다른': 4287, '샘': 4288, '생': 4289, '생각': 4290, '생각.': 4291, '생각과': 4292, '생각나게': 4293, '생각나고': 4294, '생각나네': 4295, '생각나네요': 4296, '생각나네요.': 4297, '생각나는': 4298, '생각나서': 4299, '생각난다': 4300, '생각난다.': 4301, '생각남': 4302, '생각도': 4303, '생각되는': 4304, '생각된다.': 4305, '생각됩니다.': 4306, '생각만': 4307, '생각밖에': 4308, '생각보다': 4309, '생각보단': 4310, '생각없이': 4311, '생각에': 4312, '생각외로': 4313, '생각으로': 4314, '생각은': 4315, '생각을': 4316, '생각이': 4317, '생각지도': 4318, '생각하게': 4319, '생각하고': 4320, '생각하는': 4321, '생각하니': 4322, '생각하며': 4323, '생각하면': 4324, '생각하지': 4325, '생각한': 4326, '생각한다': 4327, '생각한다.': 4328, '생각한다면': 4329, '생각할': 4330, '생각함': 4331, '생각함.': 4332, '생각합니다': 4333, '생각합니다.': 4334, '생각해': 4335, '생각해도': 4336, '생각해보게': 4337, '생각해보면': 4338, '생각해볼': 4339, '생각해서': 4340, '생각했는데': 4341, '생각했다': 4342, '생각했던': 4343, '생겨서': 4344, '생기는': 4345, '생긴': 4346, '생명을': 4347, '생명의': 4348, '생생하게': 4349, '생생한': 4350, '생생히': 4351, 
'생애': 4352, '생에': 4353, '생의': 4354, '샤를리즈': 4355, '서': 4356, '서로': 4357, '서로를': 4358, '서로의': 4359, '서부극': 4360, '서서히': 4361, '서스펜스': 4362, '서우': 4363, '서울': 4364, '서정적인': 4365, '서프라이즈': 4366, '섞어': 4367, '섞인': 4368, '선': 4369, '선동영화': 4370, '선물': 4371, '선사하는': 4372, '선생님': 4373, '선생님의': 4374, '선생님이': 4375, '선입견을': 4376, '선택': 4377, '선택은': 4378, '선택의': 4379, '선택한': 4380, '선한': 4381, '설경구': 4382, '설득력': 4383, '설득력이': 4384, '설레게': 4385, '설레고': 4386, '설레는': 4387, '설마': 4388, '설명': 4389, '설명도': 4390, '설명은': 4391, '설명이': 4392, '설명할': 4393, '설정': 4394, '설정,': 4395, '설정.': 4396, '설정과': 4397, '설정도': 4398, '설정만': 4399, '설정에': 4400, '설정으로': 4401, '설정은': 4402, '설정을': 4403, '설정의': 4404, '설정이': 4405, '섬뜩한': 4406, '섬세하게': 4407, '섬세하고': 4408, '섬세한': 4409, '성격': 4410, '성격이': 4411, '성공': 4412, '성공한': 4413, '성룡': 4414, '성룡은': 4415, '성룡의': 4416, '성룡이': 4417, '성우': 4418, '성우가': 4419, '성우들': 4420, '성우를': 4421, '성인': 4422, '성인이': 4423, '성장': 4424, '성장하는': 4425, '세': 4426, '세계': 4427, '세계가': 4428, '세계관이': 4429, '세계를': 4430, '세계에': 4431, '세계에서': 4432, '세계적인': 4433, '세기의': 4434, '세련된': 4435, '세번': 4436, '세상': 4437, '세상에': 4438, '세상에서': 4439, '세상은': 4440, '세상을': 4441, '세상의': 4442, '세상이': 4443, '세심한': 4444, '세월을': 4445, '세월이': 4446, '섹스': 4447, '섹시': 4448, '섹시하고': 4449, '섹시한': 4450, '소녀': 4451, '소녀의': 4452, '소년의': 4453, '소름': 4454, '소름끼치게': 4455, '소름끼치는': 4456, '소름돋는': 4457, '소름이': 4458, '소리': 4459, '소리가': 4460, '소리를': 4461, '소리만': 4462, '소설': 4463, '소설도': 4464, '소설로': 4465, '소설을': 4466, '소설의': 4467, '소설이': 4468, '소소하게': 4469, '소소하고': 4470, '소소한': 4471, '소장': 4472, '소장하고': 4473, '소재': 4474, '소재,': 4475, '소재.': 4476, '소재가': 4477, '소재나': 4478, '소재는': 4479, '소재도': 4480, '소재로': 4481, '소재로한': 4482, '소재를': 4483, '소재만': 4484, '소재에': 4485, '소재와': 4486, '소재의': 4487, '소중한': 4488, '소중함을': 4489, '소중히': 4490, '소지섭': 4491, '속': 4492, '속아서': 4493, '속았다': 4494, '속았다.': 4495, '속에': 4496, '속에서': 4497, '속에서도': 4498, '속으로': 4499, '속은': 4500, '속의': 4501, '속이': 4502, '속지': 4503, '속편': 4504, '속편.': 4505, '속편은': 4506, '속편을': 4507, '속편의': 4508, '속편이': 4509, '손': 4510, '손꼽히는': 4511, '손발': 4512, '손발이': 4513, '손색이': 4514, '손에': 4515, '손예진': 4516, '손으로': 4517, '손을': 4518, '손이': 4519, '손잡고': 4520, '솔까': 4521, '솔직하게': 4522, '솔직하고': 4523, '솔직한': 4524, '솔직히': 4525, '솔찍히': 4526, '송강호': 4527, '송강호의': 4528, '송승헌': 4529, '송지효': 4530, '숀': 4531, '수': 4532, '수가': 4533, '수고': 4534, '수는': 4535, '수도': 4536, '수록': 4537, '수많은': 4538, '수면제': 4539, '수밖에': 4540, '수없이': 4541, '수있는': 4542, '수작': 4543, '수작!': 4544, '수작.': 4545, '수작을': 4546, '수작이': 4547, '수작이다': 4548, '수작이다.': 4549, '수작이라고': 4550, '수작입니다.': 4551, '수준': 4552, '수준.': 4553, '수준..': 4554, '수준...': 4555, '수준낮은': 4556, '수준도': 4557, '수준에': 4558, '수준으로': 4559, '수준은': 4560, '수준을': 4561, '수준의': 4562, '수준이': 4563, '수준이다.': 4564, '수준이하': 4565, '순': 4566, '순간': 4567, '순간을': 4568, '순간의': 4569, '순간이': 4570, '순수': 4571, '순수하게': 4572, '순수하고': 4573, '순수한': 4574, '순수함을': 4575, '순수함이': 4576, '순수했던': 4577, '순식간에': 4578, '숨': 4579, '숨겨진': 4580, '숨막히는': 4581, '숨어있는': 4582, '숨은': 4583, '숨이': 4584, '쉬운': 4585, '쉽게': 4586, '쉽지': 4587, '슈스케': 4588, '슈퍼': 4589, '슈퍼맨': 4590, '스릴': 4591, '스릴감': 4592, '스릴과': 4593, '스릴도': 4594, '스릴러': 4595, '스릴러,': 4596, '스릴러.': 4597, '스릴러?': 4598, '스릴러가': 4599, '스릴러는': 4600, '스릴러도': 4601, '스릴러라고': 4602, '스릴러로': 4603, '스릴러를': 4604, '스릴러물': 4605, '스릴러의': 4606, '스릴은': 4607, '스릴이': 4608, '스릴있고': 4609, '스스로': 4610, '스스로를': 4611, '스칼렛': 4612, '스케일': 4613, '스케일도': 4614, '스케일이': 4615, '스콧': 4616, '스크린에': 4617, '스크린에서': 4618, '스크린으로': 4619, '스타': 4620, '스타뎀': 4621, '스타워즈': 4622, '스타일': 4623, '스타일은': 4624, 
'스타일의': 4625, '스타일이': 4626, '스텝업': 4627, '스토리': 4628, '스토리,': 4629, '스토리.': 4630, '스토리..': 4631, '스토리...': 4632, '스토리가': 4633, '스토리고': 4634, '스토리나': 4635, '스토리는': 4636, '스토리도': 4637, '스토리로': 4638, '스토리를': 4639, '스토리만': 4640, '스토리며': 4641, '스토리에': 4642, '스토리와': 4643, '스토리의': 4644, '스토리전개': 4645, '스토리전개가': 4646, '스토리지만': 4647, '스트레스': 4648, '스티브': 4649, '스티븐': 4650, '스파이': 4651, '스파이더맨': 4652, '스페인': 4653, '스포츠': 4654, '스필버그': 4655, '슬래셔': 4656, '슬슬': 4657, '슬퍼': 4658, '슬퍼서': 4659, '슬퍼요': 4660, '슬펐다.': 4661, '슬펐던': 4662, '슬프게': 4663, '슬프고': 4664, '슬프네요': 4665, '슬프다': 4666, '슬프다.': 4667, '슬프다..': 4668, '슬프다...': 4669, '슬프지만': 4670, '슬픈': 4671, '슬픈영화': 4672, '슬픔': 4673, '슬픔과': 4674, '슬픔을': 4675, '슬픔이': 4676, '승리': 4677, '시': 4678, '시각으로': 4679, '시간': 4680, '시간가는': 4681, '시간가는줄': 4682, '시간과': 4683, '시간낭비': 4684, '시간도': 4685, '시간만': 4686, '시간아까운': 4687, '시간아까움': 4688, '시간아깝다': 4689, '시간아깝다..': 4690, '시간에': 4691, '시간은': 4692, '시간을': 4693, '시간의': 4694, '시간이': 4695, '시걸': 4696, '시걸의': 4697, '시골': 4698, '시끄러워서': 4699, '시끄럽고': 4700, '시나리오': 4701, '시나리오,': 4702, '시나리오.': 4703, '시나리오가': 4704, '시나리오는': 4705, '시나리오도': 4706, '시나리오를': 4707, '시나리오에': 4708, '시나리오와': 4709, '시나리오의': 4710, '시대': 4711, '시대가': 4712, '시대를': 4713, '시대에': 4714, '시대의': 4715, '시대적': 4716, '시덥잖은': 4717, '시도': 4718, '시도가': 4719, '시도는': 4720, '시리즈': 4721, '시리즈.': 4722, '시리즈가': 4723, '시리즈는': 4724, '시리즈로': 4725, '시리즈를': 4726, '시리즈의': 4727, '시리즈중': 4728, '시리즈중에': 4729, '시바': 4730, '시사회': 4731, '시사회로': 4732, '시사회에서': 4733, '시선으로': 4734, '시선을': 4735, '시선이': 4736, '시시하고': 4737, '시시한': 4738, '시원하게': 4739, '시원한': 4740, '시작': 4741, '시작.': 4742, '시작된': 4743, '시작부터': 4744, '시작은': 4745, '시작을': 4746, '시작한': 4747, '시작해': 4748, '시작해서': 4749, '시절': 4750, '시절에': 4751, '시절을': 4752, '시절의': 4753, '시절이': 4754, '시점에서': 4755, '시종일관': 4756, '시즌': 4757, '시즌2': 4758, '시청': 4759, '시청률': 4760, '시청률은': 4761, '시청률을': 4762, '시청률이': 4763, '시청자': 4764, '시청자가': 4765, '시청자를': 4766, '시켜서': 4767, '시키는': 4768, '시키지': 4769, '시트콤': 4770, '시한부': 4771, '식상하고': 4772, '식상하다': 4773, '식상한': 4774, '식스센스': 4775, '식으로': 4776, '식의': 4777, '신': 4778, '신경': 4779, '신경을': 4780, '신고': 4781, '신기하다': 4782, '신기한': 4783, '신기할': 4784, '신나게': 4785, '신나는': 4786, '신데렐라': 4787, '신들린': 4788, '신민아': 4789, '신비로운': 4790, '신비롭고': 4791, '신선하고': 4792, '신선하다.': 4793, '신선한': 4794, '신선함이': 4795, '신선했고': 4796, '신세경': 4797, '신세계': 4798, '신은': 4799, '신은경': 4800, '신을': 4801, '신의': 4802, '신이': 4803, '신인': 4804, '신파극': 4805, '신하균': 4806, '실력': 4807, '실력이': 4808, '실망': 4809, '실망.': 4810, '실망..': 4811, '실망...': 4812, '실망도': 4813, '실망스러운': 4814, '실망스럽다.': 4815, '실망시키지': 4816, '실망을': 4817, '실망이': 4818, '실망이다': 4819, '실망이다.': 4820, '실망한': 4821, '실망했다.': 4822, '실상은': 4823, '실상을': 4824, '실제': 4825, '실제로': 4826, '실제로는': 4827, '실컷': 4828, '실패': 4829, '실패작': 4830, '실패한': 4831, '실화': 4832, '실화라고': 4833, '실화라는': 4834, '실화라는게': 4835, '실화라니': 4836, '실화라서': 4837, '실화를': 4838, '싫다': 4839, '싫다.': 4840, '싫어': 4841, '싫어서': 4842, '싫어하는': 4843, '싫어하는데': 4844, '싫은': 4845, '심각하게': 4846, '심각한': 4847, '심금을': 4848, '심리': 4849, '심리를': 4850, '심리묘사가': 4851, '심사위원': 4852, '심심한': 4853, '심심할때': 4854, '심심해서': 4855, '심오한': 4856, '심장': 4857, '심장을': 4858, '심장이': 4859, '심지어': 4860, '심하게': 4861, '심하다': 4862, '심한': 4863, '심했다': 4864, '심형래': 4865, '심히': 4866, '십점': 4867, '싱거운': 4868, '싶게': 4869, '싶네요': 4870, '싶네요.': 4871, '싶다': 4872, '싶다.': 4873, '싶다..': 4874, '싶다...': 4875, '싶다는': 4876, '싶다면': 4877, '싶습니다.': 4878, '싶어': 4879, '싶어도': 4880, '싶어서': 4881, '싶어요': 4882, '싶어요.': 4883, '싶어지는': 4884, '싶어하는': 4885, '싶었는데': 4886, '싶었다': 4887, '싶었다.': 4888, '싶었던': 
4889, '싶었음': 4890, '싶으면': 4891, '싶은': 4892, '싶은건지': 4893, '싶은데': 4894, '싶을': 4895, '싶을때': 4896, '싶을정도로': 4897, '싶음': 4898, '싶음.': 4899, '싶지': 4900, '싶지도': 4901, '싶지만': 4902, '싸구려': 4903, '싸우고': 4904, '싸우는': 4905, '싸움': 4906, '싸이코': 4907, '싸이코패스': 4908, '싹': 4909, '쌈마이': 4910, '쌍벽을': 4911, '쌍팔년도': 4912, '써도': 4913, '써라': 4914, '써서': 4915, '썩': 4916, '썩은': 4917, '쏙': 4918, '쓰': 4919, '쓰고': 4920, '쓰는': 4921, '쓰래기': 4922, '쓰레기': 4923, '쓰레기.': 4924, '쓰레기..': 4925, '쓰레기...': 4926, '쓰레기가': 4927, '쓰레기같은': 4928, '쓰레기네': 4929, '쓰레기는': 4930, '쓰레기다': 4931, '쓰레기다.': 4932, '쓰레기도': 4933, '쓰레기들': 4934, '쓰레기라': 4935, '쓰레기라고': 4936, '쓰레기로': 4937, '쓰레기를': 4938, '쓰레기에': 4939, '쓰레기영화': 4940, '쓰레기영화.': 4941, '쓰레기영화는': 4942, '쓰레기의': 4943, '쓰레기임': 4944, '쓰레기중에': 4945, '쓰렉': 4946, '쓰면': 4947, '쓰지': 4948, '쓴': 4949, '쓸': 4950, '쓸데없는': 4951, '쓸데없이': 4952, '쓸쓸한': 4953, '씁쓸한': 4954, '씨': 4955, '씨네21': 4956, '씨발': 4957, '씬': 4958, '씬은': 4959, '씬을': 4960, '씬이': 4961, '씹노잼': 4962, '아': 4963, '아!': 4964, '아,': 4965, '아.': 4966, '아..': 4967, '아...': 4968, '아....': 4969, '아~': 4970, '아기': 4971, '아기자기하고': 4972, '아기자기한': 4973, '아까운': 4974, '아까운데': 4975, '아까운영화': 4976, '아까울': 4977, '아까움': 4978, '아까움.': 4979, '아까움..': 4980, '아까움...': 4981, '아까워': 4982, '아까워서': 4983, '아까워요': 4984, '아까웠다': 4985, '아까웠다!!!': 4986, '아까웠다.': 4987, '아까웠던': 4988, '아깝': 4989, '아깝게': 4990, '아깝고': 4991, '아깝네': 4992, '아깝네요': 4993, '아깝네요.': 4994, '아깝다': 4995, '아깝다!': 4996, '아깝다!!!': 4997, '아깝다.': 4998, '아깝다..': 4999, '아깝다...': 5000, '아깝다고': 5001, '아깝다는': 5002, '아깝단': 5003, '아깝습니다.': 5004, '아깝지': 5005, '아나': 5006, '아날로그': 5007, '아내가': 5008, '아냐?': 5009, '아놀드': 5010, '아놀드의': 5011, '아놔': 5012, '아는': 5013, '아는데': 5014, '아니': 5015, '아니,': 5016, '아니고': 5017, '아니고,': 5018, '아니고.': 5019, '아니고..': 5020, '아니고...': 5021, '아니네': 5022, '아니네요': 5023, '아니다': 5024, '아니다.': 5025, '아니다..': 5026, '아니다...': 5027, '아니더라도': 5028, '아니라': 5029, '아니라,': 5030, '아니라고': 5031, '아니라는': 5032, '아니라면': 5033, '아니라서': 5034, '아니면': 5035, '아니야': 5036, '아니어도': 5037, '아니었나': 5038, '아니었다': 5039, '아니었다.': 5040, '아니었다면': 5041, '아니었으면': 5042, '아니었음': 5043, '아니였으면': 5044, '아니잖아': 5045, '아니지': 5046, '아니지.': 5047, '아니지..': 5048, '아니지만': 5049, '아니지만,': 5050, '아닌': 5051, '아닌,': 5052, '아닌...': 5053, '아닌가': 5054, '아닌가?': 5055, '아닌가요?': 5056, '아닌거': 5057, '아닌것': 5058, '아닌데': 5059, '아닌데..': 5060, '아닌데...': 5061, '아닌듯': 5062, '아닌듯.': 5063, '아닌지': 5064, '아닐까': 5065, '아닐까.': 5066, '아닐까?': 5067, '아님': 5068, '아님.': 5069, '아님..': 5070, '아님...': 5071, '아님?': 5072, '아닙니다': 5073, '아닙니다.': 5074, '아담': 5075, '아동용': 5076, '아들': 5077, '아들과': 5078, '아들을': 5079, '아들의': 5080, '아들이': 5081, '아래': 5082, '아련하고': 5083, '아련한': 5084, '아류작': 5085, '아름다운': 5086, '아름다움': 5087, '아름다움과': 5088, '아름다움에': 5089, '아름다움을': 5090, '아름다워서': 5091, '아름다웠고': 5092, '아름다웠다.': 5093, '아름다웠던': 5094, '아름답게': 5095, '아름답고': 5096, '아름답다': 5097, '아름답다.': 5098, '아마': 5099, '아마도': 5100, '아마추어': 5101, '아만다': 5102, '아메리칸': 5103, '아무': 5104, '아무것도': 5105, '아무나': 5106, '아무도': 5107, '아무래도': 5108, '아무런': 5109, '아무리': 5110, '아무리봐도': 5111, '아무생각없이': 5112, '아무튼': 5113, '아버지': 5114, '아버지가': 5115, '아버지는': 5116, '아버지를': 5117, '아버지와': 5118, '아버지의': 5119, '아빠': 5120, '아빠가': 5121, '아빠랑': 5122, '아빠와': 5123, '아쉬운': 5124, '아쉬운건': 5125, '아쉬울': 5126, '아쉬움': 5127, '아쉬움.': 5128, '아쉬움이': 5129, '아쉬워': 5130, '아쉬워서': 5131, '아쉬워요': 5132, '아쉬웠다.': 5133, '아쉬웠던': 5134, '아쉬웠지만': 5135, '아쉽': 5136, '아쉽게': 5137, '아쉽고': 5138, '아쉽긴': 5139, '아쉽네요': 5140, '아쉽네요.': 5141, '아쉽다': 5142, '아쉽다.': 5143, '아쉽다..': 5144, '아쉽다...': 5145, '아쉽지만': 5146, '아쉽지만,': 5147, '아역': 5148, '아예': 5149, '아오': 5150, 
'아오이': 5151, '아우': 5152, '아이': 5153, '아이가': 5154, '아이고': 5155, '아이는': 5156, '아이도': 5157, '아이돌': 5158, '아이들': 5159, '아이들과': 5160, '아이들도': 5161, '아이들에게': 5162, '아이들은': 5163, '아이들을': 5164, '아이들의': 5165, '아이들이': 5166, '아이디어': 5167, '아이디어가': 5168, '아이디어는': 5169, '아이를': 5170, '아이언맨': 5171, '아이에게': 5172, '아이와': 5173, '아이유': 5174, '아이의': 5175, '아저씨': 5176, '아저씨가': 5177, '아저씨는': 5178, '아저씨의': 5179, '아주': 5180, '아줌마': 5181, '아줌마가': 5182, '아직': 5183, '아직까지': 5184, '아직까지도': 5185, '아직도': 5186, '아직은': 5187, '아진짜': 5188, '아청법': 5189, '아침': 5190, '아침부터': 5191, '아침에': 5192, '아카데미': 5193, '아팠다.': 5194, '아프고': 5195, '아프다': 5196, '아프다.': 5197, '아프리카': 5198, '아픈': 5199, '아픔과': 5200, '아픔을': 5201, '아픔이': 5202, '아휴': 5203, '악당': 5204, '악당이': 5205, '악역': 5206, '악역이': 5207, '안': 5208, '안가고': 5209, '안가는': 5210, '안간다': 5211, '안간다.': 5212, '안감': 5213, '안감.': 5214, '안고': 5215, '안나': 5216, '안나고': 5217, '안나오고': 5218, '안나오는': 5219, '안나온다': 5220, '안나온다.': 5221, '안나옴': 5222, '안나와서': 5223, '안나왔으면': 5224, '안나지만': 5225, '안난다.': 5226, '안남기는데': 5227, '안다': 5228, '안돼': 5229, '안돼는': 5230, '안되게': 5231, '안되고': 5232, '안되나?': 5233, '안되네': 5234, '안되는': 5235, '안되는데': 5236, '안되면': 5237, '안되서': 5238, '안되요': 5239, '안된': 5240, '안된다': 5241, '안된다.': 5242, '안된다는': 5243, '안될': 5244, '안됨': 5245, '안됨.': 5246, '안됩니다.': 5247, '안든다.': 5248, '안듬': 5249, '안맞고': 5250, '안맞는': 5251, '안목이': 5252, '안무섭고': 5253, '안무섭다': 5254, '안보고': 5255, '안보길': 5256, '안보는': 5257, '안보는게': 5258, '안보는데': 5259, '안보면': 5260, '안본': 5261, '안본다': 5262, '안본다.': 5263, '안봄': 5264, '안봐도': 5265, '안봐서': 5266, '안봤는데': 5267, '안봤으면': 5268, '안봤지만': 5269, '안습': 5270, '안쓰는데': 5271, '안에': 5272, '안에서': 5273, '안의': 5274, '안젤리나': 5275, '안좋아하는데': 5276, '안좋은': 5277, '안타까운': 5278, '안타깝고': 5279, '안타깝네요': 5280, '안타깝다': 5281, '안타깝다.': 5282, '안타깝습니다.': 5283, '안타깝지만': 5284, '안하고': 5285, '안하는': 5286, '안하는데': 5287, '안하면': 5288, '안함': 5289, '안했는데': 5290, '안했으면': 5291, '앉아': 5292, '앉아서': 5293, '않게': 5294, '않고': 5295, '않고,': 5296, '않고.': 5297, '않고..': 5298, '않고...': 5299, '않기': 5300, '않길': 5301, '않나': 5302, '않나?': 5303, '않네': 5304, '않네요': 5305, '않네요.': 5306, '않는': 5307, '않는게': 5308, '않는다': 5309, '않는다.': 5310, '않는다..': 5311, '않는다...': 5312, '않는다면': 5313, '않는데': 5314, '않다': 5315, '않다.': 5316, '않다..': 5317, '않다는': 5318, '않습니다.': 5319, '않아': 5320, '않아도': 5321, '않아서': 5322, '않아요': 5323, '않아요.': 5324, '않았고': 5325, '않았나': 5326, '않았는데': 5327, '않았다': 5328, '않았다.': 5329, '않았다면': 5330, '않았던': 5331, '않았습니다.': 5332, '않았으면': 5333, '않았을': 5334, '않았지만': 5335, '않으면': 5336, '않으면서': 5337, '않은': 5338, '않은데': 5339, '않을': 5340, '않을까': 5341, '않을까?': 5342, '않음': 5343, '않음.': 5344, '않지만': 5345, '알': 5346, '알게': 5347, '알게된': 5348, '알게해준': 5349, '알겠는데': 5350, '알겠는데,': 5351, '알겠다': 5352, '알겠다.': 5353, '알겠으나': 5354, '알겠지만': 5355, '알고': 5356, '알고보니': 5357, '알고보면': 5358, '알려주는': 5359, '알려준': 5360, '알려지지': 5361, '알려진': 5362, '알면': 5363, '알면서': 5364, '알면서도': 5365, '알바': 5366, '알바가': 5367, '알바들': 5368, '알바들아': 5369, '알바들이': 5370, '알수': 5371, '알수가': 5372, '알수없는': 5373, '알수있는': 5374, '알아': 5375, '알아서': 5376, '알아야': 5377, '알았네': 5378, '알았는데': 5379, '알았는데...': 5380, '알았다': 5381, '알았다.': 5382, '알았다..': 5383, '알았더니': 5384, '알았습니다.': 5385, '알았으면': 5386, '알았음': 5387, '알았음.': 5388, '알았지만': 5389, '알지': 5390, '알지만': 5391, '암': 5392, '암울한': 5393, '암이': 5394, '암튼': 5395, '압권': 5396, '압권.': 5397, '압도적인': 5398, '앞': 5399, '앞뒤': 5400, '앞뒤가': 5401, '앞서간': 5402, '앞에': 5403, '앞에서': 5404, '앞으로': 5405, '앞으로도': 5406, '애': 5407, '애가': 5408, '애기': 5409, '애기가': 5410, '애는': 5411, '애니': 5412, '애니.': 5413, '애니가': 5414, '애니는': 5415, '애니도': 5416, '애니로': 5417, '애니를': 5418, 
'애니매이션': 5419, '애니메이션': 5420, '애니메이션.': 5421, '애니메이션으로': 5422, '애니메이션은': 5423, '애니메이션을': 5424, '애니메이션의': 5425, '애니메이션이': 5426, '애니의': 5427, '애니중': 5428, '애들': 5429, '애들도': 5430, '애들용': 5431, '애들은': 5432, '애들을': 5433, '애들이': 5434, '애들한테': 5435, '애를': 5436, '애매한': 5437, '애써': 5438, '애잔한': 5439, '애절한': 5440, '애초에': 5441, '애틋한': 5442, '액션': 5443, '액션,': 5444, '액션.': 5445, '액션과': 5446, '액션도': 5447, '액션마저': 5448, '액션만': 5449, '액션물': 5450, '액션신': 5451, '액션신이': 5452, '액션씬': 5453, '액션씬은': 5454, '액션씬이': 5455, '액션에': 5456, '액션영화': 5457, '액션영화가': 5458, '액션영화는': 5459, '액션영화의': 5460, '액션으로': 5461, '액션은': 5462, '액션을': 5463, '액션의': 5464, '액션이': 5465, '앤': 5466, '야': 5467, '야구를': 5468, '야동': 5469, '야동을': 5470, '야이': 5471, '야하지도': 5472, '야한': 5473, '약': 5474, '약간': 5475, '약간은': 5476, '약간의': 5477, '약빨고': 5478, '약을': 5479, '약하고': 5480, '약하다': 5481, '약하다.': 5482, '약한': 5483, '얄팍한': 5484, '양동근': 5485, '양심도': 5486, '양심적으로': 5487, '양아치': 5488, '얘기': 5489, '얘기가': 5490, '얘기는': 5491, '얘기를': 5492, '얘기하고': 5493, '어': 5494, '어거지': 5495, '어거지로': 5496, '어느': 5497, '어느것': 5498, '어느덧': 5499, '어느새': 5500, '어느순간': 5501, '어느정도': 5502, '어느하나': 5503, '어두운': 5504, '어둡고': 5505, '어디': 5506, '어디가': 5507, '어디가서': 5508, '어디까지': 5509, '어디다': 5510, '어디로': 5511, '어디서': 5512, '어디선가': 5513, '어디에': 5514, '어디에도': 5515, '어딘가': 5516, '어딜봐서': 5517, '어땠을까': 5518, '어떠한': 5519, '어떤': 5520, '어떨지': 5521, '어떻게': 5522, '어떻게든': 5523, '어려서': 5524, '어려운': 5525, '어려울': 5526, '어렴풋이': 5527, '어렵게': 5528, '어렵고': 5529, '어렵다': 5530, '어렵다.': 5531, '어렸을': 5532, '어렸을때': 5533, '어렸을때는': 5534, '어렸을땐': 5535, '어렸을적': 5536, '어른': 5537, '어른도': 5538, '어른들을': 5539, '어른들의': 5540, '어른들이': 5541, '어른을': 5542, '어른이': 5543, '어린': 5544, '어린시절': 5545, '어린이': 5546, '어린이들이': 5547, '어린이용': 5548, '어릴': 5549, '어릴때': 5550, '어릴때부터': 5551, '어릴땐': 5552, '어릴적': 5553, '어릴적에': 5554, '어마어마한': 5555, '어머니': 5556, '어머니가': 5557, '어머니의': 5558, '어메이징': 5559, '어색': 5560, '어색하고': 5561, '어색하지': 5562, '어색한': 5563, '어색해서': 5564, '어서': 5565, '어설퍼': 5566, '어설프게': 5567, '어설프고': 5568, '어설프다': 5569, '어설픈': 5570, '어수선하고': 5571, '어우': 5572, '어우러져': 5573, '어우러진': 5574, '어울리는': 5575, '어울리지': 5576, '어울린다.': 5577, '어이': 5578, '어이가': 5579, '어이없게': 5580, '어이없고': 5581, '어이없는': 5582, '어이없다': 5583, '어이없어서': 5584, '어이없음': 5585, '어정쩡한': 5586, '어제': 5587, '어중간한': 5588, '어째': 5589, '어째서': 5590, '어쨋든': 5591, '어쨌든': 5592, '어쩌고': 5593, '어쩌구': 5594, '어쩌다': 5595, '어쩌라고': 5596, '어쩌면': 5597, '어쩐지': 5598, '어쩔': 5599, '어쩔수': 5600, '어쩔수없이': 5601, '어쩜': 5602, '어찌': 5603, '어찌나': 5604, '어찌보면': 5605, '어차피': 5606, '어처구니': 5607, '어처구니가': 5608, '어처구니없는': 5609, '어케': 5610, '어휴': 5611, '어휴..': 5612, '어휴...': 5613, '억지': 5614, '억지가': 5615, '억지감동': 5616, '억지로': 5617, '억지스러운': 5618, '억지스런': 5619, '억지스럽고': 5620, '억지스럽다': 5621, '언니': 5622, '언니가': 5623, '언제': 5624, '언제까지': 5625, '언제나': 5626, '언제봐도': 5627, '언제쯤': 5628, '언젠가': 5629, '언젠가는': 5630, '얻어': 5631, '얻은': 5632, '얻을': 5633, '얼굴': 5634, '얼굴과': 5635, '얼굴도': 5636, '얼굴로': 5637, '얼굴만': 5638, '얼굴에': 5639, '얼굴은': 5640, '얼굴을': 5641, '얼굴이': 5642, '얼른': 5643, '얼마': 5644, '얼마나': 5645, '얼마전': 5646, '엄마': 5647, '엄마가': 5648, '엄마는': 5649, '엄마랑': 5650, '엄마를': 5651, '엄마와': 5652, '엄마의': 5653, '엄마한테': 5654, '엄정화': 5655, '엄청': 5656, '엄청나게': 5657, '엄청난': 5658, '없게': 5659, '없고': 5660, '없고,': 5661, '없고.': 5662, '없고..': 5663, '없고...': 5664, '없구': 5665, '없구나': 5666, '없군': 5667, '없기': 5668, '없나': 5669, '없나?': 5670, '없나요': 5671, '없나요?': 5672, '없냐': 5673, '없냐?': 5674, '없네': 5675, '없네.': 5676, '없네..': 5677, '없네...': 5678, '없네요': 5679, '없네요.': 5680, '없네요..': 5681, '없는': 5682, '없는,': 5683, '없는...': 5684, '없는거': 5685, 
'없는것': 5686, '없는게': 5687, '없는데': 5688, '없는듯': 5689, '없는영화': 5690, '없는지': 5691, '없다': 5692, '없다!': 5693, '없다.': 5694, '없다..': 5695, '없다...': 5696, '없다....': 5697, '없다고': 5698, '없다는': 5699, '없다는게': 5700, '없다면': 5701, '없던': 5702, '없습니다': 5703, '없습니다.': 5704, '없어': 5705, '없어.': 5706, '없어도': 5707, '없어서': 5708, '없어요': 5709, '없어요.': 5710, '없어요..': 5711, '없었고': 5712, '없었는데': 5713, '없었다': 5714, '없었다.': 5715, '없었다..': 5716, '없었다면': 5717, '없었던': 5718, '없었습니다.': 5719, '없었으면': 5720, '없었음': 5721, '없었음.': 5722, '없었지만': 5723, '없으니': 5724, '없으며': 5725, '없으면': 5726, '없으면서': 5727, '없을': 5728, '없을것': 5729, '없음': 5730, '없음.': 5731, '없음..': 5732, '없음...': 5733, '없이': 5734, '없이도': 5735, '없잖아': 5736, '없지': 5737, '없지만': 5738, '없지만,': 5739, '엉뚱한': 5740, '엉망': 5741, '엉망.': 5742, '엉망이고': 5743, '엉망인': 5744, '엉성': 5745, '엉성하게': 5746, '엉성하고': 5747, '엉성하기': 5748, '엉성하다.': 5749, '엉성한': 5750, '엉엉': 5751, '엉터리': 5752, '에': 5753, '에드워드': 5754, '에라이': 5755, '에로': 5756, '에로물': 5757, '에로영화': 5758, '에릭': 5759, '에밀리': 5760, '에서': 5761, '에이': 5762, '에이리언': 5763, '에피소드': 5764, '에효': 5765, '에휴': 5766, '에휴..': 5767, '에휴...': 5768, '엑스맨': 5769, '엔딩': 5770, '엔딩까지': 5771, '엔딩도': 5772, '엔딩에': 5773, '엔딩에서': 5774, '엔딩은': 5775, '엔딩을': 5776, '엔딩이': 5777, '엔딩크레딧이': 5778, '여': 5779, '여기': 5780, '여기가': 5781, '여기는': 5782, '여기서': 5783, '여기서도': 5784, '여기에': 5785, '여기저기': 5786, '여느': 5787, '여러': 5788, '여러가지': 5789, '여러가지를': 5790, '여러모로': 5791, '여러번': 5792, '여러번봐도': 5793, '여러분': 5794, '여름': 5795, '여름에': 5796, '여배우': 5797, '여배우가': 5798, '여배우는': 5799, '여배우들': 5800, '여배우들의': 5801, '여배우의': 5802, '여성': 5803, '여성을': 5804, '여성의': 5805, '여실히': 5806, '여운': 5807, '여운과': 5808, '여운도': 5809, '여운에': 5810, '여운은': 5811, '여운을': 5812, '여운이': 5813, '여인의': 5814, '여자': 5815, '여자가': 5816, '여자는': 5817, '여자도': 5818, '여자들': 5819, '여자들은': 5820, '여자들의': 5821, '여자들이': 5822, '여자를': 5823, '여자배우': 5824, '여자애': 5825, '여자애가': 5826, '여자에게': 5827, '여자와': 5828, '여자의': 5829, '여자주인공': 5830, '여자주인공이': 5831, '여자친구랑': 5832, '여전히': 5833, '여주': 5834, '여주가': 5835, '여주는': 5836, '여주의': 5837, '여주인공': 5838, '여주인공도': 5839, '여주인공은': 5840, '여주인공의': 5841, '여주인공이': 5842, '여지껏': 5843, '여친이': 5844, '여태': 5845, '여태까지': 5846, '여태껏': 5847, '여튼': 5848, '여행': 5849, '여행을': 5850, '역': 5851, '역겨운': 5852, '역겹고': 5853, '역겹다': 5854, '역겹다.': 5855, '역대': 5856, '역대급': 5857, '역량': 5858, '역량이': 5859, '역사': 5860, '역사가': 5861, '역사는': 5862, '역사를': 5863, '역사상': 5864, '역사에': 5865, '역사와': 5866, '역사의': 5867, '역사적': 5868, '역쉬': 5869, '역시': 5870, '역시..': 5871, '역시나': 5872, '역에': 5873, '역을': 5874, '역의': 5875, '역작.': 5876, '역할': 5877, '역할은': 5878, '역할을': 5879, '역할이': 5880, '연결도': 5881, '연결이': 5882, '연극을': 5883, '연기': 5884, '연기,': 5885, '연기.': 5886, '연기..': 5887, '연기...': 5888, '연기가': 5889, '연기까지': 5890, '연기나': 5891, '연기는': 5892, '연기도': 5893, '연기들도': 5894, '연기들이': 5895, '연기때문에': 5896, '연기력': 5897, '연기력,': 5898, '연기력과': 5899, '연기력도': 5900, '연기력에': 5901, '연기력으로': 5902, '연기력은': 5903, '연기력을': 5904, '연기력이': 5905, '연기로': 5906, '연기를': 5907, '연기만': 5908, '연기며': 5909, '연기에': 5910, '연기와': 5911, '연기의': 5912, '연기자': 5913, '연기자가': 5914, '연기자들': 5915, '연기자들의': 5916, '연기자들이': 5917, '연기잘하고': 5918, '연기잘하는': 5919, '연기파': 5920, '연기하는': 5921, '연기한': 5922, '연속': 5923, '연애': 5924, '연애를': 5925, '연애의': 5926, '연예인': 5927, '연예인이': 5928, '연출': 5929, '연출,': 5930, '연출.': 5931, '연출과': 5932, '연출도': 5933, '연출력': 5934, '연출력.': 5935, '연출력과': 5936, '연출력도': 5937, '연출력은': 5938, '연출력의': 5939, '연출력이': 5940, '연출로': 5941, '연출에': 5942, '연출은': 5943, '연출을': 5944, '연출의': 5945, '연출이': 5946, '연출이나': 5947, '연출한': 5948, '열': 5949, '열까지': 5950, '열라': 5951, '열린': 5952, '열받아서': 
5953, '열심히': 5954, '열연이': 5955, '열정과': 5956, '열정을': 5957, '열정이': 5958, '엽기적인': 5959, '였다.': 5960, '영': 5961, '영..': 5962, '영...': 5963, '영~': 5964, '영감을': 5965, '영구와': 5966, '영국': 5967, '영상': 5968, '영상,': 5969, '영상.': 5970, '영상과': 5971, '영상도': 5972, '영상만': 5973, '영상미': 5974, '영상미,': 5975, '영상미가': 5976, '영상미는': 5977, '영상미도': 5978, '영상미에': 5979, '영상미와': 5980, '영상으로': 5981, '영상은': 5982, '영상을': 5983, '영상이': 5984, '영어': 5985, '영웅': 5986, '영웅본색': 5987, '영원한': 5988, '영원히': 5989, '영향을': 5990, '영혼을': 5991, '영혼의': 5992, '영혼이': 5993, '영화': 5994, '영화!': 5995, '영화!!': 5996, '영화!!!': 5997, '영화!!!!': 5998, '영화\"': 5999, '영화,': 6000, '영화,,': 6001, '영화.': 6002, '영화..': 6003, '영화...': 6004, '영화....': 6005, '영화......': 6006, '영화?': 6007, '영화^^': 6008, '영화~': 6009, '영화~!': 6010, '영화~!!': 6011, '영화~~': 6012, '영화~~~': 6013, '영화ㅋ': 6014, '영화ㅋㅋ': 6015, '영화가': 6016, '영화감독': 6017, '영화감독이': 6018, '영화같다': 6019, '영화같다.': 6020, '영화같은': 6021, '영화계의': 6022, '영화고': 6023, '영화관': 6024, '영화관가서': 6025, '영화관에': 6026, '영화관에서': 6027, '영화군': 6028, '영화까지': 6029, '영화나': 6030, '영화내내': 6031, '영화냐': 6032, '영화냐?': 6033, '영화네': 6034, '영화네.': 6035, '영화네요': 6036, '영화네요!': 6037, '영화네요.': 6038, '영화네요..': 6039, '영화네요^^': 6040, '영화네요~': 6041, '영화는': 6042, '영화니까': 6043, '영화다': 6044, '영화다!': 6045, '영화다!!': 6046, '영화다.': 6047, '영화다..': 6048, '영화다...': 6049, '영화다운': 6050, '영화도': 6051, '영화들': 6052, '영화들은': 6053, '영화들을': 6054, '영화들의': 6055, '영화들이': 6056, '영화라': 6057, '영화라고': 6058, '영화라고..': 6059, '영화라고...': 6060, '영화라고는': 6061, '영화라기': 6062, '영화라기보다는': 6063, '영화라는': 6064, '영화라는게': 6065, '영화라니': 6066, '영화라도': 6067, '영화라면': 6068, '영화라서': 6069, '영화라지만': 6070, '영화란': 6071, '영화랑': 6072, '영화로': 6073, '영화로는': 6074, '영화로만': 6075, '영화로서': 6076, '영화를': 6077, '영화만': 6078, '영화만큼': 6079, '영화면': 6080, '영화보고': 6081, '영화보는': 6082, '영화보는내내': 6083, '영화보는데': 6084, '영화보다': 6085, '영화보다가': 6086, '영화보다는': 6087, '영화보다도': 6088, '영화보단': 6089, '영화보면': 6090, '영화보면서': 6091, '영화본': 6092, '영화사에': 6093, '영화속': 6094, '영화속의': 6095, '영화야': 6096, '영화야?': 6097, '영화에': 6098, '영화에는': 6099, '영화에서': 6100, '영화에서는': 6101, '영화에선': 6102, '영화에요': 6103, '영화에요.': 6104, '영화엔': 6105, '영화여서': 6106, '영화였는데': 6107, '영화였다': 6108, '영화였다.': 6109, '영화였습니다': 6110, '영화였습니다.': 6111, '영화였어요': 6112, '영화였어요.': 6113, '영화였음': 6114, '영화였음.': 6115, '영화예요': 6116, '영화예요.': 6117, '영화와': 6118, '영화의': 6119, '영화이고': 6120, '영화이다': 6121, '영화이다.': 6122, '영화이지만': 6123, '영화인': 6124, '영화인가': 6125, '영화인가?': 6126, '영화인거': 6127, '영화인것': 6128, '영화인데': 6129, '영화인데,': 6130, '영화인데..': 6131, '영화인데도': 6132, '영화인듯': 6133, '영화인듯.': 6134, '영화인줄': 6135, '영화인지': 6136, '영화일': 6137, '영화임': 6138, '영화임.': 6139, '영화임..': 6140, '영화임?': 6141, '영화임에도': 6142, '영화입니다': 6143, '영화입니다!': 6144, '영화입니다.': 6145, '영화입니다..': 6146, '영화입니다...': 6147, '영화자체가': 6148, '영화자체는': 6149, '영화자체도': 6150, '영화적': 6151, '영화제': 6152, '영화좀': 6153, '영화중': 6154, '영화중에': 6155, '영화중에서': 6156, '영화지': 6157, '영화지만': 6158, '영화지만,': 6159, '영화처럼': 6160, '영화치고': 6161, '영화치고는': 6162, '영화치곤': 6163, '영화판': 6164, '영화평점': 6165, '영화화': 6166, '영환': 6167, '영환데': 6168, '영환줄': 6169, '영활': 6170, '옆에': 6171, '옆에서': 6172, '예': 6173, '예.': 6174, '예고편': 6175, '예고편만': 6176, '예고편에': 6177, '예고편을': 6178, '예고편이': 6179, '예뻐서': 6180, '예쁘게': 6181, '예쁘고': 6182, '예쁘다': 6183, '예쁘다.': 6184, '예쁜': 6185, '예쁨': 6186, '예산': 6187, '예상': 6188, '예상되는': 6189, '예상외로': 6190, '예상을': 6191, '예상치': 6192, '예술': 6193, '예술로': 6194, '예술은': 6195, '예술을': 6196, '예술의': 6197, '예술이': 6198, '예술이다': 6199, '예전': 6200, '예전에': 6201, '예전에는': 6202, '예전의': 6203, '예측': 6204, '옛': 6205, '옛날': 6206, '옛날에': 6207, '옛날엔': 6208, '옛날영화라': 6209, '옛다': 
6210, '오': 6211, '오~': 6212, '오그라드는': 6213, '오글': 6214, '오글거려': 6215, '오글거려서': 6216, '오글거리고': 6217, '오글거리는': 6218, '오글오글': 6219, '오는': 6220, '오늘': 6221, '오늘도': 6222, '오늘은': 6223, '오늘의': 6224, '오드리': 6225, '오락': 6226, '오락영화': 6227, '오래': 6228, '오래간만에': 6229, '오래도록': 6230, '오래된': 6231, '오래오래': 6232, '오래전': 6233, '오래전에': 6234, '오랜': 6235, '오랜만에': 6236, '오랫동안': 6237, '오랫만에': 6238, '오로라': 6239, '오로지': 6240, '오리지널': 6241, '오면': 6242, '오버': 6243, '오브': 6244, '오빠': 6245, '오오': 6246, '오우': 6247, '오우삼': 6248, '오유에서': 6249, '오인혜': 6250, '오지': 6251, '오직': 6252, '오페라의': 6253, '오프닝': 6254, '오히려': 6255, '온': 6256, '온갖': 6257, '온다.': 6258, '온몸에': 6259, '온전히': 6260, '온통': 6261, '올': 6262, '올레': 6263, '올레티비': 6264, '올리고': 6265, '올만에': 6266, '올바른': 6267, '올해': 6268, '올해의': 6269, '옴니버스': 6270, '옷': 6271, '옷을': 6272, '와': 6273, '와..': 6274, '와...': 6275, '와..진짜': 6276, '와~': 6277, '와나': 6278, '와닿는': 6279, '와닿는다.': 6280, '와닿지': 6281, '와서': 6282, '와우': 6283, '와이어': 6284, '와중에': 6285, '완벽': 6286, '완벽하게': 6287, '완벽하다': 6288, '완벽하다.': 6289, '완벽한': 6290, '완벽히': 6291, '완성도': 6292, '완성도가': 6293, '완성도는': 6294, '완성도를': 6295, '완소': 6296, '완전': 6297, '완전한': 6298, '완전히': 6299, '완젼': 6300, '완존': 6301, '완죤': 6302, '왔는데': 6303, '왔다': 6304, '왔다갔다': 6305, '왔습니다': 6306, '왔습니다.': 6307, '왔으면': 6308, '왕': 6309, '왕가위': 6310, '왕가위의': 6311, '왕조현': 6312, '왜': 6313, '왜?': 6314, '왜들': 6315, '왜이래': 6316, '왜이래?': 6317, '왜이렇게': 6318, '왜이리': 6319, '왜자꾸': 6320, '왜캐': 6321, '왜케': 6322, '왠': 6323, '왠만하면': 6324, '왠만한': 6325, '왠만해선': 6326, '왠지': 6327, '왤캐': 6328, '왤케': 6329, '외': 6330, '외계인': 6331, '외계인이': 6332, '외국': 6333, '외국인': 6334, '외로운': 6335, '외로움을': 6336, '외모': 6337, '외모가': 6338, '외에': 6339, '외에는': 6340, '외엔': 6341, '요': 6342, '요근래': 6343, '요리': 6344, '요새': 6345, '요세': 6346, '요소가': 6347, '요소는': 6348, '요소도': 6349, '요소들이': 6350, '요소를': 6351, '요즘': 6352, '요즘에': 6353, '요즘엔': 6354, '요즘은': 6355, '욕': 6356, '욕나오는': 6357, '욕나온다': 6358, '욕나옴': 6359, '욕만': 6360, '욕망을': 6361, '욕망의': 6362, '욕심이': 6363, '욕을': 6364, '욕이': 6365, '욕하고': 6366, '욕하는': 6367, '욕하면서': 6368, '용가리': 6369, '용감한': 6370, '용기를': 6371, '용기에': 6372, '용두사미': 6373, '용서가': 6374, '용서와': 6375, '용서할': 6376, '우는': 6377, '우디': 6378, '우뢰매': 6379, '우리': 6380, '우리가': 6381, '우리나라': 6382, '우리나라는': 6383, '우리나라도': 6384, '우리나라에': 6385, '우리나라에서': 6386, '우리나라의': 6387, '우리네': 6388, '우리는': 6389, '우리도': 6390, '우리들의': 6391, '우리를': 6392, '우리에게': 6393, '우리와': 6394, '우리의': 6395, '우리집': 6396, '우린': 6397, '우선': 6398, '우아한': 6399, '우연히': 6400, '우와': 6401, '우왕': 6402, '우울하고': 6403, '우울한': 6404, '우익': 6405, '우정': 6406, '우정,': 6407, '우정과': 6408, '우정을': 6409, '우정이': 6410, '우주': 6411, '운': 6412, '운동': 6413, '운명을': 6414, '울': 6415, '울게': 6416, '울고': 6417, '울리는': 6418, '울림이': 6419, '울면서': 6420, '울었네요': 6421, '울었다': 6422, '울었다.': 6423, '울었던': 6424, '울었습니다': 6425, '울었어요': 6426, '울었어요.': 6427, '울컥': 6428, '움직이거나': 6429, '움직이는': 6430, '웃게': 6431, '웃겨': 6432, '웃겨서': 6433, '웃겨요': 6434, '웃겼다': 6435, '웃겼음': 6436, '웃고': 6437, '웃기게': 6438, '웃기고': 6439, '웃기기도': 6440, '웃기긴': 6441, '웃기네': 6442, '웃기는': 6443, '웃기다': 6444, '웃기다.': 6445, '웃기려고': 6446, '웃기지': 6447, '웃기지도': 6448, '웃긴': 6449, '웃긴건': 6450, '웃긴다': 6451, '웃김': 6452, '웃김.': 6453, '웃는': 6454, '웃다': 6455, '웃다가': 6456, '웃어야': 6457, '웃었다': 6458, '웃었다.': 6459, '웃으며': 6460, '웃으면서': 6461, '웃을': 6462, '웃음': 6463, '웃음과': 6464, '웃음도': 6465, '웃음만': 6466, '웃음을': 6467, '웃음이': 6468, '웃지': 6469, '웅장한': 6470, '워낙': 6471, '원': 6472, '원래': 6473, '원숭이': 6474, '원작': 6475, '원작과': 6476, '원작도': 6477, '원작보다': 6478, '원작에': 6479, '원작으로': 6480, '원작은': 6481, '원작을': 6482, '원작의': 6483, '원작이': 6484, 
'원작이랑': 6485, '원작인': 6486, '원작자가': 6487, '원조': 6488, '원주율': 6489, '원표': 6490, '원피스': 6491, '원하는': 6492, '웨슬리': 6493, '웬': 6494, '웬만하면': 6495, '웬만한': 6496, '웬만해선': 6497, '웬지': 6498, '웰메이드': 6499, '웹툰': 6500, '위대한': 6501, '위대함을': 6502, '위로가': 6503, '위안부': 6504, '위에': 6505, '위에서': 6506, '위트있는': 6507, '위하여': 6508, '위한': 6509, '위해': 6510, '위해서': 6511, '위험한': 6512, '윌리스': 6513, '유덕화': 6514, '유덕화가': 6515, '유덕화의': 6516, '유럽': 6517, '유머': 6518, '유머가': 6519, '유머도': 6520, '유머와': 6521, '유명': 6522, '유명한': 6523, '유명해서': 6524, '유발': 6525, '유발하는': 6526, '유승준': 6527, '유아용': 6528, '유아인': 6529, '유익한': 6530, '유일하게': 6531, '유일한': 6532, '유치': 6533, '유치뽕짝': 6534, '유치하게': 6535, '유치하고': 6536, '유치하기': 6537, '유치하다': 6538, '유치하다.': 6539, '유치하다..': 6540, '유치하다고': 6541, '유치하지': 6542, '유치하지만': 6543, '유치한': 6544, '유치함': 6545, '유치함.': 6546, '유치함의': 6547, '유치해': 6548, '유치해서': 6549, '유쾌': 6550, '유쾌하게': 6551, '유쾌하고': 6552, '유쾌한': 6553, '으': 6554, '으로': 6555, '으리!': 6556, '으리가': 6557, '은': 6558, '은근': 6559, '은근히': 6560, '은은한': 6561, '을': 6562, '음': 6563, '음..': 6564, '음...': 6565, '음....': 6566, '음식': 6567, '음악': 6568, '음악,': 6569, '음악.': 6570, '음악과': 6571, '음악까지': 6572, '음악도': 6573, '음악만': 6574, '음악에': 6575, '음악으로': 6576, '음악은': 6577, '음악을': 6578, '음악의': 6579, '음악이': 6580, '음향': 6581, '응?': 6582, '응원합니다.': 6583, '의': 6584, '의도가': 6585, '의도는': 6586, '의도로': 6587, '의도를': 6588, '의도한': 6589, '의리': 6590, '의리로': 6591, '의문을': 6592, '의문이': 6593, '의미': 6594, '의미가': 6595, '의미는': 6596, '의미도': 6597, '의미로': 6598, '의미를': 6599, '의미없는': 6600, '의미에서': 6601, '의미있는': 6602, '의사가': 6603, '의상': 6604, '의심이': 6605, '의외로': 6606, '의외의': 6607, '의지가': 6608, '의한': 6609, '의해': 6610, '이': 6611, '이감독': 6612, '이거': 6613, '이거?': 6614, '이거는': 6615, '이거랑': 6616, '이거밖에': 6617, '이거보고': 6618, '이거보느니': 6619, '이거보다': 6620, '이거보단': 6621, '이거보면': 6622, '이건': 6623, '이건...': 6624, '이건머': 6625, '이건뭐': 6626, '이건좀': 6627, '이건진짜': 6628, '이걸': 6629, '이걸로': 6630, '이걸보고': 6631, '이것': 6632, '이것도': 6633, '이것만': 6634, '이것밖에': 6635, '이것보다': 6636, '이것보단': 6637, '이것은': 6638, '이것을': 6639, '이것이': 6640, '이것저것': 6641, '이게': 6642, '이게뭐야': 6643, '이게왜': 6644, '이기는': 6645, '이기적이고': 6646, '이기적인': 6647, '이길': 6648, '이끌어': 6649, '이나영': 6650, '이다': 6651, '이다.': 6652, '이도': 6653, '이도저도': 6654, '이드라마': 6655, '이들에게': 6656, '이들은': 6657, '이들을': 6658, '이들의': 6659, '이들이': 6660, '이따구로': 6661, '이따위': 6662, '이따위로': 6663, '이딴': 6664, '이딴거': 6665, '이딴걸': 6666, '이딴것도': 6667, '이딴게': 6668, '이딴영화': 6669, '이딴영화가': 6670, '이때': 6671, '이때까지': 6672, '이때부터': 6673, '이라': 6674, '이라고': 6675, '이라는': 6676, '이라도': 6677, '이란': 6678, '이래': 6679, '이래?': 6680, '이래서': 6681, '이러니': 6682, '이러면': 6683, '이러한': 6684, '이런': 6685, '이런거': 6686, '이런건': 6687, '이런걸': 6688, '이런것도': 6689, '이런게': 6690, '이런데': 6691, '이런드라마': 6692, '이런류의': 6693, '이런식으로': 6694, '이런영화': 6695, '이런영화가': 6696, '이런영화는': 6697, '이런영화도': 6698, '이런영화를': 6699, '이런영화에': 6700, '이런일이': 6701, '이런저런': 6702, '이럴': 6703, '이럴거면': 6704, '이렇게': 6705, '이렇게까지': 6706, '이렇게나': 6707, '이렇게도': 6708, '이렇게밖에': 6709, '이렇지': 6710, '이루는': 6711, '이루어진': 6712, '이룬': 6713, '이를': 6714, '이름': 6715, '이름도': 6716, '이름만': 6717, '이름에': 6718, '이름으로': 6719, '이름을': 6720, '이름이': 6721, '이리': 6722, '이리도': 6723, '이리저리': 6724, '이만큼': 6725, '이만한': 6726, '이미': 6727, '이미지': 6728, '이미지가': 6729, '이민기': 6730, '이번': 6731, '이번에': 6732, '이번에는': 6733, '이번엔': 6734, '이범수': 6735, '이병헌': 6736, '이보다': 6737, '이보단': 6738, '이뻐': 6739, '이뻐서': 6740, '이쁘게': 6741, '이쁘고': 6742, '이쁘다': 6743, '이쁘다.': 6744, '이쁜': 6745, '이쁨': 6746, '이상': 6747, '이상도': 6748, '이상우': 6749, '이상으로': 6750, '이상은': 6751, '이상을': 6752, '이상의': 6753, '이상이': 6754, 
'이상하게': 6755, '이상하고': 6756, '이상하다': 6757, '이상하다.': 6758, '이상한': 6759, '이상해': 6760, '이상해서': 6761, '이세상에': 6762, '이소룡': 6763, '이소룡이': 6764, '이수근': 6765, '이스트우드': 6766, '이승기': 6767, '이시대에': 6768, '이야': 6769, '이야기': 6770, '이야기,': 6771, '이야기.': 6772, '이야기..': 6773, '이야기...': 6774, '이야기가': 6775, '이야기는': 6776, '이야기도': 6777, '이야기들이': 6778, '이야기라': 6779, '이야기로': 6780, '이야기를': 6781, '이야기에': 6782, '이야기와': 6783, '이야기의': 6784, '이야기지만': 6785, '이어': 6786, '이어서': 6787, '이어지는': 6788, '이연걸': 6789, '이연걸의': 6790, '이연걸이': 6791, '이연희': 6792, '이영화': 6793, '이영화가': 6794, '이영화는': 6795, '이영화도': 6796, '이영화로': 6797, '이영화를': 6798, '이영화보고': 6799, '이영화에': 6800, '이영화에서': 6801, '이영화의': 6802, '이용한': 6803, '이용해': 6804, '이유': 6805, '이유.': 6806, '이유가': 6807, '이유는': 6808, '이유도': 6809, '이유로': 6810, '이유를': 6811, '이유없이': 6812, '이은': 6813, '이의': 6814, '이전': 6815, '이전에': 6816, '이정도': 6817, '이정도는': 6818, '이정도로': 6819, '이정도면': 6820, '이정도의': 6821, '이정재': 6822, '이제': 6823, '이제껏': 6824, '이제는': 6825, '이제서야': 6826, '이제야': 6827, '이제와서': 6828, '이젠': 6829, '이지만': 6830, '이처럼': 6831, '이탈리아': 6832, '이토록': 6833, '이하': 6834, '이하도': 6835, '이하의': 6836, '이해': 6837, '이해가': 6838, '이해는': 6839, '이해도': 6840, '이해를': 6841, '이해불가': 6842, '이해안되는': 6843, '이해안됨': 6844, '이해하고': 6845, '이해하기': 6846, '이해하는': 6847, '이해하지': 6848, '이해할': 6849, '이해할수': 6850, '이해할수없는': 6851, '이후': 6852, '이후로': 6853, '이후에': 6854, '이후의': 6855, '익숙한': 6856, '인': 6857, '인가': 6858, '인간': 6859, '인간들': 6860, '인간들은': 6861, '인간들이': 6862, '인간에': 6863, '인간은': 6864, '인간을': 6865, '인간의': 6866, '인간이': 6867, '인간적으로': 6868, '인간적인': 6869, '인것': 6870, '인기': 6871, '인기가': 6872, '인내심': 6873, '인내심을': 6874, '인데': 6875, '인도': 6876, '인도영화': 6877, '인디아나': 6878, '인류의': 6879, '인물': 6880, '인물들': 6881, '인물들의': 6882, '인물들이': 6883, '인물을': 6884, '인물의': 6885, '인물이': 6886, '인상': 6887, '인상깊게': 6888, '인상깊다.': 6889, '인상깊었던': 6890, '인상깊은': 6891, '인상이': 6892, '인상적': 6893, '인상적.': 6894, '인상적이다.': 6895, '인상적인': 6896, '인생': 6897, '인생과': 6898, '인생에': 6899, '인생에서': 6900, '인생영화': 6901, '인생은': 6902, '인생을': 6903, '인생의': 6904, '인생이': 6905, '인연이': 6906, '인정': 6907, '인종차별': 6908, '인줄': 6909, '인지': 6910, '인터넷': 6911, '인피니트': 6912, '인한': 6913, '인해': 6914, '일': 6915, '일깨워': 6916, '일깨워주는': 6917, '일깨워준': 6918, '일단': 6919, '일반': 6920, '일반인이': 6921, '일방적인': 6922, '일본': 6923, '일본식': 6924, '일본에': 6925, '일본에서': 6926, '일본영화': 6927, '일본영화가': 6928, '일본영화는': 6929, '일본영화의': 6930, '일본은': 6931, '일본을': 6932, '일본의': 6933, '일본이': 6934, '일본인': 6935, '일본판': 6936, '일부': 6937, '일부러': 6938, '일상': 6939, '일상을': 6940, '일상의': 6941, '일상적인': 6942, '일어나는': 6943, '일어나지': 6944, '일어난': 6945, '일어날': 6946, '일요일': 6947, '일은': 6948, '일을': 6949, '일이': 6950, '일점도': 6951, '일찍': 6952, '일품': 6953, '읽고': 6954, '읽는': 6955, '읽어보고': 6956, '읽은': 6957, '잃게': 6958, '잃고': 6959, '잃어버린': 6960, '잃은': 6961, '잃지': 6962, '임': 6963, '임권택': 6964, '임수정': 6965, '임창정': 6966, '임창정의': 6967, '임청하': 6968, '임팩트': 6969, '임팩트가': 6970, '입': 6971, '입가에': 6972, '입고': 6973, '입니다': 6974, '입니다.': 6975, '입에': 6976, '입은': 6977, '입을': 6978, '입장에서': 6979, '입장이': 6980, '잇고': 6981, '잇는': 6982, '있게': 6983, '있겠지': 6984, '있겠지만': 6985, '있고': 6986, '있고,': 6987, '있구나': 6988, '있구나.': 6989, '있기는': 6990, '있기에': 6991, '있긴': 6992, '있나': 6993, '있나.': 6994, '있나..': 6995, '있나?': 6996, '있나요?': 6997, '있냐': 6998, '있네': 6999, '있네요': 7000, '있네요.': 7001, '있는': 7002, '있는..': 7003, '있는가?': 7004, '있는거': 7005, '있는건': 7006, '있는것': 7007, '있는것도': 7008, '있는게': 7009, '있는데': 7010, '있는데,': 7011, '있는영화': 7012, '있는지': 7013, '있다': 7014, '있다!': 7015, '있다.': 7016, '있다..': 7017, '있다...': 7018, '있다가': 7019, '있다고': 7020, '있다는': 7021, '있다는걸': 7022, '있다는게': 
7023, '있다니': 7024, '있다면': 7025, '있다면..': 7026, '있던': 7027, '있도록': 7028, '있습니다': 7029, '있습니다.': 7030, '있어': 7031, '있어도': 7032, '있어보이려고': 7033, '있어서': 7034, '있어야': 7035, '있어요': 7036, '있어요.': 7037, '있었고': 7038, '있었는데': 7039, '있었다': 7040, '있었다.': 7041, '있었다면': 7042, '있었던': 7043, '있었습니다.': 7044, '있었으나': 7045, '있었으면': 7046, '있었을': 7047, '있었을까?': 7048, '있었음': 7049, '있었지만': 7050, '있었지만,': 7051, '있으나': 7052, '있으니': 7053, '있으면': 7054, '있을': 7055, '있을까': 7056, '있을까.': 7057, '있을까..': 7058, '있을까...': 7059, '있을까?': 7060, '있을때': 7061, '있을수': 7062, '있음': 7063, '있음.': 7064, '있지': 7065, '있지.': 7066, '있지?': 7067, '있지만': 7068, '있지만,': 7069, '잊고': 7070, '잊을': 7071, '잊을수': 7072, '잊을수가': 7073, '잊지': 7074, '잊지못할': 7075, '잊혀지지': 7076, '잊혀지지가': 7077, '자': 7078, '자격이': 7079, '자고': 7080, '자극적이고': 7081, '자극적인': 7082, '자극하는': 7083, '자기': 7084, '자기가': 7085, '자꾸': 7086, '자꾸만': 7087, '자는': 7088, '자다가': 7089, '자동차': 7090, '자리가': 7091, '자리를': 7092, '자리에서': 7093, '자막': 7094, '자막으로': 7095, '자막을': 7096, '자막이': 7097, '자본주의': 7098, '자세한': 7099, '자세히': 7100, '자식': 7101, '자식을': 7102, '자신': 7103, '자신들의': 7104, '자신만의': 7105, '자신에': 7106, '자신에게': 7107, '자신을': 7108, '자신의': 7109, '자신이': 7110, '자연과': 7111, '자연스러운': 7112, '자연스럽게': 7113, '자연스럽고': 7114, '자연스럽지': 7115, '자연스레': 7116, '자연의': 7117, '자유롭게': 7118, '자유를': 7119, '자의': 7120, '자주': 7121, '자체': 7122, '자체.': 7123, '자체...': 7124, '자체가': 7125, '자체는': 7126, '자체도': 7127, '자체로': 7128, '자체를': 7129, '자체에': 7130, '자체의': 7131, '자칫': 7132, '자칭': 7133, '작': 7134, '작가': 7135, '작가가': 7136, '작가는': 7137, '작가님': 7138, '작가의': 7139, '작고': 7140, '작년에': 7141, '작위적이고': 7142, '작위적인': 7143, '작은': 7144, '작품': 7145, '작품!': 7146, '작품!!': 7147, '작품,': 7148, '작품.': 7149, '작품..': 7150, '작품...': 7151, '작품도': 7152, '작품들': 7153, '작품성': 7154, '작품성도': 7155, '작품성은': 7156, '작품성을': 7157, '작품성이': 7158, '작품에': 7159, '작품에서': 7160, '작품으로': 7161, '작품은': 7162, '작품을': 7163, '작품의': 7164, '작품이': 7165, '작품이다': 7166, '작품이다.': 7167, '작품이라': 7168, '작품이라고': 7169, '작품이었다.': 7170, '작품인데': 7171, '작품입니다': 7172, '작품입니다.': 7173, '작품중': 7174, '잔': 7175, '잔뜩': 7176, '잔인하게': 7177, '잔인하고': 7178, '잔인하기만': 7179, '잔인하지도': 7180, '잔인하지만': 7181, '잔인한': 7182, '잔잔하게': 7183, '잔잔하고': 7184, '잔잔하니': 7185, '잔잔하면서': 7186, '잔잔하면서도': 7187, '잔잔하지만': 7188, '잔잔한': 7189, '잔잔히': 7190, '잔혹한': 7191, '잘': 7192, '잘나가다가': 7193, '잘도': 7194, '잘된': 7195, '잘만': 7196, '잘만든': 7197, '잘만들어진': 7198, '잘만들었다': 7199, '잘만들었다.': 7200, '잘못': 7201, '잘못된': 7202, '잘못을': 7203, '잘보고': 7204, '잘봤다.': 7205, '잘봤습니다': 7206, '잘봤습니다.': 7207, '잘봤어요': 7208, '잘생기고': 7209, '잘생긴': 7210, '잘생김': 7211, '잘어울리는': 7212, '잘하고': 7213, '잘하네': 7214, '잘하네요': 7215, '잘하는': 7216, '잘하시고': 7217, '잘한': 7218, '잘한다': 7219, '잘한다.': 7220, '잘함': 7221, '잘해': 7222, '잘해서': 7223, '잠': 7224, '잠깐': 7225, '잠든': 7226, '잠수함': 7227, '잠시': 7228, '잠시나마': 7229, '잠을': 7230, '잠이': 7231, '잠이나': 7232, '잡고': 7233, '잡는': 7234, '잡은': 7235, '잡을': 7236, '잤다': 7237, '잤다.': 7238, '잤음': 7239, '장': 7240, '장국영': 7241, '장근석': 7242, '장나라': 7243, '장난': 7244, '장난으로': 7245, '장난이': 7246, '장난하나': 7247, '장난하냐': 7248, '장난하냐?': 7249, '장르': 7250, '장르가': 7251, '장르는': 7252, '장르를': 7253, '장르에': 7254, '장르의': 7255, '장면': 7256, '장면,': 7257, '장면.': 7258, '장면과': 7259, '장면도': 7260, '장면들': 7261, '장면들은': 7262, '장면들이': 7263, '장면마다': 7264, '장면만': 7265, '장면에': 7266, '장면에서': 7267, '장면에선': 7268, '장면으로': 7269, '장면은': 7270, '장면을': 7271, '장면의': 7272, '장면이': 7273, '장점을': 7274, '장진': 7275, '장혁': 7276, '재': 7277, '재개봉': 7278, '재난': 7279, '재난영화': 7280, '재난영화의': 7281, '재능이': 7282, '재대로': 7283, '재미': 7284, '재미,': 7285, '재미.': 7286, '재미가': 7287, '재미까지': 7288, '재미나게': 7289, '재미난': 7290, 
'재미는': 7291, '재미도': 7292, '재미도없고': 7293, '재미로': 7294, '재미를': 7295, '재미만': 7296, '재미없게': 7297, '재미없고': 7298, '재미없고,': 7299, '재미없네': 7300, '재미없네.': 7301, '재미없네..': 7302, '재미없네...': 7303, '재미없네요': 7304, '재미없네요.': 7305, '재미없는': 7306, '재미없는데': 7307, '재미없다': 7308, '재미없다.': 7309, '재미없다..': 7310, '재미없다...': 7311, '재미없다고': 7312, '재미없다는': 7313, '재미없습니다.': 7314, '재미없어': 7315, '재미없어..': 7316, '재미없어도': 7317, '재미없어서': 7318, '재미없어요': 7319, '재미없어요.': 7320, '재미없었는데': 7321, '재미없었다': 7322, '재미없었다.': 7323, '재미없었던': 7324, '재미없었음': 7325, '재미없었음.': 7326, '재미없으면': 7327, '재미없을': 7328, '재미없음': 7329, '재미없음.': 7330, '재미없음..': 7331, '재미없음...': 7332, '재미에': 7333, '재미와': 7334, '재미있게': 7335, '재미있고': 7336, '재미있고,': 7337, '재미있네요': 7338, '재미있네요.': 7339, '재미있는': 7340, '재미있는데': 7341, '재미있다': 7342, '재미있다!': 7343, '재미있다.': 7344, '재미있다..': 7345, '재미있다고': 7346, '재미있던데': 7347, '재미있습니다': 7348, '재미있습니다.': 7349, '재미있어': 7350, '재미있어서': 7351, '재미있어요': 7352, '재미있어요!': 7353, '재미있어요.': 7354, '재미있어요^^': 7355, '재미있었고': 7356, '재미있었는데': 7357, '재미있었다': 7358, '재미있었다.': 7359, '재미있었던': 7360, '재미있었습니다': 7361, '재미있었습니다.': 7362, '재미있었어요': 7363, '재미있었어요!': 7364, '재미있었어요.': 7365, '재미있었음': 7366, '재미있을': 7367, '재미있음': 7368, '재미있음.': 7369, '재밋게': 7370, '재밋고': 7371, '재밋네요': 7372, '재밋는': 7373, '재밋는데': 7374, '재밋다': 7375, '재밋다.': 7376, '재밋다고': 7377, '재밋어': 7378, '재밋어요': 7379, '재밋음': 7380, '재밌게': 7381, '재밌게본': 7382, '재밌게봄': 7383, '재밌게봤는데': 7384, '재밌게봤다': 7385, '재밌게봤습니다': 7386, '재밌게봤어요': 7387, '재밌게봤음': 7388, '재밌고': 7389, '재밌고,': 7390, '재밌기만': 7391, '재밌긴': 7392, '재밌네': 7393, '재밌네.': 7394, '재밌네요': 7395, '재밌네요.': 7396, '재밌네요^^': 7397, '재밌는': 7398, '재밌는데': 7399, '재밌는데?': 7400, '재밌는영화': 7401, '재밌다': 7402, '재밌다!': 7403, '재밌다!!': 7404, '재밌다.': 7405, '재밌다..': 7406, '재밌다고': 7407, '재밌다는': 7408, '재밌당': 7409, '재밌던데': 7410, '재밌습니다': 7411, '재밌습니다!': 7412, '재밌습니다.': 7413, '재밌어': 7414, '재밌어서': 7415, '재밌어요': 7416, '재밌어요!': 7417, '재밌어요!!': 7418, '재밌어요.': 7419, '재밌어요^^': 7420, '재밌어요~': 7421, '재밌었고': 7422, '재밌었는데': 7423, '재밌었는데..': 7424, '재밌었는데...': 7425, '재밌었다': 7426, '재밌었다.': 7427, '재밌었다..': 7428, '재밌었던': 7429, '재밌었습니다.': 7430, '재밌었어요': 7431, '재밌었어요!': 7432, '재밌었어요.': 7433, '재밌었어요~': 7434, '재밌었음': 7435, '재밌었음.': 7436, '재밌으면': 7437, '재밌을': 7438, '재밌음': 7439, '재밌음!': 7440, '재밌음.': 7441, '재밌지': 7442, '재밌지는': 7443, '재밌지도': 7444, '재밌지만': 7445, '재치있는': 7446, '재평가': 7447, '잭': 7448, '잼': 7449, '잼나게': 7450, '잼나요': 7451, '잼씀': 7452, '잼없다': 7453, '잼없음': 7454, '잼있게': 7455, '잼있고': 7456, '잼있네요': 7457, '잼있는': 7458, '잼있는데': 7459, '잼있다': 7460, '잼있다.': 7461, '잼있어요': 7462, '잼있어요...': 7463, '잼있었다': 7464, '잼있었음': 7465, '잼있음': 7466, '저': 7467, '저거': 7468, '저건': 7469, '저것도': 7470, '저게': 7471, '저급한': 7472, '저기': 7473, '저는': 7474, '저도': 7475, '저랑': 7476, '저런': 7477, '저렇게': 7478, '저렴한': 7479, '저를': 7480, '저리': 7481, '저만': 7482, '저버리지': 7483, '저에게': 7484, '저에겐': 7485, '저예산': 7486, '저예산으로': 7487, '저의': 7488, '저절로': 7489, '저지른': 7490, '저질': 7491, '저평가': 7492, '저평가된': 7493, '저희': 7494, '적': 7495, '적극': 7496, '적나라하게': 7497, '적당': 7498, '적당하다': 7499, '적당한': 7500, '적당히': 7501, '적어도': 7502, '적은': 7503, '적을': 7504, '적이': 7505, '적인': 7506, '적절하게': 7507, '적절한': 7508, '적절히': 7509, '전': 7510, '전개': 7511, '전개,': 7512, '전개.': 7513, '전개...': 7514, '전개가': 7515, '전개나': 7516, '전개는': 7517, '전개도': 7518, '전개되는': 7519, '전개로': 7520, '전개를': 7521, '전개에': 7522, '전개와': 7523, '전개의': 7524, '전기세가': 7525, '전까지는': 7526, '전달하고자': 7527, '전달하는': 7528, '전달하려는': 7529, '전라도': 7530, '전문': 7531, '전문가': 7532, '전반적으로': 7533, '전반적인': 7534, '전부': 7535, '전부다': 7536, '전부인': 7537, '전설': 7538, '전설의': 7539, '전설이': 7540, '전설적인': 7541, '전성기': 7542, 
'전에': 7543, '전율이': 7544, '전의': 7545, '전작': 7546, '전작보다': 7547, '전작에': 7548, '전작을': 7549, '전작의': 7550, '전쟁': 7551, '전쟁과': 7552, '전쟁에': 7553, '전쟁영화': 7554, '전쟁은': 7555, '전쟁을': 7556, '전쟁의': 7557, '전쟁이': 7558, '전적으로': 7559, '전지현': 7560, '전체': 7561, '전체가': 7562, '전체를': 7563, '전체적으로': 7564, '전체적인': 7565, '전편': 7566, '전편과': 7567, '전편보다': 7568, '전편에': 7569, '전편을': 7570, '전편의': 7571, '전하고': 7572, '전하고자': 7573, '전하는': 7574, '전해주는': 7575, '전해지는': 7576, '전혀': 7577, '전혀없는': 7578, '전형적': 7579, '전형적인': 7580, '절대': 7581, '절대로': 7582, '절라': 7583, '절로': 7584, '절망적인': 7585, '절묘한': 7586, '절실히': 7587, '절제된': 7588, '젊은': 7589, '젊은이들의': 7590, '점': 7591, '점도': 7592, '점수': 7593, '점수가': 7594, '점수는': 7595, '점수를': 7596, '점수준것들': 7597, '점에서': 7598, '점은': 7599, '점을': 7600, '점이': 7601, '점점': 7602, '접한': 7603, '정': 7604, '정당화': 7605, '정도': 7606, '정도.': 7607, '정도..': 7608, '정도...': 7609, '정도가': 7610, '정도는': 7611, '정도다.': 7612, '정도로': 7613, '정도면': 7614, '정도의': 7615, '정말': 7616, '정말,': 7617, '정말.': 7618, '정말..': 7619, '정말...': 7620, '정말....': 7621, '정말로': 7622, '정말이지': 7623, '정말정말': 7624, '정보': 7625, '정서가': 7626, '정서를': 7627, '정서에': 7628, '정석': 7629, '정신': 7630, '정신건강에': 7631, '정신나간': 7632, '정신병자': 7633, '정신없고': 7634, '정신없는': 7635, '정신없이': 7636, '정신을': 7637, '정신이': 7638, '정우성': 7639, '정을': 7640, '정의가': 7641, '정의는': 7642, '정이': 7643, '정작': 7644, '정적인': 7645, '정점을': 7646, '정주행': 7647, '정체가': 7648, '정체성을': 7649, '정치': 7650, '정치적': 7651, '정통': 7652, '정확하게': 7653, '정확한': 7654, '정확히': 7655, '제': 7656, '제2의': 7657, '제가': 7658, '제니퍼': 7659, '제대로': 7660, '제대로된': 7661, '제레미': 7662, '제로': 7663, '제목': 7664, '제목과': 7665, '제목도': 7666, '제목만': 7667, '제목보고': 7668, '제목부터': 7669, '제목에': 7670, '제목으로': 7671, '제목은': 7672, '제목을': 7673, '제목이': 7674, '제목이랑': 7675, '제목처럼': 7676, '제발': 7677, '제법': 7678, '제시카': 7679, '제왕': 7680, '제외하고': 7681, '제외하고는': 7682, '제외하면': 7683, '제외한': 7684, '제이슨': 7685, '제일': 7686, '제임스': 7687, '제작': 7688, '제작된': 7689, '제작비': 7690, '제작비가': 7691, '제작비로': 7692, '제작을': 7693, '제작진': 7694, '제작진은': 7695, '제작진의': 7696, '제작진이': 7697, '제작한': 7698, '제타': 7699, '제한)': 7700, '젠장': 7701, '젤': 7702, '조': 7703, '조금': 7704, '조금더': 7705, '조금도': 7706, '조금만': 7707, '조금씩': 7708, '조금은': 7709, '조금이나마': 7710, '조금이라도': 7711, '조낸': 7712, '조니뎁': 7713, '조승우': 7714, '조아요': 7715, '조악한': 7716, '조연': 7717, '조연들': 7718, '조연들의': 7719, '조용한': 7720, '조용히': 7721, '조작': 7722, '조잡하고': 7723, '조잡한': 7724, '조절': 7725, '조지': 7726, '조차': 7727, '조카': 7728, '조카가': 7729, '조폭': 7730, '조합': 7731, '조합.': 7732, '조합이': 7733, '조화': 7734, '조화가': 7735, '조화를': 7736, '존': 7737, '존1나': 7738, '존경합니다.': 7739, '존나': 7740, '존나게': 7741, '존내': 7742, '존재': 7743, '존재가': 7744, '존재감': 7745, '존재의': 7746, '존재하는': 7747, '존재하지': 7748, '존잼': 7749, '졸': 7750, '졸라': 7751, '졸려': 7752, '졸려서': 7753, '졸면서': 7754, '졸음이': 7755, '졸작': 7756, '졸작.': 7757, '졸작..': 7758, '졸작으로': 7759, '졸작을': 7760, '졸작이': 7761, '졸작이다': 7762, '졸작이다.': 7763, '졸잼': 7764, '좀': 7765, '좀..': 7766, '좀...': 7767, '좀더': 7768, '좀만': 7769, '좀비': 7770, '좀비가': 7771, '좀비영화': 7772, '종교를': 7773, '종교적': 7774, '종교적인': 7775, '종류의': 7776, '종종': 7777, '종합': 7778, '좋게': 7779, '좋겠네요': 7780, '좋겠네요.': 7781, '좋겠다': 7782, '좋겠다.': 7783, '좋겠다..': 7784, '좋겠습니다': 7785, '좋겠습니다.': 7786, '좋겠어요': 7787, '좋겠어요.': 7788, '좋고': 7789, '좋고,': 7790, '좋고.': 7791, '좋고..': 7792, '좋구': 7793, '좋네': 7794, '좋네요': 7795, '좋네요.': 7796, '좋네요~': 7797, '좋다': 7798, '좋다!': 7799, '좋다.': 7800, '좋다..': 7801, '좋다...': 7802, '좋다~': 7803, '좋다고': 7804, '좋다는': 7805, '좋더라': 7806, '좋습니다': 7807, '좋습니다.': 7808, '좋아': 7809, '좋아.': 7810, '좋아도': 7811, '좋아라': 7812, '좋아서': 7813, '좋아요': 7814, '좋아요!': 
7815, '좋아요.': 7816, '좋아요~': 7817, '좋아용': 7818, '좋아지는': 7819, '좋아하게': 7820, '좋아하고': 7821, '좋아하네요': 7822, '좋아하는': 7823, '좋아하는데': 7824, '좋아하던': 7825, '좋아하면': 7826, '좋아하시는': 7827, '좋아하지': 7828, '좋아하지만': 7829, '좋아한다면': 7830, '좋아할': 7831, '좋아할만한': 7832, '좋아해서': 7833, '좋아해요': 7834, '좋아했는데': 7835, '좋아했던': 7836, '좋았고': 7837, '좋았고,': 7838, '좋았는데': 7839, '좋았는데..': 7840, '좋았는데...': 7841, '좋았다': 7842, '좋았다.': 7843, '좋았다..': 7844, '좋았다고': 7845, '좋았던': 7846, '좋았습니다': 7847, '좋았습니다.': 7848, '좋았어': 7849, '좋았어요': 7850, '좋았어요!': 7851, '좋았어요.': 7852, '좋았으나': 7853, '좋았을': 7854, '좋았을텐데': 7855, '좋았음': 7856, '좋았음.': 7857, '좋았지만': 7858, '좋았지만,': 7859, '좋으나': 7860, '좋으면': 7861, '좋은': 7862, '좋은거': 7863, '좋은게': 7864, '좋은데': 7865, '좋은데,': 7866, '좋은데..': 7867, '좋은영화': 7868, '좋은영화.': 7869, '좋을': 7870, '좋을듯': 7871, '좋을듯..': 7872, '좋음': 7873, '좋음.': 7874, '좋지': 7875, '좋지만': 7876, '좋지만,': 7877, '죄가': 7878, '죄다': 7879, '죄를': 7880, '죄송하지만': 7881, '죄없는': 7882, '주': 7883, '주고': 7884, '주고싶다': 7885, '주고싶다.': 7886, '주고싶은': 7887, '주관적인': 7888, '주구장창': 7889, '주기': 7890, '주기도': 7891, '주는': 7892, '주는거': 7893, '주는건': 7894, '주는데': 7895, '주려고': 7896, '주말에': 7897, '주면': 7898, '주변': 7899, '주변에': 7900, '주성치': 7901, '주성치의': 7902, '주세요': 7903, '주셔서': 7904, '주어진': 7905, '주연': 7906, '주연배우': 7907, '주연배우의': 7908, '주연으로': 7909, '주연을': 7910, '주연의': 7911, '주연이': 7912, '주옥같은': 7913, '주온': 7914, '주윤발': 7915, '주윤발의': 7916, '주윤발이': 7917, '주인공': 7918, '주인공과': 7919, '주인공도': 7920, '주인공들': 7921, '주인공들도': 7922, '주인공들의': 7923, '주인공들이': 7924, '주인공으로': 7925, '주인공은': 7926, '주인공을': 7927, '주인공의': 7928, '주인공이': 7929, '주인공인': 7930, '주인공처럼': 7931, '주장하는': 7932, '주제': 7933, '주제가': 7934, '주제는': 7935, '주제도': 7936, '주제로': 7937, '주제를': 7938, '주제에': 7939, '주지': 7940, '죽고': 7941, '죽기': 7942, '죽기전에': 7943, '죽는': 7944, '죽는다': 7945, '죽는줄': 7946, '죽어': 7947, '죽어가는': 7948, '죽어도': 7949, '죽어라': 7950, '죽어서': 7951, '죽은': 7952, '죽을': 7953, '죽을때': 7954, '죽을때까지': 7955, '죽을뻔': 7956, '죽음': 7957, '죽음.': 7958, '죽음에': 7959, '죽음은': 7960, '죽음을': 7961, '죽음의': 7962, '죽음이': 7963, '죽이고': 7964, '죽이네': 7965, '죽이는': 7966, '죽인': 7967, '죽인다.': 7968, '죽일': 7969, '죽지': 7970, '준': 7971, '준다': 7972, '준다.': 7973, '줄': 7974, '줄거리': 7975, '줄거리가': 7976, '줄거리는': 7977, '줄거리도': 7978, '줄거리를': 7979, '줄거리만': 7980, '줄거리에': 7981, '줄도': 7982, '줄리아': 7983, '줄리엣': 7984, '줄수': 7985, '줄은': 7986, '줄이야.': 7987, '줌': 7988, '줌.': 7989, '줍니다': 7990, '줍니다.': 7991, '중': 7992, '중2병': 7993, '중간': 7994, '중간부터': 7995, '중간에': 7996, '중간중간': 7997, '중간중간에': 7998, '중국': 7999, '중국영화': 8000, '중국은': 8001, '중국의': 8002, '중국이': 8003, '중년': 8004, '중딩때': 8005, '중반': 8006, '중반까지는': 8007, '중반부터': 8008, '중심으로': 8009, '중에': 8010, '중에서': 8011, '중에서는': 8012, '중에서도': 8013, '중요성을': 8014, '중요하지': 8015, '중요한': 8016, '중요한건': 8017, '중요한지': 8018, '중의': 8019, '중학교': 8020, '중학교때': 8021, '중학생': 8022, '중학생때': 8023, '줘': 8024, '줘도': 8025, '줘서': 8026, '줘야': 8027, '줬다': 8028, '쥐어짜는': 8029, '쥬라기': 8030, '즉': 8031, '즐거운': 8032, '즐거웠다.': 8033, '즐거웠던': 8034, '즐겁게': 8035, '즐겁고': 8036, '즐기는': 8037, '즐길': 8038, '증말': 8039, '지': 8040, '지가': 8041, '지겨운': 8042, '지겨워': 8043, '지겹고': 8044, '지겹다': 8045, '지겹다.': 8046, '지구': 8047, '지구를': 8048, '지구에': 8049, '지극히': 8050, '지금': 8051, '지금까지': 8052, '지금까지도': 8053, '지금껏': 8054, '지금도': 8055, '지금보니': 8056, '지금보면': 8057, '지금봐도': 8058, '지금에서야': 8059, '지금은': 8060, '지금의': 8061, '지금이': 8062, '지금이나': 8063, '지나': 8064, '지나가는': 8065, '지나고': 8066, '지나도': 8067, '지나면': 8068, '지나서': 8069, '지나지': 8070, '지나치게': 8071, '지나친': 8072, '지난': 8073, '지날수록': 8074, '지났는데도': 8075, '지났지만': 8076, '지는': 8077, '지닌': 8078, '지대로': 8079, '지독한': 8080, '지들': 8081, '지들끼리': 
8082, '지들이': 8083, '지루': 8084, '지루...': 8085, '지루하게': 8086, '지루하고': 8087, '지루하고,': 8088, '지루하고..': 8089, '지루하기': 8090, '지루하기도': 8091, '지루하기만': 8092, '지루하네': 8093, '지루하네요': 8094, '지루하다': 8095, '지루하다.': 8096, '지루하다..': 8097, '지루하다...': 8098, '지루하다고': 8099, '지루하다는': 8100, '지루하지': 8101, '지루하지도': 8102, '지루하지만': 8103, '지루하지않고': 8104, '지루하진': 8105, '지루한': 8106, '지루한건': 8107, '지루할': 8108, '지루함': 8109, '지루함.': 8110, '지루함..': 8111, '지루함...': 8112, '지루함과': 8113, '지루함을': 8114, '지루함의': 8115, '지루함이': 8116, '지루합니다.': 8117, '지루해': 8118, '지루해.': 8119, '지루해서': 8120, '지루했다': 8121, '지루했다.': 8122, '지루했다..': 8123, '지루했던': 8124, '지루했어요': 8125, '지루했음': 8126, '지르고': 8127, '지브리': 8128, '지저분한': 8129, '지친': 8130, '지키는': 8131, '직접': 8132, '진': 8133, '진리': 8134, '진리를': 8135, '진부하고': 8136, '진부하다.': 8137, '진부한': 8138, '진솔한': 8139, '진수': 8140, '진수.': 8141, '진수를': 8142, '진실': 8143, '진실된': 8144, '진실은': 8145, '진실을': 8146, '진실이': 8147, '진심': 8148, '진심으로': 8149, '진자': 8150, '진작': 8151, '진작에': 8152, '진정': 8153, '진정으로': 8154, '진정한': 8155, '진지하게': 8156, '진지하고': 8157, '진지한': 8158, '진짜': 8159, '진짜.': 8160, '진짜..': 8161, '진짜...': 8162, '진짜....': 8163, '진짜로': 8164, '진짜진짜': 8165, '진하게': 8166, '진한': 8167, '진행': 8168, '진행도': 8169, '진행되는': 8170, '진행이': 8171, '질': 8172, '질리지': 8173, '질리지않는': 8174, '질이': 8175, '질질': 8176, '질질끌고': 8177, '짐': 8178, '짐캐리': 8179, '집': 8180, '집에': 8181, '집에서': 8182, '집중': 8183, '집중도': 8184, '집중력이': 8185, '집중을': 8186, '집중이': 8187, '집중하게': 8188, '집중할': 8189, '집중해서': 8190, '집착하는': 8191, '짓': 8192, '짓을': 8193, '짙게': 8194, '짙은': 8195, '짜': 8196, '짜고': 8197, '짜리': 8198, '짜릿한': 8199, '짜여진': 8200, '짜임새': 8201, '짜증': 8202, '짜증나': 8203, '짜증나게': 8204, '짜증나고': 8205, '짜증나네': 8206, '짜증나는': 8207, '짜증나서': 8208, '짜증난다': 8209, '짜증난다.': 8210, '짜증남': 8211, '짜증만': 8212, '짜증을': 8213, '짜증이': 8214, '짜집기': 8215, '짝이': 8216, '짝이없는': 8217, '짝퉁': 8218, '짠하고': 8219, '짠한': 8220, '짧게': 8221, '짧고': 8222, '짧아서': 8223, '짧은': 8224, '짧지만': 8225, '짱': 8226, '짱!': 8227, '짱!!': 8228, '짱!!!': 8229, '짱구': 8230, '짱나': 8231, '짱이다': 8232, '짱임': 8233, '짱짱': 8234, '짱짱!': 8235, '짱짱맨': 8236, '쩌네': 8237, '쩌는': 8238, '쩐다': 8239, '쩐다.': 8240, '쩔고': 8241, '쩔어': 8242, '쩝': 8243, '쩝..': 8244, '쩝...': 8245, '쪼금': 8246, '쪽': 8247, '쫌': 8248, '쭉': 8249, '쯤': 8250, '쯧쯧': 8251, '찌질이': 8252, '찌질한': 8253, '찍고': 8254, '찍는': 8255, '찍어': 8256, '찍어도': 8257, '찍어서': 8258, '찍으면': 8259, '찍은': 8260, '찍을': 8261, '찍지': 8262, '찜찜한': 8263, '찝찝하고': 8264, '찝찝한': 8265, '찡하게': 8266, '찡한': 8267, '차': 8268, '차가운': 8269, '차라리': 8270, '차마': 8271, '차승원': 8272, '차원이': 8273, '차이가': 8274, '차이를': 8275, '차태현': 8276, '착하고': 8277, '착한': 8278, '찬': 8279, '찬사를': 8280, '찬양하는': 8281, '찰리': 8282, '참': 8283, '참.': 8284, '참..': 8285, '참...': 8286, '참....': 8287, '참고': 8288, '참고로': 8289, '참나': 8290, '참다참다': 8291, '참된': 8292, '참신하고': 8293, '참신한': 8294, '참으로': 8295, '참을': 8296, '찾고': 8297, '찾기': 8298, '찾는': 8299, '찾아': 8300, '찾아가는': 8301, '찾아보고': 8302, '찾아보는': 8303, '찾아보니': 8304, '찾아볼': 8305, '찾아봐도': 8306, '찾아봤는데': 8307, '찾아서': 8308, '찾은': 8309, '찾을': 8310, '채': 8311, '채널': 8312, '채널에서': 8313, '채널을': 8314, '책': 8315, '책도': 8316, '책으로': 8317, '책으로도': 8318, '책은': 8319, '책을': 8320, '책의': 8321, '책이': 8322, '책임을': 8323, '챙겨보는': 8324, '처': 8325, '처럼': 8326, '처음': 8327, '처음.': 8328, '처음..': 8329, '처음...': 8330, '처음보는': 8331, '처음본다': 8332, '처음부터': 8333, '처음부터끝까지': 8334, '처음에': 8335, '처음에는': 8336, '처음엔': 8337, '처음으로': 8338, '처음이네': 8339, '처음이다': 8340, '처음이다.': 8341, '처음이자': 8342, '처음인': 8343, '처음임': 8344, '처음입니다.': 8345, '처절하게': 8346, '처절한': 8347, '척': 8348, '천재': 8349, '천재다.': 8350, '천재적인': 8351, '천하의': 8352, '철': 
8353, '철없는': 8354, '철저하게': 8355, '철저한': 8356, '철저히': 8357, '철학을': 8358, '철학이': 8359, '철학적': 8360, '철학적인': 8361, '첨': 8362, '첨본다': 8363, '첨부터': 8364, '첨에': 8365, '첨엔': 8366, '첨으로': 8367, '첨이네': 8368, '첨이다': 8369, '첨이다.': 8370, '첩보': 8371, '첫': 8372, '첫번째': 8373, '첫사랑': 8374, '청소년': 8375, '청춘': 8376, '청춘들의': 8377, '청춘의': 8378, '청춘이': 8379, '쳐': 8380, '쳐도': 8381, '초': 8382, '초기': 8383, '초등학교': 8384, '초등학교때': 8385, '초등학생': 8386, '초등학생때': 8387, '초딩': 8388, '초딩도': 8389, '초딩때': 8390, '초딩이': 8391, '초반': 8392, '초반부': 8393, '초반부터': 8394, '초반에': 8395, '초반에는': 8396, '초반엔': 8397, '초반은': 8398, '초반의': 8399, '초심을': 8400, '초월하는': 8401, '초월한': 8402, '초점을': 8403, '초호화': 8404, '촌스러운': 8405, '촌스럽고': 8406, '촌스럽지': 8407, '총': 8408, '총으로': 8409, '총을': 8410, '총체적': 8411, '촬영': 8412, '최강': 8413, '최강의': 8414, '최강희': 8415, '최고': 8416, '최고!': 8417, '최고!!': 8418, '최고!!!': 8419, '최고!!!!': 8420, '최고,': 8421, '최고.': 8422, '최고..': 8423, '최고...': 8424, '최고....': 8425, '최고~': 8426, '최고네요': 8427, '최고네요.': 8428, '최고다': 8429, '최고다!': 8430, '최고다.': 8431, '최고다..': 8432, '최고다...': 8433, '최고라': 8434, '최고라고': 8435, '최고라는': 8436, '최고로': 8437, '최고봉': 8438, '최고봉.': 8439, '최고에요': 8440, '최고였다': 8441, '최고였다.': 8442, '최고였습니다.': 8443, '최고였어요': 8444, '최고의': 8445, '최고의영화': 8446, '최고인': 8447, '최고인듯': 8448, '최고임': 8449, '최고입니다': 8450, '최고입니다.': 8451, '최고최고': 8452, '최근': 8453, '최근에': 8454, '최대': 8455, '최대의': 8456, '최대한': 8457, '최민수': 8458, '최선을': 8459, '최소': 8460, '최소한': 8461, '최소한의': 8462, '최악': 8463, '최악!': 8464, '최악.': 8465, '최악..': 8466, '최악...': 8467, '최악으로': 8468, '최악은': 8469, '최악의': 8470, '최악의영화': 8471, '최악이다': 8472, '최악이다.': 8473, '최악임': 8474, '최악입니다.': 8475, '최지우': 8476, '최초': 8477, '최초로': 8478, '최초의': 8479, '최후의': 8480, '쵝오': 8481, '추구하는': 8482, '추리': 8483, '추악한': 8484, '추억': 8485, '추억에': 8486, '추억으로': 8487, '추억은': 8488, '추억을': 8489, '추억의': 8490, '추억이': 8491, '추천': 8492, '추천!': 8493, '추천!!': 8494, '추천.': 8495, '추천으로': 8496, '추천하고': 8497, '추천하는': 8498, '추천한다': 8499, '추천한다.': 8500, '추천합니다': 8501, '추천합니다!': 8502, '추천합니다.': 8503, '추천해': 8504, '추천해주고': 8505, '축구': 8506, '출연': 8507, '출연자': 8508, '출연진': 8509, '출연진에': 8510, '출연하는': 8511, '출연한': 8512, '춤': 8513, '춤도': 8514, '춤을': 8515, '충격': 8516, '충격과': 8517, '충격을': 8518, '충격이': 8519, '충격적이고': 8520, '충격적인': 8521, '충무로': 8522, '충분하다.': 8523, '충분한': 8524, '충분히': 8525, '충실한': 8526, '취해': 8527, '취향': 8528, '취향은': 8529, '취향이': 8530, '치고': 8531, '치고는': 8532, '치곤': 8533, '치는': 8534, '치닫는': 8535, '치면': 8536, '치명적인': 8537, '치밀하고': 8538, '치밀한': 8539, '치열한': 8540, '친': 8541, '친구': 8542, '친구가': 8543, '친구는': 8544, '친구들': 8545, '친구들과': 8546, '친구들이': 8547, '친구들이랑': 8548, '친구랑': 8549, '친구를': 8550, '친구에게': 8551, '친구와': 8552, '친구한테': 8553, '카리스마': 8554, '카리스마가': 8555, '카메라': 8556, '카메론': 8557, '칼': 8558, '칼로': 8559, '캐리': 8560, '캐릭터': 8561, '캐릭터,': 8562, '캐릭터가': 8563, '캐릭터는': 8564, '캐릭터도': 8565, '캐릭터들': 8566, '캐릭터들과': 8567, '캐릭터들도': 8568, '캐릭터들은': 8569, '캐릭터들의': 8570, '캐릭터들이': 8571, '캐릭터로': 8572, '캐릭터를': 8573, '캐릭터에': 8574, '캐릭터와': 8575, '캐릭터의': 8576, '캐서린': 8577, '캐스팅': 8578, '캐스팅,': 8579, '캐스팅도': 8580, '캐스팅에': 8581, '캐스팅은': 8582, '캐스팅을': 8583, '캐스팅이': 8584, '캬': 8585, '커녕': 8586, '커다란': 8587, '커서': 8588, '커플': 8589, '커플이': 8590, '컬트': 8591, '컴퓨터': 8592, '케릭터가': 8593, '케미가': 8594, '케빈': 8595, '케서방': 8596, '케이블': 8597, '케이블로': 8598, '케이블에서': 8599, '케이지': 8600, '케이트': 8601, '코난': 8602, '코난은': 8603, '코난이': 8604, '코드가': 8605, '코메디': 8606, '코메디.': 8607, '코메디가': 8608, '코미디': 8609, '코미디.': 8610, '코미디가': 8611, '코미디는': 8612, '코미디도': 8613, '코미디로': 8614, '코미디를': 8615, '코미디에': 8616, '코미디와': 8617, '코미디의': 8618, '코믹': 8619, 
'코믹과': 8620, '코믹도': 8621, '코믹영화': 8622, '코믹한': 8623, '콜린': 8624, '콜린퍼스': 8625, '쿠엔틴': 8626, '퀄리티': 8627, '퀄리티가': 8628, '퀄리티는': 8629, '큐브': 8630, '크게': 8631, '크고': 8632, '크다.': 8633, '크레딧': 8634, '크리스': 8635, '크리스마스': 8636, '크리스찬': 8637, '크리스토퍼': 8638, '큰': 8639, '클라스': 8640, '클레멘타인': 8641, '클린트': 8642, '키아누': 8643, '킬링': 8644, '킬링타임': 8645, '킬링타임도': 8646, '킬링타임용': 8647, '킬링타임용도': 8648, '킬링타임용으로': 8649, '킬링타임용으로도': 8650, '킬링타임으로': 8651, '킬링타임으로도': 8652, '타고': 8653, '타는': 8654, '타란티노': 8655, '탁월한': 8656, '탄': 8657, '탄탄하고': 8658, '탄탄한': 8659, '탈을': 8660, '탑': 8661, '탕웨이': 8662, '태어나': 8663, '태어나서': 8664, '태어난': 8665, '터미네이터': 8666, '터지는': 8667, '터짐': 8668, '턱없이': 8669, '테러': 8670, '텐데': 8671, '토': 8672, '토끼를': 8673, '톰': 8674, '톰크루즈': 8675, '통쾌한': 8676, '통한': 8677, '통해': 8678, '통해서': 8679, '투': 8680, '투자를': 8681, '투자한': 8682, '퉤': 8683, '특별한': 8684, '특별히': 8685, '특수효과': 8686, '특유의': 8687, '특이하고': 8688, '특이한': 8689, '특히': 8690, '특히나': 8691, '틀고': 8692, '틀에': 8693, '틀을': 8694, '틈이': 8695, '티': 8696, '티가': 8697, '티비': 8698, '티비로': 8699, '티비서': 8700, '티비에': 8701, '티비에서': 8702, '팀': 8703, '파격적인': 8704, '파고드는': 8705, '파라노말': 8706, '파리의': 8707, '파워레인저': 8708, '팍팍': 8709, '판치는': 8710, '판타지': 8711, '판타지를': 8712, '판타지의': 8713, '팔': 8714, '팝콘': 8715, '패러디': 8716, '팬': 8717, '팬으로서': 8718, '팬으로써': 8719, '팬이': 8720, '팬이라': 8721, '팬이라면': 8722, '팬이라서': 8723, '팬이지만': 8724, '팬인데': 8725, '펑펑': 8726, '페이크': 8727, '편': 8728, '편.': 8729, '편견을': 8730, '편견이': 8731, '편안하게': 8732, '편안한': 8733, '편은': 8734, '편의': 8735, '편이': 8736, '편인데': 8737, '편집': 8738, '편집도': 8739, '편집은': 8740, '편집을': 8741, '편집이': 8742, '편하게': 8743, '편히': 8744, '펼쳐지는': 8745, '펼치는': 8746, '평': 8747, '평가': 8748, '평가가': 8749, '평가는': 8750, '평가를': 8751, '평가할': 8752, '평균': 8753, '평론가': 8754, '평론가는': 8755, '평론가들': 8756, '평론가들은': 8757, '평론가들이': 8758, '평범하지': 8759, '평범한': 8760, '평생': 8761, '평생을': 8762, '평소': 8763, '평소에': 8764, '평은': 8765, '평을': 8766, '평이': 8767, '평점': 8768, '평점과': 8769, '평점도': 8770, '평점만': 8771, '평점보고': 8772, '평점에': 8773, '평점으로': 8774, '평점은': 8775, '평점을': 8776, '평점이': 8777, '평점조절위원회': 8778, '평점좀': 8779, '평정': 8780, '포기': 8781, '포기하고': 8782, '포기한': 8783, '포르노': 8784, '포르노를': 8785, '포스': 8786, '포스가': 8787, '포스터': 8788, '포스터가': 8789, '포스터는': 8790, '포스터를': 8791, '포스터만': 8792, '포스터보고': 8793, '포스터에': 8794, '포스터의': 8795, '포인트가': 8796, '포인트를': 8797, '포장한': 8798, '포켓몬': 8799, '폭력': 8800, '폭력을': 8801, '폭력의': 8802, '폰': 8803, '폴': 8804, '폼만': 8805, '표정': 8806, '표정과': 8807, '표정이': 8808, '표현': 8809, '표현된': 8810, '표현은': 8811, '표현을': 8812, '표현의': 8813, '표현이': 8814, '표현하는': 8815, '표현한': 8816, '표현할': 8817, '표현했다.': 8818, '푸른': 8819, '푹': 8820, '풀어가는': 8821, '풀어내는': 8822, '풀어낸': 8823, '풋풋하고': 8824, '풋풋한': 8825, '풍경과': 8826, '풍경이': 8827, '풍기는': 8828, '풍부한': 8829, '풍자': 8830, '프랑스': 8831, '프랑스의': 8832, '프레디': 8833, '프로': 8834, '프로가': 8835, '프로그램': 8836, '프로그램이': 8837, '피가': 8838, '피를': 8839, '피아노': 8840, '피터': 8841, '피해자': 8842, '피해자가': 8843, '픽사': 8844, '픽사의': 8845, '필름': 8846, '필름을': 8847, '필름이': 8848, '필요': 8849, '필요가': 8850, '필요는': 8851, '필요도': 8852, '필요없고': 8853, '필요없는': 8854, '필요없다': 8855, '필요없다.': 8856, '필요없음': 8857, '필요없이': 8858, '필요하다': 8859, '필요하다.': 8860, '필요한': 8861, '필요한가?': 8862, '하': 8863, '하..': 8864, '하...': 8865, '하게': 8866, '하고': 8867, '하고,': 8868, '하고.': 8869, '하고..': 8870, '하고...': 8871, '하고싶은': 8872, '하기': 8873, '하기도': 8874, '하기에는': 8875, '하기엔': 8876, '하긴': 8877, '하길래': 8878, '하나': 8879, '하나!': 8880, '하나,': 8881, '하나.': 8882, '하나..': 8883, '하나...': 8884, '하나?': 8885, '하나가': 8886, '하나같이': 8887, '하나는': 8888, '하나님의': 8889, '하나도': 8890, '하나로': 
8891, '하나를': 8892, '하나만': 8893, '하나만으로': 8894, '하나만으로도': 8895, '하나부터': 8896, '하나씩': 8897, '하나의': 8898, '하나하나': 8899, '하나하나가': 8900, '하네': 8901, '하네.': 8902, '하네요': 8903, '하네요.': 8904, '하네요..': 8905, '하는': 8906, '하는가?': 8907, '하는거': 8908, '하는건': 8909, '하는걸': 8910, '하는것': 8911, '하는게': 8912, '하는데': 8913, '하는데,': 8914, '하는데..': 8915, '하는데...': 8916, '하는지': 8917, '하늘을': 8918, '하니': 8919, '하니까': 8920, '하다': 8921, '하다.': 8922, '하다가': 8923, '하다니': 8924, '하다못해': 8925, '하더라도': 8926, '하던': 8927, '하던가': 8928, '하던데': 8929, '하도': 8930, '하려고': 8931, '하려는': 8932, '하려면': 8933, '하루': 8934, '하루를': 8935, '하루에': 8936, '하루종일': 8937, '하루하루': 8938, '하며': 8939, '하면': 8940, '하면서': 8941, '하면서도': 8942, '하세요': 8943, '하시길': 8944, '하시는': 8945, '하아': 8946, '하아...': 8947, '하야오': 8948, '하얀': 8949, '하여': 8950, '하여간': 8951, '하여금': 8952, '하여튼': 8953, '하이킥': 8954, '하이틴': 8955, '하자': 8956, '하정우': 8957, '하지': 8958, '하지.': 8959, '하지도': 8960, '하지마라': 8961, '하지만': 8962, '하지만,': 8963, '하지만..': 8964, '하지말고': 8965, '하지원': 8966, '하지원의': 8967, '하품만': 8968, '하필': 8969, '하하': 8970, '하하하': 8971, '하하하하': 8972, '학교': 8973, '학교에서': 8974, '학생들': 8975, '학창시절': 8976, '한': 8977, '한가지': 8978, '한개도': 8979, '한거': 8980, '한건': 8981, '한건지': 8982, '한것': 8983, '한게': 8984, '한계': 8985, '한계가': 8986, '한계를': 8987, '한국': 8988, '한국식': 8989, '한국어': 8990, '한국에': 8991, '한국에도': 8992, '한국에서': 8993, '한국영화': 8994, '한국영화.': 8995, '한국영화가': 8996, '한국영화는': 8997, '한국영화를': 8998, '한국영화에': 8999, '한국영화의': 9000, '한국영화중': 9001, '한국은': 9002, '한국을': 9003, '한국의': 9004, '한국이': 9005, '한국인': 9006, '한국인이': 9007, '한국판': 9008, '한국형': 9009, '한글': 9010, '한낱': 9011, '한다': 9012, '한다.': 9013, '한다..': 9014, '한다고': 9015, '한다는': 9016, '한다면': 9017, '한대': 9018, '한데': 9019, '한동안': 9020, '한때': 9021, '한마디': 9022, '한마디로': 9023, '한명': 9024, '한명도': 9025, '한명이': 9026, '한방': 9027, '한방에': 9028, '한번': 9029, '한번더': 9030, '한번도': 9031, '한번보고': 9032, '한번보면': 9033, '한번씩': 9034, '한번에': 9035, '한번은': 9036, '한번쯤': 9037, '한번쯤은': 9038, '한석규': 9039, '한순간도': 9040, '한순간에': 9041, '한숨만': 9042, '한시간': 9043, '한시도': 9044, '한심하고': 9045, '한심하다': 9046, '한심하다.': 9047, '한심한': 9048, '한없이': 9049, '한예슬': 9050, '한장면': 9051, '한정된': 9052, '한줄': 9053, '한참': 9054, '한참을': 9055, '한층': 9056, '한쿡영화의': 9057, '한테': 9058, '한편': 9059, '한편으로': 9060, '한편의': 9061, '한표': 9062, '한효주': 9063, '할': 9064, '할거': 9065, '할까': 9066, '할까?': 9067, '할때': 9068, '할려고': 9069, '할리우드': 9070, '할만한': 9071, '할말': 9072, '할말을': 9073, '할말이': 9074, '할머니': 9075, '할머니가': 9076, '할수': 9077, '할수있는': 9078, '할아버지': 9079, '할아버지가': 9080, '할지': 9081, '함': 9082, '함.': 9083, '함께': 9084, '함께한': 9085, '함부로': 9086, '합니다': 9087, '합니다.': 9088, '항상': 9089, '해': 9090, '해가': 9091, '해놓고': 9092, '해도': 9093, '해도해도': 9094, '해라': 9095, '해라.': 9096, '해리슨': 9097, '해리포터': 9098, '해보고': 9099, '해서': 9100, '해석': 9101, '해석이': 9102, '해야': 9103, '해야지': 9104, '해요': 9105, '해요.': 9106, '해주길래': 9107, '해주는': 9108, '해주는거': 9109, '해주세요': 9110, '해준': 9111, '해준다.': 9112, '해줘서': 9113, '해피엔딩': 9114, '해피엔딩으로': 9115, '해피엔딩이': 9116, '해피엔딩이라': 9117, '핵': 9118, '핵노잼': 9119, '했나?': 9120, '했는데': 9121, '했는데,': 9122, '했는데..': 9123, '했는데...': 9124, '했는지': 9125, '했다': 9126, '했다.': 9127, '했다...': 9128, '했다가': 9129, '했다고': 9130, '했다는': 9131, '했다면': 9132, '했더니': 9133, '했던': 9134, '했습니다': 9135, '했습니다.': 9136, '했어': 9137, '했어도': 9138, '했어야': 9139, '했어요': 9140, '했었는데': 9141, '했으나': 9142, '했으면': 9143, '했을': 9144, '했을까': 9145, '했을까?': 9146, '했음': 9147, '했음.': 9148, '했지만': 9149, '했지만,': 9150, '행동': 9151, '행동을': 9152, '행동이': 9153, '행복은': 9154, '행복을': 9155, '행복이': 9156, '행복하게': 9157, '행복하고': 9158, '행복한': 9159, '행복해지는': 9160, '향수를': 9161, 
'향연': 9162, '향연.': 9163, '향한': 9164, '향해': 9165, '허나': 9166, '허니잼': 9167, '허무': 9168, '허무맹랑한': 9169, '허무하게': 9170, '허무하고': 9171, '허무한': 9172, '허세': 9173, '허세만': 9174, '허술하고': 9175, '허술하다.': 9176, '허술한': 9177, '허접': 9178, '허접.': 9179, '허접하게': 9180, '허접하고': 9181, '허접하다': 9182, '허접하다.': 9183, '허접한': 9184, '허허': 9185, '헉': 9186, '헐': 9187, '헐..': 9188, '헐...': 9189, '헐리우드': 9190, '헐리웃': 9191, '헛웃음만': 9192, '현': 9193, '현대': 9194, '현대의': 9195, '현대판': 9196, '현란한': 9197, '현실': 9198, '현실.': 9199, '현실감': 9200, '현실감이': 9201, '현실과': 9202, '현실성': 9203, '현실성도': 9204, '현실성을': 9205, '현실성이': 9206, '현실에': 9207, '현실에서': 9208, '현실은': 9209, '현실을': 9210, '현실의': 9211, '현실이': 9212, '현실적으로': 9213, '현실적이고': 9214, '현실적이어서': 9215, '현실적인': 9216, '현재': 9217, '현재를': 9218, '현재의': 9219, '형': 9220, '형님': 9221, '형사': 9222, '형이': 9223, '형제의': 9224, '형편': 9225, '형편없는': 9226, '호기심에': 9227, '호기심을': 9228, '호러': 9229, '호불호가': 9230, '호흡이': 9231, '혹시': 9232, '혹시나': 9233, '혹은': 9234, '혼자': 9235, '혼자서': 9236, '홍금보': 9237, '홍보': 9238, '홍보가': 9239, '홍상수': 9240, '홍콩': 9241, '홍콩영화': 9242, '홍콩영화의': 9243, '화': 9244, '화가': 9245, '화가난다': 9246, '화끈하게': 9247, '화끈한': 9248, '화나는': 9249, '화나서': 9250, '화난다': 9251, '화려하게': 9252, '화려하고': 9253, '화려한': 9254, '화면': 9255, '화면,': 9256, '화면과': 9257, '화면도': 9258, '화면만': 9259, '화면에': 9260, '화면은': 9261, '화면이': 9262, '화이팅': 9263, '화이팅!': 9264, '화이팅!!': 9265, '화장실': 9266, '확': 9267, '확실하게': 9268, '확실한': 9269, '확실히': 9270, '환상': 9271, '환상을': 9272, '환상의': 9273, '환상적인': 9274, '환타지': 9275, '홧팅': 9276, '황당': 9277, '황당하고': 9278, '황당한': 9279, '황정민': 9280, '황홀한': 9281, '획을': 9282, '효과': 9283, '후': 9284, '후...': 9285, '후로': 9286, '후반': 9287, '후반부': 9288, '후반부가': 9289, '후반부는': 9290, '후반부로': 9291, '후반부에': 9292, '후반에': 9293, '후반으로': 9294, '후반의': 9295, '후속작': 9296, '후속편': 9297, '후에': 9298, '후에도': 9299, '후하게': 9300, '후한': 9301, '후회': 9302, '후회가': 9303, '후회는': 9304, '후회하지': 9305, '후회할': 9306, '후회함': 9307, '훈훈하고': 9308, '훈훈한': 9309, '훌륭하고': 9310, '훌륭하다': 9311, '훌륭하다.': 9312, '훌륭한': 9313, '훌륭함': 9314, '훌륭했다.': 9315, '훨': 9316, '훨씬': 9317, '휴': 9318, '휴...': 9319, '흐르는': 9320, '흐른다.': 9321, '흐름': 9322, '흐름도': 9323, '흐름에': 9324, '흐름을': 9325, '흐름이': 9326, '흐지부지': 9327, '흑역사': 9328, '흑인': 9329, '흑흑': 9330, '흔들리는': 9331, '흔적이': 9332, '흔하디': 9333, '흔한': 9334, '흔해빠진': 9335, '흔히': 9336, '흘러': 9337, '흘러가는': 9338, '흘러나오는': 9339, '흘러도': 9340, '흘리게': 9341, '흘리며': 9342, '흠': 9343, '흠.': 9344, '흠..': 9345, '흠...': 9346, '흠뻑': 9347, '흠잡을': 9348, '흠잡을데': 9349, '흡입력': 9350, '흡입력이': 9351, '흥미': 9352, '흥미가': 9353, '흥미도': 9354, '흥미로운': 9355, '흥미로웠다.': 9356, '흥미롭게': 9357, '흥미롭고': 9358, '흥미롭지': 9359, '흥미를': 9360, '흥미진진': 9361, '흥미진진하게': 9362, '흥미진진하고': 9363, '흥미진진한': 9364, '흥해라': 9365, '흥행': 9366, '흥행에': 9367, '흥행은': 9368, '흥행을': 9369, '흥행이': 9370, '희대의': 9371, '희망': 9372, '희망을': 9373, '희망이': 9374, '희생을': 9375, '히어로': 9376, '히어로물': 9377, '힐링': 9378, '힐링이': 9379, '힘': 9380, '힘.': 9381, '힘과': 9382, '힘내세요': 9383, '힘든': 9384, '힘들': 9385, '힘들게': 9386, '힘들고': 9387, '힘들다': 9388, '힘들다.': 9389, '힘들다..': 9390, '힘들듯': 9391, '힘들었다.': 9392, '힘으로': 9393, '힘을': 9394, '힘이': 9395}\n" ], [ "# idx_to_token\nprint(vocab.idx_to_token)", "{0: '<unk>', 1: '<pad>', 2: '!', 3: '!!', 4: '!!!', 5: '!!!!', 6: '\"\"', 7: '\"\"\"', 8: '\"이', 9: '&', 10: \"'\", 11: '(10자', 12: ')', 13: '+', 14: ',', 15: ',,', 16: ',,,', 17: '-', 18: '--', 19: '->', 20: '-_', 21: '-_-', 22: '-_-;', 23: '-_-;;', 24: '.', 25: '..', 26: '...', 27: '....', 28: '.....', 29: '......', 30: '/', 31: '//', 32: '0', 33: '007', 34: '0개는', 35: '0점', 36: '0점도', 37: '0점은', 38: '0점을', 39: '0점이', 40: '1', 41: 
'1,', 42: '1,2', 43: '1.', 44: '10', 45: '10,', 46: '100%', 47: '100배', 48: '100점', 49: '10개', 50: '10년', 51: '10년도', 52: '10년만에', 53: '10년이', 54: '10년전', 55: '10년전에', 56: '10대', 57: '10번', 58: '10번도', 59: '10분', 60: '10자', 61: '10점', 62: '10점!', 63: '10점.', 64: '10점도', 65: '10점만점에', 66: '10점은', 67: '10점을', 68: '10점이', 69: '10점이다.', 70: '10점주는', 71: '10점준', 72: '10점준다', 73: '10점준다.', 74: '10점줌', 75: '10점짜리', 76: '11점을', 77: '12세', 78: '15세', 79: '18', 80: '19금', 81: '1개', 82: '1개도', 83: '1도', 84: '1등', 85: '1시간', 86: '1위', 87: '1은', 88: '1을', 89: '1이', 90: '1인', 91: '1인칭', 92: '1점', 93: '1점.', 94: '1점도', 95: '1점도아깝다', 96: '1점은', 97: '1점을', 98: '1점이', 99: '1점주기도', 100: '1점주는', 101: '1점준다', 102: '1점줌', 103: '1점짜리', 104: '1편', 105: '1편과', 106: '1편도', 107: '1편보다', 108: '1편보단', 109: '1편에', 110: '1편에서', 111: '1편은', 112: '1편을', 113: '1편의', 114: '1편이', 115: '1회부터', 116: '2', 117: '2%', 118: '2,', 119: '2.22', 120: '2000년대', 121: '2003년', 122: '2004년', 123: '2012', 124: '2012년', 125: '2014년', 126: '2014년에', 127: '2015년', 128: '20년', 129: '20년이', 130: '20년전', 131: '20대', 132: '20분', 133: '20세기', 134: '21세기', 135: '21세기에', 136: '2가', 137: '2는', 138: '2도', 139: '2류', 140: '2를', 141: '2배속으로', 142: '2번', 143: '2번째', 144: '2시간', 145: '2시간동안', 146: '2시간을', 147: '2시간이', 148: '2점', 149: '2점도', 150: '2점부터', 151: '2점은', 152: '2점을', 153: '2점준다', 154: '2탄', 155: '2편', 156: '2편도', 157: '2편에', 158: '2편은', 159: '2편을', 160: '2편이', 161: '3', 162: '3,', 163: '30년', 164: '30년전', 165: '30대', 166: '30분', 167: '3D', 168: '3D로', 169: '3개', 170: '3는', 171: '3대', 172: '3류', 173: '3번', 174: '3점', 175: '3점.', 176: '3점도', 177: '3편', 178: '3편은', 179: '3편이', 180: '4', 181: '4,', 182: '4.44', 183: '40대', 184: '4점', 185: '4점이', 186: '5', 187: '5개', 188: '5번', 189: '5분', 190: '5점', 191: '5점은', 192: '60년대', 193: '6살', 194: '6점', 195: '6점대', 196: '70년대', 197: '7점', 198: '7점대', 199: '7점대가', 200: '7점은', 201: '7점이', 202: '80년대', 203: '8점', 204: '8점대', 205: '8점대는', 206: '8점은', 207: '9', 208: '90년대', 209: '90분', 210: '9점', 211: '9점.', 212: '9점대', 213: '9점대는', 214: '9점은', 215: '9점을', 216: ':', 217: ':)', 218: ';', 219: ';;', 220: ';;;', 221: ';;;;', 222: '=', 223: '?', 224: '??', 225: '???', 226: 'A', 227: 'A급', 228: 'Bad', 229: 'Best', 230: 'B급', 231: 'B급도', 232: 'B급영화', 233: 'CG', 234: 'CGV에서', 235: 'CG가', 236: 'CG는', 237: 'CG도', 238: 'CG로', 239: 'CG에', 240: 'CG와', 241: 'C급', 242: 'DVD', 243: 'DVD로', 244: 'EBS', 245: 'EBS에서', 246: 'Good', 247: 'I', 248: 'KBS', 249: 'M창', 250: 'M창있네', 251: 'No', 252: 'OCN에서', 253: 'OO', 254: 'OOO', 255: 'OOOO', 256: 'OOO기', 257: 'OOO기영화', 258: 'OO같은', 259: 'OST', 260: 'OST가', 261: 'OST는', 262: 'OST도', 263: 'SF', 264: 'TV', 265: 'TV로', 266: 'TV에서', 267: 'The', 268: 'Very', 269: 'You', 270: '[', 271: '[4점]', 272: ']', 273: '^', 274: '^-^', 275: '^^', 276: '^^*', 277: '^^;', 278: '^_^', 279: 'a', 280: 'and', 281: 'b', 282: 'bad', 283: 'bb', 284: 'be', 285: 'best', 286: 'boring', 287: 'but', 288: 'b급', 289: 'cg', 290: 'cg가', 291: 'cg는', 292: 'cg도', 293: 'c급', 294: 'dvd', 295: 'dvd로', 296: 'ebs에서', 297: 'for', 298: 'good', 299: 'good!', 300: 'i', 301: 'in', 302: 'is', 303: 'it', 304: 'love', 305: 'me', 306: 'movie', 307: 'my', 308: 'no', 309: 'not', 310: 'ocn에서', 311: 'of', 312: 'ost', 313: 'ost가', 314: 'ost는', 315: 'ost도', 316: 'ost만', 317: 'sf', 318: 'so', 319: 'the', 320: 'this', 321: 'time', 322: 'to', 323: 'tv', 324: 'tv로', 325: 'tv에서', 326: 'very', 327: 'vs', 328: 'you', 329: '~', 330: '~~', 331: '~~~', 332: '~~~~~~', 333: '~~~~~~~', 334: '↓', 335: '♡', 336: '♥', 337: 'ㄱ', 338: 'ㄱ-', 339: 'ㄱㄱ', 340: 
'ㄴㄴ', 341: 'ㄷ', 342: 'ㄷㄷ', 343: 'ㄷㄷㄷ', 344: 'ㄹㅇ', 345: 'ㅁㅊ', 346: 'ㅂ', 347: 'ㅄ', 348: 'ㅅ', 349: 'ㅅㄱ', 350: 'ㅅㅂ', 351: 'ㅆㄹㄱ', 352: 'ㅇ', 353: 'ㅇㅇ', 354: 'ㅇㅇㅇ', 355: 'ㅈㄴ', 356: 'ㅉ', 357: 'ㅉㅉ', 358: 'ㅉㅉㅉ', 359: 'ㅋ', 360: 'ㅋㅋ', 361: 'ㅋㅋㅋ', 362: 'ㅋㅋㅋㅋ', 363: 'ㅋㅋㅋㅋㅋ', 364: 'ㅋㅋㅋㅋㅋㅋ', 365: 'ㅋㅋㅋㅋㅋㅋㅋ', 366: 'ㅋㅋㅋㅋㅋㅋㅋㅋ', 367: 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋ', 368: 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ', 369: 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ', 370: 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ', 371: 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ', 372: 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ', 373: 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ', 374: 'ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ', 375: 'ㅎ', 376: 'ㅎㄷㄷ', 377: 'ㅎㅎ', 378: 'ㅎㅎㅎ', 379: 'ㅎㅎㅎㅎ', 380: 'ㅗ', 381: 'ㅜ', 382: 'ㅜ.ㅜ', 383: 'ㅜㅜ', 384: 'ㅜㅜㅜ', 385: 'ㅜㅠ', 386: 'ㅠ', 387: 'ㅠ.ㅠ', 388: 'ㅠ_ㅠ', 389: 'ㅠㅜ', 390: 'ㅠㅠ', 391: 'ㅠㅠㅠ', 392: 'ㅠㅠㅠㅠ', 393: 'ㅡ', 394: 'ㅡ,.ㅡ', 395: 'ㅡ.ㅡ', 396: 'ㅡㅜ', 397: 'ㅡㅡ', 398: 'ㅡㅡ;', 399: 'ㅡㅡ;;', 400: 'ㅡㅡㅋ', 401: '가', 402: '가게', 403: '가고', 404: '가까운', 405: '가까이', 406: '가끔', 407: '가끔씩', 408: '가난한', 409: '가네', 410: '가는', 411: '가는게', 412: '가는데', 413: '가는줄', 414: '가능성을', 415: '가능성이', 416: '가능한', 417: '가득', 418: '가득찬', 419: '가득한', 420: '가만히', 421: '가면', 422: '가면갈수록', 423: '가문의', 424: '가벼운', 425: '가볍게', 426: '가볍고', 427: '가볍지', 428: '가서', 429: '가수', 430: '가슴', 431: '가슴속에', 432: '가슴아프고', 433: '가슴아픈', 434: '가슴에', 435: '가슴으로', 436: '가슴을', 437: '가슴이', 438: '가슴찡한', 439: '가시지', 440: '가야', 441: '가운데', 442: '가자', 443: '가장', 444: '가장한', 445: '가정', 446: '가져다', 447: '가족', 448: '가족,', 449: '가족과', 450: '가족끼리', 451: '가족들', 452: '가족들과', 453: '가족들이', 454: '가족애를', 455: '가족에', 456: '가족영화', 457: '가족영화로', 458: '가족은', 459: '가족을', 460: '가족의', 461: '가족이', 462: '가족이랑', 463: '가지', 464: '가지게', 465: '가지고', 466: '가지는', 467: '가진', 468: '가질', 469: '가짜', 470: '가치', 471: '가치가', 472: '가치는', 473: '가치도', 474: '가치를', 475: '가치있는', 476: '가히', 477: '각', 478: '각각', 479: '각각의', 480: '각본', 481: '각본,', 482: '각본.', 483: '각본과', 484: '각본을', 485: '각본이', 486: '각자', 487: '각자의', 488: '간', 489: '간간히', 490: '간다', 491: '간다.', 492: '간단한', 493: '간만에', 494: '간신히', 495: '간에', 496: '간의', 497: '간절히', 498: '간직하고', 499: '갇힌', 500: '갈', 501: '갈등', 502: '갈등과', 503: '갈등이', 504: '갈리는', 505: '갈수록', 506: '갈피를', 507: '감', 508: '감각이', 509: '감각적인', 510: '감독', 511: '감독,', 512: '감독.', 513: '감독..', 514: '감독과', 515: '감독님', 516: '감독님의', 517: '감독님이', 518: '감독도', 519: '감독들', 520: '감독아', 521: '감독에', 522: '감독에게', 523: '감독은', 524: '감독을', 525: '감독의', 526: '감독이', 527: '감독이나', 528: '감독이랑', 529: '감독한테', 530: '감동', 531: '감동!', 532: '감동,', 533: '감동.', 534: '감동..', 535: '감동...', 536: '감동~', 537: '감동과', 538: '감동까지', 539: '감동도', 540: '감동도없고', 541: '감동도있고', 542: '감동에', 543: '감동으로', 544: '감동은', 545: '감동을', 546: '감동의', 547: '감동이', 548: '감동이나', 549: '감동이다', 550: '감동입니다.', 551: '감동있게', 552: '감동있고', 553: '감동있는', 554: '감동적', 555: '감동적으로', 556: '감동적이고', 557: '감동적이네요', 558: '감동적이네요.', 559: '감동적이다', 560: '감동적이다.', 561: '감동적이었다', 562: '감동적이었다.', 563: '감동적이었습니다.', 564: '감동적이에요', 565: '감동적인', 566: '감동적임', 567: '감동적입니다', 568: '감동적입니다.', 569: '감명', 570: '감명깊게', 571: '감명깊은', 572: '감명을', 573: '감사드립니다.', 574: '감사하게', 575: '감사하고', 576: '감사합니다', 577: '감사합니다.', 578: '감상', 579: '감상한', 580: '감성', 581: '감성에', 582: '감성을', 583: '감성의', 584: '감성이', 585: '감성적인', 586: '감성팔이', 587: '감안하고', 588: '감안하면', 589: '감안해도', 590: '감이', 591: '감정', 592: '감정선을', 593: '감정에', 594: '감정은', 595: '감정을', 596: '감정의', 597: '감정이', 598: '감정이입이', 599: '감춰진', 600: '감탄', 601: '감회가', 602: '감흥도', 603: '감흥이', 604: '감히', 605: '갑', 606: '갑니다', 607: '갑니다.', 608: '갑자기', 609: '갑작스런', 610: '갓', 611: '갔는데', 612: '갔다', 613: '갔다.', 614: '갔다가', 615: '갔던', 616: '강동원', 617: '강력', 618: '강력한', 619: '강렬하게', 620: '강렬한', 621: '강아지', 622: '강요하는', 623: '강지환', 624: '강추', 625: '강추!', 626: '강추!!', 627: 
'강추!!!', 628: '강추.', 629: '강추~', 630: '강추합니다', 631: '강추합니다.', 632: '강하게', 633: '강한', 634: '강해서', 635: '강혜정', 636: '갖게', 637: '갖고', 638: '갖는', 639: '갖다', 640: '갖추고', 641: '갖춘', 642: '같고', 643: '같기도', 644: '같네', 645: '같네요', 646: '같네요.', 647: '같다', 648: '같다!', 649: '같다.', 650: '같다..', 651: '같다...', 652: '같다는', 653: '같습니다', 654: '같습니다.', 655: '같아', 656: '같아서', 657: '같아요', 658: '같아요.', 659: '같아요~', 660: '같았다.', 661: '같은', 662: '같은데', 663: '같은데.', 664: '같은데..', 665: '같은데...', 666: '같음', 667: '같음.', 668: '같음..', 669: '같이', 670: '같잖은', 671: '같지', 672: '같지도', 673: '같지만', 674: '개', 675: '개가', 676: '개같은', 677: '개그', 678: '개그가', 679: '개그는', 680: '개그맨', 681: '개그맨들', 682: '개꿀잼', 683: '개나', 684: '개나소나', 685: '개노잼', 686: '개는', 687: '개막장', 688: '개봉', 689: '개봉당시', 690: '개봉을', 691: '개봉하고', 692: '개봉하는', 693: '개봉하지', 694: '개봉한', 695: '개봉할', 696: '개봉했을때', 697: '개뿔', 698: '개뿔.', 699: '개성이', 700: '개성있는', 701: '개연성', 702: '개연성과', 703: '개연성도', 704: '개연성없는', 705: '개연성은', 706: '개연성이', 707: '개인', 708: '개인의', 709: '개인적', 710: '개인적으로', 711: '개인적으로는', 712: '개인적으론', 713: '개인적인', 714: '개잼', 715: '개콘', 716: '개판', 717: '개판.', 718: '객관적으로', 719: '갠적으로', 720: '걍', 721: '거', 722: '거기', 723: '거기다', 724: '거기다가', 725: '거기서', 726: '거기에', 727: '거꾸로', 728: '거냐?', 729: '거다.', 730: '거대한', 731: '거듭할수록', 732: '거라', 733: '거라는', 734: '거랑', 735: '거리가', 736: '거리는', 737: '거부감이', 738: '거북한', 739: '거슬린다.', 740: '거야', 741: '거의', 742: '거의다', 743: '거장', 744: '거장의', 745: '거지', 746: '거지?', 747: '거지같은', 748: '거짓말', 749: '거짓말을', 750: '거친', 751: '거침없이', 752: '거품', 753: '거품이', 754: '건', 755: '건가?', 756: '건데', 757: '건지', 758: '건지..', 759: '건지...', 760: '건진', 761: '건질', 762: '건질게', 763: '걷는', 764: '걸', 765: '걸까?', 766: '걸로', 767: '걸린', 768: '걸작', 769: '걸작!', 770: '걸작.', 771: '걸작을', 772: '걸작이', 773: '걸작이다', 774: '걸작이다.', 775: '검은', 776: '겁나', 777: '겁나게', 778: '겁니다.', 779: '것', 780: '것!', 781: '것,', 782: '것.', 783: '것..', 784: '것...', 785: '것과', 786: '것도', 787: '것들', 788: '것들은', 789: '것들을', 790: '것들이', 791: '것만', 792: '것만으로도', 793: '것보다', 794: '것에', 795: '것으로', 796: '것은', 797: '것을', 798: '것의', 799: '것이', 800: '것이다', 801: '것이다.', 802: '것인가', 803: '것인가?', 804: '것인지', 805: '것입니다.', 806: '것처럼', 807: '겉만', 808: '겉멋만', 809: '게', 810: '게다가', 811: '게이', 812: '게이는', 813: '게임', 814: '게임을', 815: '게임의', 816: '겨우', 817: '겨울에', 818: '겨울왕국', 819: '겪는', 820: '견자단', 821: '견자단이', 822: '결과가', 823: '결과는', 824: '결과를', 825: '결국', 826: '결국엔', 827: '결국은', 828: '결론', 829: '결론은', 830: '결론이', 831: '결말', 832: '결말,', 833: '결말.', 834: '결말..', 835: '결말...', 836: '결말까지', 837: '결말도', 838: '결말로', 839: '결말만', 840: '결말에', 841: '결말은', 842: '결말을', 843: '결말의', 844: '결말이', 845: '결정적으로', 846: '결코', 847: '결혼', 848: '결혼하고', 849: '경악을', 850: '경의를', 851: '경이로운', 852: '경찰', 853: '경찰은', 854: '경찰을', 855: '경찰이', 856: '경쾌한', 857: '경험을', 858: '경험이', 859: '곁에', 860: '계기가', 861: '계기로', 862: '계속', 863: '계속해서', 864: '고', 865: '고뇌와', 866: '고등학교', 867: '고등학교때', 868: '고딩때', 869: '고로', 870: '고르는', 871: '고마운', 872: '고맙습니다', 873: '고민을', 874: '고생', 875: '고생한', 876: '고스란히', 877: '고양이', 878: '고양이가', 879: '고인의', 880: '고작', 881: '고전', 882: '고질라', 883: '고통과', 884: '고통을', 885: '곧', 886: '골', 887: '골때리는', 888: '골라서', 889: '곳곳에', 890: '곳에', 891: '곳에서', 892: '곳이', 893: '공간', 894: '공간에서', 895: '공감', 896: '공감가는', 897: '공감대', 898: '공감도', 899: '공감되고', 900: '공감되는', 901: '공감되지', 902: '공감은', 903: '공감을', 904: '공감이', 905: '공감하고', 906: '공감하기', 907: '공감할', 908: '공부', 909: '공유', 910: '공존하는', 911: '공중파', 912: '공짜로', 913: '공포', 914: '공포가', 915: '공포는', 916: '공포도', 917: '공포를', 918: '공포물', 919: '공포에', 920: '공포영화', 921: '공포영화.', 922: '공포영화가', 
923: '공포영화는', 924: '공포영화라고', 925: '공포영화를', 926: '공포영화의', 927: '공포와', 928: '공포의', 929: '공허한', 930: '과', 931: '과감한', 932: '과거', 933: '과거가', 934: '과거를', 935: '과거에', 936: '과거와', 937: '과거의', 938: '과도한', 939: '과언이', 940: '과연', 941: '과장된', 942: '과정은', 943: '과정을', 944: '과정이', 945: '과하게', 946: '과한', 947: '관객', 948: '관객들', 949: '관객들에게', 950: '관객들을', 951: '관객들의', 952: '관객들이', 953: '관객에게', 954: '관객은', 955: '관객을', 956: '관객의', 957: '관객이', 958: '관계', 959: '관계가', 960: '관계를', 961: '관계에', 962: '관계의', 963: '관람', 964: '관람객', 965: '관련', 966: '관심', 967: '관심을', 968: '관심이', 969: '관점에서', 970: '관점이', 971: '관한', 972: '광고', 973: '광고를', 974: '광기를', 975: '괜찮게', 976: '괜찮고', 977: '괜찮네요', 978: '괜찮다', 979: '괜찮다.', 980: '괜찮다고', 981: '괜찮아서', 982: '괜찮았고', 983: '괜찮았는데', 984: '괜찮았다', 985: '괜찮았다.', 986: '괜찮았던', 987: '괜찮았음', 988: '괜찮았음.', 989: '괜찮은', 990: '괜찮은데', 991: '괜찮을', 992: '괜찮음', 993: '괜찮음.', 994: '괜찮지만', 995: '괜히', 996: '괴물', 997: '괴물이', 998: '굉장한', 999: '굉장히', 1000: '교과서', 1001: '교훈', 1002: '교훈과', 1003: '교훈도', 1004: '교훈은', 1005: '교훈을', 1006: '교훈이', 1007: '교훈적인', 1008: '구려', 1009: '구리고', 1010: '구리다', 1011: '구분이', 1012: '구석이', 1013: '구성', 1014: '구성,', 1015: '구성.', 1016: '구성과', 1017: '구성도', 1018: '구성은', 1019: '구성을', 1020: '구성이', 1021: '구지', 1022: '구하기', 1023: '구할', 1024: '구해서', 1025: '구혜선', 1026: '국내', 1027: '국내에서', 1028: '국민', 1029: '국민이', 1030: '국산', 1031: '국어책', 1032: '군', 1033: '군대', 1034: '군대를', 1035: '군대에서', 1036: '군더더기', 1037: '군인', 1038: '굳', 1039: '굳!', 1040: '굳굳', 1041: '굳이', 1042: '굿', 1043: '굿!', 1044: '굿!!', 1045: '굿,', 1046: '굿.', 1047: '굿~', 1048: '굿굿', 1049: '굿굿굿', 1050: '굿바이', 1051: '궁금하게', 1052: '궁금하다', 1053: '궁금하다.', 1054: '궁금해서', 1055: '권상우', 1056: '권하고', 1057: '귀가', 1058: '귀를', 1059: '귀신', 1060: '귀신은', 1061: '귀신이', 1062: '귀에', 1063: '귀여운', 1064: '귀여움', 1065: '귀여워', 1066: '귀여워서', 1067: '귀여워요', 1068: '귀엽고', 1069: '귀엽다', 1070: '귀찮아서', 1071: '그', 1072: '그가', 1073: '그거', 1074: '그건', 1075: '그걸', 1076: '그걸로', 1077: '그것', 1078: '그것도', 1079: '그것은', 1080: '그것을', 1081: '그것이', 1082: '그게', 1083: '그나마', 1084: '그나저나', 1085: '그냥', 1086: '그냥..', 1087: '그냥...', 1088: '그냥저냥', 1089: '그녀', 1090: '그녀가', 1091: '그녀는', 1092: '그녀들의', 1093: '그녀를', 1094: '그녀의', 1095: '그놈의', 1096: '그는', 1097: '그다지', 1098: '그다지...', 1099: '그닥', 1100: '그닥..', 1101: '그닥...', 1102: '그당시', 1103: '그대로', 1104: '그대로의', 1105: '그동안', 1106: '그들', 1107: '그들만의', 1108: '그들에게', 1109: '그들은', 1110: '그들을', 1111: '그들의', 1112: '그들이', 1113: '그때', 1114: '그때는', 1115: '그때의', 1116: '그땐', 1117: '그래', 1118: '그래,', 1119: '그래도', 1120: '그래두', 1121: '그래서', 1122: '그래픽', 1123: '그래픽도', 1124: '그래픽은', 1125: '그래픽이', 1126: '그러고', 1127: '그러나', 1128: '그러니', 1129: '그러니까', 1130: '그러면', 1131: '그러면서', 1132: '그러지', 1133: '그럭저럭', 1134: '그런', 1135: '그런가', 1136: '그런가?', 1137: '그런거', 1138: '그런건', 1139: '그런걸', 1140: '그런게', 1141: '그런대로', 1142: '그런데', 1143: '그런지', 1144: '그럴', 1145: '그럴듯하게', 1146: '그럴듯한', 1147: '그럴싸하게', 1148: '그럼', 1149: '그럼에도', 1150: '그렇게', 1151: '그렇고', 1152: '그렇고..', 1153: '그렇다', 1154: '그렇다.', 1155: '그렇다고', 1156: '그렇다쳐도', 1157: '그렇다치고', 1158: '그렇지', 1159: '그렇지만', 1160: '그려낸', 1161: '그려냈다.', 1162: '그려진', 1163: '그로테스크한', 1164: '그를', 1165: '그리', 1166: '그리고', 1167: '그리고,', 1168: '그리운', 1169: '그리워', 1170: '그린', 1171: '그림', 1172: '그림도', 1173: '그림이', 1174: '그림체가', 1175: '그림체도', 1176: '그립다', 1177: '그립다.', 1178: '그만', 1179: '그만좀', 1180: '그만큼', 1181: '그보다', 1182: '그시절', 1183: '그야말로', 1184: '그에', 1185: '그와', 1186: '그외', 1187: '그은', 1188: '그의', 1189: '그이상', 1190: '그이상도', 1191: '그이하도', 1192: '그자체', 1193: '그저', 1194: '그저그런', 1195: '그정도로', 1196: '그중', 1197: '그지', 1198: '그지같은', 1199: '그치만', 1200: 
'그토록', 1201: '극', 1202: '극과', 1203: '극단적인', 1204: '극본', 1205: '극으로', 1206: '극을', 1207: '극의', 1208: '극장', 1209: '극장가서', 1210: '극장서', 1211: '극장에', 1212: '극장에서', 1213: '극장판', 1214: '극장판은', 1215: '극장판을', 1216: '극장판이', 1217: '극적인', 1218: '극중', 1219: '극치', 1220: '극치.', 1221: '극치를', 1222: '극한의', 1223: '극혐', 1224: '극히', 1225: '근', 1226: '근데', 1227: '근래', 1228: '근래에', 1229: '글', 1230: '글고', 1231: '글구', 1232: '글쎄', 1233: '글쎄.', 1234: '글쎄..', 1235: '글쎄...', 1236: '글을', 1237: '금방', 1238: '금치', 1239: '급', 1240: '급이', 1241: '급하게', 1242: '기', 1243: '기가', 1244: '기괴한', 1245: '기다리고', 1246: '기다리는', 1247: '기대', 1248: '기대가', 1249: '기대는', 1250: '기대도', 1251: '기대되는', 1252: '기대된다', 1253: '기대된다.', 1254: '기대됨', 1255: '기대됩니다', 1256: '기대됩니다.', 1257: '기대를', 1258: '기대보다', 1259: '기대안하고', 1260: '기대안하고봤는데', 1261: '기대안했는데', 1262: '기대없이', 1263: '기대에', 1264: '기대이상', 1265: '기대이상으로', 1266: '기대이하', 1267: '기대하게', 1268: '기대하고', 1269: '기대하고봤는데', 1270: '기대하는', 1271: '기대하며', 1272: '기대하면서', 1273: '기대하지', 1274: '기대한', 1275: '기대합니다', 1276: '기대했는데', 1277: '기대했다가', 1278: '기대했던', 1279: '기대했지만', 1280: '기독교', 1281: '기막힌', 1282: '기묘한', 1283: '기발하고', 1284: '기발한', 1285: '기본', 1286: '기본적으로', 1287: '기본적인', 1288: '기분', 1289: '기분.', 1290: '기분..', 1291: '기분나쁜', 1292: '기분만', 1293: '기분을', 1294: '기분이', 1295: '기분좋게', 1296: '기분좋은', 1297: '기사', 1298: '기억', 1299: '기억.', 1300: '기억나는', 1301: '기억나는건', 1302: '기억난다', 1303: '기억난다.', 1304: '기억도', 1305: '기억될', 1306: '기억속에', 1307: '기억에', 1308: '기억에서', 1309: '기억은', 1310: '기억을', 1311: '기억이', 1312: '기억이..', 1313: '기억이...', 1314: '기억하고', 1315: '기억하는', 1316: '기자', 1317: '기존', 1318: '기존에', 1319: '기존의', 1320: '기준으로', 1321: '기준이', 1322: '기타', 1323: '기타노', 1324: '기회가', 1325: '기회를', 1326: '긴', 1327: '긴박감', 1328: '긴박감이', 1329: '긴장감', 1330: '긴장감,', 1331: '긴장감과', 1332: '긴장감도', 1333: '긴장감은', 1334: '긴장감을', 1335: '긴장감이', 1336: '긴장을', 1337: '길', 1338: '길게', 1339: '길고', 1340: '길다', 1341: '길다.', 1342: '길어서', 1343: '길을', 1344: '길이', 1345: '김강우', 1346: '김구라', 1347: '김기덕', 1348: '김래원', 1349: '김민종', 1350: '김수로', 1351: '김수현', 1352: '김태희', 1353: '김혜수', 1354: '김희선', 1355: '깊게', 1356: '깊고', 1357: '깊다.', 1358: '깊은', 1359: '깊이', 1360: '깊이가', 1361: '깊이있는', 1362: '까는', 1363: '까지', 1364: '까지는', 1365: '깔끔하게', 1366: '깔끔하고', 1367: '깔끔한', 1368: '깜놀', 1369: '깜짝', 1370: '깨고', 1371: '깨닫게', 1372: '깨닫고', 1373: '깨알같은', 1374: '꺼버렸다.', 1375: '껏다', 1376: '껐다', 1377: '껐다.', 1378: '꼬마', 1379: '꼬박꼬박', 1380: '꼭', 1381: '꼭보세요', 1382: '꼭봐라', 1383: '꼴', 1384: '꼽는', 1385: '꼽으라면', 1386: '꼽을', 1387: '꼽히는', 1388: '꽃보다', 1389: '꽉', 1390: '꽝', 1391: '꽤', 1392: '꽤나', 1393: '꾀', 1394: '꾸역꾸역', 1395: '꾸준히', 1396: '꾹', 1397: '꿀잼', 1398: '꿈', 1399: '꿈과', 1400: '꿈꾸는', 1401: '꿈에', 1402: '꿈을', 1403: '꿈이', 1404: '끄고', 1405: '끄는', 1406: '끊기는', 1407: '끊이지', 1408: '끊임없는', 1409: '끊임없이', 1410: '끌고', 1411: '끌리는', 1412: '끌어', 1413: '끌지', 1414: '끔', 1415: '끔찍한', 1416: '끝', 1417: '끝.', 1418: '끝까지', 1419: '끝나고', 1420: '끝나고도', 1421: '끝나는', 1422: '끝나면', 1423: '끝나서', 1424: '끝나지', 1425: '끝난', 1426: '끝난다.', 1427: '끝날', 1428: '끝날때', 1429: '끝날때까지', 1430: '끝남', 1431: '끝내', 1432: '끝내주게', 1433: '끝내주는', 1434: '끝도', 1435: '끝없는', 1436: '끝에', 1437: '끝에서', 1438: '끝으로', 1439: '끝은', 1440: '끝을', 1441: '끝이', 1442: '끝판왕', 1443: '끼고', 1444: '끼워', 1445: '나', 1446: '나가고', 1447: '나가는', 1448: '나같은', 1449: '나게', 1450: '나고', 1451: '나까지', 1452: '나네', 1453: '나네요', 1454: '나네요.', 1455: '나는', 1456: '나니', 1457: '나도', 1458: '나도모르게', 1459: '나두', 1460: '나라', 1461: '나라가', 1462: '나라는', 1463: '나라를', 1464: '나라면', 1465: '나라에', 1466: '나라의', 1467: '나랑', 1468: '나레이션', 1469: '나로써는', 1470: '나루토', 1471: '나를', 1472: '나름', 1473: '나름대로', 
1474: '나름의', 1475: '나만', 1476: '나머지', 1477: '나머지는', 1478: '나머진', 1479: '나면', 1480: '나발이고', 1481: '나쁘게', 1482: '나쁘지', 1483: '나쁘지는', 1484: '나쁘진', 1485: '나쁜', 1486: '나서', 1487: '나서도', 1488: '나약한', 1489: '나에게', 1490: '나에게는', 1491: '나에게도', 1492: '나에겐', 1493: '나오게', 1494: '나오고', 1495: '나오기', 1496: '나오긴', 1497: '나오길', 1498: '나오길래', 1499: '나오네', 1500: '나오네요', 1501: '나오네요.', 1502: '나오는', 1503: '나오는거', 1504: '나오는건', 1505: '나오는게', 1506: '나오는데', 1507: '나오는지', 1508: '나오니', 1509: '나오니까', 1510: '나오다니', 1511: '나오던', 1512: '나오면', 1513: '나오면서', 1514: '나오지', 1515: '나오지도', 1516: '나오지만', 1517: '나온', 1518: '나온거', 1519: '나온건', 1520: '나온게', 1521: '나온다', 1522: '나온다.', 1523: '나온다고', 1524: '나온다는', 1525: '나올', 1526: '나올까', 1527: '나올때', 1528: '나올때마다', 1529: '나올법한', 1530: '나올수', 1531: '나옴', 1532: '나옴.', 1533: '나와', 1534: '나와는', 1535: '나와도', 1536: '나와서', 1537: '나와야', 1538: '나왔는데', 1539: '나왔는지', 1540: '나왔다', 1541: '나왔다.', 1542: '나왔다고', 1543: '나왔다는', 1544: '나왔다면', 1545: '나왔던', 1546: '나왔으면', 1547: '나왔을', 1548: '나왔음', 1549: '나은', 1550: '나은듯', 1551: '나을', 1552: '나을듯', 1553: '나음', 1554: '나음.', 1555: '나의', 1556: '나이', 1557: '나이가', 1558: '나이를', 1559: '나이먹고', 1560: '나이에', 1561: '나이트', 1562: '나중에', 1563: '나중에는', 1564: '나중엔', 1565: '나지', 1566: '나참', 1567: '나처럼', 1568: '나타나서', 1569: '나타난', 1570: '나타낸', 1571: '나한테', 1572: '나한테는', 1573: '나혼자', 1574: '나홀로', 1575: '낚시', 1576: '낚여서', 1577: '낚였네', 1578: '낚였다', 1579: '낚였다.', 1580: '낚이지', 1581: '낚인', 1582: '난', 1583: '난다', 1584: '난다.', 1585: '난무하는', 1586: '난생', 1587: '난잡한', 1588: '난해한', 1589: '날', 1590: '날로', 1591: '날리고', 1592: '날이', 1593: '남', 1594: '남.', 1595: '남고', 1596: '남기고', 1597: '남기는', 1598: '남긴', 1599: '남긴다.', 1600: '남네요', 1601: '남네요.', 1602: '남녀', 1603: '남녀노소', 1604: '남녀의', 1605: '남는', 1606: '남는건', 1607: '남는게', 1608: '남는다', 1609: '남는다.', 1610: '남는다..', 1611: '남들', 1612: '남들이', 1613: '남매의', 1614: '남아', 1615: '남아있는', 1616: '남았다.', 1617: '남았던', 1618: '남은', 1619: '남을', 1620: '남의', 1621: '남이', 1622: '남자', 1623: '남자가', 1624: '남자는', 1625: '남자들', 1626: '남자들은', 1627: '남자들의', 1628: '남자들이', 1629: '남자라면', 1630: '남자랑', 1631: '남자를', 1632: '남자와', 1633: '남자의', 1634: '남자인', 1635: '남자주인공', 1636: '남자주인공의', 1637: '남자주인공이', 1638: '남자지만', 1639: '남주', 1640: '남주가', 1641: '남주는', 1642: '남주의', 1643: '남주인공', 1644: '남지', 1645: '남편', 1646: '남편이', 1647: '납니다.', 1648: '납득이', 1649: '낫겠다', 1650: '낫겠다.', 1651: '낫다', 1652: '낫다.', 1653: '낫다고', 1654: '낫지', 1655: '났다', 1656: '났다.', 1657: '낭비', 1658: '낭비.', 1659: '낭비한', 1660: '낮게', 1661: '낮네', 1662: '낮네..', 1663: '낮네요', 1664: '낮네요.', 1665: '낮다', 1666: '낮다.', 1667: '낮아', 1668: '낮아서', 1669: '낮은', 1670: '낮은거', 1671: '낮은게', 1672: '낮은지', 1673: '낮지', 1674: '낮지?', 1675: '낯선', 1676: '낳은', 1677: '내', 1678: '내가', 1679: '내가본', 1680: '내게', 1681: '내게는', 1682: '내겐', 1683: '내고', 1684: '내내', 1685: '내는', 1686: '내돈', 1687: '내리는', 1688: '내면', 1689: '내면을', 1690: '내면의', 1691: '내생애', 1692: '내생에', 1693: '내생의', 1694: '내세운', 1695: '내시간', 1696: '내용', 1697: '내용,', 1698: '내용.', 1699: '내용과', 1700: '내용까지', 1701: '내용도', 1702: '내용도없고', 1703: '내용들이', 1704: '내용만', 1705: '내용없고', 1706: '내용없는', 1707: '내용에', 1708: '내용으로', 1709: '내용은', 1710: '내용을', 1711: '내용의', 1712: '내용이', 1713: '내용이나', 1714: '내용이라', 1715: '내용이지만', 1716: '내용인지', 1717: '내용인지도', 1718: '내용전개', 1719: '내용전개가', 1720: '내인생', 1721: '내인생의', 1722: '내일', 1723: '내일이', 1724: '내취향은', 1725: '내평생', 1726: '낸', 1727: '냄새가', 1728: '냉혹한', 1729: '너', 1730: '너는', 1731: '너도', 1732: '너를', 1733: '너무', 1734: '너무나', 1735: '너무나도', 1736: '너무너무', 1737: '너무도', 1738: '너무재밌어요', 1739: '너무좋아요', 1740: '너무하네', 1741: '너무하다', 1742: '너의', 1743: '넋을', 1744: '넌', 1745: '널', 1746: 
'넘', 1747: '넘게', 1748: '넘나드는', 1749: '넘넘', 1750: '넘는', 1751: '넘어', 1752: '넘어가는', 1753: '넘어서', 1754: '넘어선', 1755: '넘은', 1756: '넘치게', 1757: '넘치고', 1758: '넘치는', 1759: '넣고', 1760: '넣어', 1761: '넣어서', 1762: '넣은', 1763: '네', 1764: '네가', 1765: '네이버', 1766: '네이버는', 1767: '네이버에', 1768: '네이버에서', 1769: '네이버평점', 1770: '네티즌', 1771: '년', 1772: '년이', 1773: '노', 1774: '노골적으로', 1775: '노골적인', 1776: '노는', 1777: '노답', 1778: '노래', 1779: '노래가', 1780: '노래는', 1781: '노래도', 1782: '노래를', 1783: '노래만', 1784: '노래와', 1785: '노력은', 1786: '노력이', 1787: '노력한', 1788: '노무현', 1789: '노잼', 1790: '노잼.', 1791: '노출', 1792: '노출도', 1793: '노출은', 1794: '노출이', 1795: '녹아있는', 1796: '놀라게', 1797: '놀라고', 1798: '놀라서', 1799: '놀라운', 1800: '놀라울', 1801: '놀란', 1802: '놀랍고', 1803: '놀랍다', 1804: '놀랍다.', 1805: '놀랐다', 1806: '놀랐다.', 1807: '놈', 1808: '놈들', 1809: '놈들은', 1810: '놈들이', 1811: '놈은', 1812: '놈의', 1813: '놈이', 1814: '높게', 1815: '높고', 1816: '높길래', 1817: '높네요.', 1818: '높다', 1819: '높다.', 1820: '높아', 1821: '높아서', 1822: '높은', 1823: '높은거', 1824: '높은지', 1825: '높이', 1826: '높지', 1827: '놓고', 1828: '놓은', 1829: '놓을', 1830: '놓치면', 1831: '놓치지', 1832: '놓친', 1833: '놓칠', 1834: '뇌가', 1835: '뇌리에', 1836: '누가', 1837: '누가봐도', 1838: '누구', 1839: '누구나', 1840: '누구냐', 1841: '누구도', 1842: '누구든', 1843: '누구를', 1844: '누구보다', 1845: '누구에게나', 1846: '누구인지', 1847: '누군가', 1848: '누군가가', 1849: '누군가를', 1850: '누군가에겐', 1851: '누군지', 1852: '누나', 1853: '누님', 1854: '누워서', 1855: '눈', 1856: '눈과', 1857: '눈도', 1858: '눈뜨고', 1859: '눈만', 1860: '눈물', 1861: '눈물과', 1862: '눈물나게', 1863: '눈물나는', 1864: '눈물도', 1865: '눈물은', 1866: '눈물을', 1867: '눈물이', 1868: '눈부신', 1869: '눈빛', 1870: '눈빛은', 1871: '눈빛이', 1872: '눈에', 1873: '눈으로', 1874: '눈은', 1875: '눈을', 1876: '눈이', 1877: '눈치', 1878: '느껴야', 1879: '느껴져', 1880: '느껴져서', 1881: '느껴졌다', 1882: '느껴졌다.', 1883: '느껴지는', 1884: '느껴지지', 1885: '느껴진', 1886: '느껴진다', 1887: '느껴진다.', 1888: '느껴질', 1889: '느껴짐', 1890: '느꼈다', 1891: '느꼈다.', 1892: '느꼈던', 1893: '느꼈습니다.', 1894: '느끼게', 1895: '느끼고', 1896: '느끼는', 1897: '느끼는게', 1898: '느끼며', 1899: '느끼지', 1900: '느낀', 1901: '느낀건', 1902: '느낀게', 1903: '느낀다', 1904: '느낀다.', 1905: '느낄', 1906: '느낄수', 1907: '느낄수있는', 1908: '느낌', 1909: '느낌!', 1910: '느낌,', 1911: '느낌.', 1912: '느낌..', 1913: '느낌...', 1914: '느낌?', 1915: '느낌과', 1916: '느낌도', 1917: '느낌으로', 1918: '느낌은', 1919: '느낌을', 1920: '느낌의', 1921: '느낌이', 1922: '느낌이다', 1923: '느낌이다.', 1924: '느리고', 1925: '느와르', 1926: '느와르의', 1927: '는', 1928: '늘', 1929: '늘어지는', 1930: '늙은', 1931: '능가하는', 1932: '능력을', 1933: '능력이', 1934: '늦게', 1935: '늦은', 1936: '니', 1937: '니가', 1938: '니네', 1939: '니들', 1940: '니들은', 1941: '니들이', 1942: '니미', 1943: '니콜', 1944: '니콜라스', 1945: '님', 1946: '님들', 1947: '다', 1948: '다가', 1949: '다가오는', 1950: '다같이', 1951: '다녀온', 1952: '다는', 1953: '다니는', 1954: '다니엘', 1955: '다들', 1956: '다루고', 1957: '다루는', 1958: '다룬', 1959: '다르게', 1960: '다르겠지만', 1961: '다르고', 1962: '다르다', 1963: '다르다.', 1964: '다르지', 1965: '다르지만', 1966: '다른', 1967: '다른거', 1968: '다른건', 1969: '다른걸', 1970: '다른게', 1971: '다른영화', 1972: '다를게', 1973: '다름', 1974: '다만', 1975: '다문화', 1976: '다보고', 1977: '다봤는데', 1978: '다세포소녀', 1979: '다소', 1980: '다시', 1981: '다시금', 1982: '다시는', 1983: '다시보게', 1984: '다시보고', 1985: '다시보고싶은', 1986: '다시보기', 1987: '다시보기로', 1988: '다시보니', 1989: '다시보니까', 1990: '다시보면', 1991: '다시봐도', 1992: '다시봤는데', 1993: '다시한번', 1994: '다신', 1995: '다양한', 1996: '다운', 1997: '다운로드', 1998: '다운받아', 1999: '다운받아서', 2000: '다음', 2001: '다음에', 2002: '다음엔', 2003: '다음으로', 2004: '다음편이', 2005: '다이하드', 2006: '다좋은데', 2007: '다짜고짜', 2008: '다큐', 2009: '다큐.', 2010: '다큐가', 2011: '다큐는', 2012: '다큐로', 2013: '다큐를', 2014: '다큐멘터리', 2015: '다큐멘터리가', 2016: '다큐의', 2017: '다크', 2018: '다크나이트', 2019: '다행', 
2020: '다행이다', 2021: '다행이다.', 2022: '닥치고', 2023: '단', 2024: '단순', 2025: '단순하고', 2026: '단순한', 2027: '단순히', 2028: '단어', 2029: '단어가', 2030: '단연', 2031: '단연코', 2032: '단점은', 2033: '단지', 2034: '단체로', 2035: '단편', 2036: '달고', 2037: '달달하고', 2038: '달달한', 2039: '달라', 2040: '달라서', 2041: '달리', 2042: '달리는', 2043: '달린', 2044: '달콤한', 2045: '닮은', 2046: '담겨', 2047: '담겨있는', 2048: '담겨져', 2049: '담고', 2050: '담긴', 2051: '담담하게', 2052: '담담한', 2053: '담배', 2054: '담백하게', 2055: '담백한', 2056: '담아낸', 2057: '담은', 2058: '답게', 2059: '답답', 2060: '답답하게', 2061: '답답하고', 2062: '답답하다', 2063: '답답하다.', 2064: '답답한', 2065: '답답해', 2066: '답답해서', 2067: '답을', 2068: '답이', 2069: '답지', 2070: '당당히', 2071: '당대', 2072: '당시', 2073: '당시에', 2074: '당시에는', 2075: '당시에도', 2076: '당시엔', 2077: '당시의', 2078: '당신', 2079: '당신들이', 2080: '당신은', 2081: '당신을', 2082: '당신의', 2083: '당신이', 2084: '당연', 2085: '당연한', 2086: '당연히', 2087: '당장', 2088: '당최', 2089: '당췌', 2090: '당하고', 2091: '당하는', 2092: '당한', 2093: '당할', 2094: '대', 2095: '대놓고', 2096: '대단', 2097: '대단하고', 2098: '대단하다', 2099: '대단하다.', 2100: '대단하다...', 2101: '대단하다고', 2102: '대단한', 2103: '대단합니다', 2104: '대단히', 2105: '대략', 2106: '대로', 2107: '대박', 2108: '대박!', 2109: '대박!!', 2110: '대박.', 2111: '대박...', 2112: '대박이다', 2113: '대박인', 2114: '대박임', 2115: '대본', 2116: '대본을', 2117: '대부', 2118: '대부분', 2119: '대부분의', 2120: '대사', 2121: '대사,', 2122: '대사가', 2123: '대사는', 2124: '대사도', 2125: '대사들이', 2126: '대사로', 2127: '대사를', 2128: '대사만', 2129: '대사에', 2130: '대사와', 2131: '대상이', 2132: '대신', 2133: '대작', 2134: '대작은', 2135: '대작을', 2136: '대체', 2137: '대체로', 2138: '대체적으로', 2139: '대충', 2140: '대통령', 2141: '대표', 2142: '대표작', 2143: '대표적인', 2144: '대하여', 2145: '대학', 2146: '대학교', 2147: '대학생', 2148: '대한', 2149: '대한민국', 2150: '대한민국에서', 2151: '대한민국을', 2152: '대한민국의', 2153: '대해', 2154: '대해서', 2155: '대해서도', 2156: '대화가', 2157: '댓글', 2158: '댓글도', 2159: '댓글보고', 2160: '댓글에', 2161: '댓글을', 2162: '댓글이', 2163: '더', 2164: '더더욱', 2165: '더러운', 2166: '더러워', 2167: '더럽게', 2168: '더럽고', 2169: '더럽다', 2170: '더불어', 2171: '더빙', 2172: '더빙도', 2173: '더빙으로', 2174: '더빙은', 2175: '더빙을', 2176: '더빙이', 2177: '더욱', 2178: '더욱더', 2179: '더이상', 2180: '더한', 2181: '덕분에', 2182: '덕에', 2183: '던지는', 2184: '덜', 2185: '덤으로', 2186: '데', 2187: '데려다', 2188: '데려다가', 2189: '데리고', 2190: '데이', 2191: '데이빗', 2192: '덴젤', 2193: '도', 2194: '도는', 2195: '도대체', 2196: '도대체가', 2197: '도데체', 2198: '도라에몽', 2199: '도무지', 2200: '도움이', 2201: '도저히', 2202: '도중에', 2203: '도통', 2204: '독립', 2205: '독립영화', 2206: '독일', 2207: '독특하게', 2208: '독특하고', 2209: '독특한', 2210: '돈', 2211: '돈과', 2212: '돈낭비', 2213: '돈내고', 2214: '돈도', 2215: '돈만', 2216: '돈아까운', 2217: '돈아까움', 2218: '돈아까워', 2219: '돈아깝고', 2220: '돈아깝다', 2221: '돈에', 2222: '돈으로', 2223: '돈은', 2224: '돈을', 2225: '돈이', 2226: '돈주고', 2227: '돋는', 2228: '돋보였던', 2229: '돋보이는', 2230: '돋보인', 2231: '돋보인다', 2232: '돋보인다.', 2233: '돋보임.', 2234: '돌려', 2235: '돌리고', 2236: '돌리다가', 2237: '돌아', 2238: '돌아가고', 2239: '돌아가는', 2240: '돌아보게', 2241: '돌아온', 2242: '동네', 2243: '동물', 2244: '동물의', 2245: '동생', 2246: '동생이', 2247: '동생이랑', 2248: '동성애', 2249: '동시에', 2250: '동심을', 2251: '동안', 2252: '동영상', 2253: '동화', 2254: '동화같은', 2255: '동화를', 2256: '됐는데', 2257: '됐다.', 2258: '되게', 2259: '되고', 2260: '되기', 2261: '되길', 2262: '되나?', 2263: '되네요', 2264: '되네요.', 2265: '되는', 2266: '되는거', 2267: '되는데', 2268: '되는지', 2269: '되니', 2270: '되도', 2271: '되도않는', 2272: '되돌아', 2273: '되돌아보게', 2274: '되려', 2275: '되면', 2276: '되버린', 2277: '되서', 2278: '되야', 2279: '되어', 2280: '되어서', 2281: '되어야', 2282: '되었는데', 2283: '되었다', 2284: '되었다.', 2285: '되었던', 2286: '되었습니다.', 2287: '되었으면', 2288: '되었지만', 2289: '되지', 2290: '되질', 2291: '된', 2292: '된거', 2293: '된게', 2294: '된다', 
2295: '된다.', 2296: '된다고', 2297: '된다는', 2298: '된다면', 2299: '될', 2300: '될것', 2301: '될듯', 2302: '될수', 2303: '될지', 2304: '됨', 2305: '됨.', 2306: '됩니다', 2307: '됩니다.', 2308: '됬는데', 2309: '두', 2310: '두고', 2311: '두고두고', 2312: '두근두근', 2313: '두배우의', 2314: '두번', 2315: '두번이나', 2316: '두번째', 2317: '두분', 2318: '두사람의', 2319: '두시간', 2320: '둔', 2321: '둘', 2322: '둘다', 2323: '둘을', 2324: '둘의', 2325: '둘이', 2326: '둘째', 2327: '둘째치고', 2328: '뒤', 2329: '뒤늦게', 2330: '뒤로', 2331: '뒤로갈수록', 2332: '뒤에', 2333: '뒤죽박죽', 2334: '뒤통수', 2335: '드', 2336: '드네요', 2337: '드네요.', 2338: '드는', 2339: '드니로', 2340: '드니로의', 2341: '드디어', 2342: '드라마', 2343: '드라마!', 2344: '드라마!!', 2345: '드라마!!!', 2346: '드라마,', 2347: '드라마.', 2348: '드라마..', 2349: '드라마...', 2350: '드라마~', 2351: '드라마가', 2352: '드라마같은', 2353: '드라마나', 2354: '드라마네요', 2355: '드라마는', 2356: '드라마다', 2357: '드라마다.', 2358: '드라마도', 2359: '드라마라고', 2360: '드라마로', 2361: '드라마를', 2362: '드라마보다', 2363: '드라마에', 2364: '드라마에서', 2365: '드라마였다.', 2366: '드라마와', 2367: '드라마의', 2368: '드라마인데', 2369: '드라마임', 2370: '드라마입니다.', 2371: '드라마중', 2372: '드라마중에', 2373: '드라마지만', 2374: '드러나는', 2375: '드러난', 2376: '드럽게', 2377: '드릅게', 2378: '드리고', 2379: '드림', 2380: '드림웍스', 2381: '드립니다', 2382: '드립니다.', 2383: '드문', 2384: '든', 2385: '든다', 2386: '든다.', 2387: '듣고', 2388: '듣기', 2389: '듣는', 2390: '듣보잡', 2391: '들', 2392: '들게', 2393: '들고', 2394: '들어', 2395: '들어가', 2396: '들어가는', 2397: '들어가서', 2398: '들어간', 2399: '들어도', 2400: '들어서', 2401: '들었다', 2402: '들었다.', 2403: '들었던', 2404: '들었습니다.', 2405: '들었지만', 2406: '들여서', 2407: '들으면', 2408: '들을', 2409: '들이', 2410: '들지', 2411: '들지만', 2412: '듬', 2413: '듭니다.', 2414: '듯', 2415: '듯!', 2416: '듯.', 2417: '듯..', 2418: '듯...', 2419: '듯이', 2420: '듯한', 2421: '등', 2422: '등등', 2423: '등에', 2424: '등을', 2425: '등의', 2426: '등이', 2427: '등장', 2428: '등장인물', 2429: '등장인물들의', 2430: '등장인물들이', 2431: '등장인물이', 2432: '등장하는', 2433: '디게', 2434: '디워', 2435: '디즈니', 2436: '디즈니의', 2437: '디카프리오', 2438: '디테일', 2439: '디테일이', 2440: '디테일한', 2441: '따듯한', 2442: '따뜻하게', 2443: '따뜻하고', 2444: '따뜻하다.', 2445: '따뜻한', 2446: '따뜻함이', 2447: '따뜻해', 2448: '따뜻해지고', 2449: '따뜻해지는', 2450: '따라', 2451: '따라가지', 2452: '따라서', 2453: '따라하기', 2454: '따라한', 2455: '따로', 2456: '따른', 2457: '따분한', 2458: '따스한', 2459: '따위', 2460: '따윈', 2461: '따지면', 2462: '딱', 2463: '딱봐도', 2464: '딱히', 2465: '딴', 2466: '딴건', 2467: '딸', 2468: '딸과', 2469: '딸을', 2470: '딸의', 2471: '딸이', 2472: '땀을', 2473: '땀이', 2474: '때', 2475: '때,', 2476: '때가', 2477: '때는', 2478: '때도', 2479: '때려', 2480: '때론', 2481: '때리고', 2482: '때마다', 2483: '때매', 2484: '때문', 2485: '때문에', 2486: '때문이다.', 2487: '때묻지', 2488: '때부터', 2489: '때의', 2490: '땐', 2491: '땜에', 2492: '떄문에', 2493: '떠나', 2494: '떠나는', 2495: '떠나서', 2496: '떠나지', 2497: '떠오르는', 2498: '떠올리게', 2499: '떨어져', 2500: '떨어져서', 2501: '떨어지고', 2502: '떨어지는', 2503: '떨어지지만', 2504: '떨어진', 2505: '떨어진다', 2506: '떨어진다.', 2507: '떨어짐', 2508: '떨어짐.', 2509: '뗄', 2510: '또', 2511: '또는', 2512: '또다른', 2513: '또다시', 2514: '또라이', 2515: '또보고', 2516: '또봐도', 2517: '또하나의', 2518: '또한', 2519: '또한번', 2520: '똑같은', 2521: '똑같이', 2522: '똑바로', 2523: '똥', 2524: '똥같은', 2525: '똥을', 2526: '뚜렷한', 2527: '뚝뚝', 2528: '뛰어', 2529: '뛰어나고', 2530: '뛰어난', 2531: '뛰어넘는', 2532: '뛰어넘은', 2533: '뜨거운', 2534: '뜨고', 2535: '뜬', 2536: '뜬금없는', 2537: '뜬금없이', 2538: '뜻이', 2539: '라고', 2540: '라는', 2541: '라스트', 2542: '라이언', 2543: '란', 2544: '랑', 2545: '러닝', 2546: '러닝타임', 2547: '러닝타임이', 2548: '러브', 2549: '러브라인', 2550: '러셀', 2551: '러시아', 2552: '런닝맨', 2553: '런닝타임이', 2554: '레알', 2555: '레옹', 2556: '레이', 2557: '레이싱', 2558: '레이첼', 2559: '레전드', 2560: '로', 2561: '로그인', 2562: '로그인하게', 2563: '로맨스', 2564: '로맨스가', 2565: '로맨스는', 2566: '로맨스도', 2567: 
'로맨스로', 2568: '로맨스를', 2569: '로맨스영화', 2570: '로맨틱', 2571: '로맨틱코미디', 2572: '로맨틱한', 2573: '로버트', 2574: '로봇', 2575: '로빈', 2576: '뤽', 2577: '류승범', 2578: '류의', 2579: '를', 2580: '리', 2581: '리메이크', 2582: '리메이크가', 2583: '리뷰', 2584: '리뷰를', 2585: '리암', 2586: '리얼', 2587: '리얼리티가', 2588: '리얼리티를', 2589: '리얼하게', 2590: '리얼한', 2591: '리즈', 2592: '링', 2593: '마구', 2594: '마냥', 2595: '마누라', 2596: '마니', 2597: '마다', 2598: '마디로', 2599: '마라', 2600: '마라.', 2601: '마무리', 2602: '마무리.', 2603: '마무리가', 2604: '마무리도', 2605: '마블', 2606: '마세요', 2607: '마세요.', 2608: '마시고', 2609: '마시길', 2610: '마시길..', 2611: '마음', 2612: '마음도', 2613: '마음속', 2614: '마음속에', 2615: '마음에', 2616: '마음으로', 2617: '마음은', 2618: '마음을', 2619: '마음의', 2620: '마음이', 2621: '마이', 2622: '마이너스', 2623: '마이너스가', 2624: '마이너스는', 2625: '마이클', 2626: '마저', 2627: '마지막', 2628: '마지막까지', 2629: '마지막에', 2630: '마지막에는', 2631: '마지막엔', 2632: '마지막으로', 2633: '마지막은', 2634: '마지막을', 2635: '마지막의', 2636: '마지막이', 2637: '마지막장면', 2638: '마지막장면에서', 2639: '마지막장면은', 2640: '마지막장면이', 2641: '마지막회', 2642: '마치', 2643: '마틴', 2644: '막', 2645: '막상', 2646: '막장', 2647: '막장도', 2648: '막장드라마', 2649: '막장에', 2650: '막장으로', 2651: '막장의', 2652: '막장이', 2653: '막판', 2654: '막판에', 2655: '만', 2656: '만나', 2657: '만나고', 2658: '만나는', 2659: '만나면', 2660: '만나서', 2661: '만난', 2662: '만날', 2663: '만남', 2664: '만드나', 2665: '만드냐', 2666: '만드냐?', 2667: '만드네', 2668: '만드는', 2669: '만드는게', 2670: '만드는데', 2671: '만드는지', 2672: '만든', 2673: '만든거', 2674: '만든건지', 2675: '만든것', 2676: '만든다', 2677: '만든다.', 2678: '만든다는', 2679: '만든영화', 2680: '만들', 2681: '만들거면', 2682: '만들고', 2683: '만들기', 2684: '만들기도', 2685: '만들다', 2686: '만들다니', 2687: '만들다니...', 2688: '만들려고', 2689: '만들려면', 2690: '만들면', 2691: '만들수', 2692: '만들어', 2693: '만들어내는', 2694: '만들어낸', 2695: '만들어도', 2696: '만들어라', 2697: '만들어라.', 2698: '만들어버린', 2699: '만들어서', 2700: '만들어야', 2701: '만들어주는', 2702: '만들어주세요', 2703: '만들어준', 2704: '만들어진', 2705: '만들었나', 2706: '만들었나?', 2707: '만들었냐', 2708: '만들었냐?', 2709: '만들었네', 2710: '만들었는데', 2711: '만들었는지', 2712: '만들었다', 2713: '만들었다.', 2714: '만들었다고', 2715: '만들었다면', 2716: '만들었던', 2717: '만들었으면', 2718: '만들었을까', 2719: '만들었을까?', 2720: '만들지', 2721: '만듬', 2722: '만약', 2723: '만에', 2724: '만으로', 2725: '만으로도', 2726: '만점', 2727: '만점에', 2728: '만점을', 2729: '만점짜리', 2730: '만족', 2731: '만족하고', 2732: '만큼', 2733: '만큼은', 2734: '만큼의', 2735: '만큼이나', 2736: '만한', 2737: '만화', 2738: '만화가', 2739: '만화같은', 2740: '만화는', 2741: '만화도', 2742: '만화로', 2743: '만화를', 2744: '만화영화', 2745: '만화의', 2746: '만화책', 2747: '만화책을', 2748: '많고', 2749: '많네', 2750: '많다', 2751: '많다.', 2752: '많아', 2753: '많아서', 2754: '많았다.', 2755: '많았던', 2756: '많으면', 2757: '많은', 2758: '많은걸', 2759: '많은것을', 2760: '많은데', 2761: '많음', 2762: '많이', 2763: '많지', 2764: '많지만', 2765: '말', 2766: '말고', 2767: '말고는', 2768: '말곤', 2769: '말그대로', 2770: '말대로', 2771: '말도', 2772: '말도안되는', 2773: '말로', 2774: '말로는', 2775: '말론', 2776: '말만', 2777: '말밖에', 2778: '말씀', 2779: '말아', 2780: '말아라', 2781: '말아먹은', 2782: '말아야', 2783: '말았다', 2784: '말았다.', 2785: '말았어야', 2786: '말에', 2787: '말은', 2788: '말을', 2789: '말이', 2790: '말이다.', 2791: '말이야', 2792: '말이필요없는', 2793: '말이필요없다', 2794: '말이필요없음', 2795: '말인가?', 2796: '말자', 2797: '말자.', 2798: '말지', 2799: '말처럼', 2800: '말투', 2801: '말투가', 2802: '말하고', 2803: '말하고자', 2804: '말하기', 2805: '말하는', 2806: '말하려고', 2807: '말하려는', 2808: '말하면', 2809: '말하지', 2810: '말한다', 2811: '말할', 2812: '말할것도', 2813: '말할수', 2814: '말해서', 2815: '말해주는', 2816: '맑고', 2817: '맑은', 2818: '맘', 2819: '맘에', 2820: '맘에든다', 2821: '맘이', 2822: '맛', 2823: '맛도', 2824: '맛없는', 2825: '맛에', 2826: '맛은', 2827: '맛을', 2828: '맛이', 2829: '맛있는', 2830: '망가진', 2831: '망작', 2832: '망작.', 2833: '망쳐버린', 2834: '망쳤다.', 2835: 
'망치는', 2836: '망친', 2837: '망한', 2838: '망할', 2839: '망함', 2840: '망했는지', 2841: '맞게', 2842: '맞고', 2843: '맞나?', 2844: '맞는', 2845: '맞다', 2846: '맞다.', 2847: '맞먹는', 2848: '맞아', 2849: '맞은', 2850: '맞지', 2851: '맞지만', 2852: '맞추기', 2853: '맞춘', 2854: '맞춰', 2855: '맞춰서', 2856: '맡은', 2857: '매', 2858: '매끄럽지', 2859: '매년', 2860: '매력', 2861: '매력.', 2862: '매력과', 2863: '매력도', 2864: '매력에', 2865: '매력은', 2866: '매력을', 2867: '매력의', 2868: '매력이', 2869: '매력있고', 2870: '매력있는', 2871: '매력적', 2872: '매력적으로', 2873: '매력적이고', 2874: '매력적이다', 2875: '매력적이다.', 2876: '매력적인', 2877: '매미', 2878: '매미OO', 2879: '매미OO있네', 2880: '매번', 2881: '매우', 2882: '매일', 2883: '매주', 2884: '매혹적인', 2885: '매회', 2886: '맨', 2887: '맨날', 2888: '머', 2889: '머가', 2890: '머냐', 2891: '머리', 2892: '머리가', 2893: '머리는', 2894: '머리를', 2895: '머리속에', 2896: '머리에', 2897: '머릿속에', 2898: '머릿속을', 2899: '머야', 2900: '머여', 2901: '먹고', 2902: '먹는', 2903: '먹먹하게', 2904: '먹먹하고', 2905: '먹먹하다.', 2906: '먹먹한', 2907: '먹먹해지는', 2908: '먹은', 2909: '먹칠을', 2910: '먼', 2911: '먼가', 2912: '먼저', 2913: '먼지', 2914: '멀리', 2915: '멀쩡한', 2916: '멋있게', 2917: '멋있고', 2918: '멋있는', 2919: '멋있다', 2920: '멋있다.', 2921: '멋있어서', 2922: '멋있어요', 2923: '멋있음', 2924: '멋져', 2925: '멋져요', 2926: '멋졌다.', 2927: '멋지게', 2928: '멋지고', 2929: '멋지다', 2930: '멋지다.', 2931: '멋진', 2932: '멋진영화', 2933: '멋짐', 2934: '멍청하게', 2935: '멍청하고', 2936: '멍청한', 2937: '메마른', 2938: '메세지', 2939: '메세지가', 2940: '메세지는', 2941: '메세지도', 2942: '메세지를', 2943: '메시지', 2944: '메시지가', 2945: '메시지는', 2946: '메시지도', 2947: '메시지를', 2948: '멘탈', 2949: '멜로', 2950: '멜로영화', 2951: '멜로의', 2952: '면도', 2953: '면에서', 2954: '면을', 2955: '면이', 2956: '명', 2957: '명대사', 2958: '명배우', 2959: '명배우들의', 2960: '명백한', 2961: '명복을', 2962: '명불허전', 2963: '명성에', 2964: '명성을', 2965: '명연기', 2966: '명작', 2967: '명작!', 2968: '명작!!', 2969: '명작.', 2970: '명작..', 2971: '명작...', 2972: '명작도', 2973: '명작으로', 2974: '명작은', 2975: '명작을', 2976: '명작이', 2977: '명작이네요', 2978: '명작이다', 2979: '명작이다.', 2980: '명작이다..', 2981: '명작이라', 2982: '명작이라고', 2983: '명작이라는', 2984: '명작인데', 2985: '명작임', 2986: '명작입니다', 2987: '명작입니다.', 2988: '명작중에', 2989: '명작중의', 2990: '명장면', 2991: '명품', 2992: '명화', 2993: '몇', 2994: '몇개', 2995: '몇년', 2996: '몇년만에', 2997: '몇년이', 2998: '몇년전에', 2999: '몇몇', 3000: '몇번', 3001: '몇번을', 3002: '몇번을봐도', 3003: '몇번이고', 3004: '몇번이나', 3005: '몇안되는', 3006: '모', 3007: '모냐', 3008: '모두', 3009: '모두가', 3010: '모두다', 3011: '모두들', 3012: '모두를', 3013: '모두에게', 3014: '모두의', 3015: '모든', 3016: '모든걸', 3017: '모든것을', 3018: '모든것이', 3019: '모든게', 3020: '모든면에서', 3021: '모르게', 3022: '모르겟다', 3023: '모르겠고', 3024: '모르겠네', 3025: '모르겠네요', 3026: '모르겠네요.', 3027: '모르겠는', 3028: '모르겠는데', 3029: '모르겠다', 3030: '모르겠다.', 3031: '모르겠다..', 3032: '모르겠다...', 3033: '모르겠어요.', 3034: '모르겠으나', 3035: '모르겠음', 3036: '모르겠음.', 3037: '모르겠지만', 3038: '모르고', 3039: '모르나', 3040: '모르는', 3041: '모르면', 3042: '모르면서', 3043: '모르지만', 3044: '모른다', 3045: '모른다.', 3046: '모를', 3047: '모습', 3048: '모습.', 3049: '모습과', 3050: '모습도', 3051: '모습만', 3052: '모습에', 3053: '모습에서', 3054: '모습으로', 3055: '모습은', 3056: '모습을', 3057: '모습이', 3058: '모아놓고', 3059: '모아서', 3060: '모야', 3061: '모여', 3062: '모여서', 3063: '모자라', 3064: '모자란', 3065: '모조리', 3066: '모처럼', 3067: '모티브로', 3068: '모험', 3069: '모호한', 3070: '목소리', 3071: '목소리가', 3072: '목소리는', 3073: '목소리도', 3074: '목숨걸고', 3075: '목숨을', 3076: '목을', 3077: '목적을', 3078: '목적이', 3079: '몬가', 3080: '몰라', 3081: '몰라도', 3082: '몰라서', 3083: '몰랐는데', 3084: '몰랐다', 3085: '몰랐다.', 3086: '몰랐던', 3087: '몰래', 3088: '몰입', 3089: '몰입감', 3090: '몰입감도', 3091: '몰입감이', 3092: '몰입도', 3093: '몰입도가', 3094: '몰입도는', 3095: '몰입도도', 3096: '몰입도를', 3097: '몰입을', 3098: '몰입이', 3099: '몰입하게', 3100: '몰입하기', 3101: '몰입하면서', 3102: '몰입할', 3103: '몰입해서', 
3104: '몸', 3105: '몸매', 3106: '몸매가', 3107: '몸매는', 3108: '몸에', 3109: '몸을', 3110: '몸이', 3111: '못', 3112: '못된', 3113: '못된다.', 3114: '못미치는', 3115: '못보겠다', 3116: '못보겠다.', 3117: '못보고', 3118: '못보는', 3119: '못본', 3120: '못본게', 3121: '못봐주겠다.', 3122: '못생긴', 3123: '못지', 3124: '못하게', 3125: '못하겠다', 3126: '못하겠다.', 3127: '못하고', 3128: '못하고,', 3129: '못하네', 3130: '못하는', 3131: '못하다', 3132: '못하다.', 3133: '못하다는', 3134: '못하면', 3135: '못하지만', 3136: '못한', 3137: '못한게', 3138: '못한다', 3139: '못한다.', 3140: '못한다는', 3141: '못할', 3142: '못함', 3143: '못함.', 3144: '못해', 3145: '못해도', 3146: '못해서', 3147: '못했다', 3148: '못했다.', 3149: '못했던', 3150: '못했지만', 3151: '몽환적인', 3152: '묘사', 3153: '묘사가', 3154: '묘사한', 3155: '묘하게', 3156: '묘한', 3157: '무간도', 3158: '무거운', 3159: '무고한', 3160: '무난한', 3161: '무너지는', 3162: '무대', 3163: '무려', 3164: '무료로', 3165: '무리가', 3166: '무비', 3167: '무비.', 3168: '무비의', 3169: '무서운', 3170: '무서운거', 3171: '무서울', 3172: '무서움', 3173: '무서워', 3174: '무서워서', 3175: '무섭게', 3176: '무섭고', 3177: '무섭긴', 3178: '무섭다', 3179: '무섭다.', 3180: '무섭다고', 3181: '무섭지', 3182: '무섭지도', 3183: '무섭지도않고', 3184: '무술', 3185: '무슨', 3186: '무슨..', 3187: '무슨...', 3188: '무슨내용인지', 3189: '무슨말이', 3190: '무슨생각으로', 3191: '무시하고', 3192: '무시하는', 3193: '무시한', 3194: '무심코', 3195: '무언가', 3196: '무언가가', 3197: '무언가를', 3198: '무얼', 3199: '무엇보다', 3200: '무엇보다도', 3201: '무엇을', 3202: '무엇이', 3203: '무엇인가', 3204: '무엇인지', 3205: '무엇하나', 3206: '무의미한', 3207: '무작정', 3208: '무조건', 3209: '무지', 3210: '무지하게', 3211: '무척', 3212: '무척이나', 3213: '무튼', 3214: '무한한', 3215: '무협', 3216: '묵직한', 3217: '문득', 3218: '문제', 3219: '문제.', 3220: '문제가', 3221: '문제는', 3222: '문제를', 3223: '문화', 3224: '문화를', 3225: '문화의', 3226: '문화적', 3227: '묻어나는', 3228: '물', 3229: '물론', 3230: '물론이고', 3231: '물씬', 3232: '물에', 3233: '물체가', 3234: '뭉클한', 3235: '뭐', 3236: '뭐,', 3237: '뭐.', 3238: '뭐..', 3239: '뭐...', 3240: '뭐?', 3241: '뭐가', 3242: '뭐고', 3243: '뭐냐', 3244: '뭐냐.', 3245: '뭐냐..', 3246: '뭐냐?', 3247: '뭐니', 3248: '뭐든', 3249: '뭐든지', 3250: '뭐라', 3251: '뭐라고', 3252: '뭐랄까', 3253: '뭐야', 3254: '뭐야.', 3255: '뭐야..', 3256: '뭐야?', 3257: '뭐여', 3258: '뭐이리', 3259: '뭐임', 3260: '뭐임?', 3261: '뭐죠?', 3262: '뭐지', 3263: '뭐지..', 3264: '뭐지?', 3265: '뭐하나', 3266: '뭐하냐', 3267: '뭐하는', 3268: '뭐하러', 3269: '뭐하자는', 3270: '뭔', 3271: '뭔가', 3272: '뭔가가', 3273: '뭔가를', 3274: '뭔내용인지', 3275: '뭔데', 3276: '뭔지', 3277: '뭔지..', 3278: '뭔지도', 3279: '뭘', 3280: '뭘까', 3281: '뭘까?', 3282: '뭡니까', 3283: '뭣도', 3284: '뭥미', 3285: '뭥미?', 3286: '뮤지컬', 3287: '미', 3288: '미개한', 3289: '미국', 3290: '미국식', 3291: '미국에', 3292: '미국에서', 3293: '미국은', 3294: '미국의', 3295: '미국이', 3296: '미국판', 3297: '미드', 3298: '미래는', 3299: '미래를', 3300: '미래의', 3301: '미리', 3302: '미모', 3303: '미모는', 3304: '미모만', 3305: '미모와', 3306: '미묘한', 3307: '미소', 3308: '미소가', 3309: '미소를', 3310: '미스', 3311: '미스터', 3312: '미스테리', 3313: '미안하다', 3314: '미안하지만', 3315: '미야자키', 3316: '미쳤다', 3317: '미치게', 3318: '미치겠다', 3319: '미치는', 3320: '미치도록', 3321: '미치지', 3322: '미친', 3323: '미친듯이', 3324: '미화', 3325: '미화시킨', 3326: '미화하는', 3327: '민망한', 3328: '민폐', 3329: '믿고', 3330: '믿고보는', 3331: '믿기', 3332: '믿기지', 3333: '믿기지가', 3334: '믿는', 3335: '믿어지지', 3336: '믿을', 3337: '믿을게', 3338: '믿음이', 3339: '믿지', 3340: '밀라', 3341: '밀려오는', 3342: '밋밋하고', 3343: '밋밋한', 3344: '및', 3345: '밑도', 3346: '밑에', 3347: '밑에분', 3348: '바가', 3349: '바꾸는', 3350: '바꿔라', 3351: '바꿔서', 3352: '바뀌고', 3353: '바뀌는', 3354: '바라는', 3355: '바라보는', 3356: '바라본', 3357: '바란다', 3358: '바란다.', 3359: '바람', 3360: '바람에', 3361: '바랍니다', 3362: '바랍니다.', 3363: '바로', 3364: '바를', 3365: '바보', 3366: '바보가', 3367: '바보같은', 3368: '바보로', 3369: '바치는', 3370: '바탕으로', 3371: '박보영', 3372: '박수를', 3373: '박중훈', 3374: '박진감', 3375: '박찬욱', 3376: '박평식', 3377: 
'박힌', 3378: '밖에', 3379: '반', 3380: '반개', 3381: '반개도', 3382: '반담', 3383: '반도', 3384: '반드시', 3385: '반복되는', 3386: '반복해서', 3387: '반성해라', 3388: '반응이', 3389: '반의', 3390: '반전', 3391: '반전,', 3392: '반전.', 3393: '반전과', 3394: '반전까지', 3395: '반전도', 3396: '반전에', 3397: '반전으로', 3398: '반전은', 3399: '반전을', 3400: '반전의', 3401: '반전이', 3402: '반전이라고', 3403: '반지의', 3404: '받고', 3405: '받는', 3406: '받아', 3407: '받아서', 3408: '받아야', 3409: '받았다.', 3410: '받았을', 3411: '받은', 3412: '받을', 3413: '받을만한', 3414: '발', 3415: '발견', 3416: '발견한', 3417: '발랄한', 3418: '발로', 3419: '발상이', 3420: '발연기', 3421: '발연기,', 3422: '발연기가', 3423: '발연기는', 3424: '발연기에', 3425: '발연기와', 3426: '발음', 3427: '발음도', 3428: '발전을', 3429: '발전이', 3430: '밝고', 3431: '밝은', 3432: '밤', 3433: '밤에', 3434: '밥', 3435: '밥을', 3436: '방금', 3437: '방법', 3438: '방법을', 3439: '방법이', 3440: '방송', 3441: '방송에', 3442: '방송을', 3443: '방식으로', 3444: '방식이', 3445: '방향을', 3446: '방황하는', 3447: '배', 3448: '배가', 3449: '배경', 3450: '배경,', 3451: '배경과', 3452: '배경도', 3453: '배경만', 3454: '배경에', 3455: '배경으로', 3456: '배경은', 3457: '배경을', 3458: '배경음악', 3459: '배경음악도', 3460: '배경음악이', 3461: '배경의', 3462: '배경이', 3463: '배급사', 3464: '배급사가', 3465: '배꼽', 3466: '배슬기', 3467: '배역', 3468: '배역을', 3469: '배역이', 3470: '배우', 3471: '배우,', 3472: '배우가', 3473: '배우고', 3474: '배우나', 3475: '배우는', 3476: '배우다.', 3477: '배우도', 3478: '배우들', 3479: '배우들과', 3480: '배우들도', 3481: '배우들로', 3482: '배우들만', 3483: '배우들에', 3484: '배우들은', 3485: '배우들을', 3486: '배우들의', 3487: '배우들이', 3488: '배우로', 3489: '배우를', 3490: '배우만', 3491: '배우분들', 3492: '배우에', 3493: '배우에게', 3494: '배우와', 3495: '배우의', 3496: '배운', 3497: '배울', 3498: '배트맨', 3499: '백배', 3500: '백인', 3501: '뱀파이어', 3502: '버금가는', 3503: '버려', 3504: '버렸다', 3505: '버렸다.', 3506: '버리고', 3507: '버리는', 3508: '버린', 3509: '버무려진', 3510: '버전', 3511: '번', 3512: '번을', 3513: '벌써', 3514: '벌어지는', 3515: '범인', 3516: '범인은', 3517: '범인이', 3518: '범죄', 3519: '범죄를', 3520: '범죄자', 3521: '법.', 3522: '법을', 3523: '법정', 3524: '법한', 3525: '벗어나지', 3526: '베낀', 3527: '베드신', 3528: '베드신이', 3529: '베리', 3530: '베스트', 3531: '베트남', 3532: '벤', 3533: '변태', 3534: '변하는', 3535: '변하지', 3536: '변화가', 3537: '별', 3538: '별거', 3539: '별다른', 3540: '별로', 3541: '별로.', 3542: '별로..', 3543: '별로...', 3544: '별로....', 3545: '별로고', 3546: '별로네요', 3547: '별로다', 3548: '별로다.', 3549: '별로다..', 3550: '별로였는데', 3551: '별로였다.', 3552: '별로였던', 3553: '별로였음', 3554: '별로인', 3555: '별로임', 3556: '별로임.', 3557: '별로지만', 3558: '별론데', 3559: '별루', 3560: '별루다', 3561: '별반', 3562: '별반개도', 3563: '별생각없이', 3564: '별을', 3565: '별이', 3566: '별점', 3567: '별점은', 3568: '별점을', 3569: '별점이', 3570: '별하나', 3571: '별하나도', 3572: '별한개도', 3573: '병', 3574: '병맛', 3575: '병맛.', 3576: '병맛같은', 3577: '병입니다', 3578: '보게', 3579: '보게되는', 3580: '보게된', 3581: '보게됬는데', 3582: '보겠다', 3583: '보겠다.', 3584: '보고', 3585: '보고,', 3586: '보고나니', 3587: '보고나면', 3588: '보고나서', 3589: '보고나서도', 3590: '보고난', 3591: '보고도', 3592: '보고서', 3593: '보고싶네요', 3594: '보고싶다', 3595: '보고싶다.', 3596: '보고싶어', 3597: '보고싶어서', 3598: '보고싶어요', 3599: '보고싶은', 3600: '보고싶은데', 3601: '보고싶지', 3602: '보고있는', 3603: '보고있는데', 3604: '보구', 3605: '보기', 3606: '보기가', 3607: '보기는', 3608: '보기도', 3609: '보기드문', 3610: '보기에', 3611: '보기에는', 3612: '보기에도', 3613: '보기엔', 3614: '보기전에', 3615: '보기좋은', 3616: '보긴', 3617: '보길', 3618: '보나', 3619: '보내는', 3620: '보낸다.', 3621: '보느니', 3622: '보느라', 3623: '보는', 3624: '보는거', 3625: '보는건', 3626: '보는건지', 3627: '보는것', 3628: '보는게', 3629: '보는내내', 3630: '보는데', 3631: '보는동안', 3632: '보는듯한', 3633: '보는줄', 3634: '보니', 3635: '보니까', 3636: '보니깐', 3637: '보다', 3638: '보다.', 3639: '보다가', 3640: '보다는', 3641: '보다니', 3642: '보다도', 3643: '보다보니', 3644: '보다보다', 3645: '보다보면', 3646: '보단', 3647: '보더라도', 3648: '보던', 
3649: '보라', 3650: '보라고', 3651: '보라는', 3652: '보러', 3653: '보려고', 3654: '보려는', 3655: '보려면', 3656: '보며', 3657: '보면', 3658: '보면볼수록', 3659: '보면서', 3660: '보면서도', 3661: '보삼', 3662: '보석같은', 3663: '보세요', 3664: '보세요!', 3665: '보세요.', 3666: '보세요..', 3667: '보세요~', 3668: '보셈', 3669: '보셨으면', 3670: '보소', 3671: '보시고', 3672: '보시길', 3673: '보시길.', 3674: '보시길..', 3675: '보시는', 3676: '보시면', 3677: '보아', 3678: '보아도', 3679: '보아야', 3680: '보았는데', 3681: '보았다', 3682: '보았다.', 3683: '보았던', 3684: '보았습니다', 3685: '보았습니다.', 3686: '보았지만', 3687: '보여', 3688: '보여서', 3689: '보여주고', 3690: '보여주는', 3691: '보여주는데', 3692: '보여주지', 3693: '보여준', 3694: '보여준다', 3695: '보여준다.', 3696: '보여줄', 3697: '보여줌.', 3698: '보여줬는데', 3699: '보여줬다.', 3700: '보여줬던', 3701: '보여줬으면', 3702: '보여지는', 3703: '보이고', 3704: '보이네', 3705: '보이는', 3706: '보이려고', 3707: '보이지', 3708: '보이지만', 3709: '보인다', 3710: '보인다.', 3711: '보일', 3712: '보임', 3713: '보자', 3714: '보자.', 3715: '보자고', 3716: '보자마자', 3717: '보지', 3718: '보지도', 3719: '보지마', 3720: '보지마라', 3721: '보지마라.', 3722: '보지마세요', 3723: '보지마세요.', 3724: '보지마셈', 3725: '보지말고', 3726: '보진', 3727: '보통', 3728: '보통의', 3729: '복수', 3730: '복수가', 3731: '복수는', 3732: '복수를', 3733: '복수의', 3734: '복잡하고', 3735: '복잡한', 3736: '본', 3737: '본거', 3738: '본건', 3739: '본건데', 3740: '본것', 3741: '본것중에', 3742: '본게', 3743: '본격', 3744: '본다', 3745: '본다.', 3746: '본다고', 3747: '본다는', 3748: '본다면', 3749: '본듯', 3750: '본듯한', 3751: '본방', 3752: '본방사수', 3753: '본사람들은', 3754: '본사람이', 3755: '본영화', 3756: '본영화인데', 3757: '본영화중', 3758: '본영화중에', 3759: '본인', 3760: '본인은', 3761: '본인의', 3762: '본인이', 3763: '본적이', 3764: '본지', 3765: '본질을', 3766: '본후', 3767: '볼', 3768: '볼거', 3769: '볼거리', 3770: '볼거리가', 3771: '볼거리는', 3772: '볼거리도', 3773: '볼건', 3774: '볼것', 3775: '볼것도', 3776: '볼게', 3777: '볼까', 3778: '볼때', 3779: '볼때는', 3780: '볼때마다', 3781: '볼땐', 3782: '볼라고', 3783: '볼려고', 3784: '볼만', 3785: '볼만은', 3786: '볼만하고', 3787: '볼만하네요', 3788: '볼만하다', 3789: '볼만하다.', 3790: '볼만하던데', 3791: '볼만한', 3792: '볼만한데', 3793: '볼만함', 3794: '볼만함.', 3795: '볼만합니다', 3796: '볼만합니다.', 3797: '볼만해요', 3798: '볼만했는데', 3799: '볼만했다', 3800: '볼만했다.', 3801: '볼만했음', 3802: '볼수', 3803: '볼수가', 3804: '볼수록', 3805: '볼수없는', 3806: '볼수있는', 3807: '볼수있어서', 3808: '볼시간에', 3809: '볼줄', 3810: '봄', 3811: '봄.', 3812: '봄...', 3813: '봅니다', 3814: '봅니다.', 3815: '봐', 3816: '봐도', 3817: '봐도봐도', 3818: '봐라', 3819: '봐라.', 3820: '봐서', 3821: '봐서는', 3822: '봐야', 3823: '봐야겠다', 3824: '봐야겠다.', 3825: '봐야될', 3826: '봐야지', 3827: '봐야하는', 3828: '봐야한다', 3829: '봐야한다.', 3830: '봐야할', 3831: '봐줄', 3832: '봐줄만', 3833: '봣는데', 3834: '봤고', 3835: '봤나', 3836: '봤네', 3837: '봤네요', 3838: '봤네요.', 3839: '봤는대', 3840: '봤는데', 3841: '봤는데,', 3842: '봤는데.', 3843: '봤는데..', 3844: '봤는데...', 3845: '봤는데....', 3846: '봤는데도', 3847: '봤는지', 3848: '봤다', 3849: '봤다!', 3850: '봤다.', 3851: '봤다..', 3852: '봤다...', 3853: '봤다가', 3854: '봤다고', 3855: '봤다는', 3856: '봤다면', 3857: '봤더니', 3858: '봤던', 3859: '봤던건데', 3860: '봤습니다', 3861: '봤습니다.', 3862: '봤습니다^^', 3863: '봤습니다~', 3864: '봤어도', 3865: '봤어야', 3866: '봤어요', 3867: '봤어요!', 3868: '봤어요!!', 3869: '봤어요.', 3870: '봤어요..', 3871: '봤어요^^', 3872: '봤어요~', 3873: '봤었는데', 3874: '봤었다.', 3875: '봤었던', 3876: '봤으니', 3877: '봤으면', 3878: '봤을', 3879: '봤을까', 3880: '봤을까?', 3881: '봤을때', 3882: '봤을때는', 3883: '봤을때도', 3884: '봤을땐', 3885: '봤음', 3886: '봤음.', 3887: '봤음..', 3888: '봤지만', 3889: '봤지만,', 3890: '부끄러운', 3891: '부끄럽게', 3892: '부담없이', 3893: '부드러운', 3894: '부디', 3895: '부럽다', 3896: '부르는', 3897: '부른', 3898: '부를', 3899: '부모', 3900: '부모가', 3901: '부모님', 3902: '부부', 3903: '부분', 3904: '부분도', 3905: '부분들이', 3906: '부분만', 3907: '부분에', 3908: '부분에서', 3909: '부분은', 3910: '부분을', 3911: '부분이', 3912: '부산', 3913: '부수고', 3914: '부실한', 3915: 
'부자연스러운', 3916: '부족', 3917: '부족.', 3918: '부족하고', 3919: '부족하다', 3920: '부족하다.', 3921: '부족하지만', 3922: '부족한', 3923: '부족함', 3924: '부족함이', 3925: '부족해', 3926: '부터', 3927: '부터가', 3928: '부활', 3929: '북한', 3930: '북한을', 3931: '북한의', 3932: '분', 3933: '분노가', 3934: '분노를', 3935: '분노의', 3936: '분들', 3937: '분들께', 3938: '분들도', 3939: '분들에게', 3940: '분들은', 3941: '분들의', 3942: '분들이', 3943: '분량', 3944: '분명', 3945: '분명하다.', 3946: '분명히', 3947: '분위기', 3948: '분위기,', 3949: '분위기가', 3950: '분위기나', 3951: '분위기는', 3952: '분위기도', 3953: '분위기를', 3954: '분위기만', 3955: '분위기에', 3956: '분위기와', 3957: '분위기의', 3958: '분은', 3959: '분이', 3960: '분이라면', 3961: '불', 3962: '불가능한', 3963: '불구하고', 3964: '불끄고', 3965: '불러', 3966: '불륜', 3967: '불륜에', 3968: '불륜은', 3969: '불륜을', 3970: '불면증', 3971: '불멸의', 3972: '불쌍', 3973: '불쌍하게', 3974: '불쌍하고', 3975: '불쌍하다', 3976: '불쌍하다.', 3977: '불쌍한', 3978: '불쌍함', 3979: '불쌍해', 3980: '불쌍해서', 3981: '불쾌한', 3982: '불편하고', 3983: '불편하다.', 3984: '불편한', 3985: '불필요한', 3986: '불후의', 3987: '붉은', 3988: '브라이언', 3989: '브래드', 3990: '브루스', 3991: '블랙', 3992: '블랙코미디', 3993: '블레이드', 3994: '블록버스터', 3995: '비', 3996: '비가', 3997: '비교', 3998: '비교가', 3999: '비교도', 4000: '비교적', 4001: '비교하면', 4002: '비교할', 4003: '비교해도', 4004: '비교해서', 4005: '비극', 4006: '비극을', 4007: '비극이', 4008: '비극적인', 4009: '비디오', 4010: '비디오로', 4011: '비디오용', 4012: '비로소', 4013: '비록', 4014: '비롯한', 4015: '비밀', 4016: '비밀을', 4017: '비슷하게', 4018: '비슷한', 4019: '비슷해서', 4020: '비싼', 4021: '비오는', 4022: '비운의', 4023: '비주얼', 4024: '비주얼도', 4025: '비주얼은', 4026: '비중이', 4027: '비쥬얼', 4028: '비참한', 4029: '비추', 4030: '비추.', 4031: '비하면', 4032: '비해', 4033: '비해서', 4034: '비행기', 4035: '비현실적이고', 4036: '비현실적인', 4037: '비호감', 4038: '빈', 4039: '빈약한', 4040: '빌려', 4041: '빌려서', 4042: '빌린', 4043: '빌어먹을', 4044: '빕니다', 4045: '빕니다.', 4046: '빚어낸', 4047: '빛', 4048: '빛나는', 4049: '빛을', 4050: '빛이', 4051: '빠르게', 4052: '빠르고', 4053: '빠르기.', 4054: '빠른', 4055: '빠져', 4056: '빠져드는', 4057: '빠져들게', 4058: '빠져서', 4059: '빠지게', 4060: '빠지고', 4061: '빠지는', 4062: '빠지지', 4063: '빠진', 4064: '빠질', 4065: '빠짐없이', 4066: '빡쳐서', 4067: '빨갱이', 4068: '빨리', 4069: '빵', 4070: '빵빵', 4071: '빵점', 4072: '빵터짐', 4073: '빼고', 4074: '빼고는', 4075: '빼곤', 4076: '빼면', 4077: '뻔', 4078: '뻔뻔한', 4079: '뻔하고', 4080: '뻔하다', 4081: '뻔하다.', 4082: '뻔하디', 4083: '뻔하지', 4084: '뻔하지만', 4085: '뻔한', 4086: '뻔한스토리', 4087: '뻔함', 4088: '뻔해', 4089: '뻔해서', 4090: '뻔히', 4091: '뿐', 4092: '뿐,', 4093: '뿐.', 4094: '뿐..', 4095: '뿐...', 4096: '뿐만', 4097: '뿐이다', 4098: '뿐이다.', 4099: '사건', 4100: '사건을', 4101: '사건의', 4102: '사건이', 4103: '사고', 4104: '사극', 4105: '사극을', 4106: '사기', 4107: '사기꾼', 4108: '사는', 4109: '사는게', 4110: '사다코', 4111: '사라지고', 4112: '사라진', 4113: '사람', 4114: '사람.', 4115: '사람과', 4116: '사람도', 4117: '사람들', 4118: '사람들도', 4119: '사람들만', 4120: '사람들에게', 4121: '사람들에겐', 4122: '사람들은', 4123: '사람들을', 4124: '사람들의', 4125: '사람들이', 4126: '사람마다', 4127: '사람만', 4128: '사람사는', 4129: '사람에', 4130: '사람에게', 4131: '사람으로', 4132: '사람으로서', 4133: '사람으로써', 4134: '사람은', 4135: '사람을', 4136: '사람의', 4137: '사람이', 4138: '사람이라면', 4139: '사람이면', 4140: '사람한테', 4141: '사랑', 4142: '사랑,', 4143: '사랑.', 4144: '사랑..', 4145: '사랑...', 4146: '사랑?', 4147: '사랑과', 4148: '사랑도', 4149: '사랑스러운', 4150: '사랑스런', 4151: '사랑스럽게', 4152: '사랑스럽고', 4153: '사랑스럽다', 4154: '사랑스럽다.', 4155: '사랑에', 4156: '사랑으로', 4157: '사랑은', 4158: '사랑을', 4159: '사랑의', 4160: '사랑이', 4161: '사랑이라는', 4162: '사랑이란', 4163: '사랑이야기', 4164: '사랑이야기.', 4165: '사랑하게', 4166: '사랑하고', 4167: '사랑하는', 4168: '사랑한다', 4169: '사랑한다면', 4170: '사랑할', 4171: '사랑함', 4172: '사랑합니다', 4173: '사랑합니다.', 4174: '사랑해', 4175: '사랑해요', 4176: '사랑했던', 4177: '사뭇', 4178: '사상', 4179: '사서', 4180: '사소한', 4181: '사실', 4182: '사실에', 4183: '사실은', 
4184: '사실을', 4185: '사실이', 4186: '사실적으로', 4187: '사실적이고', 4188: '사실적인', 4189: '사운드', 4190: '사이', 4191: '사이에', 4192: '사이에서', 4193: '사이의', 4194: '사이코', 4195: '사진', 4196: '사투리', 4197: '사회', 4198: '사회가', 4199: '사회를', 4200: '사회에', 4201: '사회의', 4202: '사회적', 4203: '산만하고', 4204: '산만한', 4205: '산으로', 4206: '살', 4207: '살고', 4208: '살기', 4209: '살다', 4210: '살다가', 4211: '살다살다', 4212: '살렸다', 4213: '살렸다.', 4214: '살리지', 4215: '살린', 4216: '살릴', 4217: '살면서', 4218: '살아', 4219: '살아가는', 4220: '살아갈', 4221: '살아남은', 4222: '살아서', 4223: '살아야', 4224: '살아있는', 4225: '살인', 4226: '살인을', 4227: '살지', 4228: '살짝', 4229: '삶', 4230: '삶.', 4231: '삶과', 4232: '삶에', 4233: '삶에서', 4234: '삶은', 4235: '삶을', 4236: '삶의', 4237: '삶이', 4238: '삼가', 4239: '삼류', 4240: '삼류영화', 4241: '삼천포로', 4242: '상', 4243: '상관없는', 4244: '상관없이', 4245: '상당한', 4246: '상당히', 4247: '상상', 4248: '상상도', 4249: '상상력과', 4250: '상상력에', 4251: '상상력을', 4252: '상상력이', 4253: '상상을', 4254: '상상이', 4255: '상어', 4256: '상어가', 4257: '상업영화', 4258: '상영', 4259: '상영관이', 4260: '상처가', 4261: '상처를', 4262: '상큼한', 4263: '상태에서', 4264: '상투적인', 4265: '상황', 4266: '상황과', 4267: '상황에', 4268: '상황에서', 4269: '상황을', 4270: '상황이', 4271: '새', 4272: '새끼', 4273: '새끼들', 4274: '새로', 4275: '새로운', 4276: '새록새록', 4277: '새롭게', 4278: '새롭고', 4279: '새벽', 4280: '새벽에', 4281: '새삼', 4282: '색감', 4283: '색감도', 4284: '색감이', 4285: '색깔이', 4286: '색다르고', 4287: '색다른', 4288: '샘', 4289: '생', 4290: '생각', 4291: '생각.', 4292: '생각과', 4293: '생각나게', 4294: '생각나고', 4295: '생각나네', 4296: '생각나네요', 4297: '생각나네요.', 4298: '생각나는', 4299: '생각나서', 4300: '생각난다', 4301: '생각난다.', 4302: '생각남', 4303: '생각도', 4304: '생각되는', 4305: '생각된다.', 4306: '생각됩니다.', 4307: '생각만', 4308: '생각밖에', 4309: '생각보다', 4310: '생각보단', 4311: '생각없이', 4312: '생각에', 4313: '생각외로', 4314: '생각으로', 4315: '생각은', 4316: '생각을', 4317: '생각이', 4318: '생각지도', 4319: '생각하게', 4320: '생각하고', 4321: '생각하는', 4322: '생각하니', 4323: '생각하며', 4324: '생각하면', 4325: '생각하지', 4326: '생각한', 4327: '생각한다', 4328: '생각한다.', 4329: '생각한다면', 4330: '생각할', 4331: '생각함', 4332: '생각함.', 4333: '생각합니다', 4334: '생각합니다.', 4335: '생각해', 4336: '생각해도', 4337: '생각해보게', 4338: '생각해보면', 4339: '생각해볼', 4340: '생각해서', 4341: '생각했는데', 4342: '생각했다', 4343: '생각했던', 4344: '생겨서', 4345: '생기는', 4346: '생긴', 4347: '생명을', 4348: '생명의', 4349: '생생하게', 4350: '생생한', 4351: '생생히', 4352: '생애', 4353: '생에', 4354: '생의', 4355: '샤를리즈', 4356: '서', 4357: '서로', 4358: '서로를', 4359: '서로의', 4360: '서부극', 4361: '서서히', 4362: '서스펜스', 4363: '서우', 4364: '서울', 4365: '서정적인', 4366: '서프라이즈', 4367: '섞어', 4368: '섞인', 4369: '선', 4370: '선동영화', 4371: '선물', 4372: '선사하는', 4373: '선생님', 4374: '선생님의', 4375: '선생님이', 4376: '선입견을', 4377: '선택', 4378: '선택은', 4379: '선택의', 4380: '선택한', 4381: '선한', 4382: '설경구', 4383: '설득력', 4384: '설득력이', 4385: '설레게', 4386: '설레고', 4387: '설레는', 4388: '설마', 4389: '설명', 4390: '설명도', 4391: '설명은', 4392: '설명이', 4393: '설명할', 4394: '설정', 4395: '설정,', 4396: '설정.', 4397: '설정과', 4398: '설정도', 4399: '설정만', 4400: '설정에', 4401: '설정으로', 4402: '설정은', 4403: '설정을', 4404: '설정의', 4405: '설정이', 4406: '섬뜩한', 4407: '섬세하게', 4408: '섬세하고', 4409: '섬세한', 4410: '성격', 4411: '성격이', 4412: '성공', 4413: '성공한', 4414: '성룡', 4415: '성룡은', 4416: '성룡의', 4417: '성룡이', 4418: '성우', 4419: '성우가', 4420: '성우들', 4421: '성우를', 4422: '성인', 4423: '성인이', 4424: '성장', 4425: '성장하는', 4426: '세', 4427: '세계', 4428: '세계가', 4429: '세계관이', 4430: '세계를', 4431: '세계에', 4432: '세계에서', 4433: '세계적인', 4434: '세기의', 4435: '세련된', 4436: '세번', 4437: '세상', 4438: '세상에', 4439: '세상에서', 4440: '세상은', 4441: '세상을', 4442: '세상의', 4443: '세상이', 4444: '세심한', 4445: '세월을', 4446: '세월이', 4447: '섹스', 4448: '섹시', 4449: '섹시하고', 4450: '섹시한', 4451: '소녀', 4452: '소녀의', 4453: '소년의', 4454: '소름', 4455: 
'소름끼치게', 4456: '소름끼치는', 4457: '소름돋는', 4458: '소름이', 4459: '소리', 4460: '소리가', 4461: '소리를', 4462: '소리만', 4463: '소설', 4464: '소설도', 4465: '소설로', 4466: '소설을', 4467: '소설의', 4468: '소설이', 4469: '소소하게', 4470: '소소하고', 4471: '소소한', 4472: '소장', 4473: '소장하고', 4474: '소재', 4475: '소재,', 4476: '소재.', 4477: '소재가', 4478: '소재나', 4479: '소재는', 4480: '소재도', 4481: '소재로', 4482: '소재로한', 4483: '소재를', 4484: '소재만', 4485: '소재에', 4486: '소재와', 4487: '소재의', 4488: '소중한', 4489: '소중함을', 4490: '소중히', 4491: '소지섭', 4492: '속', 4493: '속아서', 4494: '속았다', 4495: '속았다.', 4496: '속에', 4497: '속에서', 4498: '속에서도', 4499: '속으로', 4500: '속은', 4501: '속의', 4502: '속이', 4503: '속지', 4504: '속편', 4505: '속편.', 4506: '속편은', 4507: '속편을', 4508: '속편의', 4509: '속편이', 4510: '손', 4511: '손꼽히는', 4512: '손발', 4513: '손발이', 4514: '손색이', 4515: '손에', 4516: '손예진', 4517: '손으로', 4518: '손을', 4519: '손이', 4520: '손잡고', 4521: '솔까', 4522: '솔직하게', 4523: '솔직하고', 4524: '솔직한', 4525: '솔직히', 4526: '솔찍히', 4527: '송강호', 4528: '송강호의', 4529: '송승헌', 4530: '송지효', 4531: '숀', 4532: '수', 4533: '수가', 4534: '수고', 4535: '수는', 4536: '수도', 4537: '수록', 4538: '수많은', 4539: '수면제', 4540: '수밖에', 4541: '수없이', 4542: '수있는', 4543: '수작', 4544: '수작!', 4545: '수작.', 4546: '수작을', 4547: '수작이', 4548: '수작이다', 4549: '수작이다.', 4550: '수작이라고', 4551: '수작입니다.', 4552: '수준', 4553: '수준.', 4554: '수준..', 4555: '수준...', 4556: '수준낮은', 4557: '수준도', 4558: '수준에', 4559: '수준으로', 4560: '수준은', 4561: '수준을', 4562: '수준의', 4563: '수준이', 4564: '수준이다.', 4565: '수준이하', 4566: '순', 4567: '순간', 4568: '순간을', 4569: '순간의', 4570: '순간이', 4571: '순수', 4572: '순수하게', 4573: '순수하고', 4574: '순수한', 4575: '순수함을', 4576: '순수함이', 4577: '순수했던', 4578: '순식간에', 4579: '숨', 4580: '숨겨진', 4581: '숨막히는', 4582: '숨어있는', 4583: '숨은', 4584: '숨이', 4585: '쉬운', 4586: '쉽게', 4587: '쉽지', 4588: '슈스케', 4589: '슈퍼', 4590: '슈퍼맨', 4591: '스릴', 4592: '스릴감', 4593: '스릴과', 4594: '스릴도', 4595: '스릴러', 4596: '스릴러,', 4597: '스릴러.', 4598: '스릴러?', 4599: '스릴러가', 4600: '스릴러는', 4601: '스릴러도', 4602: '스릴러라고', 4603: '스릴러로', 4604: '스릴러를', 4605: '스릴러물', 4606: '스릴러의', 4607: '스릴은', 4608: '스릴이', 4609: '스릴있고', 4610: '스스로', 4611: '스스로를', 4612: '스칼렛', 4613: '스케일', 4614: '스케일도', 4615: '스케일이', 4616: '스콧', 4617: '스크린에', 4618: '스크린에서', 4619: '스크린으로', 4620: '스타', 4621: '스타뎀', 4622: '스타워즈', 4623: '스타일', 4624: '스타일은', 4625: '스타일의', 4626: '스타일이', 4627: '스텝업', 4628: '스토리', 4629: '스토리,', 4630: '스토리.', 4631: '스토리..', 4632: '스토리...', 4633: '스토리가', 4634: '스토리고', 4635: '스토리나', 4636: '스토리는', 4637: '스토리도', 4638: '스토리로', 4639: '스토리를', 4640: '스토리만', 4641: '스토리며', 4642: '스토리에', 4643: '스토리와', 4644: '스토리의', 4645: '스토리전개', 4646: '스토리전개가', 4647: '스토리지만', 4648: '스트레스', 4649: '스티브', 4650: '스티븐', 4651: '스파이', 4652: '스파이더맨', 4653: '스페인', 4654: '스포츠', 4655: '스필버그', 4656: '슬래셔', 4657: '슬슬', 4658: '슬퍼', 4659: '슬퍼서', 4660: '슬퍼요', 4661: '슬펐다.', 4662: '슬펐던', 4663: '슬프게', 4664: '슬프고', 4665: '슬프네요', 4666: '슬프다', 4667: '슬프다.', 4668: '슬프다..', 4669: '슬프다...', 4670: '슬프지만', 4671: '슬픈', 4672: '슬픈영화', 4673: '슬픔', 4674: '슬픔과', 4675: '슬픔을', 4676: '슬픔이', 4677: '승리', 4678: '시', 4679: '시각으로', 4680: '시간', 4681: '시간가는', 4682: '시간가는줄', 4683: '시간과', 4684: '시간낭비', 4685: '시간도', 4686: '시간만', 4687: '시간아까운', 4688: '시간아까움', 4689: '시간아깝다', 4690: '시간아깝다..', 4691: '시간에', 4692: '시간은', 4693: '시간을', 4694: '시간의', 4695: '시간이', 4696: '시걸', 4697: '시걸의', 4698: '시골', 4699: '시끄러워서', 4700: '시끄럽고', 4701: '시나리오', 4702: '시나리오,', 4703: '시나리오.', 4704: '시나리오가', 4705: '시나리오는', 4706: '시나리오도', 4707: '시나리오를', 4708: '시나리오에', 4709: '시나리오와', 4710: '시나리오의', 4711: '시대', 4712: '시대가', 4713: '시대를', 4714: '시대에', 4715: '시대의', 4716: '시대적', 4717: '시덥잖은', 4718: '시도', 4719: '시도가', 4720: '시도는', 4721: '시리즈', 
4722: '시리즈.', 4723: '시리즈가', 4724: '시리즈는', 4725: '시리즈로', 4726: '시리즈를', 4727: '시리즈의', 4728: '시리즈중', 4729: '시리즈중에', 4730: '시바', 4731: '시사회', 4732: '시사회로', 4733: '시사회에서', 4734: '시선으로', 4735: '시선을', 4736: '시선이', 4737: '시시하고', 4738: '시시한', 4739: '시원하게', 4740: '시원한', 4741: '시작', 4742: '시작.', 4743: '시작된', 4744: '시작부터', 4745: '시작은', 4746: '시작을', 4747: '시작한', 4748: '시작해', 4749: '시작해서', 4750: '시절', 4751: '시절에', 4752: '시절을', 4753: '시절의', 4754: '시절이', 4755: '시점에서', 4756: '시종일관', 4757: '시즌', 4758: '시즌2', 4759: '시청', 4760: '시청률', 4761: '시청률은', 4762: '시청률을', 4763: '시청률이', 4764: '시청자', 4765: '시청자가', 4766: '시청자를', 4767: '시켜서', 4768: '시키는', 4769: '시키지', 4770: '시트콤', 4771: '시한부', 4772: '식상하고', 4773: '식상하다', 4774: '식상한', 4775: '식스센스', 4776: '식으로', 4777: '식의', 4778: '신', 4779: '신경', 4780: '신경을', 4781: '신고', 4782: '신기하다', 4783: '신기한', 4784: '신기할', 4785: '신나게', 4786: '신나는', 4787: '신데렐라', 4788: '신들린', 4789: '신민아', 4790: '신비로운', 4791: '신비롭고', 4792: '신선하고', 4793: '신선하다.', 4794: '신선한', 4795: '신선함이', 4796: '신선했고', 4797: '신세경', 4798: '신세계', 4799: '신은', 4800: '신은경', 4801: '신을', 4802: '신의', 4803: '신이', 4804: '신인', 4805: '신파극', 4806: '신하균', 4807: '실력', 4808: '실력이', 4809: '실망', 4810: '실망.', 4811: '실망..', 4812: '실망...', 4813: '실망도', 4814: '실망스러운', 4815: '실망스럽다.', 4816: '실망시키지', 4817: '실망을', 4818: '실망이', 4819: '실망이다', 4820: '실망이다.', 4821: '실망한', 4822: '실망했다.', 4823: '실상은', 4824: '실상을', 4825: '실제', 4826: '실제로', 4827: '실제로는', 4828: '실컷', 4829: '실패', 4830: '실패작', 4831: '실패한', 4832: '실화', 4833: '실화라고', 4834: '실화라는', 4835: '실화라는게', 4836: '실화라니', 4837: '실화라서', 4838: '실화를', 4839: '싫다', 4840: '싫다.', 4841: '싫어', 4842: '싫어서', 4843: '싫어하는', 4844: '싫어하는데', 4845: '싫은', 4846: '심각하게', 4847: '심각한', 4848: '심금을', 4849: '심리', 4850: '심리를', 4851: '심리묘사가', 4852: '심사위원', 4853: '심심한', 4854: '심심할때', 4855: '심심해서', 4856: '심오한', 4857: '심장', 4858: '심장을', 4859: '심장이', 4860: '심지어', 4861: '심하게', 4862: '심하다', 4863: '심한', 4864: '심했다', 4865: '심형래', 4866: '심히', 4867: '십점', 4868: '싱거운', 4869: '싶게', 4870: '싶네요', 4871: '싶네요.', 4872: '싶다', 4873: '싶다.', 4874: '싶다..', 4875: '싶다...', 4876: '싶다는', 4877: '싶다면', 4878: '싶습니다.', 4879: '싶어', 4880: '싶어도', 4881: '싶어서', 4882: '싶어요', 4883: '싶어요.', 4884: '싶어지는', 4885: '싶어하는', 4886: '싶었는데', 4887: '싶었다', 4888: '싶었다.', 4889: '싶었던', 4890: '싶었음', 4891: '싶으면', 4892: '싶은', 4893: '싶은건지', 4894: '싶은데', 4895: '싶을', 4896: '싶을때', 4897: '싶을정도로', 4898: '싶음', 4899: '싶음.', 4900: '싶지', 4901: '싶지도', 4902: '싶지만', 4903: '싸구려', 4904: '싸우고', 4905: '싸우는', 4906: '싸움', 4907: '싸이코', 4908: '싸이코패스', 4909: '싹', 4910: '쌈마이', 4911: '쌍벽을', 4912: '쌍팔년도', 4913: '써도', 4914: '써라', 4915: '써서', 4916: '썩', 4917: '썩은', 4918: '쏙', 4919: '쓰', 4920: '쓰고', 4921: '쓰는', 4922: '쓰래기', 4923: '쓰레기', 4924: '쓰레기.', 4925: '쓰레기..', 4926: '쓰레기...', 4927: '쓰레기가', 4928: '쓰레기같은', 4929: '쓰레기네', 4930: '쓰레기는', 4931: '쓰레기다', 4932: '쓰레기다.', 4933: '쓰레기도', 4934: '쓰레기들', 4935: '쓰레기라', 4936: '쓰레기라고', 4937: '쓰레기로', 4938: '쓰레기를', 4939: '쓰레기에', 4940: '쓰레기영화', 4941: '쓰레기영화.', 4942: '쓰레기영화는', 4943: '쓰레기의', 4944: '쓰레기임', 4945: '쓰레기중에', 4946: '쓰렉', 4947: '쓰면', 4948: '쓰지', 4949: '쓴', 4950: '쓸', 4951: '쓸데없는', 4952: '쓸데없이', 4953: '쓸쓸한', 4954: '씁쓸한', 4955: '씨', 4956: '씨네21', 4957: '씨발', 4958: '씬', 4959: '씬은', 4960: '씬을', 4961: '씬이', 4962: '씹노잼', 4963: '아', 4964: '아!', 4965: '아,', 4966: '아.', 4967: '아..', 4968: '아...', 4969: '아....', 4970: '아~', 4971: '아기', 4972: '아기자기하고', 4973: '아기자기한', 4974: '아까운', 4975: '아까운데', 4976: '아까운영화', 4977: '아까울', 4978: '아까움', 4979: '아까움.', 4980: '아까움..', 4981: '아까움...', 4982: '아까워', 4983: '아까워서', 4984: '아까워요', 4985: '아까웠다', 4986: '아까웠다!!!', 4987: '아까웠다.', 4988: '아까웠던', 4989: 
'아깝', 4990: '아깝게', 4991: '아깝고', 4992: '아깝네', 4993: '아깝네요', 4994: '아깝네요.', 4995: '아깝다', 4996: '아깝다!', 4997: '아깝다!!!', 4998: '아깝다.', 4999: '아깝다..', 5000: '아깝다...', 5001: '아깝다고', 5002: '아깝다는', 5003: '아깝단', 5004: '아깝습니다.', 5005: '아깝지', 5006: '아나', 5007: '아날로그', 5008: '아내가', 5009: '아냐?', 5010: '아놀드', 5011: '아놀드의', 5012: '아놔', 5013: '아는', 5014: '아는데', 5015: '아니', 5016: '아니,', 5017: '아니고', 5018: '아니고,', 5019: '아니고.', 5020: '아니고..', 5021: '아니고...', 5022: '아니네', 5023: '아니네요', 5024: '아니다', 5025: '아니다.', 5026: '아니다..', 5027: '아니다...', 5028: '아니더라도', 5029: '아니라', 5030: '아니라,', 5031: '아니라고', 5032: '아니라는', 5033: '아니라면', 5034: '아니라서', 5035: '아니면', 5036: '아니야', 5037: '아니어도', 5038: '아니었나', 5039: '아니었다', 5040: '아니었다.', 5041: '아니었다면', 5042: '아니었으면', 5043: '아니었음', 5044: '아니였으면', 5045: '아니잖아', 5046: '아니지', 5047: '아니지.', 5048: '아니지..', 5049: '아니지만', 5050: '아니지만,', 5051: '아닌', 5052: '아닌,', 5053: '아닌...', 5054: '아닌가', 5055: '아닌가?', 5056: '아닌가요?', 5057: '아닌거', 5058: '아닌것', 5059: '아닌데', 5060: '아닌데..', 5061: '아닌데...', 5062: '아닌듯', 5063: '아닌듯.', 5064: '아닌지', 5065: '아닐까', 5066: '아닐까.', 5067: '아닐까?', 5068: '아님', 5069: '아님.', 5070: '아님..', 5071: '아님...', 5072: '아님?', 5073: '아닙니다', 5074: '아닙니다.', 5075: '아담', 5076: '아동용', 5077: '아들', 5078: '아들과', 5079: '아들을', 5080: '아들의', 5081: '아들이', 5082: '아래', 5083: '아련하고', 5084: '아련한', 5085: '아류작', 5086: '아름다운', 5087: '아름다움', 5088: '아름다움과', 5089: '아름다움에', 5090: '아름다움을', 5091: '아름다워서', 5092: '아름다웠고', 5093: '아름다웠다.', 5094: '아름다웠던', 5095: '아름답게', 5096: '아름답고', 5097: '아름답다', 5098: '아름답다.', 5099: '아마', 5100: '아마도', 5101: '아마추어', 5102: '아만다', 5103: '아메리칸', 5104: '아무', 5105: '아무것도', 5106: '아무나', 5107: '아무도', 5108: '아무래도', 5109: '아무런', 5110: '아무리', 5111: '아무리봐도', 5112: '아무생각없이', 5113: '아무튼', 5114: '아버지', 5115: '아버지가', 5116: '아버지는', 5117: '아버지를', 5118: '아버지와', 5119: '아버지의', 5120: '아빠', 5121: '아빠가', 5122: '아빠랑', 5123: '아빠와', 5124: '아쉬운', 5125: '아쉬운건', 5126: '아쉬울', 5127: '아쉬움', 5128: '아쉬움.', 5129: '아쉬움이', 5130: '아쉬워', 5131: '아쉬워서', 5132: '아쉬워요', 5133: '아쉬웠다.', 5134: '아쉬웠던', 5135: '아쉬웠지만', 5136: '아쉽', 5137: '아쉽게', 5138: '아쉽고', 5139: '아쉽긴', 5140: '아쉽네요', 5141: '아쉽네요.', 5142: '아쉽다', 5143: '아쉽다.', 5144: '아쉽다..', 5145: '아쉽다...', 5146: '아쉽지만', 5147: '아쉽지만,', 5148: '아역', 5149: '아예', 5150: '아오', 5151: '아오이', 5152: '아우', 5153: '아이', 5154: '아이가', 5155: '아이고', 5156: '아이는', 5157: '아이도', 5158: '아이돌', 5159: '아이들', 5160: '아이들과', 5161: '아이들도', 5162: '아이들에게', 5163: '아이들은', 5164: '아이들을', 5165: '아이들의', 5166: '아이들이', 5167: '아이디어', 5168: '아이디어가', 5169: '아이디어는', 5170: '아이를', 5171: '아이언맨', 5172: '아이에게', 5173: '아이와', 5174: '아이유', 5175: '아이의', 5176: '아저씨', 5177: '아저씨가', 5178: '아저씨는', 5179: '아저씨의', 5180: '아주', 5181: '아줌마', 5182: '아줌마가', 5183: '아직', 5184: '아직까지', 5185: '아직까지도', 5186: '아직도', 5187: '아직은', 5188: '아진짜', 5189: '아청법', 5190: '아침', 5191: '아침부터', 5192: '아침에', 5193: '아카데미', 5194: '아팠다.', 5195: '아프고', 5196: '아프다', 5197: '아프다.', 5198: '아프리카', 5199: '아픈', 5200: '아픔과', 5201: '아픔을', 5202: '아픔이', 5203: '아휴', 5204: '악당', 5205: '악당이', 5206: '악역', 5207: '악역이', 5208: '안', 5209: '안가고', 5210: '안가는', 5211: '안간다', 5212: '안간다.', 5213: '안감', 5214: '안감.', 5215: '안고', 5216: '안나', 5217: '안나고', 5218: '안나오고', 5219: '안나오는', 5220: '안나온다', 5221: '안나온다.', 5222: '안나옴', 5223: '안나와서', 5224: '안나왔으면', 5225: '안나지만', 5226: '안난다.', 5227: '안남기는데', 5228: '안다', 5229: '안돼', 5230: '안돼는', 5231: '안되게', 5232: '안되고', 5233: '안되나?', 5234: '안되네', 5235: '안되는', 5236: '안되는데', 5237: '안되면', 5238: '안되서', 5239: '안되요', 5240: '안된', 5241: '안된다', 5242: '안된다.', 5243: '안된다는', 5244: '안될', 5245: '안됨', 5246: '안됨.', 5247: '안됩니다.', 5248: '안든다.', 5249: '안듬', 5250: '안맞고', 5251: 
'안맞는', 5252: '안목이', 5253: '안무섭고', 5254: '안무섭다', 5255: '안보고', 5256: '안보길', 5257: '안보는', 5258: '안보는게', 5259: '안보는데', 5260: '안보면', 5261: '안본', 5262: '안본다', 5263: '안본다.', 5264: '안봄', 5265: '안봐도', 5266: '안봐서', 5267: '안봤는데', 5268: '안봤으면', 5269: '안봤지만', 5270: '안습', 5271: '안쓰는데', 5272: '안에', 5273: '안에서', 5274: '안의', 5275: '안젤리나', 5276: '안좋아하는데', 5277: '안좋은', 5278: '안타까운', 5279: '안타깝고', 5280: '안타깝네요', 5281: '안타깝다', 5282: '안타깝다.', 5283: '안타깝습니다.', 5284: '안타깝지만', 5285: '안하고', 5286: '안하는', 5287: '안하는데', 5288: '안하면', 5289: '안함', 5290: '안했는데', 5291: '안했으면', 5292: '앉아', 5293: '앉아서', 5294: '않게', 5295: '않고', 5296: '않고,', 5297: '않고.', 5298: '않고..', 5299: '않고...', 5300: '않기', 5301: '않길', 5302: '않나', 5303: '않나?', 5304: '않네', 5305: '않네요', 5306: '않네요.', 5307: '않는', 5308: '않는게', 5309: '않는다', 5310: '않는다.', 5311: '않는다..', 5312: '않는다...', 5313: '않는다면', 5314: '않는데', 5315: '않다', 5316: '않다.', 5317: '않다..', 5318: '않다는', 5319: '않습니다.', 5320: '않아', 5321: '않아도', 5322: '않아서', 5323: '않아요', 5324: '않아요.', 5325: '않았고', 5326: '않았나', 5327: '않았는데', 5328: '않았다', 5329: '않았다.', 5330: '않았다면', 5331: '않았던', 5332: '않았습니다.', 5333: '않았으면', 5334: '않았을', 5335: '않았지만', 5336: '않으면', 5337: '않으면서', 5338: '않은', 5339: '않은데', 5340: '않을', 5341: '않을까', 5342: '않을까?', 5343: '않음', 5344: '않음.', 5345: '않지만', 5346: '알', 5347: '알게', 5348: '알게된', 5349: '알게해준', 5350: '알겠는데', 5351: '알겠는데,', 5352: '알겠다', 5353: '알겠다.', 5354: '알겠으나', 5355: '알겠지만', 5356: '알고', 5357: '알고보니', 5358: '알고보면', 5359: '알려주는', 5360: '알려준', 5361: '알려지지', 5362: '알려진', 5363: '알면', 5364: '알면서', 5365: '알면서도', 5366: '알바', 5367: '알바가', 5368: '알바들', 5369: '알바들아', 5370: '알바들이', 5371: '알수', 5372: '알수가', 5373: '알수없는', 5374: '알수있는', 5375: '알아', 5376: '알아서', 5377: '알아야', 5378: '알았네', 5379: '알았는데', 5380: '알았는데...', 5381: '알았다', 5382: '알았다.', 5383: '알았다..', 5384: '알았더니', 5385: '알았습니다.', 5386: '알았으면', 5387: '알았음', 5388: '알았음.', 5389: '알았지만', 5390: '알지', 5391: '알지만', 5392: '암', 5393: '암울한', 5394: '암이', 5395: '암튼', 5396: '압권', 5397: '압권.', 5398: '압도적인', 5399: '앞', 5400: '앞뒤', 5401: '앞뒤가', 5402: '앞서간', 5403: '앞에', 5404: '앞에서', 5405: '앞으로', 5406: '앞으로도', 5407: '애', 5408: '애가', 5409: '애기', 5410: '애기가', 5411: '애는', 5412: '애니', 5413: '애니.', 5414: '애니가', 5415: '애니는', 5416: '애니도', 5417: '애니로', 5418: '애니를', 5419: '애니매이션', 5420: '애니메이션', 5421: '애니메이션.', 5422: '애니메이션으로', 5423: '애니메이션은', 5424: '애니메이션을', 5425: '애니메이션의', 5426: '애니메이션이', 5427: '애니의', 5428: '애니중', 5429: '애들', 5430: '애들도', 5431: '애들용', 5432: '애들은', 5433: '애들을', 5434: '애들이', 5435: '애들한테', 5436: '애를', 5437: '애매한', 5438: '애써', 5439: '애잔한', 5440: '애절한', 5441: '애초에', 5442: '애틋한', 5443: '액션', 5444: '액션,', 5445: '액션.', 5446: '액션과', 5447: '액션도', 5448: '액션마저', 5449: '액션만', 5450: '액션물', 5451: '액션신', 5452: '액션신이', 5453: '액션씬', 5454: '액션씬은', 5455: '액션씬이', 5456: '액션에', 5457: '액션영화', 5458: '액션영화가', 5459: '액션영화는', 5460: '액션영화의', 5461: '액션으로', 5462: '액션은', 5463: '액션을', 5464: '액션의', 5465: '액션이', 5466: '앤', 5467: '야', 5468: '야구를', 5469: '야동', 5470: '야동을', 5471: '야이', 5472: '야하지도', 5473: '야한', 5474: '약', 5475: '약간', 5476: '약간은', 5477: '약간의', 5478: '약빨고', 5479: '약을', 5480: '약하고', 5481: '약하다', 5482: '약하다.', 5483: '약한', 5484: '얄팍한', 5485: '양동근', 5486: '양심도', 5487: '양심적으로', 5488: '양아치', 5489: '얘기', 5490: '얘기가', 5491: '얘기는', 5492: '얘기를', 5493: '얘기하고', 5494: '어', 5495: '어거지', 5496: '어거지로', 5497: '어느', 5498: '어느것', 5499: '어느덧', 5500: '어느새', 5501: '어느순간', 5502: '어느정도', 5503: '어느하나', 5504: '어두운', 5505: '어둡고', 5506: '어디', 5507: '어디가', 5508: '어디가서', 5509: '어디까지', 5510: '어디다', 5511: '어디로', 5512: '어디서', 5513: '어디선가', 5514: '어디에', 5515: '어디에도', 5516: '어딘가', 5517: '어딜봐서', 5518: '어땠을까', 
5519: '어떠한', 5520: '어떤', 5521: '어떨지', 5522: '어떻게', 5523: '어떻게든', 5524: '어려서', 5525: '어려운', 5526: '어려울', 5527: '어렴풋이', 5528: '어렵게', 5529: '어렵고', 5530: '어렵다', 5531: '어렵다.', 5532: '어렸을', 5533: '어렸을때', 5534: '어렸을때는', 5535: '어렸을땐', 5536: '어렸을적', 5537: '어른', 5538: '어른도', 5539: '어른들을', 5540: '어른들의', 5541: '어른들이', 5542: '어른을', 5543: '어른이', 5544: '어린', 5545: '어린시절', 5546: '어린이', 5547: '어린이들이', 5548: '어린이용', 5549: '어릴', 5550: '어릴때', 5551: '어릴때부터', 5552: '어릴땐', 5553: '어릴적', 5554: '어릴적에', 5555: '어마어마한', 5556: '어머니', 5557: '어머니가', 5558: '어머니의', 5559: '어메이징', 5560: '어색', 5561: '어색하고', 5562: '어색하지', 5563: '어색한', 5564: '어색해서', 5565: '어서', 5566: '어설퍼', 5567: '어설프게', 5568: '어설프고', 5569: '어설프다', 5570: '어설픈', 5571: '어수선하고', 5572: '어우', 5573: '어우러져', 5574: '어우러진', 5575: '어울리는', 5576: '어울리지', 5577: '어울린다.', 5578: '어이', 5579: '어이가', 5580: '어이없게', 5581: '어이없고', 5582: '어이없는', 5583: '어이없다', 5584: '어이없어서', 5585: '어이없음', 5586: '어정쩡한', 5587: '어제', 5588: '어중간한', 5589: '어째', 5590: '어째서', 5591: '어쨋든', 5592: '어쨌든', 5593: '어쩌고', 5594: '어쩌구', 5595: '어쩌다', 5596: '어쩌라고', 5597: '어쩌면', 5598: '어쩐지', 5599: '어쩔', 5600: '어쩔수', 5601: '어쩔수없이', 5602: '어쩜', 5603: '어찌', 5604: '어찌나', 5605: '어찌보면', 5606: '어차피', 5607: '어처구니', 5608: '어처구니가', 5609: '어처구니없는', 5610: '어케', 5611: '어휴', 5612: '어휴..', 5613: '어휴...', 5614: '억지', 5615: '억지가', 5616: '억지감동', 5617: '억지로', 5618: '억지스러운', 5619: '억지스런', 5620: '억지스럽고', 5621: '억지스럽다', 5622: '언니', 5623: '언니가', 5624: '언제', 5625: '언제까지', 5626: '언제나', 5627: '언제봐도', 5628: '언제쯤', 5629: '언젠가', 5630: '언젠가는', 5631: '얻어', 5632: '얻은', 5633: '얻을', 5634: '얼굴', 5635: '얼굴과', 5636: '얼굴도', 5637: '얼굴로', 5638: '얼굴만', 5639: '얼굴에', 5640: '얼굴은', 5641: '얼굴을', 5642: '얼굴이', 5643: '얼른', 5644: '얼마', 5645: '얼마나', 5646: '얼마전', 5647: '엄마', 5648: '엄마가', 5649: '엄마는', 5650: '엄마랑', 5651: '엄마를', 5652: '엄마와', 5653: '엄마의', 5654: '엄마한테', 5655: '엄정화', 5656: '엄청', 5657: '엄청나게', 5658: '엄청난', 5659: '없게', 5660: '없고', 5661: '없고,', 5662: '없고.', 5663: '없고..', 5664: '없고...', 5665: '없구', 5666: '없구나', 5667: '없군', 5668: '없기', 5669: '없나', 5670: '없나?', 5671: '없나요', 5672: '없나요?', 5673: '없냐', 5674: '없냐?', 5675: '없네', 5676: '없네.', 5677: '없네..', 5678: '없네...', 5679: '없네요', 5680: '없네요.', 5681: '없네요..', 5682: '없는', 5683: '없는,', 5684: '없는...', 5685: '없는거', 5686: '없는것', 5687: '없는게', 5688: '없는데', 5689: '없는듯', 5690: '없는영화', 5691: '없는지', 5692: '없다', 5693: '없다!', 5694: '없다.', 5695: '없다..', 5696: '없다...', 5697: '없다....', 5698: '없다고', 5699: '없다는', 5700: '없다는게', 5701: '없다면', 5702: '없던', 5703: '없습니다', 5704: '없습니다.', 5705: '없어', 5706: '없어.', 5707: '없어도', 5708: '없어서', 5709: '없어요', 5710: '없어요.', 5711: '없어요..', 5712: '없었고', 5713: '없었는데', 5714: '없었다', 5715: '없었다.', 5716: '없었다..', 5717: '없었다면', 5718: '없었던', 5719: '없었습니다.', 5720: '없었으면', 5721: '없었음', 5722: '없었음.', 5723: '없었지만', 5724: '없으니', 5725: '없으며', 5726: '없으면', 5727: '없으면서', 5728: '없을', 5729: '없을것', 5730: '없음', 5731: '없음.', 5732: '없음..', 5733: '없음...', 5734: '없이', 5735: '없이도', 5736: '없잖아', 5737: '없지', 5738: '없지만', 5739: '없지만,', 5740: '엉뚱한', 5741: '엉망', 5742: '엉망.', 5743: '엉망이고', 5744: '엉망인', 5745: '엉성', 5746: '엉성하게', 5747: '엉성하고', 5748: '엉성하기', 5749: '엉성하다.', 5750: '엉성한', 5751: '엉엉', 5752: '엉터리', 5753: '에', 5754: '에드워드', 5755: '에라이', 5756: '에로', 5757: '에로물', 5758: '에로영화', 5759: '에릭', 5760: '에밀리', 5761: '에서', 5762: '에이', 5763: '에이리언', 5764: '에피소드', 5765: '에효', 5766: '에휴', 5767: '에휴..', 5768: '에휴...', 5769: '엑스맨', 5770: '엔딩', 5771: '엔딩까지', 5772: '엔딩도', 5773: '엔딩에', 5774: '엔딩에서', 5775: '엔딩은', 5776: '엔딩을', 5777: '엔딩이', 5778: '엔딩크레딧이', 5779: '여', 5780: '여기', 5781: '여기가', 5782: '여기는', 5783: '여기서', 5784: '여기서도', 5785: '여기에', 5786: 
'여기저기', 5787: '여느', 5788: '여러', 5789: '여러가지', 5790: '여러가지를', 5791: '여러모로', 5792: '여러번', 5793: '여러번봐도', 5794: '여러분', 5795: '여름', 5796: '여름에', 5797: '여배우', 5798: '여배우가', 5799: '여배우는', 5800: '여배우들', 5801: '여배우들의', 5802: '여배우의', 5803: '여성', 5804: '여성을', 5805: '여성의', 5806: '여실히', 5807: '여운', 5808: '여운과', 5809: '여운도', 5810: '여운에', 5811: '여운은', 5812: '여운을', 5813: '여운이', 5814: '여인의', 5815: '여자', 5816: '여자가', 5817: '여자는', 5818: '여자도', 5819: '여자들', 5820: '여자들은', 5821: '여자들의', 5822: '여자들이', 5823: '여자를', 5824: '여자배우', 5825: '여자애', 5826: '여자애가', 5827: '여자에게', 5828: '여자와', 5829: '여자의', 5830: '여자주인공', 5831: '여자주인공이', 5832: '여자친구랑', 5833: '여전히', 5834: '여주', 5835: '여주가', 5836: '여주는', 5837: '여주의', 5838: '여주인공', 5839: '여주인공도', 5840: '여주인공은', 5841: '여주인공의', 5842: '여주인공이', 5843: '여지껏', 5844: '여친이', 5845: '여태', 5846: '여태까지', 5847: '여태껏', 5848: '여튼', 5849: '여행', 5850: '여행을', 5851: '역', 5852: '역겨운', 5853: '역겹고', 5854: '역겹다', 5855: '역겹다.', 5856: '역대', 5857: '역대급', 5858: '역량', 5859: '역량이', 5860: '역사', 5861: '역사가', 5862: '역사는', 5863: '역사를', 5864: '역사상', 5865: '역사에', 5866: '역사와', 5867: '역사의', 5868: '역사적', 5869: '역쉬', 5870: '역시', 5871: '역시..', 5872: '역시나', 5873: '역에', 5874: '역을', 5875: '역의', 5876: '역작.', 5877: '역할', 5878: '역할은', 5879: '역할을', 5880: '역할이', 5881: '연결도', 5882: '연결이', 5883: '연극을', 5884: '연기', 5885: '연기,', 5886: '연기.', 5887: '연기..', 5888: '연기...', 5889: '연기가', 5890: '연기까지', 5891: '연기나', 5892: '연기는', 5893: '연기도', 5894: '연기들도', 5895: '연기들이', 5896: '연기때문에', 5897: '연기력', 5898: '연기력,', 5899: '연기력과', 5900: '연기력도', 5901: '연기력에', 5902: '연기력으로', 5903: '연기력은', 5904: '연기력을', 5905: '연기력이', 5906: '연기로', 5907: '연기를', 5908: '연기만', 5909: '연기며', 5910: '연기에', 5911: '연기와', 5912: '연기의', 5913: '연기자', 5914: '연기자가', 5915: '연기자들', 5916: '연기자들의', 5917: '연기자들이', 5918: '연기잘하고', 5919: '연기잘하는', 5920: '연기파', 5921: '연기하는', 5922: '연기한', 5923: '연속', 5924: '연애', 5925: '연애를', 5926: '연애의', 5927: '연예인', 5928: '연예인이', 5929: '연출', 5930: '연출,', 5931: '연출.', 5932: '연출과', 5933: '연출도', 5934: '연출력', 5935: '연출력.', 5936: '연출력과', 5937: '연출력도', 5938: '연출력은', 5939: '연출력의', 5940: '연출력이', 5941: '연출로', 5942: '연출에', 5943: '연출은', 5944: '연출을', 5945: '연출의', 5946: '연출이', 5947: '연출이나', 5948: '연출한', 5949: '열', 5950: '열까지', 5951: '열라', 5952: '열린', 5953: '열받아서', 5954: '열심히', 5955: '열연이', 5956: '열정과', 5957: '열정을', 5958: '열정이', 5959: '엽기적인', 5960: '였다.', 5961: '영', 5962: '영..', 5963: '영...', 5964: '영~', 5965: '영감을', 5966: '영구와', 5967: '영국', 5968: '영상', 5969: '영상,', 5970: '영상.', 5971: '영상과', 5972: '영상도', 5973: '영상만', 5974: '영상미', 5975: '영상미,', 5976: '영상미가', 5977: '영상미는', 5978: '영상미도', 5979: '영상미에', 5980: '영상미와', 5981: '영상으로', 5982: '영상은', 5983: '영상을', 5984: '영상이', 5985: '영어', 5986: '영웅', 5987: '영웅본색', 5988: '영원한', 5989: '영원히', 5990: '영향을', 5991: '영혼을', 5992: '영혼의', 5993: '영혼이', 5994: '영화', 5995: '영화!', 5996: '영화!!', 5997: '영화!!!', 5998: '영화!!!!', 5999: '영화\"', 6000: '영화,', 6001: '영화,,', 6002: '영화.', 6003: '영화..', 6004: '영화...', 6005: '영화....', 6006: '영화......', 6007: '영화?', 6008: '영화^^', 6009: '영화~', 6010: '영화~!', 6011: '영화~!!', 6012: '영화~~', 6013: '영화~~~', 6014: '영화ㅋ', 6015: '영화ㅋㅋ', 6016: '영화가', 6017: '영화감독', 6018: '영화감독이', 6019: '영화같다', 6020: '영화같다.', 6021: '영화같은', 6022: '영화계의', 6023: '영화고', 6024: '영화관', 6025: '영화관가서', 6026: '영화관에', 6027: '영화관에서', 6028: '영화군', 6029: '영화까지', 6030: '영화나', 6031: '영화내내', 6032: '영화냐', 6033: '영화냐?', 6034: '영화네', 6035: '영화네.', 6036: '영화네요', 6037: '영화네요!', 6038: '영화네요.', 6039: '영화네요..', 6040: '영화네요^^', 6041: '영화네요~', 6042: '영화는', 6043: '영화니까', 6044: '영화다', 6045: '영화다!', 6046: '영화다!!', 6047: '영화다.', 6048: '영화다..', 6049: '영화다...', 6050: '영화다운', 
6051: '영화도', 6052: '영화들', 6053: '영화들은', 6054: '영화들을', 6055: '영화들의', 6056: '영화들이', 6057: '영화라', 6058: '영화라고', 6059: '영화라고..', 6060: '영화라고...', 6061: '영화라고는', 6062: '영화라기', 6063: '영화라기보다는', 6064: '영화라는', 6065: '영화라는게', 6066: '영화라니', 6067: '영화라도', 6068: '영화라면', 6069: '영화라서', 6070: '영화라지만', 6071: '영화란', 6072: '영화랑', 6073: '영화로', 6074: '영화로는', 6075: '영화로만', 6076: '영화로서', 6077: '영화를', 6078: '영화만', 6079: '영화만큼', 6080: '영화면', 6081: '영화보고', 6082: '영화보는', 6083: '영화보는내내', 6084: '영화보는데', 6085: '영화보다', 6086: '영화보다가', 6087: '영화보다는', 6088: '영화보다도', 6089: '영화보단', 6090: '영화보면', 6091: '영화보면서', 6092: '영화본', 6093: '영화사에', 6094: '영화속', 6095: '영화속의', 6096: '영화야', 6097: '영화야?', 6098: '영화에', 6099: '영화에는', 6100: '영화에서', 6101: '영화에서는', 6102: '영화에선', 6103: '영화에요', 6104: '영화에요.', 6105: '영화엔', 6106: '영화여서', 6107: '영화였는데', 6108: '영화였다', 6109: '영화였다.', 6110: '영화였습니다', 6111: '영화였습니다.', 6112: '영화였어요', 6113: '영화였어요.', 6114: '영화였음', 6115: '영화였음.', 6116: '영화예요', 6117: '영화예요.', 6118: '영화와', 6119: '영화의', 6120: '영화이고', 6121: '영화이다', 6122: '영화이다.', 6123: '영화이지만', 6124: '영화인', 6125: '영화인가', 6126: '영화인가?', 6127: '영화인거', 6128: '영화인것', 6129: '영화인데', 6130: '영화인데,', 6131: '영화인데..', 6132: '영화인데도', 6133: '영화인듯', 6134: '영화인듯.', 6135: '영화인줄', 6136: '영화인지', 6137: '영화일', 6138: '영화임', 6139: '영화임.', 6140: '영화임..', 6141: '영화임?', 6142: '영화임에도', 6143: '영화입니다', 6144: '영화입니다!', 6145: '영화입니다.', 6146: '영화입니다..', 6147: '영화입니다...', 6148: '영화자체가', 6149: '영화자체는', 6150: '영화자체도', 6151: '영화적', 6152: '영화제', 6153: '영화좀', 6154: '영화중', 6155: '영화중에', 6156: '영화중에서', 6157: '영화지', 6158: '영화지만', 6159: '영화지만,', 6160: '영화처럼', 6161: '영화치고', 6162: '영화치고는', 6163: '영화치곤', 6164: '영화판', 6165: '영화평점', 6166: '영화화', 6167: '영환', 6168: '영환데', 6169: '영환줄', 6170: '영활', 6171: '옆에', 6172: '옆에서', 6173: '예', 6174: '예.', 6175: '예고편', 6176: '예고편만', 6177: '예고편에', 6178: '예고편을', 6179: '예고편이', 6180: '예뻐서', 6181: '예쁘게', 6182: '예쁘고', 6183: '예쁘다', 6184: '예쁘다.', 6185: '예쁜', 6186: '예쁨', 6187: '예산', 6188: '예상', 6189: '예상되는', 6190: '예상외로', 6191: '예상을', 6192: '예상치', 6193: '예술', 6194: '예술로', 6195: '예술은', 6196: '예술을', 6197: '예술의', 6198: '예술이', 6199: '예술이다', 6200: '예전', 6201: '예전에', 6202: '예전에는', 6203: '예전의', 6204: '예측', 6205: '옛', 6206: '옛날', 6207: '옛날에', 6208: '옛날엔', 6209: '옛날영화라', 6210: '옛다', 6211: '오', 6212: '오~', 6213: '오그라드는', 6214: '오글', 6215: '오글거려', 6216: '오글거려서', 6217: '오글거리고', 6218: '오글거리는', 6219: '오글오글', 6220: '오는', 6221: '오늘', 6222: '오늘도', 6223: '오늘은', 6224: '오늘의', 6225: '오드리', 6226: '오락', 6227: '오락영화', 6228: '오래', 6229: '오래간만에', 6230: '오래도록', 6231: '오래된', 6232: '오래오래', 6233: '오래전', 6234: '오래전에', 6235: '오랜', 6236: '오랜만에', 6237: '오랫동안', 6238: '오랫만에', 6239: '오로라', 6240: '오로지', 6241: '오리지널', 6242: '오면', 6243: '오버', 6244: '오브', 6245: '오빠', 6246: '오오', 6247: '오우', 6248: '오우삼', 6249: '오유에서', 6250: '오인혜', 6251: '오지', 6252: '오직', 6253: '오페라의', 6254: '오프닝', 6255: '오히려', 6256: '온', 6257: '온갖', 6258: '온다.', 6259: '온몸에', 6260: '온전히', 6261: '온통', 6262: '올', 6263: '올레', 6264: '올레티비', 6265: '올리고', 6266: '올만에', 6267: '올바른', 6268: '올해', 6269: '올해의', 6270: '옴니버스', 6271: '옷', 6272: '옷을', 6273: '와', 6274: '와..', 6275: '와...', 6276: '와..진짜', 6277: '와~', 6278: '와나', 6279: '와닿는', 6280: '와닿는다.', 6281: '와닿지', 6282: '와서', 6283: '와우', 6284: '와이어', 6285: '와중에', 6286: '완벽', 6287: '완벽하게', 6288: '완벽하다', 6289: '완벽하다.', 6290: '완벽한', 6291: '완벽히', 6292: '완성도', 6293: '완성도가', 6294: '완성도는', 6295: '완성도를', 6296: '완소', 6297: '완전', 6298: '완전한', 6299: '완전히', 6300: '완젼', 6301: '완존', 6302: '완죤', 6303: '왔는데', 6304: '왔다', 6305: '왔다갔다', 6306: '왔습니다', 6307: '왔습니다.', 6308: '왔으면', 6309: '왕', 6310: '왕가위', 6311: '왕가위의', 6312: '왕조현', 6313: 
'왜', 6314: '왜?', 6315: '왜들', 6316: '왜이래', 6317: '왜이래?', 6318: '왜이렇게', 6319: '왜이리', 6320: '왜자꾸', 6321: '왜캐', 6322: '왜케', 6323: '왠', 6324: '왠만하면', 6325: '왠만한', 6326: '왠만해선', 6327: '왠지', 6328: '왤캐', 6329: '왤케', 6330: '외', 6331: '외계인', 6332: '외계인이', 6333: '외국', 6334: '외국인', 6335: '외로운', 6336: '외로움을', 6337: '외모', 6338: '외모가', 6339: '외에', 6340: '외에는', 6341: '외엔', 6342: '요', 6343: '요근래', 6344: '요리', 6345: '요새', 6346: '요세', 6347: '요소가', 6348: '요소는', 6349: '요소도', 6350: '요소들이', 6351: '요소를', 6352: '요즘', 6353: '요즘에', 6354: '요즘엔', 6355: '요즘은', 6356: '욕', 6357: '욕나오는', 6358: '욕나온다', 6359: '욕나옴', 6360: '욕만', 6361: '욕망을', 6362: '욕망의', 6363: '욕심이', 6364: '욕을', 6365: '욕이', 6366: '욕하고', 6367: '욕하는', 6368: '욕하면서', 6369: '용가리', 6370: '용감한', 6371: '용기를', 6372: '용기에', 6373: '용두사미', 6374: '용서가', 6375: '용서와', 6376: '용서할', 6377: '우는', 6378: '우디', 6379: '우뢰매', 6380: '우리', 6381: '우리가', 6382: '우리나라', 6383: '우리나라는', 6384: '우리나라도', 6385: '우리나라에', 6386: '우리나라에서', 6387: '우리나라의', 6388: '우리네', 6389: '우리는', 6390: '우리도', 6391: '우리들의', 6392: '우리를', 6393: '우리에게', 6394: '우리와', 6395: '우리의', 6396: '우리집', 6397: '우린', 6398: '우선', 6399: '우아한', 6400: '우연히', 6401: '우와', 6402: '우왕', 6403: '우울하고', 6404: '우울한', 6405: '우익', 6406: '우정', 6407: '우정,', 6408: '우정과', 6409: '우정을', 6410: '우정이', 6411: '우주', 6412: '운', 6413: '운동', 6414: '운명을', 6415: '울', 6416: '울게', 6417: '울고', 6418: '울리는', 6419: '울림이', 6420: '울면서', 6421: '울었네요', 6422: '울었다', 6423: '울었다.', 6424: '울었던', 6425: '울었습니다', 6426: '울었어요', 6427: '울었어요.', 6428: '울컥', 6429: '움직이거나', 6430: '움직이는', 6431: '웃게', 6432: '웃겨', 6433: '웃겨서', 6434: '웃겨요', 6435: '웃겼다', 6436: '웃겼음', 6437: '웃고', 6438: '웃기게', 6439: '웃기고', 6440: '웃기기도', 6441: '웃기긴', 6442: '웃기네', 6443: '웃기는', 6444: '웃기다', 6445: '웃기다.', 6446: '웃기려고', 6447: '웃기지', 6448: '웃기지도', 6449: '웃긴', 6450: '웃긴건', 6451: '웃긴다', 6452: '웃김', 6453: '웃김.', 6454: '웃는', 6455: '웃다', 6456: '웃다가', 6457: '웃어야', 6458: '웃었다', 6459: '웃었다.', 6460: '웃으며', 6461: '웃으면서', 6462: '웃을', 6463: '웃음', 6464: '웃음과', 6465: '웃음도', 6466: '웃음만', 6467: '웃음을', 6468: '웃음이', 6469: '웃지', 6470: '웅장한', 6471: '워낙', 6472: '원', 6473: '원래', 6474: '원숭이', 6475: '원작', 6476: '원작과', 6477: '원작도', 6478: '원작보다', 6479: '원작에', 6480: '원작으로', 6481: '원작은', 6482: '원작을', 6483: '원작의', 6484: '원작이', 6485: '원작이랑', 6486: '원작인', 6487: '원작자가', 6488: '원조', 6489: '원주율', 6490: '원표', 6491: '원피스', 6492: '원하는', 6493: '웨슬리', 6494: '웬', 6495: '웬만하면', 6496: '웬만한', 6497: '웬만해선', 6498: '웬지', 6499: '웰메이드', 6500: '웹툰', 6501: '위대한', 6502: '위대함을', 6503: '위로가', 6504: '위안부', 6505: '위에', 6506: '위에서', 6507: '위트있는', 6508: '위하여', 6509: '위한', 6510: '위해', 6511: '위해서', 6512: '위험한', 6513: '윌리스', 6514: '유덕화', 6515: '유덕화가', 6516: '유덕화의', 6517: '유럽', 6518: '유머', 6519: '유머가', 6520: '유머도', 6521: '유머와', 6522: '유명', 6523: '유명한', 6524: '유명해서', 6525: '유발', 6526: '유발하는', 6527: '유승준', 6528: '유아용', 6529: '유아인', 6530: '유익한', 6531: '유일하게', 6532: '유일한', 6533: '유치', 6534: '유치뽕짝', 6535: '유치하게', 6536: '유치하고', 6537: '유치하기', 6538: '유치하다', 6539: '유치하다.', 6540: '유치하다..', 6541: '유치하다고', 6542: '유치하지', 6543: '유치하지만', 6544: '유치한', 6545: '유치함', 6546: '유치함.', 6547: '유치함의', 6548: '유치해', 6549: '유치해서', 6550: '유쾌', 6551: '유쾌하게', 6552: '유쾌하고', 6553: '유쾌한', 6554: '으', 6555: '으로', 6556: '으리!', 6557: '으리가', 6558: '은', 6559: '은근', 6560: '은근히', 6561: '은은한', 6562: '을', 6563: '음', 6564: '음..', 6565: '음...', 6566: '음....', 6567: '음식', 6568: '음악', 6569: '음악,', 6570: '음악.', 6571: '음악과', 6572: '음악까지', 6573: '음악도', 6574: '음악만', 6575: '음악에', 6576: '음악으로', 6577: '음악은', 6578: '음악을', 6579: '음악의', 6580: '음악이', 6581: '음향', 6582: '응?', 6583: '응원합니다.', 6584: '의', 6585: '의도가', 6586: '의도는', 6587: 
'의도로', 6588: '의도를', 6589: '의도한', 6590: '의리', 6591: '의리로', 6592: '의문을', 6593: '의문이', 6594: '의미', 6595: '의미가', 6596: '의미는', 6597: '의미도', 6598: '의미로', 6599: '의미를', 6600: '의미없는', 6601: '의미에서', 6602: '의미있는', 6603: '의사가', 6604: '의상', 6605: '의심이', 6606: '의외로', 6607: '의외의', 6608: '의지가', 6609: '의한', 6610: '의해', 6611: '이', 6612: '이감독', 6613: '이거', 6614: '이거?', 6615: '이거는', 6616: '이거랑', 6617: '이거밖에', 6618: '이거보고', 6619: '이거보느니', 6620: '이거보다', 6621: '이거보단', 6622: '이거보면', 6623: '이건', 6624: '이건...', 6625: '이건머', 6626: '이건뭐', 6627: '이건좀', 6628: '이건진짜', 6629: '이걸', 6630: '이걸로', 6631: '이걸보고', 6632: '이것', 6633: '이것도', 6634: '이것만', 6635: '이것밖에', 6636: '이것보다', 6637: '이것보단', 6638: '이것은', 6639: '이것을', 6640: '이것이', 6641: '이것저것', 6642: '이게', 6643: '이게뭐야', 6644: '이게왜', 6645: '이기는', 6646: '이기적이고', 6647: '이기적인', 6648: '이길', 6649: '이끌어', 6650: '이나영', 6651: '이다', 6652: '이다.', 6653: '이도', 6654: '이도저도', 6655: '이드라마', 6656: '이들에게', 6657: '이들은', 6658: '이들을', 6659: '이들의', 6660: '이들이', 6661: '이따구로', 6662: '이따위', 6663: '이따위로', 6664: '이딴', 6665: '이딴거', 6666: '이딴걸', 6667: '이딴것도', 6668: '이딴게', 6669: '이딴영화', 6670: '이딴영화가', 6671: '이때', 6672: '이때까지', 6673: '이때부터', 6674: '이라', 6675: '이라고', 6676: '이라는', 6677: '이라도', 6678: '이란', 6679: '이래', 6680: '이래?', 6681: '이래서', 6682: '이러니', 6683: '이러면', 6684: '이러한', 6685: '이런', 6686: '이런거', 6687: '이런건', 6688: '이런걸', 6689: '이런것도', 6690: '이런게', 6691: '이런데', 6692: '이런드라마', 6693: '이런류의', 6694: '이런식으로', 6695: '이런영화', 6696: '이런영화가', 6697: '이런영화는', 6698: '이런영화도', 6699: '이런영화를', 6700: '이런영화에', 6701: '이런일이', 6702: '이런저런', 6703: '이럴', 6704: '이럴거면', 6705: '이렇게', 6706: '이렇게까지', 6707: '이렇게나', 6708: '이렇게도', 6709: '이렇게밖에', 6710: '이렇지', 6711: '이루는', 6712: '이루어진', 6713: '이룬', 6714: '이를', 6715: '이름', 6716: '이름도', 6717: '이름만', 6718: '이름에', 6719: '이름으로', 6720: '이름을', 6721: '이름이', 6722: '이리', 6723: '이리도', 6724: '이리저리', 6725: '이만큼', 6726: '이만한', 6727: '이미', 6728: '이미지', 6729: '이미지가', 6730: '이민기', 6731: '이번', 6732: '이번에', 6733: '이번에는', 6734: '이번엔', 6735: '이범수', 6736: '이병헌', 6737: '이보다', 6738: '이보단', 6739: '이뻐', 6740: '이뻐서', 6741: '이쁘게', 6742: '이쁘고', 6743: '이쁘다', 6744: '이쁘다.', 6745: '이쁜', 6746: '이쁨', 6747: '이상', 6748: '이상도', 6749: '이상우', 6750: '이상으로', 6751: '이상은', 6752: '이상을', 6753: '이상의', 6754: '이상이', 6755: '이상하게', 6756: '이상하고', 6757: '이상하다', 6758: '이상하다.', 6759: '이상한', 6760: '이상해', 6761: '이상해서', 6762: '이세상에', 6763: '이소룡', 6764: '이소룡이', 6765: '이수근', 6766: '이스트우드', 6767: '이승기', 6768: '이시대에', 6769: '이야', 6770: '이야기', 6771: '이야기,', 6772: '이야기.', 6773: '이야기..', 6774: '이야기...', 6775: '이야기가', 6776: '이야기는', 6777: '이야기도', 6778: '이야기들이', 6779: '이야기라', 6780: '이야기로', 6781: '이야기를', 6782: '이야기에', 6783: '이야기와', 6784: '이야기의', 6785: '이야기지만', 6786: '이어', 6787: '이어서', 6788: '이어지는', 6789: '이연걸', 6790: '이연걸의', 6791: '이연걸이', 6792: '이연희', 6793: '이영화', 6794: '이영화가', 6795: '이영화는', 6796: '이영화도', 6797: '이영화로', 6798: '이영화를', 6799: '이영화보고', 6800: '이영화에', 6801: '이영화에서', 6802: '이영화의', 6803: '이용한', 6804: '이용해', 6805: '이유', 6806: '이유.', 6807: '이유가', 6808: '이유는', 6809: '이유도', 6810: '이유로', 6811: '이유를', 6812: '이유없이', 6813: '이은', 6814: '이의', 6815: '이전', 6816: '이전에', 6817: '이정도', 6818: '이정도는', 6819: '이정도로', 6820: '이정도면', 6821: '이정도의', 6822: '이정재', 6823: '이제', 6824: '이제껏', 6825: '이제는', 6826: '이제서야', 6827: '이제야', 6828: '이제와서', 6829: '이젠', 6830: '이지만', 6831: '이처럼', 6832: '이탈리아', 6833: '이토록', 6834: '이하', 6835: '이하도', 6836: '이하의', 6837: '이해', 6838: '이해가', 6839: '이해는', 6840: '이해도', 6841: '이해를', 6842: '이해불가', 6843: '이해안되는', 6844: '이해안됨', 6845: '이해하고', 6846: '이해하기', 6847: '이해하는', 6848: '이해하지', 6849: '이해할', 6850: '이해할수', 6851: '이해할수없는', 6852: '이후', 6853: '이후로', 
6854: '이후에', 6855: '이후의', 6856: '익숙한', 6857: '인', 6858: '인가', 6859: '인간', 6860: '인간들', 6861: '인간들은', 6862: '인간들이', 6863: '인간에', 6864: '인간은', 6865: '인간을', 6866: '인간의', 6867: '인간이', 6868: '인간적으로', 6869: '인간적인', 6870: '인것', 6871: '인기', 6872: '인기가', 6873: '인내심', 6874: '인내심을', 6875: '인데', 6876: '인도', 6877: '인도영화', 6878: '인디아나', 6879: '인류의', 6880: '인물', 6881: '인물들', 6882: '인물들의', 6883: '인물들이', 6884: '인물을', 6885: '인물의', 6886: '인물이', 6887: '인상', 6888: '인상깊게', 6889: '인상깊다.', 6890: '인상깊었던', 6891: '인상깊은', 6892: '인상이', 6893: '인상적', 6894: '인상적.', 6895: '인상적이다.', 6896: '인상적인', 6897: '인생', 6898: '인생과', 6899: '인생에', 6900: '인생에서', 6901: '인생영화', 6902: '인생은', 6903: '인생을', 6904: '인생의', 6905: '인생이', 6906: '인연이', 6907: '인정', 6908: '인종차별', 6909: '인줄', 6910: '인지', 6911: '인터넷', 6912: '인피니트', 6913: '인한', 6914: '인해', 6915: '일', 6916: '일깨워', 6917: '일깨워주는', 6918: '일깨워준', 6919: '일단', 6920: '일반', 6921: '일반인이', 6922: '일방적인', 6923: '일본', 6924: '일본식', 6925: '일본에', 6926: '일본에서', 6927: '일본영화', 6928: '일본영화가', 6929: '일본영화는', 6930: '일본영화의', 6931: '일본은', 6932: '일본을', 6933: '일본의', 6934: '일본이', 6935: '일본인', 6936: '일본판', 6937: '일부', 6938: '일부러', 6939: '일상', 6940: '일상을', 6941: '일상의', 6942: '일상적인', 6943: '일어나는', 6944: '일어나지', 6945: '일어난', 6946: '일어날', 6947: '일요일', 6948: '일은', 6949: '일을', 6950: '일이', 6951: '일점도', 6952: '일찍', 6953: '일품', 6954: '읽고', 6955: '읽는', 6956: '읽어보고', 6957: '읽은', 6958: '잃게', 6959: '잃고', 6960: '잃어버린', 6961: '잃은', 6962: '잃지', 6963: '임', 6964: '임권택', 6965: '임수정', 6966: '임창정', 6967: '임창정의', 6968: '임청하', 6969: '임팩트', 6970: '임팩트가', 6971: '입', 6972: '입가에', 6973: '입고', 6974: '입니다', 6975: '입니다.', 6976: '입에', 6977: '입은', 6978: '입을', 6979: '입장에서', 6980: '입장이', 6981: '잇고', 6982: '잇는', 6983: '있게', 6984: '있겠지', 6985: '있겠지만', 6986: '있고', 6987: '있고,', 6988: '있구나', 6989: '있구나.', 6990: '있기는', 6991: '있기에', 6992: '있긴', 6993: '있나', 6994: '있나.', 6995: '있나..', 6996: '있나?', 6997: '있나요?', 6998: '있냐', 6999: '있네', 7000: '있네요', 7001: '있네요.', 7002: '있는', 7003: '있는..', 7004: '있는가?', 7005: '있는거', 7006: '있는건', 7007: '있는것', 7008: '있는것도', 7009: '있는게', 7010: '있는데', 7011: '있는데,', 7012: '있는영화', 7013: '있는지', 7014: '있다', 7015: '있다!', 7016: '있다.', 7017: '있다..', 7018: '있다...', 7019: '있다가', 7020: '있다고', 7021: '있다는', 7022: '있다는걸', 7023: '있다는게', 7024: '있다니', 7025: '있다면', 7026: '있다면..', 7027: '있던', 7028: '있도록', 7029: '있습니다', 7030: '있습니다.', 7031: '있어', 7032: '있어도', 7033: '있어보이려고', 7034: '있어서', 7035: '있어야', 7036: '있어요', 7037: '있어요.', 7038: '있었고', 7039: '있었는데', 7040: '있었다', 7041: '있었다.', 7042: '있었다면', 7043: '있었던', 7044: '있었습니다.', 7045: '있었으나', 7046: '있었으면', 7047: '있었을', 7048: '있었을까?', 7049: '있었음', 7050: '있었지만', 7051: '있었지만,', 7052: '있으나', 7053: '있으니', 7054: '있으면', 7055: '있을', 7056: '있을까', 7057: '있을까.', 7058: '있을까..', 7059: '있을까...', 7060: '있을까?', 7061: '있을때', 7062: '있을수', 7063: '있음', 7064: '있음.', 7065: '있지', 7066: '있지.', 7067: '있지?', 7068: '있지만', 7069: '있지만,', 7070: '잊고', 7071: '잊을', 7072: '잊을수', 7073: '잊을수가', 7074: '잊지', 7075: '잊지못할', 7076: '잊혀지지', 7077: '잊혀지지가', 7078: '자', 7079: '자격이', 7080: '자고', 7081: '자극적이고', 7082: '자극적인', 7083: '자극하는', 7084: '자기', 7085: '자기가', 7086: '자꾸', 7087: '자꾸만', 7088: '자는', 7089: '자다가', 7090: '자동차', 7091: '자리가', 7092: '자리를', 7093: '자리에서', 7094: '자막', 7095: '자막으로', 7096: '자막을', 7097: '자막이', 7098: '자본주의', 7099: '자세한', 7100: '자세히', 7101: '자식', 7102: '자식을', 7103: '자신', 7104: '자신들의', 7105: '자신만의', 7106: '자신에', 7107: '자신에게', 7108: '자신을', 7109: '자신의', 7110: '자신이', 7111: '자연과', 7112: '자연스러운', 7113: '자연스럽게', 7114: '자연스럽고', 7115: '자연스럽지', 7116: '자연스레', 7117: '자연의', 7118: '자유롭게', 7119: '자유를', 7120: '자의', 7121: '자주', 7122: '자체', 7123: '자체.', 
7124: '자체...', 7125: '자체가', 7126: '자체는', 7127: '자체도', 7128: '자체로', 7129: '자체를', 7130: '자체에', 7131: '자체의', 7132: '자칫', 7133: '자칭', 7134: '작', 7135: '작가', 7136: '작가가', 7137: '작가는', 7138: '작가님', 7139: '작가의', 7140: '작고', 7141: '작년에', 7142: '작위적이고', 7143: '작위적인', 7144: '작은', 7145: '작품', 7146: '작품!', 7147: '작품!!', 7148: '작품,', 7149: '작품.', 7150: '작품..', 7151: '작품...', 7152: '작품도', 7153: '작품들', 7154: '작품성', 7155: '작품성도', 7156: '작품성은', 7157: '작품성을', 7158: '작품성이', 7159: '작품에', 7160: '작품에서', 7161: '작품으로', 7162: '작품은', 7163: '작품을', 7164: '작품의', 7165: '작품이', 7166: '작품이다', 7167: '작품이다.', 7168: '작품이라', 7169: '작품이라고', 7170: '작품이었다.', 7171: '작품인데', 7172: '작품입니다', 7173: '작품입니다.', 7174: '작품중', 7175: '잔', 7176: '잔뜩', 7177: '잔인하게', 7178: '잔인하고', 7179: '잔인하기만', 7180: '잔인하지도', 7181: '잔인하지만', 7182: '잔인한', 7183: '잔잔하게', 7184: '잔잔하고', 7185: '잔잔하니', 7186: '잔잔하면서', 7187: '잔잔하면서도', 7188: '잔잔하지만', 7189: '잔잔한', 7190: '잔잔히', 7191: '잔혹한', 7192: '잘', 7193: '잘나가다가', 7194: '잘도', 7195: '잘된', 7196: '잘만', 7197: '잘만든', 7198: '잘만들어진', 7199: '잘만들었다', 7200: '잘만들었다.', 7201: '잘못', 7202: '잘못된', 7203: '잘못을', 7204: '잘보고', 7205: '잘봤다.', 7206: '잘봤습니다', 7207: '잘봤습니다.', 7208: '잘봤어요', 7209: '잘생기고', 7210: '잘생긴', 7211: '잘생김', 7212: '잘어울리는', 7213: '잘하고', 7214: '잘하네', 7215: '잘하네요', 7216: '잘하는', 7217: '잘하시고', 7218: '잘한', 7219: '잘한다', 7220: '잘한다.', 7221: '잘함', 7222: '잘해', 7223: '잘해서', 7224: '잠', 7225: '잠깐', 7226: '잠든', 7227: '잠수함', 7228: '잠시', 7229: '잠시나마', 7230: '잠을', 7231: '잠이', 7232: '잠이나', 7233: '잡고', 7234: '잡는', 7235: '잡은', 7236: '잡을', 7237: '잤다', 7238: '잤다.', 7239: '잤음', 7240: '장', 7241: '장국영', 7242: '장근석', 7243: '장나라', 7244: '장난', 7245: '장난으로', 7246: '장난이', 7247: '장난하나', 7248: '장난하냐', 7249: '장난하냐?', 7250: '장르', 7251: '장르가', 7252: '장르는', 7253: '장르를', 7254: '장르에', 7255: '장르의', 7256: '장면', 7257: '장면,', 7258: '장면.', 7259: '장면과', 7260: '장면도', 7261: '장면들', 7262: '장면들은', 7263: '장면들이', 7264: '장면마다', 7265: '장면만', 7266: '장면에', 7267: '장면에서', 7268: '장면에선', 7269: '장면으로', 7270: '장면은', 7271: '장면을', 7272: '장면의', 7273: '장면이', 7274: '장점을', 7275: '장진', 7276: '장혁', 7277: '재', 7278: '재개봉', 7279: '재난', 7280: '재난영화', 7281: '재난영화의', 7282: '재능이', 7283: '재대로', 7284: '재미', 7285: '재미,', 7286: '재미.', 7287: '재미가', 7288: '재미까지', 7289: '재미나게', 7290: '재미난', 7291: '재미는', 7292: '재미도', 7293: '재미도없고', 7294: '재미로', 7295: '재미를', 7296: '재미만', 7297: '재미없게', 7298: '재미없고', 7299: '재미없고,', 7300: '재미없네', 7301: '재미없네.', 7302: '재미없네..', 7303: '재미없네...', 7304: '재미없네요', 7305: '재미없네요.', 7306: '재미없는', 7307: '재미없는데', 7308: '재미없다', 7309: '재미없다.', 7310: '재미없다..', 7311: '재미없다...', 7312: '재미없다고', 7313: '재미없다는', 7314: '재미없습니다.', 7315: '재미없어', 7316: '재미없어..', 7317: '재미없어도', 7318: '재미없어서', 7319: '재미없어요', 7320: '재미없어요.', 7321: '재미없었는데', 7322: '재미없었다', 7323: '재미없었다.', 7324: '재미없었던', 7325: '재미없었음', 7326: '재미없었음.', 7327: '재미없으면', 7328: '재미없을', 7329: '재미없음', 7330: '재미없음.', 7331: '재미없음..', 7332: '재미없음...', 7333: '재미에', 7334: '재미와', 7335: '재미있게', 7336: '재미있고', 7337: '재미있고,', 7338: '재미있네요', 7339: '재미있네요.', 7340: '재미있는', 7341: '재미있는데', 7342: '재미있다', 7343: '재미있다!', 7344: '재미있다.', 7345: '재미있다..', 7346: '재미있다고', 7347: '재미있던데', 7348: '재미있습니다', 7349: '재미있습니다.', 7350: '재미있어', 7351: '재미있어서', 7352: '재미있어요', 7353: '재미있어요!', 7354: '재미있어요.', 7355: '재미있어요^^', 7356: '재미있었고', 7357: '재미있었는데', 7358: '재미있었다', 7359: '재미있었다.', 7360: '재미있었던', 7361: '재미있었습니다', 7362: '재미있었습니다.', 7363: '재미있었어요', 7364: '재미있었어요!', 7365: '재미있었어요.', 7366: '재미있었음', 7367: '재미있을', 7368: '재미있음', 7369: '재미있음.', 7370: '재밋게', 7371: '재밋고', 7372: '재밋네요', 7373: '재밋는', 7374: '재밋는데', 7375: '재밋다', 7376: '재밋다.', 7377: '재밋다고', 7378: '재밋어', 7379: '재밋어요', 7380: 
'재밋음', 7381: '재밌게', 7382: '재밌게본', 7383: '재밌게봄', 7384: '재밌게봤는데', 7385: '재밌게봤다', 7386: '재밌게봤습니다', 7387: '재밌게봤어요', 7388: '재밌게봤음', 7389: '재밌고', 7390: '재밌고,', 7391: '재밌기만', 7392: '재밌긴', 7393: '재밌네', 7394: '재밌네.', 7395: '재밌네요', 7396: '재밌네요.', 7397: '재밌네요^^', 7398: '재밌는', 7399: '재밌는데', 7400: '재밌는데?', 7401: '재밌는영화', 7402: '재밌다', 7403: '재밌다!', 7404: '재밌다!!', 7405: '재밌다.', 7406: '재밌다..', 7407: '재밌다고', 7408: '재밌다는', 7409: '재밌당', 7410: '재밌던데', 7411: '재밌습니다', 7412: '재밌습니다!', 7413: '재밌습니다.', 7414: '재밌어', 7415: '재밌어서', 7416: '재밌어요', 7417: '재밌어요!', 7418: '재밌어요!!', 7419: '재밌어요.', 7420: '재밌어요^^', 7421: '재밌어요~', 7422: '재밌었고', 7423: '재밌었는데', 7424: '재밌었는데..', 7425: '재밌었는데...', 7426: '재밌었다', 7427: '재밌었다.', 7428: '재밌었다..', 7429: '재밌었던', 7430: '재밌었습니다.', 7431: '재밌었어요', 7432: '재밌었어요!', 7433: '재밌었어요.', 7434: '재밌었어요~', 7435: '재밌었음', 7436: '재밌었음.', 7437: '재밌으면', 7438: '재밌을', 7439: '재밌음', 7440: '재밌음!', 7441: '재밌음.', 7442: '재밌지', 7443: '재밌지는', 7444: '재밌지도', 7445: '재밌지만', 7446: '재치있는', 7447: '재평가', 7448: '잭', 7449: '잼', 7450: '잼나게', 7451: '잼나요', 7452: '잼씀', 7453: '잼없다', 7454: '잼없음', 7455: '잼있게', 7456: '잼있고', 7457: '잼있네요', 7458: '잼있는', 7459: '잼있는데', 7460: '잼있다', 7461: '잼있다.', 7462: '잼있어요', 7463: '잼있어요...', 7464: '잼있었다', 7465: '잼있었음', 7466: '잼있음', 7467: '저', 7468: '저거', 7469: '저건', 7470: '저것도', 7471: '저게', 7472: '저급한', 7473: '저기', 7474: '저는', 7475: '저도', 7476: '저랑', 7477: '저런', 7478: '저렇게', 7479: '저렴한', 7480: '저를', 7481: '저리', 7482: '저만', 7483: '저버리지', 7484: '저에게', 7485: '저에겐', 7486: '저예산', 7487: '저예산으로', 7488: '저의', 7489: '저절로', 7490: '저지른', 7491: '저질', 7492: '저평가', 7493: '저평가된', 7494: '저희', 7495: '적', 7496: '적극', 7497: '적나라하게', 7498: '적당', 7499: '적당하다', 7500: '적당한', 7501: '적당히', 7502: '적어도', 7503: '적은', 7504: '적을', 7505: '적이', 7506: '적인', 7507: '적절하게', 7508: '적절한', 7509: '적절히', 7510: '전', 7511: '전개', 7512: '전개,', 7513: '전개.', 7514: '전개...', 7515: '전개가', 7516: '전개나', 7517: '전개는', 7518: '전개도', 7519: '전개되는', 7520: '전개로', 7521: '전개를', 7522: '전개에', 7523: '전개와', 7524: '전개의', 7525: '전기세가', 7526: '전까지는', 7527: '전달하고자', 7528: '전달하는', 7529: '전달하려는', 7530: '전라도', 7531: '전문', 7532: '전문가', 7533: '전반적으로', 7534: '전반적인', 7535: '전부', 7536: '전부다', 7537: '전부인', 7538: '전설', 7539: '전설의', 7540: '전설이', 7541: '전설적인', 7542: '전성기', 7543: '전에', 7544: '전율이', 7545: '전의', 7546: '전작', 7547: '전작보다', 7548: '전작에', 7549: '전작을', 7550: '전작의', 7551: '전쟁', 7552: '전쟁과', 7553: '전쟁에', 7554: '전쟁영화', 7555: '전쟁은', 7556: '전쟁을', 7557: '전쟁의', 7558: '전쟁이', 7559: '전적으로', 7560: '전지현', 7561: '전체', 7562: '전체가', 7563: '전체를', 7564: '전체적으로', 7565: '전체적인', 7566: '전편', 7567: '전편과', 7568: '전편보다', 7569: '전편에', 7570: '전편을', 7571: '전편의', 7572: '전하고', 7573: '전하고자', 7574: '전하는', 7575: '전해주는', 7576: '전해지는', 7577: '전혀', 7578: '전혀없는', 7579: '전형적', 7580: '전형적인', 7581: '절대', 7582: '절대로', 7583: '절라', 7584: '절로', 7585: '절망적인', 7586: '절묘한', 7587: '절실히', 7588: '절제된', 7589: '젊은', 7590: '젊은이들의', 7591: '점', 7592: '점도', 7593: '점수', 7594: '점수가', 7595: '점수는', 7596: '점수를', 7597: '점수준것들', 7598: '점에서', 7599: '점은', 7600: '점을', 7601: '점이', 7602: '점점', 7603: '접한', 7604: '정', 7605: '정당화', 7606: '정도', 7607: '정도.', 7608: '정도..', 7609: '정도...', 7610: '정도가', 7611: '정도는', 7612: '정도다.', 7613: '정도로', 7614: '정도면', 7615: '정도의', 7616: '정말', 7617: '정말,', 7618: '정말.', 7619: '정말..', 7620: '정말...', 7621: '정말....', 7622: '정말로', 7623: '정말이지', 7624: '정말정말', 7625: '정보', 7626: '정서가', 7627: '정서를', 7628: '정서에', 7629: '정석', 7630: '정신', 7631: '정신건강에', 7632: '정신나간', 7633: '정신병자', 7634: '정신없고', 7635: '정신없는', 7636: '정신없이', 7637: '정신을', 7638: '정신이', 7639: '정우성', 7640: '정을', 7641: '정의가', 7642: '정의는', 7643: '정이', 7644: '정작', 7645: 
'정적인', 7646: '정점을', 7647: '정주행', 7648: '정체가', 7649: '정체성을', 7650: '정치', 7651: '정치적', 7652: '정통', 7653: '정확하게', 7654: '정확한', 7655: '정확히', 7656: '제', 7657: '제2의', 7658: '제가', 7659: '제니퍼', 7660: '제대로', 7661: '제대로된', 7662: '제레미', 7663: '제로', 7664: '제목', 7665: '제목과', 7666: '제목도', 7667: '제목만', 7668: '제목보고', 7669: '제목부터', 7670: '제목에', 7671: '제목으로', 7672: '제목은', 7673: '제목을', 7674: '제목이', 7675: '제목이랑', 7676: '제목처럼', 7677: '제발', 7678: '제법', 7679: '제시카', 7680: '제왕', 7681: '제외하고', 7682: '제외하고는', 7683: '제외하면', 7684: '제외한', 7685: '제이슨', 7686: '제일', 7687: '제임스', 7688: '제작', 7689: '제작된', 7690: '제작비', 7691: '제작비가', 7692: '제작비로', 7693: '제작을', 7694: '제작진', 7695: '제작진은', 7696: '제작진의', 7697: '제작진이', 7698: '제작한', 7699: '제타', 7700: '제한)', 7701: '젠장', 7702: '젤', 7703: '조', 7704: '조금', 7705: '조금더', 7706: '조금도', 7707: '조금만', 7708: '조금씩', 7709: '조금은', 7710: '조금이나마', 7711: '조금이라도', 7712: '조낸', 7713: '조니뎁', 7714: '조승우', 7715: '조아요', 7716: '조악한', 7717: '조연', 7718: '조연들', 7719: '조연들의', 7720: '조용한', 7721: '조용히', 7722: '조작', 7723: '조잡하고', 7724: '조잡한', 7725: '조절', 7726: '조지', 7727: '조차', 7728: '조카', 7729: '조카가', 7730: '조폭', 7731: '조합', 7732: '조합.', 7733: '조합이', 7734: '조화', 7735: '조화가', 7736: '조화를', 7737: '존', 7738: '존1나', 7739: '존경합니다.', 7740: '존나', 7741: '존나게', 7742: '존내', 7743: '존재', 7744: '존재가', 7745: '존재감', 7746: '존재의', 7747: '존재하는', 7748: '존재하지', 7749: '존잼', 7750: '졸', 7751: '졸라', 7752: '졸려', 7753: '졸려서', 7754: '졸면서', 7755: '졸음이', 7756: '졸작', 7757: '졸작.', 7758: '졸작..', 7759: '졸작으로', 7760: '졸작을', 7761: '졸작이', 7762: '졸작이다', 7763: '졸작이다.', 7764: '졸잼', 7765: '좀', 7766: '좀..', 7767: '좀...', 7768: '좀더', 7769: '좀만', 7770: '좀비', 7771: '좀비가', 7772: '좀비영화', 7773: '종교를', 7774: '종교적', 7775: '종교적인', 7776: '종류의', 7777: '종종', 7778: '종합', 7779: '좋게', 7780: '좋겠네요', 7781: '좋겠네요.', 7782: '좋겠다', 7783: '좋겠다.', 7784: '좋겠다..', 7785: '좋겠습니다', 7786: '좋겠습니다.', 7787: '좋겠어요', 7788: '좋겠어요.', 7789: '좋고', 7790: '좋고,', 7791: '좋고.', 7792: '좋고..', 7793: '좋구', 7794: '좋네', 7795: '좋네요', 7796: '좋네요.', 7797: '좋네요~', 7798: '좋다', 7799: '좋다!', 7800: '좋다.', 7801: '좋다..', 7802: '좋다...', 7803: '좋다~', 7804: '좋다고', 7805: '좋다는', 7806: '좋더라', 7807: '좋습니다', 7808: '좋습니다.', 7809: '좋아', 7810: '좋아.', 7811: '좋아도', 7812: '좋아라', 7813: '좋아서', 7814: '좋아요', 7815: '좋아요!', 7816: '좋아요.', 7817: '좋아요~', 7818: '좋아용', 7819: '좋아지는', 7820: '좋아하게', 7821: '좋아하고', 7822: '좋아하네요', 7823: '좋아하는', 7824: '좋아하는데', 7825: '좋아하던', 7826: '좋아하면', 7827: '좋아하시는', 7828: '좋아하지', 7829: '좋아하지만', 7830: '좋아한다면', 7831: '좋아할', 7832: '좋아할만한', 7833: '좋아해서', 7834: '좋아해요', 7835: '좋아했는데', 7836: '좋아했던', 7837: '좋았고', 7838: '좋았고,', 7839: '좋았는데', 7840: '좋았는데..', 7841: '좋았는데...', 7842: '좋았다', 7843: '좋았다.', 7844: '좋았다..', 7845: '좋았다고', 7846: '좋았던', 7847: '좋았습니다', 7848: '좋았습니다.', 7849: '좋았어', 7850: '좋았어요', 7851: '좋았어요!', 7852: '좋았어요.', 7853: '좋았으나', 7854: '좋았을', 7855: '좋았을텐데', 7856: '좋았음', 7857: '좋았음.', 7858: '좋았지만', 7859: '좋았지만,', 7860: '좋으나', 7861: '좋으면', 7862: '좋은', 7863: '좋은거', 7864: '좋은게', 7865: '좋은데', 7866: '좋은데,', 7867: '좋은데..', 7868: '좋은영화', 7869: '좋은영화.', 7870: '좋을', 7871: '좋을듯', 7872: '좋을듯..', 7873: '좋음', 7874: '좋음.', 7875: '좋지', 7876: '좋지만', 7877: '좋지만,', 7878: '죄가', 7879: '죄다', 7880: '죄를', 7881: '죄송하지만', 7882: '죄없는', 7883: '주', 7884: '주고', 7885: '주고싶다', 7886: '주고싶다.', 7887: '주고싶은', 7888: '주관적인', 7889: '주구장창', 7890: '주기', 7891: '주기도', 7892: '주는', 7893: '주는거', 7894: '주는건', 7895: '주는데', 7896: '주려고', 7897: '주말에', 7898: '주면', 7899: '주변', 7900: '주변에', 7901: '주성치', 7902: '주성치의', 7903: '주세요', 7904: '주셔서', 7905: '주어진', 7906: '주연', 7907: '주연배우', 7908: '주연배우의', 7909: '주연으로', 7910: '주연을', 7911: '주연의', 7912: '주연이', 7913: 
'주옥같은', 7914: '주온', 7915: '주윤발', 7916: '주윤발의', 7917: '주윤발이', 7918: '주인공', 7919: '주인공과', 7920: '주인공도', 7921: '주인공들', 7922: '주인공들도', 7923: '주인공들의', 7924: '주인공들이', 7925: '주인공으로', 7926: '주인공은', 7927: '주인공을', 7928: '주인공의', 7929: '주인공이', 7930: '주인공인', 7931: '주인공처럼', 7932: '주장하는', 7933: '주제', 7934: '주제가', 7935: '주제는', 7936: '주제도', 7937: '주제로', 7938: '주제를', 7939: '주제에', 7940: '주지', 7941: '죽고', 7942: '죽기', 7943: '죽기전에', 7944: '죽는', 7945: '죽는다', 7946: '죽는줄', 7947: '죽어', 7948: '죽어가는', 7949: '죽어도', 7950: '죽어라', 7951: '죽어서', 7952: '죽은', 7953: '죽을', 7954: '죽을때', 7955: '죽을때까지', 7956: '죽을뻔', 7957: '죽음', 7958: '죽음.', 7959: '죽음에', 7960: '죽음은', 7961: '죽음을', 7962: '죽음의', 7963: '죽음이', 7964: '죽이고', 7965: '죽이네', 7966: '죽이는', 7967: '죽인', 7968: '죽인다.', 7969: '죽일', 7970: '죽지', 7971: '준', 7972: '준다', 7973: '준다.', 7974: '줄', 7975: '줄거리', 7976: '줄거리가', 7977: '줄거리는', 7978: '줄거리도', 7979: '줄거리를', 7980: '줄거리만', 7981: '줄거리에', 7982: '줄도', 7983: '줄리아', 7984: '줄리엣', 7985: '줄수', 7986: '줄은', 7987: '줄이야.', 7988: '줌', 7989: '줌.', 7990: '줍니다', 7991: '줍니다.', 7992: '중', 7993: '중2병', 7994: '중간', 7995: '중간부터', 7996: '중간에', 7997: '중간중간', 7998: '중간중간에', 7999: '중국', 8000: '중국영화', 8001: '중국은', 8002: '중국의', 8003: '중국이', 8004: '중년', 8005: '중딩때', 8006: '중반', 8007: '중반까지는', 8008: '중반부터', 8009: '중심으로', 8010: '중에', 8011: '중에서', 8012: '중에서는', 8013: '중에서도', 8014: '중요성을', 8015: '중요하지', 8016: '중요한', 8017: '중요한건', 8018: '중요한지', 8019: '중의', 8020: '중학교', 8021: '중학교때', 8022: '중학생', 8023: '중학생때', 8024: '줘', 8025: '줘도', 8026: '줘서', 8027: '줘야', 8028: '줬다', 8029: '쥐어짜는', 8030: '쥬라기', 8031: '즉', 8032: '즐거운', 8033: '즐거웠다.', 8034: '즐거웠던', 8035: '즐겁게', 8036: '즐겁고', 8037: '즐기는', 8038: '즐길', 8039: '증말', 8040: '지', 8041: '지가', 8042: '지겨운', 8043: '지겨워', 8044: '지겹고', 8045: '지겹다', 8046: '지겹다.', 8047: '지구', 8048: '지구를', 8049: '지구에', 8050: '지극히', 8051: '지금', 8052: '지금까지', 8053: '지금까지도', 8054: '지금껏', 8055: '지금도', 8056: '지금보니', 8057: '지금보면', 8058: '지금봐도', 8059: '지금에서야', 8060: '지금은', 8061: '지금의', 8062: '지금이', 8063: '지금이나', 8064: '지나', 8065: '지나가는', 8066: '지나고', 8067: '지나도', 8068: '지나면', 8069: '지나서', 8070: '지나지', 8071: '지나치게', 8072: '지나친', 8073: '지난', 8074: '지날수록', 8075: '지났는데도', 8076: '지났지만', 8077: '지는', 8078: '지닌', 8079: '지대로', 8080: '지독한', 8081: '지들', 8082: '지들끼리', 8083: '지들이', 8084: '지루', 8085: '지루...', 8086: '지루하게', 8087: '지루하고', 8088: '지루하고,', 8089: '지루하고..', 8090: '지루하기', 8091: '지루하기도', 8092: '지루하기만', 8093: '지루하네', 8094: '지루하네요', 8095: '지루하다', 8096: '지루하다.', 8097: '지루하다..', 8098: '지루하다...', 8099: '지루하다고', 8100: '지루하다는', 8101: '지루하지', 8102: '지루하지도', 8103: '지루하지만', 8104: '지루하지않고', 8105: '지루하진', 8106: '지루한', 8107: '지루한건', 8108: '지루할', 8109: '지루함', 8110: '지루함.', 8111: '지루함..', 8112: '지루함...', 8113: '지루함과', 8114: '지루함을', 8115: '지루함의', 8116: '지루함이', 8117: '지루합니다.', 8118: '지루해', 8119: '지루해.', 8120: '지루해서', 8121: '지루했다', 8122: '지루했다.', 8123: '지루했다..', 8124: '지루했던', 8125: '지루했어요', 8126: '지루했음', 8127: '지르고', 8128: '지브리', 8129: '지저분한', 8130: '지친', 8131: '지키는', 8132: '직접', 8133: '진', 8134: '진리', 8135: '진리를', 8136: '진부하고', 8137: '진부하다.', 8138: '진부한', 8139: '진솔한', 8140: '진수', 8141: '진수.', 8142: '진수를', 8143: '진실', 8144: '진실된', 8145: '진실은', 8146: '진실을', 8147: '진실이', 8148: '진심', 8149: '진심으로', 8150: '진자', 8151: '진작', 8152: '진작에', 8153: '진정', 8154: '진정으로', 8155: '진정한', 8156: '진지하게', 8157: '진지하고', 8158: '진지한', 8159: '진짜', 8160: '진짜.', 8161: '진짜..', 8162: '진짜...', 8163: '진짜....', 8164: '진짜로', 8165: '진짜진짜', 8166: '진하게', 8167: '진한', 8168: '진행', 8169: '진행도', 8170: '진행되는', 8171: '진행이', 8172: '질', 8173: '질리지', 8174: '질리지않는', 8175: '질이', 8176: '질질', 8177: '질질끌고', 8178: '짐', 8179: '짐캐리', 
8180: '집', 8181: '집에', 8182: '집에서', 8183: '집중', 8184: '집중도', 8185: '집중력이', 8186: '집중을', 8187: '집중이', 8188: '집중하게', 8189: '집중할', 8190: '집중해서', 8191: '집착하는', 8192: '짓', 8193: '짓을', 8194: '짙게', 8195: '짙은', 8196: '짜', 8197: '짜고', 8198: '짜리', 8199: '짜릿한', 8200: '짜여진', 8201: '짜임새', 8202: '짜증', 8203: '짜증나', 8204: '짜증나게', 8205: '짜증나고', 8206: '짜증나네', 8207: '짜증나는', 8208: '짜증나서', 8209: '짜증난다', 8210: '짜증난다.', 8211: '짜증남', 8212: '짜증만', 8213: '짜증을', 8214: '짜증이', 8215: '짜집기', 8216: '짝이', 8217: '짝이없는', 8218: '짝퉁', 8219: '짠하고', 8220: '짠한', 8221: '짧게', 8222: '짧고', 8223: '짧아서', 8224: '짧은', 8225: '짧지만', 8226: '짱', 8227: '짱!', 8228: '짱!!', 8229: '짱!!!', 8230: '짱구', 8231: '짱나', 8232: '짱이다', 8233: '짱임', 8234: '짱짱', 8235: '짱짱!', 8236: '짱짱맨', 8237: '쩌네', 8238: '쩌는', 8239: '쩐다', 8240: '쩐다.', 8241: '쩔고', 8242: '쩔어', 8243: '쩝', 8244: '쩝..', 8245: '쩝...', 8246: '쪼금', 8247: '쪽', 8248: '쫌', 8249: '쭉', 8250: '쯤', 8251: '쯧쯧', 8252: '찌질이', 8253: '찌질한', 8254: '찍고', 8255: '찍는', 8256: '찍어', 8257: '찍어도', 8258: '찍어서', 8259: '찍으면', 8260: '찍은', 8261: '찍을', 8262: '찍지', 8263: '찜찜한', 8264: '찝찝하고', 8265: '찝찝한', 8266: '찡하게', 8267: '찡한', 8268: '차', 8269: '차가운', 8270: '차라리', 8271: '차마', 8272: '차승원', 8273: '차원이', 8274: '차이가', 8275: '차이를', 8276: '차태현', 8277: '착하고', 8278: '착한', 8279: '찬', 8280: '찬사를', 8281: '찬양하는', 8282: '찰리', 8283: '참', 8284: '참.', 8285: '참..', 8286: '참...', 8287: '참....', 8288: '참고', 8289: '참고로', 8290: '참나', 8291: '참다참다', 8292: '참된', 8293: '참신하고', 8294: '참신한', 8295: '참으로', 8296: '참을', 8297: '찾고', 8298: '찾기', 8299: '찾는', 8300: '찾아', 8301: '찾아가는', 8302: '찾아보고', 8303: '찾아보는', 8304: '찾아보니', 8305: '찾아볼', 8306: '찾아봐도', 8307: '찾아봤는데', 8308: '찾아서', 8309: '찾은', 8310: '찾을', 8311: '채', 8312: '채널', 8313: '채널에서', 8314: '채널을', 8315: '책', 8316: '책도', 8317: '책으로', 8318: '책으로도', 8319: '책은', 8320: '책을', 8321: '책의', 8322: '책이', 8323: '책임을', 8324: '챙겨보는', 8325: '처', 8326: '처럼', 8327: '처음', 8328: '처음.', 8329: '처음..', 8330: '처음...', 8331: '처음보는', 8332: '처음본다', 8333: '처음부터', 8334: '처음부터끝까지', 8335: '처음에', 8336: '처음에는', 8337: '처음엔', 8338: '처음으로', 8339: '처음이네', 8340: '처음이다', 8341: '처음이다.', 8342: '처음이자', 8343: '처음인', 8344: '처음임', 8345: '처음입니다.', 8346: '처절하게', 8347: '처절한', 8348: '척', 8349: '천재', 8350: '천재다.', 8351: '천재적인', 8352: '천하의', 8353: '철', 8354: '철없는', 8355: '철저하게', 8356: '철저한', 8357: '철저히', 8358: '철학을', 8359: '철학이', 8360: '철학적', 8361: '철학적인', 8362: '첨', 8363: '첨본다', 8364: '첨부터', 8365: '첨에', 8366: '첨엔', 8367: '첨으로', 8368: '첨이네', 8369: '첨이다', 8370: '첨이다.', 8371: '첩보', 8372: '첫', 8373: '첫번째', 8374: '첫사랑', 8375: '청소년', 8376: '청춘', 8377: '청춘들의', 8378: '청춘의', 8379: '청춘이', 8380: '쳐', 8381: '쳐도', 8382: '초', 8383: '초기', 8384: '초등학교', 8385: '초등학교때', 8386: '초등학생', 8387: '초등학생때', 8388: '초딩', 8389: '초딩도', 8390: '초딩때', 8391: '초딩이', 8392: '초반', 8393: '초반부', 8394: '초반부터', 8395: '초반에', 8396: '초반에는', 8397: '초반엔', 8398: '초반은', 8399: '초반의', 8400: '초심을', 8401: '초월하는', 8402: '초월한', 8403: '초점을', 8404: '초호화', 8405: '촌스러운', 8406: '촌스럽고', 8407: '촌스럽지', 8408: '총', 8409: '총으로', 8410: '총을', 8411: '총체적', 8412: '촬영', 8413: '최강', 8414: '최강의', 8415: '최강희', 8416: '최고', 8417: '최고!', 8418: '최고!!', 8419: '최고!!!', 8420: '최고!!!!', 8421: '최고,', 8422: '최고.', 8423: '최고..', 8424: '최고...', 8425: '최고....', 8426: '최고~', 8427: '최고네요', 8428: '최고네요.', 8429: '최고다', 8430: '최고다!', 8431: '최고다.', 8432: '최고다..', 8433: '최고다...', 8434: '최고라', 8435: '최고라고', 8436: '최고라는', 8437: '최고로', 8438: '최고봉', 8439: '최고봉.', 8440: '최고에요', 8441: '최고였다', 8442: '최고였다.', 8443: '최고였습니다.', 8444: '최고였어요', 8445: '최고의', 8446: '최고의영화', 8447: '최고인', 8448: '최고인듯', 8449: '최고임', 8450: '최고입니다', 8451: '최고입니다.', 8452: '최고최고', 
8453: '최근', 8454: '최근에', 8455: '최대', 8456: '최대의', 8457: '최대한', 8458: '최민수', 8459: '최선을', 8460: '최소', 8461: '최소한', 8462: '최소한의', 8463: '최악', 8464: '최악!', 8465: '최악.', 8466: '최악..', 8467: '최악...', 8468: '최악으로', 8469: '최악은', 8470: '최악의', 8471: '최악의영화', 8472: '최악이다', 8473: '최악이다.', 8474: '최악임', 8475: '최악입니다.', 8476: '최지우', 8477: '최초', 8478: '최초로', 8479: '최초의', 8480: '최후의', 8481: '쵝오', 8482: '추구하는', 8483: '추리', 8484: '추악한', 8485: '추억', 8486: '추억에', 8487: '추억으로', 8488: '추억은', 8489: '추억을', 8490: '추억의', 8491: '추억이', 8492: '추천', 8493: '추천!', 8494: '추천!!', 8495: '추천.', 8496: '추천으로', 8497: '추천하고', 8498: '추천하는', 8499: '추천한다', 8500: '추천한다.', 8501: '추천합니다', 8502: '추천합니다!', 8503: '추천합니다.', 8504: '추천해', 8505: '추천해주고', 8506: '축구', 8507: '출연', 8508: '출연자', 8509: '출연진', 8510: '출연진에', 8511: '출연하는', 8512: '출연한', 8513: '춤', 8514: '춤도', 8515: '춤을', 8516: '충격', 8517: '충격과', 8518: '충격을', 8519: '충격이', 8520: '충격적이고', 8521: '충격적인', 8522: '충무로', 8523: '충분하다.', 8524: '충분한', 8525: '충분히', 8526: '충실한', 8527: '취해', 8528: '취향', 8529: '취향은', 8530: '취향이', 8531: '치고', 8532: '치고는', 8533: '치곤', 8534: '치는', 8535: '치닫는', 8536: '치면', 8537: '치명적인', 8538: '치밀하고', 8539: '치밀한', 8540: '치열한', 8541: '친', 8542: '친구', 8543: '친구가', 8544: '친구는', 8545: '친구들', 8546: '친구들과', 8547: '친구들이', 8548: '친구들이랑', 8549: '친구랑', 8550: '친구를', 8551: '친구에게', 8552: '친구와', 8553: '친구한테', 8554: '카리스마', 8555: '카리스마가', 8556: '카메라', 8557: '카메론', 8558: '칼', 8559: '칼로', 8560: '캐리', 8561: '캐릭터', 8562: '캐릭터,', 8563: '캐릭터가', 8564: '캐릭터는', 8565: '캐릭터도', 8566: '캐릭터들', 8567: '캐릭터들과', 8568: '캐릭터들도', 8569: '캐릭터들은', 8570: '캐릭터들의', 8571: '캐릭터들이', 8572: '캐릭터로', 8573: '캐릭터를', 8574: '캐릭터에', 8575: '캐릭터와', 8576: '캐릭터의', 8577: '캐서린', 8578: '캐스팅', 8579: '캐스팅,', 8580: '캐스팅도', 8581: '캐스팅에', 8582: '캐스팅은', 8583: '캐스팅을', 8584: '캐스팅이', 8585: '캬', 8586: '커녕', 8587: '커다란', 8588: '커서', 8589: '커플', 8590: '커플이', 8591: '컬트', 8592: '컴퓨터', 8593: '케릭터가', 8594: '케미가', 8595: '케빈', 8596: '케서방', 8597: '케이블', 8598: '케이블로', 8599: '케이블에서', 8600: '케이지', 8601: '케이트', 8602: '코난', 8603: '코난은', 8604: '코난이', 8605: '코드가', 8606: '코메디', 8607: '코메디.', 8608: '코메디가', 8609: '코미디', 8610: '코미디.', 8611: '코미디가', 8612: '코미디는', 8613: '코미디도', 8614: '코미디로', 8615: '코미디를', 8616: '코미디에', 8617: '코미디와', 8618: '코미디의', 8619: '코믹', 8620: '코믹과', 8621: '코믹도', 8622: '코믹영화', 8623: '코믹한', 8624: '콜린', 8625: '콜린퍼스', 8626: '쿠엔틴', 8627: '퀄리티', 8628: '퀄리티가', 8629: '퀄리티는', 8630: '큐브', 8631: '크게', 8632: '크고', 8633: '크다.', 8634: '크레딧', 8635: '크리스', 8636: '크리스마스', 8637: '크리스찬', 8638: '크리스토퍼', 8639: '큰', 8640: '클라스', 8641: '클레멘타인', 8642: '클린트', 8643: '키아누', 8644: '킬링', 8645: '킬링타임', 8646: '킬링타임도', 8647: '킬링타임용', 8648: '킬링타임용도', 8649: '킬링타임용으로', 8650: '킬링타임용으로도', 8651: '킬링타임으로', 8652: '킬링타임으로도', 8653: '타고', 8654: '타는', 8655: '타란티노', 8656: '탁월한', 8657: '탄', 8658: '탄탄하고', 8659: '탄탄한', 8660: '탈을', 8661: '탑', 8662: '탕웨이', 8663: '태어나', 8664: '태어나서', 8665: '태어난', 8666: '터미네이터', 8667: '터지는', 8668: '터짐', 8669: '턱없이', 8670: '테러', 8671: '텐데', 8672: '토', 8673: '토끼를', 8674: '톰', 8675: '톰크루즈', 8676: '통쾌한', 8677: '통한', 8678: '통해', 8679: '통해서', 8680: '투', 8681: '투자를', 8682: '투자한', 8683: '퉤', 8684: '특별한', 8685: '특별히', 8686: '특수효과', 8687: '특유의', 8688: '특이하고', 8689: '특이한', 8690: '특히', 8691: '특히나', 8692: '틀고', 8693: '틀에', 8694: '틀을', 8695: '틈이', 8696: '티', 8697: '티가', 8698: '티비', 8699: '티비로', 8700: '티비서', 8701: '티비에', 8702: '티비에서', 8703: '팀', 8704: '파격적인', 8705: '파고드는', 8706: '파라노말', 8707: '파리의', 8708: '파워레인저', 8709: '팍팍', 8710: '판치는', 8711: '판타지', 8712: '판타지를', 8713: '판타지의', 8714: '팔', 8715: '팝콘', 8716: '패러디', 8717: '팬', 8718: '팬으로서', 8719: '팬으로써', 8720: '팬이', 8721: 
'팬이라', 8722: '팬이라면', 8723: '팬이라서', 8724: '팬이지만', 8725: '팬인데', 8726: '펑펑', 8727: '페이크', 8728: '편', 8729: '편.', 8730: '편견을', 8731: '편견이', 8732: '편안하게', 8733: '편안한', 8734: '편은', 8735: '편의', 8736: '편이', 8737: '편인데', 8738: '편집', 8739: '편집도', 8740: '편집은', 8741: '편집을', 8742: '편집이', 8743: '편하게', 8744: '편히', 8745: '펼쳐지는', 8746: '펼치는', 8747: '평', 8748: '평가', 8749: '평가가', 8750: '평가는', 8751: '평가를', 8752: '평가할', 8753: '평균', 8754: '평론가', 8755: '평론가는', 8756: '평론가들', 8757: '평론가들은', 8758: '평론가들이', 8759: '평범하지', 8760: '평범한', 8761: '평생', 8762: '평생을', 8763: '평소', 8764: '평소에', 8765: '평은', 8766: '평을', 8767: '평이', 8768: '평점', 8769: '평점과', 8770: '평점도', 8771: '평점만', 8772: '평점보고', 8773: '평점에', 8774: '평점으로', 8775: '평점은', 8776: '평점을', 8777: '평점이', 8778: '평점조절위원회', 8779: '평점좀', 8780: '평정', 8781: '포기', 8782: '포기하고', 8783: '포기한', 8784: '포르노', 8785: '포르노를', 8786: '포스', 8787: '포스가', 8788: '포스터', 8789: '포스터가', 8790: '포스터는', 8791: '포스터를', 8792: '포스터만', 8793: '포스터보고', 8794: '포스터에', 8795: '포스터의', 8796: '포인트가', 8797: '포인트를', 8798: '포장한', 8799: '포켓몬', 8800: '폭력', 8801: '폭력을', 8802: '폭력의', 8803: '폰', 8804: '폴', 8805: '폼만', 8806: '표정', 8807: '표정과', 8808: '표정이', 8809: '표현', 8810: '표현된', 8811: '표현은', 8812: '표현을', 8813: '표현의', 8814: '표현이', 8815: '표현하는', 8816: '표현한', 8817: '표현할', 8818: '표현했다.', 8819: '푸른', 8820: '푹', 8821: '풀어가는', 8822: '풀어내는', 8823: '풀어낸', 8824: '풋풋하고', 8825: '풋풋한', 8826: '풍경과', 8827: '풍경이', 8828: '풍기는', 8829: '풍부한', 8830: '풍자', 8831: '프랑스', 8832: '프랑스의', 8833: '프레디', 8834: '프로', 8835: '프로가', 8836: '프로그램', 8837: '프로그램이', 8838: '피가', 8839: '피를', 8840: '피아노', 8841: '피터', 8842: '피해자', 8843: '피해자가', 8844: '픽사', 8845: '픽사의', 8846: '필름', 8847: '필름을', 8848: '필름이', 8849: '필요', 8850: '필요가', 8851: '필요는', 8852: '필요도', 8853: '필요없고', 8854: '필요없는', 8855: '필요없다', 8856: '필요없다.', 8857: '필요없음', 8858: '필요없이', 8859: '필요하다', 8860: '필요하다.', 8861: '필요한', 8862: '필요한가?', 8863: '하', 8864: '하..', 8865: '하...', 8866: '하게', 8867: '하고', 8868: '하고,', 8869: '하고.', 8870: '하고..', 8871: '하고...', 8872: '하고싶은', 8873: '하기', 8874: '하기도', 8875: '하기에는', 8876: '하기엔', 8877: '하긴', 8878: '하길래', 8879: '하나', 8880: '하나!', 8881: '하나,', 8882: '하나.', 8883: '하나..', 8884: '하나...', 8885: '하나?', 8886: '하나가', 8887: '하나같이', 8888: '하나는', 8889: '하나님의', 8890: '하나도', 8891: '하나로', 8892: '하나를', 8893: '하나만', 8894: '하나만으로', 8895: '하나만으로도', 8896: '하나부터', 8897: '하나씩', 8898: '하나의', 8899: '하나하나', 8900: '하나하나가', 8901: '하네', 8902: '하네.', 8903: '하네요', 8904: '하네요.', 8905: '하네요..', 8906: '하는', 8907: '하는가?', 8908: '하는거', 8909: '하는건', 8910: '하는걸', 8911: '하는것', 8912: '하는게', 8913: '하는데', 8914: '하는데,', 8915: '하는데..', 8916: '하는데...', 8917: '하는지', 8918: '하늘을', 8919: '하니', 8920: '하니까', 8921: '하다', 8922: '하다.', 8923: '하다가', 8924: '하다니', 8925: '하다못해', 8926: '하더라도', 8927: '하던', 8928: '하던가', 8929: '하던데', 8930: '하도', 8931: '하려고', 8932: '하려는', 8933: '하려면', 8934: '하루', 8935: '하루를', 8936: '하루에', 8937: '하루종일', 8938: '하루하루', 8939: '하며', 8940: '하면', 8941: '하면서', 8942: '하면서도', 8943: '하세요', 8944: '하시길', 8945: '하시는', 8946: '하아', 8947: '하아...', 8948: '하야오', 8949: '하얀', 8950: '하여', 8951: '하여간', 8952: '하여금', 8953: '하여튼', 8954: '하이킥', 8955: '하이틴', 8956: '하자', 8957: '하정우', 8958: '하지', 8959: '하지.', 8960: '하지도', 8961: '하지마라', 8962: '하지만', 8963: '하지만,', 8964: '하지만..', 8965: '하지말고', 8966: '하지원', 8967: '하지원의', 8968: '하품만', 8969: '하필', 8970: '하하', 8971: '하하하', 8972: '하하하하', 8973: '학교', 8974: '학교에서', 8975: '학생들', 8976: '학창시절', 8977: '한', 8978: '한가지', 8979: '한개도', 8980: '한거', 8981: '한건', 8982: '한건지', 8983: '한것', 8984: '한게', 8985: '한계', 8986: '한계가', 8987: '한계를', 8988: '한국', 8989: '한국식', 8990: '한국어', 8991: '한국에', 8992: 
'한국에도', 8993: '한국에서', 8994: '한국영화', 8995: '한국영화.', 8996: '한국영화가', 8997: '한국영화는', 8998: '한국영화를', 8999: '한국영화에', 9000: '한국영화의', 9001: '한국영화중', 9002: '한국은', 9003: '한국을', 9004: '한국의', 9005: '한국이', 9006: '한국인', 9007: '한국인이', 9008: '한국판', 9009: '한국형', 9010: '한글', 9011: '한낱', 9012: '한다', 9013: '한다.', 9014: '한다..', 9015: '한다고', 9016: '한다는', 9017: '한다면', 9018: '한대', 9019: '한데', 9020: '한동안', 9021: '한때', 9022: '한마디', 9023: '한마디로', 9024: '한명', 9025: '한명도', 9026: '한명이', 9027: '한방', 9028: '한방에', 9029: '한번', 9030: '한번더', 9031: '한번도', 9032: '한번보고', 9033: '한번보면', 9034: '한번씩', 9035: '한번에', 9036: '한번은', 9037: '한번쯤', 9038: '한번쯤은', 9039: '한석규', 9040: '한순간도', 9041: '한순간에', 9042: '한숨만', 9043: '한시간', 9044: '한시도', 9045: '한심하고', 9046: '한심하다', 9047: '한심하다.', 9048: '한심한', 9049: '한없이', 9050: '한예슬', 9051: '한장면', 9052: '한정된', 9053: '한줄', 9054: '한참', 9055: '한참을', 9056: '한층', 9057: '한쿡영화의', 9058: '한테', 9059: '한편', 9060: '한편으로', 9061: '한편의', 9062: '한표', 9063: '한효주', 9064: '할', 9065: '할거', 9066: '할까', 9067: '할까?', 9068: '할때', 9069: '할려고', 9070: '할리우드', 9071: '할만한', 9072: '할말', 9073: '할말을', 9074: '할말이', 9075: '할머니', 9076: '할머니가', 9077: '할수', 9078: '할수있는', 9079: '할아버지', 9080: '할아버지가', 9081: '할지', 9082: '함', 9083: '함.', 9084: '함께', 9085: '함께한', 9086: '함부로', 9087: '합니다', 9088: '합니다.', 9089: '항상', 9090: '해', 9091: '해가', 9092: '해놓고', 9093: '해도', 9094: '해도해도', 9095: '해라', 9096: '해라.', 9097: '해리슨', 9098: '해리포터', 9099: '해보고', 9100: '해서', 9101: '해석', 9102: '해석이', 9103: '해야', 9104: '해야지', 9105: '해요', 9106: '해요.', 9107: '해주길래', 9108: '해주는', 9109: '해주는거', 9110: '해주세요', 9111: '해준', 9112: '해준다.', 9113: '해줘서', 9114: '해피엔딩', 9115: '해피엔딩으로', 9116: '해피엔딩이', 9117: '해피엔딩이라', 9118: '핵', 9119: '핵노잼', 9120: '했나?', 9121: '했는데', 9122: '했는데,', 9123: '했는데..', 9124: '했는데...', 9125: '했는지', 9126: '했다', 9127: '했다.', 9128: '했다...', 9129: '했다가', 9130: '했다고', 9131: '했다는', 9132: '했다면', 9133: '했더니', 9134: '했던', 9135: '했습니다', 9136: '했습니다.', 9137: '했어', 9138: '했어도', 9139: '했어야', 9140: '했어요', 9141: '했었는데', 9142: '했으나', 9143: '했으면', 9144: '했을', 9145: '했을까', 9146: '했을까?', 9147: '했음', 9148: '했음.', 9149: '했지만', 9150: '했지만,', 9151: '행동', 9152: '행동을', 9153: '행동이', 9154: '행복은', 9155: '행복을', 9156: '행복이', 9157: '행복하게', 9158: '행복하고', 9159: '행복한', 9160: '행복해지는', 9161: '향수를', 9162: '향연', 9163: '향연.', 9164: '향한', 9165: '향해', 9166: '허나', 9167: '허니잼', 9168: '허무', 9169: '허무맹랑한', 9170: '허무하게', 9171: '허무하고', 9172: '허무한', 9173: '허세', 9174: '허세만', 9175: '허술하고', 9176: '허술하다.', 9177: '허술한', 9178: '허접', 9179: '허접.', 9180: '허접하게', 9181: '허접하고', 9182: '허접하다', 9183: '허접하다.', 9184: '허접한', 9185: '허허', 9186: '헉', 9187: '헐', 9188: '헐..', 9189: '헐...', 9190: '헐리우드', 9191: '헐리웃', 9192: '헛웃음만', 9193: '현', 9194: '현대', 9195: '현대의', 9196: '현대판', 9197: '현란한', 9198: '현실', 9199: '현실.', 9200: '현실감', 9201: '현실감이', 9202: '현실과', 9203: '현실성', 9204: '현실성도', 9205: '현실성을', 9206: '현실성이', 9207: '현실에', 9208: '현실에서', 9209: '현실은', 9210: '현실을', 9211: '현실의', 9212: '현실이', 9213: '현실적으로', 9214: '현실적이고', 9215: '현실적이어서', 9216: '현실적인', 9217: '현재', 9218: '현재를', 9219: '현재의', 9220: '형', 9221: '형님', 9222: '형사', 9223: '형이', 9224: '형제의', 9225: '형편', 9226: '형편없는', 9227: '호기심에', 9228: '호기심을', 9229: '호러', 9230: '호불호가', 9231: '호흡이', 9232: '혹시', 9233: '혹시나', 9234: '혹은', 9235: '혼자', 9236: '혼자서', 9237: '홍금보', 9238: '홍보', 9239: '홍보가', 9240: '홍상수', 9241: '홍콩', 9242: '홍콩영화', 9243: '홍콩영화의', 9244: '화', 9245: '화가', 9246: '화가난다', 9247: '화끈하게', 9248: '화끈한', 9249: '화나는', 9250: '화나서', 9251: '화난다', 9252: '화려하게', 9253: '화려하고', 9254: '화려한', 9255: '화면', 9256: '화면,', 9257: '화면과', 9258: '화면도', 9259: '화면만', 9260: '화면에', 9261: '화면은', 9262: '화면이', 9263: 
'화이팅', 9264: '화이팅!', 9265: '화이팅!!', 9266: '화장실', 9267: '확', 9268: '확실하게', 9269: '확실한', 9270: '확실히', 9271: '환상', 9272: '환상을', 9273: '환상의', 9274: '환상적인', 9275: '환타지', 9276: '홧팅', 9277: '황당', 9278: '황당하고', 9279: '황당한', 9280: '황정민', 9281: '황홀한', 9282: '획을', 9283: '효과', 9284: '후', 9285: '후...', 9286: '후로', 9287: '후반', 9288: '후반부', 9289: '후반부가', 9290: '후반부는', 9291: '후반부로', 9292: '후반부에', 9293: '후반에', 9294: '후반으로', 9295: '후반의', 9296: '후속작', 9297: '후속편', 9298: '후에', 9299: '후에도', 9300: '후하게', 9301: '후한', 9302: '후회', 9303: '후회가', 9304: '후회는', 9305: '후회하지', 9306: '후회할', 9307: '후회함', 9308: '훈훈하고', 9309: '훈훈한', 9310: '훌륭하고', 9311: '훌륭하다', 9312: '훌륭하다.', 9313: '훌륭한', 9314: '훌륭함', 9315: '훌륭했다.', 9316: '훨', 9317: '훨씬', 9318: '휴', 9319: '휴...', 9320: '흐르는', 9321: '흐른다.', 9322: '흐름', 9323: '흐름도', 9324: '흐름에', 9325: '흐름을', 9326: '흐름이', 9327: '흐지부지', 9328: '흑역사', 9329: '흑인', 9330: '흑흑', 9331: '흔들리는', 9332: '흔적이', 9333: '흔하디', 9334: '흔한', 9335: '흔해빠진', 9336: '흔히', 9337: '흘러', 9338: '흘러가는', 9339: '흘러나오는', 9340: '흘러도', 9341: '흘리게', 9342: '흘리며', 9343: '흠', 9344: '흠.', 9345: '흠..', 9346: '흠...', 9347: '흠뻑', 9348: '흠잡을', 9349: '흠잡을데', 9350: '흡입력', 9351: '흡입력이', 9352: '흥미', 9353: '흥미가', 9354: '흥미도', 9355: '흥미로운', 9356: '흥미로웠다.', 9357: '흥미롭게', 9358: '흥미롭고', 9359: '흥미롭지', 9360: '흥미를', 9361: '흥미진진', 9362: '흥미진진하게', 9363: '흥미진진하고', 9364: '흥미진진한', 9365: '흥해라', 9366: '흥행', 9367: '흥행에', 9368: '흥행은', 9369: '흥행을', 9370: '흥행이', 9371: '희대의', 9372: '희망', 9373: '희망을', 9374: '희망이', 9375: '희생을', 9376: '히어로', 9377: '히어로물', 9378: '힐링', 9379: '힐링이', 9380: '힘', 9381: '힘.', 9382: '힘과', 9383: '힘내세요', 9384: '힘든', 9385: '힘들', 9386: '힘들게', 9387: '힘들고', 9388: '힘들다', 9389: '힘들다.', 9390: '힘들다..', 9391: '힘들듯', 9392: '힘들었다.', 9393: '힘으로', 9394: '힘을', 9395: '힘이'}\n" ], [ "# to_indices\nexample_sentence = tr_dataset['document'][0]\ntokenized_sentence = split_eojeol(example_sentence)\ntransformed_sentence = vocab.to_indices(tokenized_sentence)\nprint(example_sentence)\nprint(tokenized_sentence)\nprint(transformed_sentence)", "애들 욕하지마라 지들은 뭐 그렇게 잘났나? 솔까 거기 나오는 귀여운 애들이 당신들보다 훨 낮다.\n['애들', '욕하지마라', '지들은', '뭐', '그렇게', '잘났나?', '솔까', '거기', '나오는', '귀여운', '애들이', '당신들보다', '훨', '낮다.']\n[5429, 0, 0, 3235, 1150, 0, 4521, 722, 1502, 1063, 5434, 0, 9316, 1666]\n" ], [ "# to_tokens\nprint(vocab.to_tokens(transformed_sentence))", "['애들', '<unk>', '<unk>', '뭐', '그렇게', '<unk>', '솔까', '거기', '나오는', '귀여운', '애들이', '<unk>', '훨', '낮다.']\n" ] ], [ [ "### How to use `Tokenizer`\n위의 `Vocab` class의 활용 형태를 보면 `split_fn`으로 활용하는 `split_morphs` function의 결과를 input을 기본적으로 받습니다. `Vocab` class의 instance와 `split_fn`으로 활용하는 `split_morphs` function을 parameter로 전달받아 전처리를 통합적인 형태의 `Tokenizer` class를 활용할 수 있습니다. 
`composition` 형태를 이용하여 구현합니다.", "_____no_output_____" ] ], [ [ "help(Tokenizer)", "Help on class Tokenizer in module model.utils:\n\nclass Tokenizer(builtins.object)\n | Tokenizer class\n | \n | Methods defined here:\n | \n | __init__(self, vocab:model.utils.Vocab, split_fn:Callable[[str], List[str]], pad_fn:Callable[[List[int]], List[int]]=None) -> None\n | Instantiating Tokenizer class\n | \n | Args:\n | vocab (model.utils.Vocab): the instance of model.utils.Vocab created from specific split_fn\n | split_fn (Callable): a function that can act as a splitter\n | pad_fn (Callable): a function that can act as a padder\n | \n | split(self, string:str) -> List[str]\n | \n | split_and_transform(self, string:str) -> List[int]\n | \n | transform(self, list_of_tokens:List[str]) -> List[int]\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | vocab\n\n" ], [ "tokenizer = Tokenizer(vocab, split_eojeol)", "_____no_output_____" ], [ "# split, transform, split_and_transform\nexample_sentence = tr_dataset['document'][1]\ntokenized_sentence = tokenizer.split(example_sentence)\ntransformed_sentence = tokenizer.transform(tokenized_sentence)\nprint(example_sentence)\nprint(tokenized_sentence)\nprint(transformed_sentence)", "여전히 반복되고 있는 80년대 한국 멜로 영화의 유치함.\n['여전히', '반복되고', '있는', '80년대', '한국', '멜로', '영화의', '유치함.']\n[5833, 0, 7002, 202, 8988, 2949, 6119, 6546]\n" ], [ "print(tokenizer.split_and_transform(example_sentence))", "[5833, 0, 7002, 202, 8988, 2949, 6119, 6546]\n" ] ], [ [ "`model.utils` module에 있는 `PadSequence`를 활용, `Tokenizer` class의 instance인 `tokenizer`에 padding 기능을 추가할 수 있습니다. 이 때 padding은 vocabulary에서 `<pad>`가 가리키고 있는 정수값을 활용합니다.", "_____no_output_____" ] ], [ [ "padding_value = vocab.to_indices(vocab.padding_token)\nprint(padding_value)", "1\n" ], [ "pad_sequence = PadSequence(length=32, pad_val=padding_value)", "_____no_output_____" ], [ "tokenizer = Tokenizer(vocab, split_eojeol, pad_sequence)", "_____no_output_____" ], [ "# split, transform, split_and_transform\nexample_sentence = tr_dataset['document'][1]\ntokenized_sentence = tokenizer.split(example_sentence)\ntransformed_sentence = tokenizer.transform(tokenized_sentence)\nprint(example_sentence)\nprint(tokenized_sentence)\nprint(transformed_sentence)", "여전히 반복되고 있는 80년대 한국 멜로 영화의 유치함.\n['여전히', '반복되고', '있는', '80년대', '한국', '멜로', '영화의', '유치함.']\n[5833, 0, 7002, 202, 8988, 2949, 6119, 6546, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n" ] ], [ [ "### Save `vocab`", "_____no_output_____" ] ], [ [ "with open('data/vocab.pkl', mode='wb') as io:\n pickle.dump(vocab, io)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e7639988c4bd5a12b7b9a28f87575a9d191ade63
76,824
ipynb
Jupyter Notebook
examples/forecast/0_ForecastIntro.ipynb
ankitakashyap05/Merlion
7dc95fbf64002e22bfce89625bdb76b7a3cbfbfc
[ "BSD-3-Clause" ]
1
2021-09-24T11:03:42.000Z
2021-09-24T11:03:42.000Z
examples/forecast/0_ForecastIntro.ipynb
ankitakashyap05/Merlion
7dc95fbf64002e22bfce89625bdb76b7a3cbfbfc
[ "BSD-3-Clause" ]
null
null
null
examples/forecast/0_ForecastIntro.ipynb
ankitakashyap05/Merlion
7dc95fbf64002e22bfce89625bdb76b7a3cbfbfc
[ "BSD-3-Clause" ]
1
2021-12-01T16:20:23.000Z
2021-12-01T16:20:23.000Z
498.857143
72,652
0.945082
[ [ [ "# A Gentle Introduction to Forecasting in Merlion", "_____no_output_____" ], [ "We begin by importing Merlion's `TimeSeries` class and the data loader for the `M4` dataset. We can then divide a specific time series from this dataset into training and testing splits.", "_____no_output_____" ] ], [ [ "from merlion.utils import TimeSeries\nfrom ts_datasets.forecast import M4\n\ntime_series, metadata = M4(subset=\"Hourly\")[0]\ntrain_data = TimeSeries.from_pd(time_series[metadata.trainval])\ntest_data = TimeSeries.from_pd(time_series[~metadata.trainval])", "100%|██████████| 414/414 [00:01<00:00, 400.14it/s]\n" ] ], [ [ "We can then initialize and train Merlion's `DefaultForecaster`, which is a forecasting model that balances performance with efficiency. We also obtain its predictions on the test split.", "_____no_output_____" ] ], [ [ "from merlion.models.defaults import DefaultForecasterConfig, DefaultForecaster\nmodel = DefaultForecaster(DefaultForecasterConfig())\nmodel.train(train_data=train_data)\ntest_pred, test_err = model.forecast(time_stamps=test_data.time_stamps)", "_____no_output_____" ] ], [ [ "Next, we visualize the model's predictions.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nfig, ax = model.plot_forecast(time_series=test_data, plot_forecast_uncertainty=True)\nplt.show()", "_____no_output_____" ] ], [ [ "Finally, we quantitatively evaluate the model. sMAPE measures the error of the prediction on a scale of 0 to 100 (lower is better), while MSIS evaluates the quality of the 95% confidence band on a scale of 0 to 100 (lower is better).", "_____no_output_____" ] ], [ [ "from scipy.stats import norm\nfrom merlion.evaluate.forecast import ForecastMetric\n\n# Compute the sMAPE of the predictions (0 to 100, smaller is better)\nsmape = ForecastMetric.sMAPE.value(ground_truth=test_data, predict=test_pred)\n\n# Compute the MSIS of the model's 95% confidence interval (0 to 100, smaller is better)\nlb = TimeSeries.from_pd(test_pred.to_pd() + norm.ppf(0.025) * test_err.to_pd().values)\nub = TimeSeries.from_pd(test_pred.to_pd() + norm.ppf(0.975) * test_err.to_pd().values)\nmsis = ForecastMetric.MSIS.value(ground_truth=test_data, predict=test_pred,\n                                 insample=train_data, lb=lb, ub=ub)\nprint(f\"sMAPE: {smape:.4f}, MSIS: {msis:.4f}\")\n", "sMAPE: 4.1944, MSIS: 19.1599\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7639a92124bb3435501f94b807facbae8be8bef
9,340
ipynb
Jupyter Notebook
NIU-NLU dummy codes/2_Input_processing_I_dummy_code.ipynb
Cezanne-ai/project-2021
7d032bc1e7e7746225f6babbbfe166df968d2ba9
[ "Apache-2.0" ]
null
null
null
NIU-NLU dummy codes/2_Input_processing_I_dummy_code.ipynb
Cezanne-ai/project-2021
7d032bc1e7e7746225f6babbbfe166df968d2ba9
[ "Apache-2.0" ]
null
null
null
NIU-NLU dummy codes/2_Input_processing_I_dummy_code.ipynb
Cezanne-ai/project-2021
7d032bc1e7e7746225f6babbbfe166df968d2ba9
[ "Apache-2.0" ]
null
null
null
40.258621
482
0.539079
[ [ [ "Input/text processing needs to be implemented at two different levels:\n\nMain level. The objective is not to eliminate/stem/lemmatize too much data/info, so as not to lose precious information. Also, the chronological steps are important and should be implemented gradually, so that different algorithms can use the current state of the processed text/input. Sub-word tokenization can be a solution, but depending on the language it can affect the model. The same goes for stop-words or keywords, which can impact our objective of having a multi-domain conversational bot.\n\nLocal/Algorithm level. More elaborate processing depending on the objectives.\nThe first processing needs to focus on a better understanding of the language and of conversational specificities. At the same time, we need to separate the information transmitted by the user into five categories:\n\n•\tWords, numerical data and characters not needed in the main NIU model, for GDPR reasons or due to model limitations in understanding them (ex: visual inputs or links that are in scope of future developments).\n\n•\tNumerical data needed especially for task-oriented user requests (ex: reservation for 3 persons on Monday, 5th January). As an exception, specific words need to be transmitted to DER (Data Entity Recognition; we will talk more about this later).\n\n•\tEmoji that will be analyzed separately.\n\n•\tWords in the core-input that will be processed.\n\n•\tPunctuation, together with other information that must be analyzed in context.\n\n", "_____no_output_____" ] ], [ [ "# IN: corrected list with words and characters (pipeline: Corrected words)\n# OUT: processed input sent to different pipelines for computation: EER, DER, Grammar-Semantics, CVM", "_____no_output_____" ] ], [ [ "Objectives:\n\n•\tLinguistic evaluation. Addressing special characters of the language (for example: removing hyphens by separating into 2 words); \n\n•\tRemoving words/ characters not needed; \n\n•\tAddressing numerical data, punctuation, emoji;\n\nLanguage specificities: yes (check language specificities);\n\nDependencies: DER/CVM/EER/Grammar-Semantics;\n\nDatabase/ Vocabularies needed: Lexicon/ Dictionaries for numerical data and special dates & events /Emoji.\n", "_____no_output_____" ] ], [ [ "# To dos:\n# 1.\tAddressing special characters of the language so that all words/characters can be included in a list that can be processed further (we will leave this task for linguists to assess).\n# 2.\tAll numerical characters found (including in a special vocabulary) are marked for DER.\n# 3.\tEmoji are marked and sent to EER.\n# 4.\tMark nonexistent vocabulary words with UNK.\n# 5.\tMark words that exist more than once in vocabularies for Grammar/Semantics analysis.\n# 6.\tRemove personal names, telephone numbers, visual inputs, links, other characters like $#@ and the related words.\n", "_____no_output_____" ] ], [ [ "Use existing code.\nSee the code examples below, but 
do not use before adapting to objectives", "_____no_output_____" ] ], [ [ "\n# import re # library for regular expression operations\n# import string # for string operations\n\n# from nltk.corpus import stopwords # module for stop words that come with NLTK\n# from nltk.stem import PorterStemmer # module for stemming\n# from nltk.tokenize import TweetTokenizer # module for tokenizing strings\n\n# remove old style retweet text \"RT\"\n# tweet2 = re.sub(r'^RT[\\s]+', '', tweet)\n\n# remove hyperlinks\n# tweet2 = re.sub(r'https?:\\/\\/.*[\\r\\n]*', '', tweet2)\n\n# remove hashtags\n# only removing the hash # sign from the word\n# tweet2 = re.sub(r'#', '', tweet2)\n\n# instantiate tokenizer class\n# tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,\n reduce_len=True)\n\n# tokenize tweets\n# tweet_tokens = tokenizer.tokenize(tweet2)\n\n#Import the english stop words list from NLTK\n# stopwords_english = stopwords.words('english')\n\n# tweets_clean = []\n\n# for word in tweet_tokens: # Go through every word in your tokens list\n # if (word not in stopwords_english and # remove stopwords\n # word not in string.punctuation): # remove punctuation\n # tweets_clean.append(word)\n\n", "_____no_output_____" ], [ "# def replace_oov_words_by_unk(tokenized_sentences, vocabulary, unknown_token=\"<unk>\"):\n \"\"\"\n Replace words not in the given vocabulary with '<unk>' token.\n \n Args:\n tokenized_sentences: List of lists of strings\n vocabulary: List of strings that we will use\n unknown_token: A string representing unknown (out-of-vocabulary) words\n \n Returns:\n List of lists of strings, with words not in the vocabulary replaced\n \"\"\"\n \n # Place vocabulary into a set for faster search\n vocabulary = set(vocabulary)\n \n # Initialize a list that will hold the sentences\n # after less frequent words are replaced by the unknown token\n replaced_tokenized_sentences = []\n \n # Go through each sentence\n for sentence in tokenized_sentences:\n \n # Initialize the list that will contain\n # a single sentence with \"unknown_token\" replacements\n replaced_sentence = []\n\n # for each token in the sentence\n for token in sentence: # complete this line\n \n # Check if the token is in the closed vocabulary\n if token in vocabulary: # complete this line\n # If so, append the word to the replaced_sentence\n replaced_sentence.append(token)\n else:\n # otherwise, append the unknown token instead\n replaced_sentence.append(unknown_token)\n \n # Append the list of tokens to the list of lists\n replaced_tokenized_sentences.append(replaced_sentence)\n \n return replaced_tokenized_sentences", "_____no_output_____" ], [ "# def tokenize(corpus):\n data = re.sub(r'[,!?;-]+', '.', corpus)\n data = nltk.word_tokenize(data) # tokenize string to words\n data = [ ch.lower() for ch in data\n if ch.isalpha()\n or ch == '.'\n or emoji.get_emoji_regexp().search(ch)\n ]\n return data", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7639b0ca404b7947017b9cb421f820e79d6b3be
5,313
ipynb
Jupyter Notebook
6_numpy_boolean_operations.ipynb
ssozuer/numpy
bcb67b706a6f8bc829df6fa65627fc5b1b940db3
[ "MIT" ]
null
null
null
6_numpy_boolean_operations.ipynb
ssozuer/numpy
bcb67b706a6f8bc829df6fa65627fc5b1b940db3
[ "MIT" ]
null
null
null
6_numpy_boolean_operations.ipynb
ssozuer/numpy
bcb67b706a6f8bc829df6fa65627fc5b1b940db3
[ "MIT" ]
null
null
null
17.028846
63
0.440241
[ [ [ "# Boolean Operations", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ], [ "x = np.array([1, 2, 3, 4, 5])\nx < 3", "_____no_output_____" ], [ "x > 3", "_____no_output_____" ], [ "x <= 3", "_____no_output_____" ], [ "x >= 3", "_____no_output_____" ], [ "x != 3", "_____no_output_____" ], [ "# how many values are less than 6?\nnp.count_nonzero(x < 6)", "_____no_output_____" ], [ "np.sum(x < 6)", "_____no_output_____" ], [ "# how many values are less than 5 in each row?\nx = np.arange(1, 10).reshape((3, 3))\nprint(x)\nnp.sum(x < 5, axis=1)", "[[1 2 3]\n [4 5 6]\n [7 8 9]]\n" ], [ "# are there any values greater than 7?\nnp.any(x > 7)", "_____no_output_____" ], [ "# are all values equal to 10?\nnp.all(x == 10)", "_____no_output_____" ], [ "# are all values in each row less than 10?\nnp.all(x < 10, axis=1)", "_____no_output_____" ], [ "# boolean masking: new array with the elements of x that are less than 5\ny = x[x < 5]\ny", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7639bc692cae73d153fc4b8353283e2994c0476
146,630
ipynb
Jupyter Notebook
jupyter/DenseNet-Celeb.ipynb
KabirSingh114/DeepFake_Face_Detection
0cf1ce3e69a7e4a18adff889dab4f6db029a0f25
[ "MIT" ]
null
null
null
jupyter/DenseNet-Celeb.ipynb
KabirSingh114/DeepFake_Face_Detection
0cf1ce3e69a7e4a18adff889dab4f6db029a0f25
[ "MIT" ]
null
null
null
jupyter/DenseNet-Celeb.ipynb
KabirSingh114/DeepFake_Face_Detection
0cf1ce3e69a7e4a18adff889dab4f6db029a0f25
[ "MIT" ]
null
null
null
146,630
146,630
0.712801
[ [ [ "from google.colab import drive\ndrive.mount('/content/gdrive')", "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/gdrive\n" ], [ "import tarfile\ntfile = tarfile.open(\"/content/gdrive/My Drive/Deep Learning Groupwork/Project/Data-Celeb.tar\")\ntfile.extractall()", "_____no_output_____" ], [ "training_dir = '/content/Data-Celeb/Train'\nval_dir = '/content/Data-Celeb/Validation'\nfinetunedir = '/content/Data-Celeb/FineTune'\ntestdir = '/content/Data-Celeb/Test'", "_____no_output_____" ], [ "import os\nfrom os import listdir\nfrom os.path import isfile, join\nmypath = '/content/Data-Celeb/Test/Fake'\nprint(mypath)\nfor f in listdir(mypath):\n #print(f[0])\n if f[0] == '.':\n try:\n os.remove(join(mypath, f))\n except: \n print(\"file not deleted\")\n\nmypath = '/content/Data-Celeb/Test/Real'\nprint(mypath)\nfor f in listdir(mypath):\n #print(f[0])\n if f[0] == '.':\n try:\n os.remove(join(mypath, f))\n except: \n print(\"file not deleted\")\nmypath = '/content/Data-Celeb/Train/Fake'\nprint(mypath)\nfor f in listdir(mypath):\n #print(f[0])\n if f[0] == '.':\n try:\n os.remove(join(mypath, f))\n except: \n print(\"file not deleted\")\n\nmypath = '/content/Data-Celeb/Train/Real'\nprint(mypath)\nfor f in listdir(mypath):\n #print(f[0])\n if f[0] == '.':\n try:\n os.remove(join(mypath, f))\n except: \n print(\"file not deleted\")\n\nmypath = '/content/Data-Celeb/Validation/Real'\nprint(mypath)\nfor f in listdir(mypath):\n #print(f[0])\n if f[0] == '.':\n try:\n os.remove(join(mypath, f))\n except: \n print(\"file not deleted\")\n\nmypath = '/content/Data-Celeb/Validation/Fake'\nprint(mypath)\nfor f in listdir(mypath):\n #print(f[0])\n if f[0] == '.':\n try:\n os.remove(join(mypath, f))\n except: \n print(\"file not deleted\")\n\nmypath = '/content/Data-Celeb/FineTune/Real'\nprint(mypath)\nfor f in listdir(mypath):\n #print(f[0])\n if f[0] == '.':\n try:\n os.remove(join(mypath, f))\n except: \n print(\"file not deleted\")\n\nmypath = '/content/Data-Celeb/FineTune/Fake'\nprint(mypath)\nfor f in listdir(mypath):\n #print(f[0])\n if f[0] == '.':\n try:\n os.remove(join(mypath, f))\n except: \n print(\"file not deleted\")", "/content/Data-Celeb/Test/Fake\n/content/Data-Celeb/Test/Real\n/content/Data-Celeb/Train/Fake\n/content/Data-Celeb/Train/Real\n/content/Data-Celeb/Validation/Real\n/content/Data-Celeb/Validation/Fake\n/content/Data-Celeb/FineTune/Real\n/content/Data-Celeb/FineTune/Fake\n" ], [ "import os\nos.chdir('/content/gdrive/My Drive/Deep Learning Groupwork/Project/Code - Kabir/Code/FDFtNet')\nprint(os.getcwd())", "/content/gdrive/My Drive/Deep Learning Groupwork/Project/Code - Kabir/Code/FDFtNet\n" ], [ "!python3 pretrain.py -network='denseNet' -train_dir='/content/Data-Celeb/Train' -val_dir='/content/Data-Celeb/Validation' -batch_size=128 -reduce_patience=100 -step=200 -epochs=100", "Using TensorFlow backend.\n2020-04-28 15:09:59.945343: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2020-04-28 15:10:06.259919: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2020-04-28 15:10:06.308586: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:06.309505: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:00:04.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0\ncoreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s\n2020-04-28 15:10:06.309561: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2020-04-28 15:10:06.550004: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2020-04-28 15:10:06.680981: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2020-04-28 15:10:06.713260: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2020-04-28 15:10:06.989108: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2020-04-28 15:10:07.042867: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2020-04-28 15:10:07.555692: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2020-04-28 15:10:07.555922: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:07.557061: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:07.557876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2020-04-28 15:10:07.585341: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2300000000 Hz\n2020-04-28 15:10:07.585651: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2672bc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n2020-04-28 15:10:07.585735: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2020-04-28 15:10:07.747583: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:07.748715: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2672d80 initialized for platform CUDA (this does not guarantee that XLA will be used). 
Devices:\n2020-04-28 15:10:07.748753: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla P100-PCIE-16GB, Compute Capability 6.0\n2020-04-28 15:10:07.750080: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:07.750908: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:00:04.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0\ncoreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s\n2020-04-28 15:10:07.750986: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2020-04-28 15:10:07.751051: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2020-04-28 15:10:07.751090: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2020-04-28 15:10:07.751133: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2020-04-28 15:10:07.751170: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2020-04-28 15:10:07.751225: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2020-04-28 15:10:07.751276: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2020-04-28 15:10:07.751390: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:07.752343: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:07.753214: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2020-04-28 15:10:07.757088: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2020-04-28 15:10:14.146126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:\n2020-04-28 15:10:14.146197: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 \n2020-04-28 15:10:14.146215: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N \n2020-04-28 15:10:14.152484: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:14.153365: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 15:10:14.154152: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. 
Original config value was 0.\n2020-04-28 15:10:14.154219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14974 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)\nModel: \"densenet121\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 64, 64, 3) 0 \n__________________________________________________________________________________________________\nzero_padding2d_1 (ZeroPadding2D (None, 70, 70, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1/conv (Conv2D) (None, 32, 32, 64) 9408 zero_padding2d_1[0][0] \n__________________________________________________________________________________________________\nconv1/bn (BatchNormalization) (None, 32, 32, 64) 256 conv1/conv[0][0] \n__________________________________________________________________________________________________\nconv1/relu (Activation) (None, 32, 32, 64) 0 conv1/bn[0][0] \n__________________________________________________________________________________________________\nzero_padding2d_2 (ZeroPadding2D (None, 34, 34, 64) 0 conv1/relu[0][0] \n__________________________________________________________________________________________________\npool1 (MaxPooling2D) (None, 16, 16, 64) 0 zero_padding2d_2[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_bn (BatchNormali (None, 16, 16, 64) 256 pool1[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_relu (Activation (None, 16, 16, 64) 0 conv2_block1_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_conv (Conv2D) (None, 16, 16, 128) 8192 conv2_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_bn (BatchNormali (None, 16, 16, 128) 512 conv2_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_relu (Activation (None, 16, 16, 128) 0 conv2_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv2_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_concat (Concatenat (None, 16, 16, 96) 0 pool1[0][0] \n conv2_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_0_bn (BatchNormali (None, 16, 16, 96) 384 conv2_block1_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block2_0_relu (Activation (None, 16, 16, 96) 0 conv2_block2_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_conv (Conv2D) (None, 16, 16, 128) 12288 conv2_block2_0_relu[0][0] 
\n__________________________________________________________________________________________________\nconv2_block2_1_bn (BatchNormali (None, 16, 16, 128) 512 conv2_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_relu (Activation (None, 16, 16, 128) 0 conv2_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv2_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_concat (Concatenat (None, 16, 16, 128) 0 conv2_block1_concat[0][0] \n conv2_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_0_bn (BatchNormali (None, 16, 16, 128) 512 conv2_block2_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block3_0_relu (Activation (None, 16, 16, 128) 0 conv2_block3_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_conv (Conv2D) (None, 16, 16, 128) 16384 conv2_block3_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_bn (BatchNormali (None, 16, 16, 128) 512 conv2_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_relu (Activation (None, 16, 16, 128) 0 conv2_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv2_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_concat (Concatenat (None, 16, 16, 160) 0 conv2_block2_concat[0][0] \n conv2_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block4_0_bn (BatchNormali (None, 16, 16, 160) 640 conv2_block3_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block4_0_relu (Activation (None, 16, 16, 160) 0 conv2_block4_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block4_1_conv (Conv2D) (None, 16, 16, 128) 20480 conv2_block4_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block4_1_bn (BatchNormali (None, 16, 16, 128) 512 conv2_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block4_1_relu (Activation (None, 16, 16, 128) 0 conv2_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block4_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv2_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block4_concat (Concatenat (None, 16, 16, 192) 0 conv2_block3_concat[0][0] \n conv2_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block5_0_bn (BatchNormali (None, 16, 16, 192) 768 conv2_block4_concat[0][0] 
\n__________________________________________________________________________________________________\nconv2_block5_0_relu (Activation (None, 16, 16, 192) 0 conv2_block5_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block5_1_conv (Conv2D) (None, 16, 16, 128) 24576 conv2_block5_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block5_1_bn (BatchNormali (None, 16, 16, 128) 512 conv2_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block5_1_relu (Activation (None, 16, 16, 128) 0 conv2_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block5_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv2_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block5_concat (Concatenat (None, 16, 16, 224) 0 conv2_block4_concat[0][0] \n conv2_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block6_0_bn (BatchNormali (None, 16, 16, 224) 896 conv2_block5_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block6_0_relu (Activation (None, 16, 16, 224) 0 conv2_block6_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block6_1_conv (Conv2D) (None, 16, 16, 128) 28672 conv2_block6_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block6_1_bn (BatchNormali (None, 16, 16, 128) 512 conv2_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block6_1_relu (Activation (None, 16, 16, 128) 0 conv2_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block6_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv2_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block6_concat (Concatenat (None, 16, 16, 256) 0 conv2_block5_concat[0][0] \n conv2_block6_2_conv[0][0] \n__________________________________________________________________________________________________\npool2_bn (BatchNormalization) (None, 16, 16, 256) 1024 conv2_block6_concat[0][0] \n__________________________________________________________________________________________________\npool2_relu (Activation) (None, 16, 16, 256) 0 pool2_bn[0][0] \n__________________________________________________________________________________________________\npool2_conv (Conv2D) (None, 16, 16, 128) 32768 pool2_relu[0][0] \n__________________________________________________________________________________________________\npool2_pool (AveragePooling2D) (None, 8, 8, 128) 0 pool2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_bn (BatchNormali (None, 8, 8, 128) 512 pool2_pool[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_relu (Activation (None, 8, 8, 128) 0 conv3_block1_0_bn[0][0] 
\n__________________________________________________________________________________________________\nconv3_block1_1_conv (Conv2D) (None, 8, 8, 128) 16384 conv3_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_relu (Activation (None, 8, 8, 128) 0 conv3_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_concat (Concatenat (None, 8, 8, 160) 0 pool2_pool[0][0] \n conv3_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_0_bn (BatchNormali (None, 8, 8, 160) 640 conv3_block1_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block2_0_relu (Activation (None, 8, 8, 160) 0 conv3_block2_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_conv (Conv2D) (None, 8, 8, 128) 20480 conv3_block2_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_relu (Activation (None, 8, 8, 128) 0 conv3_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_concat (Concatenat (None, 8, 8, 192) 0 conv3_block1_concat[0][0] \n conv3_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_0_bn (BatchNormali (None, 8, 8, 192) 768 conv3_block2_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block3_0_relu (Activation (None, 8, 8, 192) 0 conv3_block3_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_conv (Conv2D) (None, 8, 8, 128) 24576 conv3_block3_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_relu (Activation (None, 8, 8, 128) 0 conv3_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_concat (Concatenat (None, 8, 8, 224) 0 conv3_block2_concat[0][0] \n conv3_block3_2_conv[0][0] 
\n__________________________________________________________________________________________________\nconv3_block4_0_bn (BatchNormali (None, 8, 8, 224) 896 conv3_block3_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block4_0_relu (Activation (None, 8, 8, 224) 0 conv3_block4_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_conv (Conv2D) (None, 8, 8, 128) 28672 conv3_block4_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_relu (Activation (None, 8, 8, 128) 0 conv3_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_concat (Concatenat (None, 8, 8, 256) 0 conv3_block3_concat[0][0] \n conv3_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block5_0_bn (BatchNormali (None, 8, 8, 256) 1024 conv3_block4_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block5_0_relu (Activation (None, 8, 8, 256) 0 conv3_block5_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block5_1_conv (Conv2D) (None, 8, 8, 128) 32768 conv3_block5_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block5_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block5_1_relu (Activation (None, 8, 8, 128) 0 conv3_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block5_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block5_concat (Concatenat (None, 8, 8, 288) 0 conv3_block4_concat[0][0] \n conv3_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block6_0_bn (BatchNormali (None, 8, 8, 288) 1152 conv3_block5_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block6_0_relu (Activation (None, 8, 8, 288) 0 conv3_block6_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block6_1_conv (Conv2D) (None, 8, 8, 128) 36864 conv3_block6_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block6_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block6_1_relu (Activation (None, 8, 8, 128) 0 conv3_block6_1_bn[0][0] 
\n__________________________________________________________________________________________________\nconv3_block6_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block6_concat (Concatenat (None, 8, 8, 320) 0 conv3_block5_concat[0][0] \n conv3_block6_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block7_0_bn (BatchNormali (None, 8, 8, 320) 1280 conv3_block6_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block7_0_relu (Activation (None, 8, 8, 320) 0 conv3_block7_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block7_1_conv (Conv2D) (None, 8, 8, 128) 40960 conv3_block7_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block7_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block7_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block7_1_relu (Activation (None, 8, 8, 128) 0 conv3_block7_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block7_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block7_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block7_concat (Concatenat (None, 8, 8, 352) 0 conv3_block6_concat[0][0] \n conv3_block7_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block8_0_bn (BatchNormali (None, 8, 8, 352) 1408 conv3_block7_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block8_0_relu (Activation (None, 8, 8, 352) 0 conv3_block8_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block8_1_conv (Conv2D) (None, 8, 8, 128) 45056 conv3_block8_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block8_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block8_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block8_1_relu (Activation (None, 8, 8, 128) 0 conv3_block8_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block8_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block8_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block8_concat (Concatenat (None, 8, 8, 384) 0 conv3_block7_concat[0][0] \n conv3_block8_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block9_0_bn (BatchNormali (None, 8, 8, 384) 1536 conv3_block8_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block9_0_relu (Activation (None, 8, 8, 384) 0 conv3_block9_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block9_1_conv (Conv2D) (None, 8, 8, 128) 49152 conv3_block9_0_relu[0][0] 
\n__________________________________________________________________________________________________\nconv3_block9_1_bn (BatchNormali (None, 8, 8, 128) 512 conv3_block9_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block9_1_relu (Activation (None, 8, 8, 128) 0 conv3_block9_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block9_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block9_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block9_concat (Concatenat (None, 8, 8, 416) 0 conv3_block8_concat[0][0] \n conv3_block9_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block10_0_bn (BatchNormal (None, 8, 8, 416) 1664 conv3_block9_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block10_0_relu (Activatio (None, 8, 8, 416) 0 conv3_block10_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block10_1_conv (Conv2D) (None, 8, 8, 128) 53248 conv3_block10_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block10_1_bn (BatchNormal (None, 8, 8, 128) 512 conv3_block10_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block10_1_relu (Activatio (None, 8, 8, 128) 0 conv3_block10_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block10_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block10_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block10_concat (Concatena (None, 8, 8, 448) 0 conv3_block9_concat[0][0] \n conv3_block10_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block11_0_bn (BatchNormal (None, 8, 8, 448) 1792 conv3_block10_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block11_0_relu (Activatio (None, 8, 8, 448) 0 conv3_block11_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block11_1_conv (Conv2D) (None, 8, 8, 128) 57344 conv3_block11_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block11_1_bn (BatchNormal (None, 8, 8, 128) 512 conv3_block11_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block11_1_relu (Activatio (None, 8, 8, 128) 0 conv3_block11_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block11_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block11_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block11_concat (Concatena (None, 8, 8, 480) 0 conv3_block10_concat[0][0] \n conv3_block11_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block12_0_bn (BatchNormal (None, 8, 8, 480) 1920 conv3_block11_concat[0][0] 
\n__________________________________________________________________________________________________\nconv3_block12_0_relu (Activatio (None, 8, 8, 480) 0 conv3_block12_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block12_1_conv (Conv2D) (None, 8, 8, 128) 61440 conv3_block12_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block12_1_bn (BatchNormal (None, 8, 8, 128) 512 conv3_block12_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block12_1_relu (Activatio (None, 8, 8, 128) 0 conv3_block12_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block12_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv3_block12_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block12_concat (Concatena (None, 8, 8, 512) 0 conv3_block11_concat[0][0] \n conv3_block12_2_conv[0][0] \n__________________________________________________________________________________________________\npool3_bn (BatchNormalization) (None, 8, 8, 512) 2048 conv3_block12_concat[0][0] \n__________________________________________________________________________________________________\npool3_relu (Activation) (None, 8, 8, 512) 0 pool3_bn[0][0] \n__________________________________________________________________________________________________\npool3_conv (Conv2D) (None, 8, 8, 256) 131072 pool3_relu[0][0] \n__________________________________________________________________________________________________\npool3_pool (AveragePooling2D) (None, 4, 4, 256) 0 pool3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_bn (BatchNormali (None, 4, 4, 256) 1024 pool3_pool[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_relu (Activation (None, 4, 4, 256) 0 conv4_block1_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_conv (Conv2D) (None, 4, 4, 128) 32768 conv4_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_relu (Activation (None, 4, 4, 128) 0 conv4_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_concat (Concatenat (None, 4, 4, 288) 0 pool3_pool[0][0] \n conv4_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_0_bn (BatchNormali (None, 4, 4, 288) 1152 conv4_block1_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block2_0_relu (Activation (None, 4, 4, 288) 0 conv4_block2_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_conv 
(Conv2D) (None, 4, 4, 128) 36864 conv4_block2_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_relu (Activation (None, 4, 4, 128) 0 conv4_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_concat (Concatenat (None, 4, 4, 320) 0 conv4_block1_concat[0][0] \n conv4_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_0_bn (BatchNormali (None, 4, 4, 320) 1280 conv4_block2_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block3_0_relu (Activation (None, 4, 4, 320) 0 conv4_block3_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_conv (Conv2D) (None, 4, 4, 128) 40960 conv4_block3_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_relu (Activation (None, 4, 4, 128) 0 conv4_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_concat (Concatenat (None, 4, 4, 352) 0 conv4_block2_concat[0][0] \n conv4_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_0_bn (BatchNormali (None, 4, 4, 352) 1408 conv4_block3_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block4_0_relu (Activation (None, 4, 4, 352) 0 conv4_block4_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_conv (Conv2D) (None, 4, 4, 128) 45056 conv4_block4_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_relu (Activation (None, 4, 4, 128) 0 conv4_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_concat (Concatenat (None, 4, 4, 384) 0 conv4_block3_concat[0][0] \n conv4_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_0_bn (BatchNormali (None, 4, 4, 384) 1536 
conv4_block4_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block5_0_relu (Activation (None, 4, 4, 384) 0 conv4_block5_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_conv (Conv2D) (None, 4, 4, 128) 49152 conv4_block5_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_relu (Activation (None, 4, 4, 128) 0 conv4_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_concat (Concatenat (None, 4, 4, 416) 0 conv4_block4_concat[0][0] \n conv4_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_0_bn (BatchNormali (None, 4, 4, 416) 1664 conv4_block5_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block6_0_relu (Activation (None, 4, 4, 416) 0 conv4_block6_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_conv (Conv2D) (None, 4, 4, 128) 53248 conv4_block6_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_relu (Activation (None, 4, 4, 128) 0 conv4_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_concat (Concatenat (None, 4, 4, 448) 0 conv4_block5_concat[0][0] \n conv4_block6_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block7_0_bn (BatchNormali (None, 4, 4, 448) 1792 conv4_block6_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block7_0_relu (Activation (None, 4, 4, 448) 0 conv4_block7_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block7_1_conv (Conv2D) (None, 4, 4, 128) 57344 conv4_block7_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block7_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block7_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block7_1_relu (Activation (None, 4, 4, 128) 0 conv4_block7_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block7_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block7_1_relu[0][0] 
\n__________________________________________________________________________________________________\nconv4_block7_concat (Concatenat (None, 4, 4, 480) 0 conv4_block6_concat[0][0] \n conv4_block7_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block8_0_bn (BatchNormali (None, 4, 4, 480) 1920 conv4_block7_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block8_0_relu (Activation (None, 4, 4, 480) 0 conv4_block8_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block8_1_conv (Conv2D) (None, 4, 4, 128) 61440 conv4_block8_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block8_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block8_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block8_1_relu (Activation (None, 4, 4, 128) 0 conv4_block8_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block8_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block8_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block8_concat (Concatenat (None, 4, 4, 512) 0 conv4_block7_concat[0][0] \n conv4_block8_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block9_0_bn (BatchNormali (None, 4, 4, 512) 2048 conv4_block8_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block9_0_relu (Activation (None, 4, 4, 512) 0 conv4_block9_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block9_1_conv (Conv2D) (None, 4, 4, 128) 65536 conv4_block9_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block9_1_bn (BatchNormali (None, 4, 4, 128) 512 conv4_block9_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block9_1_relu (Activation (None, 4, 4, 128) 0 conv4_block9_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block9_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block9_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block9_concat (Concatenat (None, 4, 4, 544) 0 conv4_block8_concat[0][0] \n conv4_block9_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block10_0_bn (BatchNormal (None, 4, 4, 544) 2176 conv4_block9_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block10_0_relu (Activatio (None, 4, 4, 544) 0 conv4_block10_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block10_1_conv (Conv2D) (None, 4, 4, 128) 69632 conv4_block10_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block10_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block10_1_conv[0][0] 
\n__________________________________________________________________________________________________\nconv4_block10_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block10_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block10_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block10_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block10_concat (Concatena (None, 4, 4, 576) 0 conv4_block9_concat[0][0] \n conv4_block10_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block11_0_bn (BatchNormal (None, 4, 4, 576) 2304 conv4_block10_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block11_0_relu (Activatio (None, 4, 4, 576) 0 conv4_block11_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block11_1_conv (Conv2D) (None, 4, 4, 128) 73728 conv4_block11_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block11_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block11_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block11_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block11_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block11_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block11_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block11_concat (Concatena (None, 4, 4, 608) 0 conv4_block10_concat[0][0] \n conv4_block11_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block12_0_bn (BatchNormal (None, 4, 4, 608) 2432 conv4_block11_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block12_0_relu (Activatio (None, 4, 4, 608) 0 conv4_block12_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block12_1_conv (Conv2D) (None, 4, 4, 128) 77824 conv4_block12_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block12_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block12_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block12_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block12_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block12_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block12_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block12_concat (Concatena (None, 4, 4, 640) 0 conv4_block11_concat[0][0] \n conv4_block12_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block13_0_bn (BatchNormal (None, 4, 4, 640) 2560 conv4_block12_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block13_0_relu (Activatio (None, 4, 4, 640) 0 conv4_block13_0_bn[0][0] 
\n__________________________________________________________________________________________________\nconv4_block13_1_conv (Conv2D) (None, 4, 4, 128) 81920 conv4_block13_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block13_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block13_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block13_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block13_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block13_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block13_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block13_concat (Concatena (None, 4, 4, 672) 0 conv4_block12_concat[0][0] \n conv4_block13_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block14_0_bn (BatchNormal (None, 4, 4, 672) 2688 conv4_block13_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block14_0_relu (Activatio (None, 4, 4, 672) 0 conv4_block14_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_conv (Conv2D) (None, 4, 4, 128) 86016 conv4_block14_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block14_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block14_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block14_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block14_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block14_concat (Concatena (None, 4, 4, 704) 0 conv4_block13_concat[0][0] \n conv4_block14_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block15_0_bn (BatchNormal (None, 4, 4, 704) 2816 conv4_block14_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block15_0_relu (Activatio (None, 4, 4, 704) 0 conv4_block15_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_conv (Conv2D) (None, 4, 4, 128) 90112 conv4_block15_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block15_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block15_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block15_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block15_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block15_concat (Concatena (None, 4, 4, 736) 0 conv4_block14_concat[0][0] \n conv4_block15_2_conv[0][0] 
\n__________________________________________________________________________________________________\nconv4_block16_0_bn (BatchNormal (None, 4, 4, 736) 2944 conv4_block15_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block16_0_relu (Activatio (None, 4, 4, 736) 0 conv4_block16_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_conv (Conv2D) (None, 4, 4, 128) 94208 conv4_block16_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block16_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block16_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block16_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block16_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block16_concat (Concatena (None, 4, 4, 768) 0 conv4_block15_concat[0][0] \n conv4_block16_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block17_0_bn (BatchNormal (None, 4, 4, 768) 3072 conv4_block16_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block17_0_relu (Activatio (None, 4, 4, 768) 0 conv4_block17_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_conv (Conv2D) (None, 4, 4, 128) 98304 conv4_block17_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block17_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block17_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block17_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block17_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block17_concat (Concatena (None, 4, 4, 800) 0 conv4_block16_concat[0][0] \n conv4_block17_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block18_0_bn (BatchNormal (None, 4, 4, 800) 3200 conv4_block17_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block18_0_relu (Activatio (None, 4, 4, 800) 0 conv4_block18_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_conv (Conv2D) (None, 4, 4, 128) 102400 conv4_block18_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block18_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block18_1_bn[0][0] 
\n__________________________________________________________________________________________________\nconv4_block18_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block18_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block18_concat (Concatena (None, 4, 4, 832) 0 conv4_block17_concat[0][0] \n conv4_block18_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block19_0_bn (BatchNormal (None, 4, 4, 832) 3328 conv4_block18_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block19_0_relu (Activatio (None, 4, 4, 832) 0 conv4_block19_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_conv (Conv2D) (None, 4, 4, 128) 106496 conv4_block19_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block19_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block19_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block19_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block19_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block19_concat (Concatena (None, 4, 4, 864) 0 conv4_block18_concat[0][0] \n conv4_block19_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block20_0_bn (BatchNormal (None, 4, 4, 864) 3456 conv4_block19_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block20_0_relu (Activatio (None, 4, 4, 864) 0 conv4_block20_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block20_1_conv (Conv2D) (None, 4, 4, 128) 110592 conv4_block20_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block20_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block20_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block20_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block20_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block20_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block20_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block20_concat (Concatena (None, 4, 4, 896) 0 conv4_block19_concat[0][0] \n conv4_block20_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block21_0_bn (BatchNormal (None, 4, 4, 896) 3584 conv4_block20_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block21_0_relu (Activatio (None, 4, 4, 896) 0 conv4_block21_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block21_1_conv (Conv2D) (None, 4, 4, 128) 114688 conv4_block21_0_relu[0][0] 
\n__________________________________________________________________________________________________\nconv4_block21_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block21_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block21_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block21_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block21_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block21_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block21_concat (Concatena (None, 4, 4, 928) 0 conv4_block20_concat[0][0] \n conv4_block21_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block22_0_bn (BatchNormal (None, 4, 4, 928) 3712 conv4_block21_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block22_0_relu (Activatio (None, 4, 4, 928) 0 conv4_block22_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block22_1_conv (Conv2D) (None, 4, 4, 128) 118784 conv4_block22_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block22_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block22_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block22_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block22_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block22_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block22_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block22_concat (Concatena (None, 4, 4, 960) 0 conv4_block21_concat[0][0] \n conv4_block22_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block23_0_bn (BatchNormal (None, 4, 4, 960) 3840 conv4_block22_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block23_0_relu (Activatio (None, 4, 4, 960) 0 conv4_block23_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block23_1_conv (Conv2D) (None, 4, 4, 128) 122880 conv4_block23_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block23_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block23_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block23_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block23_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block23_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block23_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block23_concat (Concatena (None, 4, 4, 992) 0 conv4_block22_concat[0][0] \n conv4_block23_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block24_0_bn (BatchNormal (None, 4, 4, 992) 3968 conv4_block23_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block24_0_relu (Activatio (None, 4, 4, 992) 0 conv4_block24_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block24_1_conv (Conv2D) (None, 4, 4, 128) 126976 conv4_block24_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block24_1_bn (BatchNormal (None, 4, 4, 128) 512 conv4_block24_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block24_1_relu (Activatio (None, 4, 4, 128) 0 conv4_block24_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block24_2_conv (Conv2D) (None, 4, 4, 32) 36864 conv4_block24_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block24_concat (Concatena (None, 4, 4, 1024) 0 conv4_block23_concat[0][0] \n conv4_block24_2_conv[0][0] \n__________________________________________________________________________________________________\npool4_bn (BatchNormalization) (None, 4, 4, 1024) 4096 conv4_block24_concat[0][0] \n__________________________________________________________________________________________________\npool4_relu (Activation) (None, 4, 4, 1024) 0 pool4_bn[0][0] \n__________________________________________________________________________________________________\npool4_conv (Conv2D) (None, 4, 4, 512) 524288 pool4_relu[0][0] \n__________________________________________________________________________________________________\npool4_pool (AveragePooling2D) (None, 2, 2, 512) 0 pool4_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_bn (BatchNormali (None, 2, 2, 512) 2048 pool4_pool[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_relu (Activation (None, 2, 2, 512) 0 conv5_block1_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_conv (Conv2D) (None, 2, 2, 128) 65536 conv5_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_relu (Activation (None, 2, 2, 128) 0 conv5_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_concat (Concatenat (None, 2, 2, 544) 0 pool4_pool[0][0] \n conv5_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_0_bn (BatchNormali (None, 2, 2, 544) 2176 conv5_block1_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block2_0_relu (Activation (None, 2, 2, 544) 0 conv5_block2_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_conv 
(Conv2D) (None, 2, 2, 128) 69632 conv5_block2_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_relu (Activation (None, 2, 2, 128) 0 conv5_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block2_concat (Concatenat (None, 2, 2, 576) 0 conv5_block1_concat[0][0] \n conv5_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_0_bn (BatchNormali (None, 2, 2, 576) 2304 conv5_block2_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block3_0_relu (Activation (None, 2, 2, 576) 0 conv5_block3_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_conv (Conv2D) (None, 2, 2, 128) 73728 conv5_block3_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_relu (Activation (None, 2, 2, 128) 0 conv5_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block3_concat (Concatenat (None, 2, 2, 608) 0 conv5_block2_concat[0][0] \n conv5_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block4_0_bn (BatchNormali (None, 2, 2, 608) 2432 conv5_block3_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block4_0_relu (Activation (None, 2, 2, 608) 0 conv5_block4_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block4_1_conv (Conv2D) (None, 2, 2, 128) 77824 conv5_block4_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block4_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block4_1_relu (Activation (None, 2, 2, 128) 0 conv5_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block4_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block4_concat (Concatenat (None, 2, 2, 640) 0 conv5_block3_concat[0][0] \n conv5_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block5_0_bn (BatchNormali (None, 2, 2, 640) 2560 
conv5_block4_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block5_0_relu (Activation (None, 2, 2, 640) 0 conv5_block5_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block5_1_conv (Conv2D) (None, 2, 2, 128) 81920 conv5_block5_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block5_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block5_1_relu (Activation (None, 2, 2, 128) 0 conv5_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block5_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block5_concat (Concatenat (None, 2, 2, 672) 0 conv5_block4_concat[0][0] \n conv5_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block6_0_bn (BatchNormali (None, 2, 2, 672) 2688 conv5_block5_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block6_0_relu (Activation (None, 2, 2, 672) 0 conv5_block6_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block6_1_conv (Conv2D) (None, 2, 2, 128) 86016 conv5_block6_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block6_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block6_1_relu (Activation (None, 2, 2, 128) 0 conv5_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block6_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block6_concat (Concatenat (None, 2, 2, 704) 0 conv5_block5_concat[0][0] \n conv5_block6_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block7_0_bn (BatchNormali (None, 2, 2, 704) 2816 conv5_block6_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block7_0_relu (Activation (None, 2, 2, 704) 0 conv5_block7_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block7_1_conv (Conv2D) (None, 2, 2, 128) 90112 conv5_block7_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block7_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block7_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block7_1_relu (Activation (None, 2, 2, 128) 0 conv5_block7_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block7_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block7_1_relu[0][0] 
\n__________________________________________________________________________________________________\nconv5_block7_concat (Concatenat (None, 2, 2, 736) 0 conv5_block6_concat[0][0] \n conv5_block7_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block8_0_bn (BatchNormali (None, 2, 2, 736) 2944 conv5_block7_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block8_0_relu (Activation (None, 2, 2, 736) 0 conv5_block8_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block8_1_conv (Conv2D) (None, 2, 2, 128) 94208 conv5_block8_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block8_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block8_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block8_1_relu (Activation (None, 2, 2, 128) 0 conv5_block8_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block8_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block8_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block8_concat (Concatenat (None, 2, 2, 768) 0 conv5_block7_concat[0][0] \n conv5_block8_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block9_0_bn (BatchNormali (None, 2, 2, 768) 3072 conv5_block8_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block9_0_relu (Activation (None, 2, 2, 768) 0 conv5_block9_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block9_1_conv (Conv2D) (None, 2, 2, 128) 98304 conv5_block9_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block9_1_bn (BatchNormali (None, 2, 2, 128) 512 conv5_block9_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block9_1_relu (Activation (None, 2, 2, 128) 0 conv5_block9_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block9_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block9_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block9_concat (Concatenat (None, 2, 2, 800) 0 conv5_block8_concat[0][0] \n conv5_block9_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block10_0_bn (BatchNormal (None, 2, 2, 800) 3200 conv5_block9_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block10_0_relu (Activatio (None, 2, 2, 800) 0 conv5_block10_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block10_1_conv (Conv2D) (None, 2, 2, 128) 102400 conv5_block10_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block10_1_bn (BatchNormal (None, 2, 2, 128) 512 conv5_block10_1_conv[0][0] 
\n__________________________________________________________________________________________________\nconv5_block10_1_relu (Activatio (None, 2, 2, 128) 0 conv5_block10_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block10_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block10_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block10_concat (Concatena (None, 2, 2, 832) 0 conv5_block9_concat[0][0] \n conv5_block10_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block11_0_bn (BatchNormal (None, 2, 2, 832) 3328 conv5_block10_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block11_0_relu (Activatio (None, 2, 2, 832) 0 conv5_block11_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block11_1_conv (Conv2D) (None, 2, 2, 128) 106496 conv5_block11_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block11_1_bn (BatchNormal (None, 2, 2, 128) 512 conv5_block11_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block11_1_relu (Activatio (None, 2, 2, 128) 0 conv5_block11_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block11_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block11_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block11_concat (Concatena (None, 2, 2, 864) 0 conv5_block10_concat[0][0] \n conv5_block11_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block12_0_bn (BatchNormal (None, 2, 2, 864) 3456 conv5_block11_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block12_0_relu (Activatio (None, 2, 2, 864) 0 conv5_block12_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block12_1_conv (Conv2D) (None, 2, 2, 128) 110592 conv5_block12_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block12_1_bn (BatchNormal (None, 2, 2, 128) 512 conv5_block12_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block12_1_relu (Activatio (None, 2, 2, 128) 0 conv5_block12_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block12_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block12_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block12_concat (Concatena (None, 2, 2, 896) 0 conv5_block11_concat[0][0] \n conv5_block12_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block13_0_bn (BatchNormal (None, 2, 2, 896) 3584 conv5_block12_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block13_0_relu (Activatio (None, 2, 2, 896) 0 conv5_block13_0_bn[0][0] 
\n__________________________________________________________________________________________________\nconv5_block13_1_conv (Conv2D) (None, 2, 2, 128) 114688 conv5_block13_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block13_1_bn (BatchNormal (None, 2, 2, 128) 512 conv5_block13_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block13_1_relu (Activatio (None, 2, 2, 128) 0 conv5_block13_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block13_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block13_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block13_concat (Concatena (None, 2, 2, 928) 0 conv5_block12_concat[0][0] \n conv5_block13_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block14_0_bn (BatchNormal (None, 2, 2, 928) 3712 conv5_block13_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block14_0_relu (Activatio (None, 2, 2, 928) 0 conv5_block14_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block14_1_conv (Conv2D) (None, 2, 2, 128) 118784 conv5_block14_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block14_1_bn (BatchNormal (None, 2, 2, 128) 512 conv5_block14_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block14_1_relu (Activatio (None, 2, 2, 128) 0 conv5_block14_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block14_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block14_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block14_concat (Concatena (None, 2, 2, 960) 0 conv5_block13_concat[0][0] \n conv5_block14_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block15_0_bn (BatchNormal (None, 2, 2, 960) 3840 conv5_block14_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block15_0_relu (Activatio (None, 2, 2, 960) 0 conv5_block15_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block15_1_conv (Conv2D) (None, 2, 2, 128) 122880 conv5_block15_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block15_1_bn (BatchNormal (None, 2, 2, 128) 512 conv5_block15_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block15_1_relu (Activatio (None, 2, 2, 128) 0 conv5_block15_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block15_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block15_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block15_concat (Concatena (None, 2, 2, 992) 0 conv5_block14_concat[0][0] \n conv5_block15_2_conv[0][0] 
\n__________________________________________________________________________________________________\nconv5_block16_0_bn (BatchNormal (None, 2, 2, 992) 3968 conv5_block15_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block16_0_relu (Activatio (None, 2, 2, 992) 0 conv5_block16_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block16_1_conv (Conv2D) (None, 2, 2, 128) 126976 conv5_block16_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block16_1_bn (BatchNormal (None, 2, 2, 128) 512 conv5_block16_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block16_1_relu (Activatio (None, 2, 2, 128) 0 conv5_block16_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block16_2_conv (Conv2D) (None, 2, 2, 32) 36864 conv5_block16_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block16_concat (Concatena (None, 2, 2, 1024) 0 conv5_block15_concat[0][0] \n conv5_block16_2_conv[0][0] \n__________________________________________________________________________________________________\nbn (BatchNormalization) (None, 2, 2, 1024) 4096 conv5_block16_concat[0][0] \n__________________________________________________________________________________________________\nrelu (Activation) (None, 2, 2, 1024) 0 bn[0][0] \n__________________________________________________________________________________________________\navg_pool (GlobalAveragePooling2 (None, 1024) 0 relu[0][0] \n__________________________________________________________________________________________________\nfc1000 (Dense) (None, 2) 2050 avg_pool[0][0] \n==================================================================================================\nTotal params: 7,039,554\nTrainable params: 6,955,906\nNon-trainable params: 83,648\n__________________________________________________________________________________________________\n364\nFound 93910 images belonging to 2 classes.\nFound 22196 images belonging to 2 classes.\nEpoch 1/100\n2020-04-28 15:11:49.778605: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2020-04-28 15:11:51.378045: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n200/200 [==============================] - 126s 628ms/step - loss: 0.4849 - accuracy: 0.7575 - val_loss: 1.3138 - val_accuracy: 0.3830\nEpoch 2/100\n200/200 [==============================] - 72s 362ms/step - loss: 0.2305 - accuracy: 0.9044 - val_loss: 12.2555 - val_accuracy: 0.3985\nEpoch 3/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.1607 - accuracy: 0.9344 - val_loss: 0.0109 - val_accuracy: 0.7769\nEpoch 4/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.1226 - accuracy: 0.9485 - val_loss: 9.7873 - val_accuracy: 0.3878\nEpoch 5/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.0991 - accuracy: 0.9614 - val_loss: 1.9718e-04 - val_accuracy: 0.7319\nEpoch 6/100\n200/200 [==============================] - 73s 364ms/step - loss: 0.0823 - accuracy: 0.9687 - val_loss: 0.0535 - val_accuracy: 0.8164\nEpoch 7/100\n200/200 [==============================] - 
72s 359ms/step - loss: 0.0739 - accuracy: 0.9718 - val_loss: 0.0124 - val_accuracy: 0.9352\nEpoch 8/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0640 - accuracy: 0.9749 - val_loss: 20.4140 - val_accuracy: 0.3821\nEpoch 9/100\n200/200 [==============================] - 72s 359ms/step - loss: 0.0611 - accuracy: 0.9776 - val_loss: 5.2624 - val_accuracy: 0.3981\nEpoch 10/100\n200/200 [==============================] - 72s 362ms/step - loss: 0.0555 - accuracy: 0.9795 - val_loss: 0.0766 - val_accuracy: 0.8237\nEpoch 11/100\n200/200 [==============================] - 72s 360ms/step - loss: 0.0455 - accuracy: 0.9833 - val_loss: 4.4716e-04 - val_accuracy: 0.8651\nEpoch 12/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0367 - accuracy: 0.9860 - val_loss: 1.2863 - val_accuracy: 0.9033\nEpoch 13/100\n200/200 [==============================] - 72s 359ms/step - loss: 0.0384 - accuracy: 0.9855 - val_loss: 15.2673 - val_accuracy: 0.3820\nEpoch 14/100\n200/200 [==============================] - 72s 358ms/step - loss: 0.0359 - accuracy: 0.9855 - val_loss: 0.0777 - val_accuracy: 0.8973\nEpoch 15/100\n200/200 [==============================] - 71s 356ms/step - loss: 0.0357 - accuracy: 0.9870 - val_loss: 7.7112 - val_accuracy: 0.4158\nEpoch 16/100\n200/200 [==============================] - 71s 354ms/step - loss: 0.0299 - accuracy: 0.9888 - val_loss: 10.9476 - val_accuracy: 0.3840\nEpoch 17/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0350 - accuracy: 0.9867 - val_loss: 8.4214 - val_accuracy: 0.3913\nEpoch 18/100\n200/200 [==============================] - 72s 360ms/step - loss: 0.0285 - accuracy: 0.9893 - val_loss: 0.1081 - val_accuracy: 0.9094\nEpoch 19/100\n200/200 [==============================] - 72s 358ms/step - loss: 0.0257 - accuracy: 0.9911 - val_loss: 0.0065 - val_accuracy: 0.9310\nEpoch 20/100\n200/200 [==============================] - 71s 357ms/step - loss: 0.0281 - accuracy: 0.9901 - val_loss: 7.7678e-04 - val_accuracy: 0.9088\nEpoch 21/100\n200/200 [==============================] - 71s 357ms/step - loss: 0.0279 - accuracy: 0.9892 - val_loss: 0.0079 - val_accuracy: 0.9464\nEpoch 22/100\n200/200 [==============================] - 70s 352ms/step - loss: 0.0236 - accuracy: 0.9912 - val_loss: 0.0521 - val_accuracy: 0.9786\nEpoch 23/100\n200/200 [==============================] - 71s 355ms/step - loss: 0.0187 - accuracy: 0.9929 - val_loss: 1.7502e-04 - val_accuracy: 0.9459\nEpoch 24/100\n200/200 [==============================] - 70s 350ms/step - loss: 0.0177 - accuracy: 0.9941 - val_loss: 11.8579 - val_accuracy: 0.3952\nEpoch 25/100\n200/200 [==============================] - 71s 356ms/step - loss: 0.0231 - accuracy: 0.9923 - val_loss: 8.1375 - val_accuracy: 0.4250\nEpoch 26/100\n200/200 [==============================] - 72s 358ms/step - loss: 0.0226 - accuracy: 0.9917 - val_loss: 13.2875 - val_accuracy: 0.3832\nEpoch 27/100\n200/200 [==============================] - 72s 358ms/step - loss: 0.0164 - accuracy: 0.9945 - val_loss: 6.9313e-04 - val_accuracy: 0.8765\nEpoch 28/100\n200/200 [==============================] - 71s 355ms/step - loss: 0.0177 - accuracy: 0.9937 - val_loss: 0.1757 - val_accuracy: 0.9059\nEpoch 29/100\n200/200 [==============================] - 71s 357ms/step - loss: 0.0183 - accuracy: 0.9932 - val_loss: 0.9003 - val_accuracy: 0.7446\nEpoch 30/100\n200/200 [==============================] - 71s 356ms/step - loss: 0.0202 - accuracy: 0.9931 - val_loss: 0.0158 - val_accuracy: 0.9789\nEpoch 
31/100\n200/200 [==============================] - 72s 359ms/step - loss: 0.0173 - accuracy: 0.9940 - val_loss: 5.4435 - val_accuracy: 0.4691\nEpoch 32/100\n200/200 [==============================] - 72s 359ms/step - loss: 0.0214 - accuracy: 0.9927 - val_loss: 8.8752 - val_accuracy: 0.3866\nEpoch 33/100\n200/200 [==============================] - 71s 354ms/step - loss: 0.0160 - accuracy: 0.9944 - val_loss: 0.5666 - val_accuracy: 0.9378\nEpoch 34/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.0141 - accuracy: 0.9952 - val_loss: 2.1525 - val_accuracy: 0.8641\nEpoch 35/100\n200/200 [==============================] - 71s 357ms/step - loss: 0.0175 - accuracy: 0.9939 - val_loss: 5.8669 - val_accuracy: 0.4128\nEpoch 36/100\n200/200 [==============================] - 71s 356ms/step - loss: 0.0143 - accuracy: 0.9941 - val_loss: 2.4490e-04 - val_accuracy: 0.8290\nEpoch 37/100\n200/200 [==============================] - 71s 355ms/step - loss: 0.0144 - accuracy: 0.9947 - val_loss: 0.4840 - val_accuracy: 0.9408\nEpoch 38/100\n200/200 [==============================] - 71s 357ms/step - loss: 0.0155 - accuracy: 0.9949 - val_loss: 3.8644 - val_accuracy: 0.4540\nEpoch 39/100\n200/200 [==============================] - 71s 356ms/step - loss: 0.0118 - accuracy: 0.9959 - val_loss: 1.6547 - val_accuracy: 0.4733\nEpoch 40/100\n200/200 [==============================] - 72s 360ms/step - loss: 0.0133 - accuracy: 0.9950 - val_loss: 3.3367e-04 - val_accuracy: 0.9785\nEpoch 41/100\n200/200 [==============================] - 71s 355ms/step - loss: 0.0104 - accuracy: 0.9966 - val_loss: 2.2865 - val_accuracy: 0.9016\nEpoch 42/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0090 - accuracy: 0.9973 - val_loss: 0.0866 - val_accuracy: 0.9732\nEpoch 43/100\n200/200 [==============================] - 71s 355ms/step - loss: 0.0114 - accuracy: 0.9964 - val_loss: 2.2396e-04 - val_accuracy: 0.9736\nEpoch 44/100\n200/200 [==============================] - 71s 354ms/step - loss: 0.0142 - accuracy: 0.9955 - val_loss: 0.0027 - val_accuracy: 0.9845\nEpoch 45/100\n200/200 [==============================] - 71s 353ms/step - loss: 0.0108 - accuracy: 0.9962 - val_loss: 0.0031 - val_accuracy: 0.8976\nEpoch 46/100\n200/200 [==============================] - 71s 355ms/step - loss: 0.0108 - accuracy: 0.9966 - val_loss: 6.5889e-04 - val_accuracy: 0.9834\nEpoch 47/100\n200/200 [==============================] - 71s 356ms/step - loss: 0.0107 - accuracy: 0.9966 - val_loss: 0.0072 - val_accuracy: 0.9851\nEpoch 48/100\n200/200 [==============================] - 71s 355ms/step - loss: 0.0123 - accuracy: 0.9960 - val_loss: 2.8005e-04 - val_accuracy: 0.9801\nEpoch 49/100\n200/200 [==============================] - 71s 354ms/step - loss: 0.0094 - accuracy: 0.9972 - val_loss: 1.2735e-04 - val_accuracy: 0.8990\nEpoch 50/100\n200/200 [==============================] - 72s 359ms/step - loss: 0.0073 - accuracy: 0.9974 - val_loss: 4.4054e-05 - val_accuracy: 0.9673\nEpoch 51/100\n200/200 [==============================] - 72s 360ms/step - loss: 0.0145 - accuracy: 0.9956 - val_loss: 6.9784 - val_accuracy: 0.4240\nEpoch 52/100\n200/200 [==============================] - 71s 357ms/step - loss: 0.0076 - accuracy: 0.9977 - val_loss: 11.2825 - val_accuracy: 0.3951\nEpoch 53/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0107 - accuracy: 0.9965 - val_loss: 0.0011 - val_accuracy: 0.9191\nEpoch 54/100\n200/200 [==============================] - 71s 357ms/step - loss: 0.0129 - accuracy: 
0.9950 - val_loss: 2.1235 - val_accuracy: 0.4831\nEpoch 55/100\n200/200 [==============================] - 72s 362ms/step - loss: 0.0108 - accuracy: 0.9966 - val_loss: 7.6601e-05 - val_accuracy: 0.9467\nEpoch 56/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0073 - accuracy: 0.9979 - val_loss: 10.5227 - val_accuracy: 0.4058\nEpoch 57/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.0076 - accuracy: 0.9976 - val_loss: 0.0100 - val_accuracy: 0.9822\nEpoch 58/100\n200/200 [==============================] - 72s 359ms/step - loss: 0.0083 - accuracy: 0.9971 - val_loss: 0.0101 - val_accuracy: 0.8609\nEpoch 59/100\n200/200 [==============================] - 72s 360ms/step - loss: 0.0088 - accuracy: 0.9971 - val_loss: 0.0736 - val_accuracy: 0.9806\nEpoch 60/100\n200/200 [==============================] - 73s 367ms/step - loss: 0.0084 - accuracy: 0.9969 - val_loss: 0.0044 - val_accuracy: 0.9918\nEpoch 61/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0088 - accuracy: 0.9971 - val_loss: 0.0168 - val_accuracy: 0.9041\nEpoch 62/100\n200/200 [==============================] - 72s 359ms/step - loss: 0.0080 - accuracy: 0.9971 - val_loss: 0.0022 - val_accuracy: 0.9943\nEpoch 63/100\n200/200 [==============================] - 72s 362ms/step - loss: 0.0061 - accuracy: 0.9979 - val_loss: 38.4191 - val_accuracy: 0.3807\nEpoch 64/100\n200/200 [==============================] - 72s 360ms/step - loss: 0.0106 - accuracy: 0.9966 - val_loss: 0.0271 - val_accuracy: 0.9837\nEpoch 65/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.0057 - accuracy: 0.9980 - val_loss: 0.0014 - val_accuracy: 0.9887\nEpoch 66/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.0052 - accuracy: 0.9980 - val_loss: 0.0510 - val_accuracy: 0.9810\nEpoch 67/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0038 - accuracy: 0.9989 - val_loss: 0.1022 - val_accuracy: 0.9843\nEpoch 68/100\n200/200 [==============================] - 72s 362ms/step - loss: 0.0064 - accuracy: 0.9976 - val_loss: 0.0658 - val_accuracy: 0.9566\nEpoch 69/100\n200/200 [==============================] - 72s 359ms/step - loss: 0.0106 - accuracy: 0.9964 - val_loss: 0.0059 - val_accuracy: 0.9594\nEpoch 70/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0096 - accuracy: 0.9972 - val_loss: 2.9283e-04 - val_accuracy: 0.9518\nEpoch 71/100\n200/200 [==============================] - 72s 360ms/step - loss: 0.0066 - accuracy: 0.9977 - val_loss: 3.1090e-04 - val_accuracy: 0.9109\nEpoch 72/100\n200/200 [==============================] - 72s 362ms/step - loss: 0.0114 - accuracy: 0.9958 - val_loss: 13.9053 - val_accuracy: 0.4029\nEpoch 73/100\n200/200 [==============================] - 73s 364ms/step - loss: 0.0061 - accuracy: 0.9981 - val_loss: 3.7403 - val_accuracy: 0.5599\nEpoch 74/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.0036 - accuracy: 0.9990 - val_loss: 0.0838 - val_accuracy: 0.9963\nEpoch 75/100\n200/200 [==============================] - 74s 370ms/step - loss: 0.0045 - accuracy: 0.9988 - val_loss: 1.2767e-04 - val_accuracy: 0.8443\nEpoch 76/100\n200/200 [==============================] - 73s 366ms/step - loss: 0.0061 - accuracy: 0.9981 - val_loss: 2.6347 - val_accuracy: 0.6577\nEpoch 77/100\n200/200 [==============================] - 73s 365ms/step - loss: 0.0059 - accuracy: 0.9977 - val_loss: 0.0097 - val_accuracy: 0.9849\nEpoch 78/100\n200/200 [==============================] - 
73s 366ms/step - loss: 0.0035 - accuracy: 0.9989 - val_loss: 0.5138 - val_accuracy: 0.9603\nEpoch 79/100\n200/200 [==============================] - 73s 367ms/step - loss: 0.0071 - accuracy: 0.9980 - val_loss: 0.4417 - val_accuracy: 0.9536\nEpoch 80/100\n200/200 [==============================] - 73s 366ms/step - loss: 0.0048 - accuracy: 0.9986 - val_loss: 0.1164 - val_accuracy: 0.9964\nEpoch 81/100\n200/200 [==============================] - 73s 367ms/step - loss: 0.0064 - accuracy: 0.9979 - val_loss: 0.0555 - val_accuracy: 0.9793\nEpoch 82/100\n200/200 [==============================] - 72s 361ms/step - loss: 0.0054 - accuracy: 0.9982 - val_loss: 0.1932 - val_accuracy: 0.9643\nEpoch 83/100\n200/200 [==============================] - 73s 363ms/step - loss: 0.0072 - accuracy: 0.9973 - val_loss: 13.3264 - val_accuracy: 0.3883\nEpoch 84/100\n200/200 [==============================] - 73s 364ms/step - loss: 0.0039 - accuracy: 0.9987 - val_loss: 0.0328 - val_accuracy: 0.9909\nEpoch 85/100\n200/200 [==============================] - 72s 362ms/step - loss: 0.0018 - accuracy: 0.9995 - val_loss: 0.1830 - val_accuracy: 0.9749\nEpoch 86/100\n200/200 [==============================] - 73s 367ms/step - loss: 0.0049 - accuracy: 0.9987 - val_loss: 0.2794 - val_accuracy: 0.9428\nEpoch 87/100\n200/200 [==============================] - 74s 369ms/step - loss: 0.0076 - accuracy: 0.9976 - val_loss: 1.3054e-04 - val_accuracy: 0.8456\nEpoch 88/100\n200/200 [==============================] - 74s 370ms/step - loss: 0.0062 - accuracy: 0.9981 - val_loss: 0.0024 - val_accuracy: 0.9934\nEpoch 89/100\n200/200 [==============================] - 74s 372ms/step - loss: 0.0051 - accuracy: 0.9985 - val_loss: 7.3510e-05 - val_accuracy: 0.9123\nEpoch 90/100\n200/200 [==============================] - 75s 373ms/step - loss: 0.0052 - accuracy: 0.9982 - val_loss: 0.0021 - val_accuracy: 0.9800\nEpoch 91/100\n200/200 [==============================] - 74s 369ms/step - loss: 0.0040 - accuracy: 0.9986 - val_loss: 4.2478 - val_accuracy: 0.4271\nEpoch 92/100\n200/200 [==============================] - 73s 364ms/step - loss: 0.0057 - accuracy: 0.9977 - val_loss: 25.3290 - val_accuracy: 0.3825\nEpoch 93/100\n200/200 [==============================] - 73s 365ms/step - loss: 0.0054 - accuracy: 0.9980 - val_loss: 1.1851 - val_accuracy: 0.7456\nEpoch 94/100\n200/200 [==============================] - 73s 364ms/step - loss: 0.0034 - accuracy: 0.9989 - val_loss: 4.6332e-05 - val_accuracy: 0.9908\nEpoch 95/100\n200/200 [==============================] - 73s 367ms/step - loss: 0.0046 - accuracy: 0.9987 - val_loss: 14.0928 - val_accuracy: 0.3869\nEpoch 96/100\n200/200 [==============================] - 74s 371ms/step - loss: 0.0039 - accuracy: 0.9986 - val_loss: 0.0046 - val_accuracy: 0.9896\nEpoch 97/100\n200/200 [==============================] - 74s 368ms/step - loss: 0.0069 - accuracy: 0.9979 - val_loss: 0.0542 - val_accuracy: 0.9891\nEpoch 98/100\n200/200 [==============================] - 73s 367ms/step - loss: 0.0094 - accuracy: 0.9973 - val_loss: 0.0087 - val_accuracy: 0.9847\nEpoch 99/100\n200/200 [==============================] - 74s 371ms/step - loss: 0.0053 - accuracy: 0.9986 - val_loss: 0.0022 - val_accuracy: 0.9945\nEpoch 100/100\n200/200 [==============================] - 74s 369ms/step - loss: 0.0043 - accuracy: 0.9984 - val_loss: 2.1234e-05 - val_accuracy: 0.9786\n" ], [ "#!python3 '/content/gdrive/My Drive/Deep Learning Groupwork/Project/Code - Kabir/Code/FDFtNet/network/DenseNet.py'", "2020-04-27 12:21:31.437180: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\nUsing TensorFlow backend.\n" ], [ "#!python3 pretrain.py -network='denseNet' -train_dir='/content/Data-Celeb/Train' -val_dir='/content/Data-Celeb/Validation' -batch_size=128 -reduce_patience=100 -step=200 -epochs=100", "_____no_output_____" ], [ "!python3 test.py -test_dir='/content/Data-Celeb/Test' -model='/content/gdrive/My Drive/Deep Learning Groupwork/Project/Code - Kabir/Code/FDFtNet/models/weights-densenet-celeb'", "2020-04-28 17:13:25.498786: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\nUsing TensorFlow backend.\nFound 26098 images belonging to 2 classes.\n2020-04-28 17:13:35.086543: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2020-04-28 17:13:35.103121: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 17:13:35.104000: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:00:04.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0\ncoreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s\n2020-04-28 17:13:35.104049: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2020-04-28 17:13:35.106776: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2020-04-28 17:13:35.120347: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2020-04-28 17:13:35.120689: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2020-04-28 17:13:35.122325: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2020-04-28 17:13:35.123433: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2020-04-28 17:13:35.127640: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2020-04-28 17:13:35.127766: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 17:13:35.128688: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 17:13:35.129420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2020-04-28 17:13:35.135088: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2300000000 Hz\n2020-04-28 17:13:35.135315: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1674f40 initialized for platform Host (this does not guarantee that XLA will be used). 
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 17:13:35.784843: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n2020-04-28 17:13:35.784906: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14974 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)\n2020-04-28 17:14:06.269803: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2020-04-28 17:14:06.545163: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n204/204 [==============================] - 13s 65ms/step\n[[1.000 0.000]\n [1.000 0.000]\n [1.000 0.000]\n ...\n [0.000 1.000]\n [0.000 1.000]\n [0.000 1.000]]\n100% 204/204 [00:30<00:00, 6.71it/s]\n[0 0 0 ... 1 1 1]\n[0 0 0 ... 1 1 1]\n precision recall f1-score support\n\n 0 1.00 0.96 0.98 13198\n 1 0.96 1.00 0.98 12900\n\n accuracy 0.98 26098\n macro avg 0.98 0.98 0.98 26098\nweighted avg 0.98 0.98 0.98 26098\n\n[[12640 558]\n [ 14 12886]]\nAUROC: 0.997590\n0.9946304559709602\ntest_acc: 0.9780826116943827\n" ], [ "!python3 fdft.py -model='/content/gdrive/My Drive/Deep Learning Groupwork/Project/Code - Kabir/Code/FDFtNet/models/weights-densenet-celeb' -ft_dir='/content/Data-Celeb/FineTune' -val_dir='/content/Data-Celeb/Validation' -network='denseNet' -test_dir='/content/Data-Celeb/Test'", "2020-04-28 17:14:58.120614: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\nUsing TensorFlow backend.\nFound 2003 images belonging to 2 classes.\nFound 22196 images belonging to 2 classes.\n2020-04-28 17:15:03.744185: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2020-04-28 17:15:03.761349: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 17:15:03.762258: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:00:04.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0\ncoreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s\n2020-04-28 17:15:03.762335: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2020-04-28 17:15:03.764073: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2020-04-28 17:15:03.765624: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2020-04-28 17:15:03.765923: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2020-04-28 17:15:03.768193: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2020-04-28 17:15:03.769473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic 
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 17:15:03.885649: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2020-04-28 17:15:03.885711: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2020-04-28 17:15:04.410889: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:\n2020-04-28 17:15:04.411018: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 \n2020-04-28 17:15:04.411052: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N \n2020-04-28 17:15:04.411301: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 17:15:04.412270: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-04-28 17:15:04.413178: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n2020-04-28 17:15:04.413243: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14974 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)\n<tensorflow.python.keras.engine.training.Model object at 0x7f0c9a8d9080>\n(1, 1, 32, 4)\n(1, 1, 64, 8)\n(1, 1, 96, 12)\nModel: \"model_3\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 64, 64, 3) 0 \n__________________________________________________________________________________________________\ndensenet121 (Model) (None, 2, 2, 1024) 7037504 input_1[0][0] \n__________________________________________________________________________________________________\nmodel_1 (Model) (None, 576) 1914368 densenet121[1][0] \n__________________________________________________________________________________________________\nmodel_2 (Model) (None, 576) 85710 input_1[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 576) 0 model_1[1][0] \n model_2[1][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 2) 1154 add_4[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 2) 0 dense_1[0][0] \n==================================================================================================\nTotal params: 9,038,736\nTrainable params: 8,942,160\nNon-trainable params: 96,576\n__________________________________________________________________________________________________\nEpoch 1/100\n2020-04-28 17:17:22.150437: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2020-04-28 
17:17:22.562811: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n200/200 [==============================] - 171s 854ms/step - loss: 0.1555 - acc: 0.9561 - val_loss: 0.0388 - val_acc: 0.9084\nEpoch 2/100\n200/200 [==============================] - 107s 533ms/step - loss: 0.0894 - acc: 0.9794 - val_loss: 0.0389 - val_acc: 0.9859\nEpoch 3/100\n200/200 [==============================] - 107s 537ms/step - loss: 0.0682 - acc: 0.9866 - val_loss: 0.2739 - val_acc: 0.8334\nEpoch 4/100\n200/200 [==============================] - 109s 547ms/step - loss: 0.0605 - acc: 0.9881 - val_loss: 0.0258 - val_acc: 0.9465\nEpoch 5/100\n200/200 [==============================] - 108s 542ms/step - loss: 0.0499 - acc: 0.9911 - val_loss: 0.0519 - val_acc: 0.9577\nEpoch 6/100\n200/200 [==============================] - 108s 541ms/step - loss: 0.0461 - acc: 0.9911 - val_loss: 1.3842 - val_acc: 0.5444\nEpoch 7/100\n200/200 [==============================] - 107s 534ms/step - loss: 0.0395 - acc: 0.9926 - val_loss: 0.0181 - val_acc: 0.9669\nEpoch 8/100\n200/200 [==============================] - 109s 543ms/step - loss: 0.0364 - acc: 0.9937 - val_loss: 0.0165 - val_acc: 0.9504\nEpoch 9/100\n200/200 [==============================] - 108s 539ms/step - loss: 0.0356 - acc: 0.9928 - val_loss: 0.0175 - val_acc: 0.9740\nEpoch 10/100\n200/200 [==============================] - 109s 544ms/step - loss: 0.0295 - acc: 0.9942 - val_loss: 0.0177 - val_acc: 0.9506\nEpoch 11/100\n200/200 [==============================] - 109s 544ms/step - loss: 0.0270 - acc: 0.9942 - val_loss: 0.0108 - val_acc: 0.9419\nEpoch 12/100\n200/200 [==============================] - 109s 546ms/step - loss: 0.0240 - acc: 0.9951 - val_loss: 0.3311 - val_acc: 0.9538\nFound 26098 images belonging to 2 classes.\n204/204 [==============================] - 23s 111ms/step\n[[1.000 0.000]\n [1.000 0.000]\n [1.000 0.000]\n ...\n [0.004 0.996]\n [0.005 0.995]\n [0.000 1.000]]\n100% 204/204 [00:37<00:00, 5.48it/s]\n[0 0 0 ... 1 1 1]\n[0 0 0 ... 1 1 1]\n precision recall f1-score support\n\n 0 0.98 0.91 0.95 13198\n 1 0.92 0.98 0.95 12900\n\n accuracy 0.95 26098\n macro avg 0.95 0.95 0.95 26098\nweighted avg 0.95 0.95 0.95 26098\n\n[[12073 1125]\n [ 205 12695]]\nAUROC: 0.994260\n0.9511211699294744\ntest_acc: 0.9490382404781975\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e763a459afd264e9046e92ce548da4658ca8ba76
108,033
ipynb
Jupyter Notebook
Code/Archive/1_PrepCorpus-Copy1.ipynb
DannyGsGit/GTC2018Talk
dbce4f7dfb08ffc91b3df1c7e182bdfa93703b9c
[ "MIT" ]
null
null
null
Code/Archive/1_PrepCorpus-Copy1.ipynb
DannyGsGit/GTC2018Talk
dbce4f7dfb08ffc91b3df1c7e182bdfa93703b9c
[ "MIT" ]
null
null
null
Code/Archive/1_PrepCorpus-Copy1.ipynb
DannyGsGit/GTC2018Talk
dbce4f7dfb08ffc91b3df1c7e182bdfa93703b9c
[ "MIT" ]
null
null
null
54.562121
10,956
0.635556
[ [ [ "# Word Vector- Example\nThis notebook sets up a word vectorization workflow on a toy example. Sentences are made up of spelled-out even and odd numbers.", "_____no_output_____" ], [ "## Setup\nImport necesarry libraries and configure run settings:", "_____no_output_____" ] ], [ [ "import os, math, csv, re, itertools\nimport numpy as np\nimport pandas as pd\nfrom collections import Counter\nimport jellyfish as jyfs\nimport datetime, time\nimport pickle\nimport matplotlib.pyplot as plt\nimport inflect\nfrom random import *", "_____no_output_____" ] ], [ [ "## Obtain Corpus\n\nWe will import the CAB item description dataset in this section. The names must be contained within a list type object of format:\n\n['Sentence 1', 'Sentence 2', 'Sentence 3']", "_____no_output_____" ] ], [ [ "ItemDescription = {}\nItemPartNumber = {}\ni = 0\n\nwith open('../Data/Private/CAB_ItemVersion.csv') as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n ItemDescription[i] = row['ItemDescription']\n ItemPartNumber[i] = row['ItemNumber']\n i += 1", "_____no_output_____" ], [ "# Extract list of part descriptions and numbers\nItemDescriptionList = [(v) for k,v in ItemDescription.items()]\nItemPartNumberList = [(v) for k,v in ItemPartNumber.items()]", "_____no_output_____" ] ], [ [ "## Text Processing\nSome light text processing will be performed to remove numbers, punctuation, special characters, whitespace and single-character words.", "_____no_output_____" ] ], [ [ "ItemDescriptionList[0:10]", "_____no_output_____" ] ], [ [ "### Basics\nRemove all non-letters, whitespace and single characters, lower space.", "_____no_output_____" ] ], [ [ "sentences = [re.sub('[^A-Za-z]', ' ', e) for e in ItemDescriptionList] # Remove all non-letter characters\nsentences = [re.sub('\\s+', ' ', e).strip().lower() for e in sentences] # Strip excess whitespace and lower case\nsentences = [' '.join( [w for w in sent.split() if len(w)>1] ) for sent in sentences] # Remove single character \"words\nsentences[0:10]", "_____no_output_____" ] ], [ [ "### Resolve Typos/Abbreviations\nThe next text processing step is to attempt to remediate misspellings and abbreviations with string matching patterns. First step is to generate a list of unique words.", "_____no_output_____" ], [ "First, we need a term frequency list.", "_____no_output_____" ] ], [ [ "wordList = \" \".join(sentences).split()\ncounts = Counter(wordList)", "_____no_output_____" ], [ "# Make a data frame of words, their counts, and original/new index values (not yet modified)\n\n## Convert counter to data frame\nwordcount = pd.DataFrame.from_dict(counts, orient='index').reset_index()\nwordcount = wordcount.rename(columns={'index':'word', 0:'count'})\n## Sort in descending order of count and reset index\nwordcount = wordcount.sort_values('count', ascending=False).reset_index(drop = True)\n## Copy index into original and new index columns\nwordcount['og_index'] = wordcount['new_index'] = wordcount.index.values\nwordcount['new_word'] = wordcount['word']\n\n# Check it out\nplt.hist(wordcount['count'])\nplt.yscale('log', nonposy='clip')\nplt.show()", "_____no_output_____" ] ], [ [ "Now we iterate down the list and examine Jaro Winkler distances as we go. 
If a distance is above our threshold, we update the \"new index\" value to the match.", "_____no_output_____" ] ], [ [ "# Prep word matrices\noriginalWords = wordcount.copy()\ncondensedWords = pd.DataFrame()\nscores = []\n\n# Set distance threshold\ndef distanceSet(currentCount, maxCount, baseline = 0.9):\n # This function sets a logarithmically increasing Jaro-Winkler distance threshold\n # based on the number of occurences of a word in the corpus.\n threshold = baseline + ((math.log(currentCount) / math.log(maxCount)) / (1/(1 - baseline)))\n return threshold\n\n\n# Initialize loop parameters\ncont_iteration = (len(originalWords) >= 1)\n\nwhile cont_iteration == True:\n # Pop top word from originalWords\n originalWords, targetRecord = originalWords.drop(originalWords.head(1).index),originalWords.head(1)\n \n # Score against condensed newWords\n ## Pull out the targetWord\n targetWord = targetRecord.iloc[0]['word']\n targetIndex = targetRecord.iloc[0]['og_index']\n targetCount = targetRecord.iloc[0]['count']\n distanceThreshold = distanceSet(currentCount = targetCount, maxCount = 163000, baseline = 0.9)\n \n ## Score against condensed word list (if it exists) \n if len(condensedWords) > 0:\n scores = [jyfs.jaro_winkler(targetWord, comp) for comp in condensedWords['new_word']]\n \n ## If a score has passed the threshold, update record with the new word\n if max(scores) >= distanceThreshold:\n ## Get the index of the match\n matchPointer = scores.index(max(scores))\n \n ## Create a new targetRecord with the match\n matchWord = condensedWords.iloc[matchPointer]['new_word']\n matchIndex = condensedWords.iloc[matchPointer]['new_index']\n # print(\"Match found: Target-\", targetWord, \" Match-\", matchWord)\n targetRecord = pd.DataFrame({'word': targetWord,\n 'new_word': matchWord,\n 'og_index': int(targetIndex),\n 'new_index': int(matchIndex)},\n index = [0])\n \n # Append to condensed words\n condensedWords = condensedWords.append(targetRecord)\n \n # Reset iteration checker\n cont_iteration = (len(originalWords) >= 1)\n \n # Status update\n if len(condensedWords) % 100 == 0:\n print(\"Completed: \", len(condensedWords))\n print(\"Distance Threshold: \", distanceThreshold)\n print(\"================================================\")", "Completed: 100\nDistance Threshold: 0.9740141148948647\n================================================\nCompleted: 200\nDistance Threshold: 0.9684340036458587\n================================================\nCompleted: 300\nDistance Threshold: 0.9644319793332964\n================================================\nCompleted: 400\nDistance Threshold: 0.960952506350742\n================================================\nCompleted: 500\nDistance Threshold: 0.9583363948286313\n================================================\nCompleted: 600\nDistance Threshold: 0.9562032518407266\n================================================\nCompleted: 700\nDistance Threshold: 0.9540825777685408\n================================================\nCompleted: 800\nDistance Threshold: 0.9526364354654147\n================================================\nCompleted: 900\nDistance Threshold: 0.9506787995940942\n================================================\nCompleted: 1000\nDistance Threshold: 0.9489982943915144\n================================================\nCompleted: 1100\nDistance Threshold: 0.947608467247143\n================================================\nCompleted: 1200\nDistance Threshold: 
0.9464604137583945\n================================================\nCompleted: 1300\nDistance Threshold: 0.9451285083469352\n================================================\nCompleted: 1400\nDistance Threshold: 0.9438069655584598\n================================================\nCompleted: 1500\nDistance Threshold: 0.9428418215128216\n================================================\nCompleted: 1600\nDistance Threshold: 0.9418604193391131\n================================================\nCompleted: 1700\nDistance Threshold: 0.940747797319739\n================================================\nCompleted: 1800\nDistance Threshold: 0.9397507182119753\n================================================\nCompleted: 1900\nDistance Threshold: 0.9388571175671712\n================================================\nCompleted: 2000\nDistance Threshold: 0.9381178093548964\n================================================\nCompleted: 2100\nDistance Threshold: 0.9374006109257811\n================================================\nCompleted: 2200\nDistance Threshold: 0.9366158159247013\n================================================\nCompleted: 2300\nDistance Threshold: 0.9358627098946246\n================================================\nCompleted: 2400\nDistance Threshold: 0.9352797947868099\n================================================\nCompleted: 2500\nDistance Threshold: 0.9345217917316188\n================================================\nCompleted: 2600\nDistance Threshold: 0.9338327805406114\n================================================\nCompleted: 2700\nDistance Threshold: 0.9332373638730733\n================================================\nCompleted: 2800\nDistance Threshold: 0.932596102314132\n================================================\nCompleted: 2900\nDistance Threshold: 0.9319013427351819\n================================================\nCompleted: 3000\nDistance Threshold: 0.9313394025610171\n================================================\nCompleted: 3100\nDistance Threshold: 0.9309425519403223\n================================================\nCompleted: 3200\nDistance Threshold: 0.9303094154800183\n================================================\nCompleted: 3300\nDistance Threshold: 0.9298589118214454\n================================================\nCompleted: 3400\nDistance Threshold: 0.9291339079697524\n================================================\nCompleted: 3500\nDistance Threshold: 0.928612970349996\n================================================\nCompleted: 3600\nDistance Threshold: 0.928339756103015\n================================================\nCompleted: 3700\nDistance Threshold: 0.927764887628363\n================================================\nCompleted: 3800\nDistance Threshold: 0.927461861943526\n================================================\nCompleted: 3900\nDistance Threshold: 0.9268206003845846\n================================================\nCompleted: 4000\nDistance Threshold: 0.9264804597698175\n================================================\nCompleted: 4100\nDistance Threshold: 0.9261258408056345\n================================================\nCompleted: 4200\nDistance Threshold: 0.9257554559181245\n================================================\nCompleted: 4300\nDistance Threshold: 0.9249613040513871\n================================================\nCompleted: 4400\nDistance Threshold: 0.924533913550471\n================================================\nCompleted: 4500\nDistance Threshold: 
0.924533913550471\n================================================\nCompleted: 4600\nDistance Threshold: 0.9240834098918981\n================================================\nCompleted: 4700\nDistance Threshold: 0.9236071495265946\n================================================\nCompleted: 4800\nDistance Threshold: 0.9231020077181896\n================================================\nCompleted: 4900\nDistance Threshold: 0.9225642541734677\n================================================\nCompleted: 5000\nDistance Threshold: 0.9225642541734677\n================================================\nCompleted: 5100\nDistance Threshold: 0.9219893856988155\n================================================\nCompleted: 5200\nDistance Threshold: 0.9213718967322978\n================================================\nCompleted: 5300\nDistance Threshold: 0.9213718967322978\n================================================\nCompleted: 5400\nDistance Threshold: 0.9207049578402702\n================================================\nCompleted: 5500\nDistance Threshold: 0.9207049578402702\n================================================\nCompleted: 5600\nDistance Threshold: 0.919979953988577\n================================================\nCompleted: 5700\nDistance Threshold: 0.919979953988577\n================================================\nCompleted: 5800\nDistance Threshold: 0.9191858021218398\n================================================\nCompleted: 5900\nDistance Threshold: 0.9191858021218398\n================================================\nCompleted: 6000\nDistance Threshold: 0.9191858021218398\n================================================\nCompleted: 6100\nDistance Threshold: 0.9183079079623506\n================================================\nCompleted: 6200\nDistance Threshold: 0.9183079079623506\n================================================\nCompleted: 6300\nDistance Threshold: 0.9183079079623506\n================================================\nCompleted: 6400\nDistance Threshold: 0.9173265057886423\n================================================\nCompleted: 6500\nDistance Threshold: 0.9173265057886423\n================================================\nCompleted: 6600\nDistance Threshold: 0.9173265057886423\n================================================\nCompleted: 6700\nDistance Threshold: 0.9162138837692682\n================================================\nCompleted: 6800\nDistance Threshold: 0.9162138837692682\n================================================\nCompleted: 6900\nDistance Threshold: 0.9162138837692682\n================================================\nCompleted: 7000\nDistance Threshold: 0.9162138837692682\n================================================\nCompleted: 7100\nDistance Threshold: 0.9149294559107227\n================================================\nCompleted: 7200\nDistance Threshold: 0.9149294559107227\n================================================\nCompleted: 7300\nDistance Threshold: 0.9149294559107227\n================================================\nCompleted: 7400\nDistance Threshold: 0.9149294559107227\n================================================\nCompleted: 7500\nDistance Threshold: 0.9149294559107227\n================================================\nCompleted: 7600\nDistance Threshold: 0.9149294559107227\n================================================\nCompleted: 7700\nDistance Threshold: 0.9134103001922923\n================================================\nCompleted: 7800\nDistance Threshold: 
0.9134103001922923\n================================================\n" ] ], [ [ "Now save the results to a CSV and a pickle.", "_____no_output_____" ] ], [ [ "condensedWords.to_csv(\"./tempfiles/condensedWords.csv\")\ncondensedWords_filename = (\"./tempfiles/condensedWords_final.p\")\ncondensedWords.to_pickle(condensedWords_filename)", "_____no_output_____" ], [ "condensedWords = pd.read_pickle(condensedWords_filename)", "_____no_output_____" ] ], [ [ "Next, we need to re-map the part descriptions to use the new words.", "_____no_output_____" ] ], [ [ "# Build the replacement map using original words as indeces\nwordReplacement_map = {}\n\nfor record in range(0, (len(condensedWords))):\n old_word = condensedWords.iloc[record]['word']\n new_word = condensedWords.iloc[record]['new_word']\n wordReplacement_map[old_word] = new_word", "_____no_output_____" ], [ "# Apply mappings to sentences using the above map.\n## Initiate an empty list to catch the new sentences\nnew_sentences = []\n\n## Loop through sentences and replace words\nfor sent in sentences:\n # Tokenize the sentence\n tokenized_sent = sent.split()\n \n # Prep an empty sentence to reconstruct the tokenized sentence with new words\n temp_sent = []\n \n # Rebuild the sentence using the word-new word map\n [temp_sent.append(wordReplacement_map[word]) for word in tokenized_sent]\n # Join the sentence\n temp_sent = \" \".join(temp_sent)\n # Append the new sentence to the new_sentence list\n new_sentences.append(temp_sent)", "_____no_output_____" ] ], [ [ "### Append Base PNs", "_____no_output_____" ] ], [ [ "# String split base part number. PNs are of format C11-1010-23124, we want the first part (C11)\nBasePNList = [x.split(\"-\")[0] for x in ItemPartNumberList]\n\n# Convert numerics to text\nBasePNList = [w.replace('0', 'zero') for w in BasePNList]\nBasePNList = [w.replace('1', 'one') for w in BasePNList]\nBasePNList = [w.replace('2', 'two') for w in BasePNList]\nBasePNList = [w.replace('3', 'three') for w in BasePNList]\nBasePNList = [w.replace('4', 'four') for w in BasePNList]\nBasePNList = [w.replace('5', 'five') for w in BasePNList]\nBasePNList = [w.replace('6', 'six') for w in BasePNList]\nBasePNList = [w.replace('7', 'seven') for w in BasePNList]\nBasePNList = [w.replace('8', 'eight') for w in BasePNList]\nBasePNList = [w.replace('9', 'nine') for w in BasePNList]\nBasePNList[0:10]", "_____no_output_____" ], [ "BasePNList = [re.sub('[^A-Za-z]', ' ', e) for e in BasePNList] # Remove all non-letter characters\nBasePNList = [re.sub('\\s+', ' ', e).strip().lower() for e in BasePNList] # Strip excess whitespace and lower case\nBasePNList[0:10]", "_____no_output_____" ], [ "# Append the base p/n to part description\nsep = [\" \"] * len(BasePNList)\nnew_sentences = [x + y + z for x, y, z in zip(BasePNList, sep, new_sentences)]\nnew_sentences[0:10]", "_____no_output_____" ] ], [ [ "### Drop Infrequent Terms\nFilter out words below an occurence threshold. 
We will apply this filter to our sentences by replacing \"new_word\" in our wordReplacement_map dict with a blank.", "_____no_output_____" ] ], [ [ "# Extract the new words into a list, then count the occurences\nnewWordList = \" \".join(new_sentences).split()\nnewCounts = Counter(newWordList)\n\n## Convert counter to data frame and sort occurence\nnewwordcount = pd.DataFrame.from_dict(newCounts, orient='index').reset_index()\nnewwordcount = newwordcount.rename(columns={'index':'word', 0:'count'})\n## Sort in descending order of count and reset index\nnewwordcount = newwordcount.sort_values('count', ascending=False).reset_index(drop = True)\n## Check it out\nnewwordcount.head()\n\n# Build a word to frequency map\nwordFreq_map = {}\n## Generate new list using word as key and count as value\nfor record in range(0, (len(newwordcount))):\n word = newwordcount.iloc[record]['word']\n count = newwordcount.iloc[record]['count']\n wordFreq_map[word] = count", "_____no_output_____" ], [ "# Plot occurences \nplt.hist(newwordcount['count'])\nplt.yscale('log', nonposy='clip')\nplt.show()", "_____no_output_____" ] ], [ [ "Now that we have a map of word-to-frequency, let's run back through the sentences and drop words below some occurence threshold.", "_____no_output_____" ] ], [ [ "# Apply mappings to sentences using the above map.\n## Initiate an empty list to catch the new sentences\nfinal_sentences = []\n## Set a threshold for occurence\noccurence_threshold = 30\n\n## Loop through sentences and replace words\nfor sent in new_sentences:\n # Tokenize the sentence\n tokenized_sent = sent.split()\n \n # Prep an empty sentence to reconstruct the tokenized sentence with only frequent terms\n temp_sent = []\n \n # Loop through each word in the sentence\n for word in tokenized_sent:\n # Get the word frequency\n word_freq = wordFreq_map[word]\n # Add word back into sentence only if it passes the threshold for freq.\n if word_freq >= occurence_threshold:\n temp_sent.append(word)\n # else:\n # print(\"Dropping word:\", word, \"with frequency \", word_freq)\n \n # Make sure that we didn't remove all words from a sentence. 
If we did, we will use the original sentence\n if len(temp_sent) == 0:\n # print(\"Drop sentence\")\n continue\n # temp_sent = tokenized_sent\n # print(\"Word drop override\")\n\n # Collapse tokenized sentence\n temp_sent = \" \".join(temp_sent)\n # Join to final sentences list\n final_sentences.append(temp_sent)", "_____no_output_____" ], [ "# Extract the new words into a list, then count the occurences\nfinalWordList = \" \".join(final_sentences).split()\nfinalCounts = Counter(finalWordList)\n\n## Convert counter to data frame and sort occurence\nnewwordcount = pd.DataFrame.from_dict(finalCounts, orient='index').reset_index()\nnewwordcount = newwordcount.rename(columns={'index':'word', 0:'count'})\n## Sort in descending order of count and reset index\nnewwordcount = newwordcount.sort_values('count', ascending=False).reset_index(drop = True)\n## Check it out\nnewwordcount.head()\n\n# Build a word to frequency map\nwordFreq_map = {}\n## Generate new list using word as key and count as value\nfor record in range(0, (len(newwordcount))):\n word = newwordcount.iloc[record]['word']\n count = newwordcount.iloc[record]['count']\n wordFreq_map[word] = count\n \nplt.hist(newwordcount['count'])\nplt.yscale('log', nonposy='clip')\nplt.show()", "_____no_output_____" ], [ "len(newwordcount)", "_____no_output_____" ], [ "newwordcount[4340:]", "_____no_output_____" ], [ "# Inspect most common words\nprint(newwordcount[0:5])\n\n# Show cumulative count plot\nnewwordcount['cumcount'] = np.cumsum(newwordcount['count'])\nplt.plot(newwordcount['cumcount'])", " word count\n0 seat 20169\n1 cab 20146\n2 brkt 20114\n3 bracket 20103\n4 eaton 20102\n" ] ], [ [ "### Subsample Frequent Words", "_____no_output_____" ] ], [ [ "# Extract the new words into a list, then count the occurences\nnewWordList = \" \".join(final_sentences).split()\nnewCounts = Counter(newWordList)\n\n## Convert counter to data frame and sort occurence\nnewwordcount = pd.DataFrame.from_dict(newCounts, orient='index').reset_index()\nnewwordcount = newwordcount.rename(columns={'index':'word', 0:'count'})\n## Sort in descending order of count and reset index\nnewwordcount = newwordcount.sort_values('count', ascending=False).reset_index(drop = True)\n## Check it out\nnewwordcount.head()\n\n# Build a word to frequency map\nwordFreq_map = {}\n## Generate new list using word as key and count as value\nfor record in range(0, (len(newwordcount))):\n word = newwordcount.iloc[record]['word']\n count = newwordcount.iloc[record]['count']\n wordFreq_map[word] = count", "_____no_output_____" ], [ "# Plot occurences \nplt.hist(newwordcount['count'])\nplt.yscale('log', nonposy='clip')\nplt.show()", "_____no_output_____" ], [ "# Apply mappings to sentences using the above map.\n## Initiate an empty list to catch the new sentences\nsubsampled_sentences = []\n## Set a threshold for occurence\noccurence_threshold = 20000", "_____no_output_____" ], [ "## Loop through sentences and replace words\nfor sent in final_sentences:\n # print(\"===================\")\n # print(sent)\n \n # Tokenize the sentence\n tokenized_sent = sent.split()\n \n # Prep an empty sentence to reconstruct the tokenized sentence with only frequent terms\n temp_sent = []\n \n # Loop through each word in the sentence\n for word in tokenized_sent:\n # Get the word frequency\n word_freq = wordFreq_map[word]\n # Get probability of inclusion\n p_inclusion = occurence_threshold / word_freq\n rnd_draw = random()\n \n if rnd_draw < p_inclusion:\n temp_sent.append(word)\n # else:\n # print(\"Dropping word 
\", word)\n # print(temp_sent)\n \n # Make sure that we didn't remove all words from a sentence. If we did, drop the sentence altogether\n if len(temp_sent) == 0:\n # print(\"Drop sentence\")\n continue\n\n # Collapse tokenized sentence\n temp_sent = \" \".join(temp_sent)\n # Join to final sentences list\n subsampled_sentences.append(temp_sent)", "_____no_output_____" ] ], [ [ "Now examing the distribution of word frequencies after sub-sampling", "_____no_output_____" ] ], [ [ "# Extract the new words into a list, then count the occurences\nfinalWordList = \" \".join(subsampled_sentences).split()\nfinalCounts = Counter(finalWordList)\n\n## Convert counter to data frame and sort occurence\nnewwordcount = pd.DataFrame.from_dict(finalCounts, orient='index').reset_index()\nnewwordcount = newwordcount.rename(columns={'index':'word', 0:'count'})\n## Sort in descending order of count and reset index\nnewwordcount = newwordcount.sort_values('count', ascending=False).reset_index(drop = True)\n## Check it out\nnewwordcount.head()\n \nplt.hist(newwordcount['count'])\nplt.yscale('log', nonposy='clip')\nplt.show()", "_____no_output_____" ], [ "subsampled_sentences[100:110]", "_____no_output_____" ] ], [ [ "## Skip-Grams\nNext, the unique words are indexed and our sentences are skip-grammed.", "_____no_output_____" ] ], [ [ "# Map words to indices\nword2index_map = {}\nindex = 0\nfor sent in final_sentences:\n for word in sent.lower().split():\n if word not in word2index_map:\n word2index_map[word] = index\n index += 1\nindex2word_map = {index: word for word, index in word2index_map.items()}\n\nvocabulary_size = len(index2word_map)\n\nprint(\"Vocab size:\", vocabulary_size)\n# print(\"Word Index:\", index2word_map)", "_____no_output_____" ] ], [ [ "Inspect the top of the dictionary:", "_____no_output_____" ] ], [ [ "dict(list(index2word_map.items())[0:5])", "_____no_output_____" ] ], [ [ "We will then generate skip-grams. The skip-gram window will start with the first word in a sentence and iterate through every word to the last in a sentence. The window size is parameterized. Skip-gram pairs are appended to a list.", "_____no_output_____" ] ], [ [ "# Initialize the skip-gram pairs list\nskip_gram_pairs = []\n\n# Set the skip-gram window size\nwindow_size = 2\n\nfor sent in final_sentences:\n tokenized_sent = sent.split()\n # Set the target index\n for tgt_idx in range(0, len(tokenized_sent)):\n # Set range for the sentence\n max_idx = len(tokenized_sent) - 1\n\n # Define range around target\n lo_idx = max(tgt_idx - window_size, 0)\n hi_idx = min(tgt_idx + window_size, max_idx) + 1\n\n # List the indices in the skip-gram outputs (removing target index)\n number_list = range(lo_idx, hi_idx)\n output_matches = list(filter(lambda x: x != tgt_idx, number_list))\n\n # Generate skip-gram pairs\n pairs = [[word2index_map[tokenized_sent[tgt_idx]], word2index_map[tokenized_sent[out]]] for out in output_matches]\n # print(pairs)\n\n for p in pairs:\n skip_gram_pairs.append(p)", "_____no_output_____" ] ], [ [ "Inspect some skip-gram pairs to ensure the behavior is as expected:", "_____no_output_____" ] ], [ [ "skip_gram_pairs[0:12]", "_____no_output_____" ] ], [ [ "For training, we will need to provide the neural network with batches of skip-gram pairs randomly sampled from our population. 
The next two blocks define the \"get batch\" function and print out a sample.", "_____no_output_____" ] ], [ [ "def get_skipgram_batch(batch_size):\n instance_indices = list(range(len(skip_gram_pairs)))\n np.random.shuffle(instance_indices)\n batch = instance_indices[:batch_size]\n x = [skip_gram_pairs[i][0] for i in batch]\n y = [[skip_gram_pairs[i][1]] for i in batch]\n return x, y", "_____no_output_____" ], [ "# batch example\nx_batch, y_batch = get_skipgram_batch(8)\nx_batch\ny_batch\nprint(\"X Batch: \", [index2word_map[word] for word in x_batch])\nprint(\"Y Batch: \", [index2word_map[word[0]] for word in y_batch])", "_____no_output_____" ] ], [ [ "## Training\nThe embeddings are trained in mini-batches. First, we define placeholders for the inputs and outputs of size batch_size:", "_____no_output_____" ] ], [ [ "batch_size = 64\nembedding_dimension = 200\nnegative_samples = 8\nn_iterations = 50000\nLOG_DIR = \"logs/word2vec_cab\"", "_____no_output_____" ] ], [ [ "## ========= To Do Training ==================\nAdd the following features:\n* Replace embedding with GPU-compatible method\n* Decay training rate\n* Downsample frequent terms\n* Increase number of negative samples\n* Work over full corpus, then count iterations of full corpus\n\n## =========================================", "_____no_output_____" ] ], [ [ "# Input data, labels\ntrain_inputs = tf.placeholder(tf.int32, shape=[batch_size])\ntrain_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])", "_____no_output_____" ], [ "# Embedding lookup table currently only implemented in CPU\nwith tf.name_scope(\"embeddings\"):\n embeddings = tf.Variable(\n tf.random_uniform([vocabulary_size, embedding_dimension],\n -1.0, 1.0), name='embedding')\n # This is essentialy a lookup table\n embed = tf.nn.embedding_lookup(embeddings, train_inputs)", "_____no_output_____" ], [ "# Create variables for the NCE loss\nnce_weights = tf.Variable(\n tf.truncated_normal([vocabulary_size, embedding_dimension],\n stddev=1.0 / math.sqrt(embedding_dimension)))\nnce_biases = tf.Variable(tf.zeros([vocabulary_size]))\n\n\nloss = tf.reduce_mean(\n tf.nn.nce_loss(weights=nce_weights, biases=nce_biases, inputs=embed, labels=train_labels,\n num_sampled=negative_samples, num_classes=vocabulary_size))\ntf.summary.scalar(\"NCE_loss\", loss)", "_____no_output_____" ], [ "# Learning rate decay\nglobal_step = tf.Variable(0, trainable=False)\nlearningRate = tf.train.exponential_decay(learning_rate=0.1,\n global_step=global_step,\n decay_steps=1000,\n decay_rate=0.95,\n staircase=True)\ntrain_step = tf.train.GradientDescentOptimizer(learningRate).minimize(loss)\nmerged = tf.summary.merge_all()", "_____no_output_____" ], [ "with tf.Session() as sess:\n train_writer = tf.summary.FileWriter(LOG_DIR,\n graph=tf.get_default_graph())\n saver = tf.train.Saver()\n\n with open(os.path.join(LOG_DIR, 'metadata.tsv'), \"w\") as metadata:\n metadata.write('Name\\tClass\\n')\n for k, v in index2word_map.items():\n metadata.write('%s\\t%d\\n' % (v, k))\n\n config = projector.ProjectorConfig()\n embedding = config.embeddings.add()\n embedding.tensor_name = embeddings.name\n # Link this tensor to its metadata file (e.g. 
labels).\n embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')\n projector.visualize_embeddings(train_writer, config)\n\n tf.global_variables_initializer().run()\n\n for step in range(n_iterations):\n x_batch, y_batch = get_skipgram_batch(batch_size)\n summary, _ = sess.run([merged, train_step],\n feed_dict={train_inputs: x_batch,\n train_labels: y_batch})\n train_writer.add_summary(summary, step)\n\n if step % 100 == 0:\n saver.save(sess, os.path.join(LOG_DIR, \"w2v_model.ckpt\"), step)\n loss_value = sess.run(loss,\n feed_dict={train_inputs: x_batch,\n train_labels: y_batch})\n print(\"Loss at %d: %.5f\" % (step, loss_value))\n\n # Normalize embeddings before using\n norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))\n normalized_embeddings = embeddings / norm\n normalized_embeddings_matrix = sess.run(normalized_embeddings)", "_____no_output_____" ], [ "ref_word = normalized_embeddings_matrix[word2index_map[\"one\"]]\n\ncosine_dists = np.dot(normalized_embeddings_matrix, ref_word)\nff = np.argsort(cosine_dists)[::-1][1:10]\nfor f in ff:\n print(index2word_map[f])\n print(cosine_dists[f])", "_____no_output_____" ] ], [ [ "### Adagrad Gradient Descent", "_____no_output_____" ], [ "## Tensorboard Visualization\nTo see a visualization of the word vectors, open a Tensorboard session and navigate to the \"Projector\" tab. \n\nTo start a tensorflow session, run the following command in a terminal (make sure you're in the correct Python virtual env):\n```\ntensorboard --logdir=c:\\DataScience\\CloudCAB\\Code\\logs\n```\nAlternatively, a \"Start_Tensorboard\" bat file has been placed in the root directory of this repo.\n\nNOTE: On windows systems, the path to your metadata.tsv file for embeddings may be incorrect, resulting in an error when trying to plot embeddings in Tensorboard. To fix this, edit the logs/word2vec_intro/projector_config.pbtxt file as follows:\n* Examine the path listed in the error message in Tensorboard. It will likely have your log path duplicated.\n* Remove the duplicate portion of the path from the metadata_path value in projector_config\n* Restart Tensorboard and re-try plotting", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
e763a8d49738d11a6bff75508bca4f57bda0f26b
157,259
ipynb
Jupyter Notebook
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
753acd954be4a2f99639c9f9fd5e623689fc7493
[ "BSD-2-Clause" ]
1
2021-12-13T05:51:18.000Z
2021-12-13T05:51:18.000Z
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
753acd954be4a2f99639c9f9fd5e623689fc7493
[ "BSD-2-Clause" ]
null
null
null
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
753acd954be4a2f99639c9f9fd5e623689fc7493
[ "BSD-2-Clause" ]
null
null
null
92.234018
75,812
0.775084
[ [ [ "# Start-to-Finish Example: Scalar Field Collapse\n\n## Authors: Leonardo Werneck & Zachariah B. Etienne\n\n## This module sets up spherically symmetric, time-symmetric initial data for a scalar field collapse in Spherical coordinates, as [documented in this NRPy+ module](Tutorial-ADM_Initial_Data-ScalarField.ipynb) (the initial data is shown to satisfy the Hamiltonian constraint [in this tutorial module](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_ScalarField_initial_data.ipynb)), which is then evolved forward in time. The aim is to reproduce the results from [Akbarian & Choptuik (2014)]( https://arxiv.org/pdf/1508.01614.pdf) and [Baumgarte (2018)](https://arxiv.org/abs/1807.10342) (which used a similar approach), demonstrating that the Hamiltonian constraint violation during the simulation also converges to zero with increasing numerical resolution.\n\n### **Results from this tutorial notebook have been used in the paper [Werneck *et al.* (2021)](https://arxiv.org/pdf/2106.06553.pdf)**\n\nThe entire algorithm is outlined below, with NRPy+-based components highlighted in <font color='green'>green</font>.\n\n1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.\n1. <font color='green'>Set gridfunction values to initial data.</font>\n1. Evolve the system forward in time using RK4 time integration. At each RK4 substep, do the following:\n 1. <font color='green'>Evaluate BSSN RHS expressions.</font>\n 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658)\n 1. <font color='green'>Apply constraints on conformal 3-metric: $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$</font>\n1. At the end of each iteration in time, output the <font color='green'>Hamiltonian constraint violation</font>.\n1. Repeat above steps at two numerical resolutions to confirm convergence to zero.", "_____no_output_____" ], [ "## References\n\n* [Akbarian & Choptuik (2015)](https://arxiv.org/pdf/1508.01614.pdf) (Useful to understand the theoretical framework)\n* [Baumgarte (2018)](https://arxiv.org/pdf/1807.10342.pdf) (Useful to understand the theoretical framework)\n* [Baumgarte & Shapiro's Numerical Relativity](https://books.google.com.br/books/about/Numerical_Relativity.html?id=dxU1OEinvRUC&redir_esc=y): Section 6.2.2 (Useful to understand how to solve the Hamiltonian constraint)\n\n<a id='toc'></a>\n\n# Table of Contents\n$$\\label{toc}$$\n \n1. [Step 1](#nrpy_core) Set core NRPy+ parameters for numerical grids and reference metric\n 1. [Step 1.a](#cfl) Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep\n1. [Step 2](#initial_data) Set up ADM initial data for the Scalar Field \n1. [Step 3](#adm_id_spacetime) Convert ADM initial data to BSSN-in-curvilinear coordinates\n1. [Step 4](#bssn) Output C code for BSSN spacetime evolution\n 1. [Step 4.a](#bssnrhs) Set up the BSSN and ScalarField right-hand-side (RHS) expressions, and add the *rescaled* $T^{\\mu\\nu}$ source terms\n 1. [Step 4.b](#hamconstraint) Output the Hamiltonian constraint\n 1. [Step 4.c](#enforce3metric) Enforce conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint\n 1. [Step 4.d](#ccodegen) Generate C code kernels for BSSN expressions, in parallel if possible\n 1. 
[Step 4.e](#cparams_rfm_and_domainsize) Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`\n1. [Step 5](#bc_functs) Set up boundary condition functions for chosen singular, curvilinear coordinate system\n1. [Step 6](#main_ccode) The main C code: `ScalarFieldCollapse_Playground.c`\n1. [Step 7](#visualization) Visualization\n 1. [Step 7.a](#install_download) Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded\n 1. [Step 7.b](#movie_dynamics) Dynamics of the solution\n 1. [Step 7.b.i](#genimages) Generate images for visualization animation\n 1. [Step 7.b.ii](#gemnvideo) Generate visualization animation\n 1. [Step 7.c](#convergence) Convergence of constraint violation\n1. [Step 8](#output_to_pdf) Output this module as $\\LaTeX$-formatted PDF file", "_____no_output_____" ], [ "<a id='nrpy_core'></a>\n\n# Step 1: Set core NRPy+ parameters for numerical grids and reference metric\n$$\\label{nrpy_core}$$", "_____no_output_____" ] ], [ [ "# Step P1: Import needed NRPy+ core modules:\nfrom outputC import lhrh,outputC,outCfunction # NRPy+: Core C code output module\nimport NRPy_param_funcs as par # NRPy+: Parameter interface\nimport sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends\nimport finite_difference as fin # NRPy+: Finite difference C code generation module\nimport grid as gri # NRPy+: Functions having to do with numerical grids\nimport indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support\nimport reference_metric as rfm # NRPy+: Reference metric support\nimport cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\nimport shutil, os, sys # Standard Python modules for multiplatform OS-level functions\n\n# Step P2: Create C code output directory:\nCcodesdir = os.path.join(\"BSSN_ScalarFieldCollapse_Ccodes\")\n# First remove C code output directory if it exists\n# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty\n# !rm -r ScalarWaveCurvilinear_Playground_Ccodes\nshutil.rmtree(Ccodesdir, ignore_errors=True)\n# Then create a fresh directory\ncmd.mkdir(Ccodesdir)\n\n# Step P3: Create executable output directory:\noutdir = os.path.join(Ccodesdir,\"output\")\ncmd.mkdir(outdir)\n\n# Step 1: Set the spatial dimension parameter\n# to three this time, and then read\n# the parameter as DIM.\npar.set_parval_from_str(\"grid::DIM\",3)\nDIM = par.parval_from_str(\"grid::DIM\")\n\n# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,\n# FD order, floating point precision, and CFL factor:\n# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,\n# SymTP, SinhSymTP\nCoordSystem = \"Spherical\"\n\n# Step 2.a: Set defaults for Coordinate system parameters.\n# These are perhaps the most commonly adjusted parameters,\n# so we enable modifications at this high level.\ndomain_size = 32\n\n# sinh_width sets the default value for:\n# * SinhSpherical's params.SINHW\n# * SinhCylindrical's params.SINHW{RHO,Z}\n# * SinhSymTP's params.SINHWAA\nsinh_width = 0.2 # If Sinh* coordinates chosen\n\n# sinhv2_const_dr sets the default value for:\n# * SinhSphericalv2's params.const_dr\n# * SinhCylindricalv2's params.const_d{rho,z}\nsinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen\n\n# SymTP_bScale sets the default value for:\n# * SinhSymTP's params.bScale\nSymTP_bScale = 0.5 # If SymTP chosen\n\n# Step 2.b: Set the order of spatial and temporal 
derivatives;\n# the core data type, and the CFL factor.\n# RK_method choices include: Euler, \"RK2 Heun\", \"RK2 MP\", \"RK2 Ralston\", RK3, \"RK3 Heun\", \"RK3 Ralston\",\n# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8\nRK_method = \"RK4\"\nFD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable\nREAL = \"double\" # Best to use double here.\nCFL_FACTOR= 0.5\n\n# Set the lapse & shift conditions\nLapseCondition = \"OnePlusLog\"\nShiftCondition = \"GammaDriving2ndOrder_Covariant\"\n\n# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.\n# As described above the Table of Contents, this is a 3-step process:\n# 3.A: Evaluate RHSs (RHS_string)\n# 3.B: Apply boundary conditions (post_RHS_string, pt 1)\n# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)\nimport MoLtimestepping.C_Code_Generation as MoL\nfrom MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict\nRK_order = Butcher_dict[RK_method][1]\ncmd.mkdir(os.path.join(Ccodesdir,\"MoLtimestepping/\"))\nMoL.MoL_C_Code_Generation(RK_method,\n RHS_string = \"\"\"\nRicci_eval(&rfmstruct, &params, RK_INPUT_GFS, auxevol_gfs);\nrhs_eval(&rfmstruct, &params, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);\"\"\",\n post_RHS_string = \"\"\"\napply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);\nenforce_detgammabar_constraint(&rfmstruct, &params, RK_OUTPUT_GFS);\\n\"\"\",\n outdir = os.path.join(Ccodesdir,\"MoLtimestepping/\"))\n\n# Step 4: Set the coordinate system for the numerical grid\npar.set_parval_from_str(\"reference_metric::CoordSystem\",CoordSystem)\nrfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.\n\n# Step 5: Set the finite differencing order to FD_order (set above).\npar.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", FD_order)\n\n# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h\ncmd.mkdir(os.path.join(Ccodesdir,\"SIMD\"))\nshutil.copy(os.path.join(\"SIMD/\")+\"SIMD_intrinsics.h\",os.path.join(Ccodesdir,\"SIMD/\"))\n\n# Step 7: Impose spherical symmetry by demanding that all\n# derivatives in the angular directions vanish\npar.set_parval_from_str(\"indexedexp::symmetry_axes\",\"12\")", "_____no_output_____" ] ], [ [ "<a id='cfl'></a>\n\n## Step 1.a: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \\[Back to [top](#toc)\\]\n$$\\label{cfl}$$\n\nIn order for our explicit-timestepping numerical solution to the scalar wave equation to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:\n$$\n\\Delta t \\le \\frac{\\min(ds_i)}{c},\n$$\nwhere $c$ is the wavespeed, and\n$$ds_i = h_i \\Delta x^i$$ \nis the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\\Delta x^i$ is the uniform grid spacing in the $i$th direction:", "_____no_output_____" ] ], [ [ "# Output the find_timestep() function to a C file.\nrfm.out_timestep_func_to_file(os.path.join(Ccodesdir,\"find_timestep.h\"))", "_____no_output_____" ] ], [ [ "<a id='initial_data'></a>\n\n# Step 2: Set up ADM initial data for the Scalar Field \\[Back to 
[top](#toc)\\]\n$$\\label{initial_data}$$\n\nAs documented [in the scalar field Gaussian pulse initial data NRPy+ tutorial notebook](TTutorial-ADM_Initial_Data-ScalarField.ipynb), we will now set up the scalar field initial data, storing the densely-sampled result to file.\n\nThe initial data function `ScalarField_InitialData` requires `SciPy`, so let's make sure it's installed.", "_____no_output_____" ] ], [ [ "!pip install scipy numpy > /dev/null", "_____no_output_____" ] ], [ [ "Next call the `ScalarField_InitialData()` function from the [ScalarField/ScalarField_InitialData.py](../edit/ScalarField/ScalarField_InitialData.py) NRPy+ module (see the [tutorial notebook](Tutorial-ADM_Initial_Data-ScalarField.ipynb)).", "_____no_output_____" ] ], [ [ "# Step 2.a: Import necessary Python and NRPy+ modules\nimport ScalarField.ScalarField_InitialData as sfid\n\n# Step 2.b: Set the initial data parameters\noutputfilename = os.path.join(outdir,\"SFID.txt\")\nID_Family = \"Gaussian_pulse\"\npulse_amplitude = 0.4\npulse_center = 0\npulse_width = 1\nNr = 30000\nrmax = domain_size*1.1\n\n# Step 2.c: Generate the initial data\nsfid.ScalarField_InitialData(outputfilename,Ccodesdir,ID_Family,\n pulse_amplitude,pulse_center,pulse_width,Nr,rmax)", "Generated the ADM initial data for the gravitational collapse \nof a massless scalar field in Spherical coordinates.\n\nType of initial condition: Scalar field: \"Gaussian\" Shell\n ADM quantities: Time-symmetric\n Lapse condition: Pre-collapsed\nParameters: amplitude = 0.4,\n center = 0,\n width = 1,\n domain size = 35.2,\n number of points = 30000,\n Initial data file = BSSN_ScalarFieldCollapse_Ccodes/output/SFID.txt.\n\nWrote to file BSSN_ScalarFieldCollapse_Ccodes/ID_scalar_field_ADM_quantities.h\nWrote to file BSSN_ScalarFieldCollapse_Ccodes/ID_scalar_field_spherical.h\nAppended to file \"BSSN_ScalarFieldCollapse_Ccodes/ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2.h\"\nAppended to file \"BSSN_ScalarFieldCollapse_Ccodes/ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2.h\"\n" ] ], [ [ "<a id='adm_id_spacetime'></a>\n\n# Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \\[Back to [top](#toc)\\]\n$$\\label{adm_id_spacetime}$$\n\nThis is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).", "_____no_output_____" ] ], [ [ "import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum\nAtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear(\"Spherical\",\"ID_scalar_field_ADM_quantities\",\n Ccodesdir=Ccodesdir,loopopts=\"\")", "Output C function ID_BSSN_lambdas() to file BSSN_ScalarFieldCollapse_Ccodes/ID_BSSN_lambdas.h\nOutput C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file BSSN_ScalarFieldCollapse_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h\nOutput C function ID_BSSN__ALL_BUT_LAMBDAs() to file BSSN_ScalarFieldCollapse_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h\n" ] ], [ [ "<a id='bssn'></a>\n\n# Step 4: Output C code for BSSN spacetime evolution \\[Back to [top](#toc)\\]\n$$\\label{bssn}$$\n\n<a id='bssnrhs'></a>\n\n## Step 4.a: Set up the BSSN and ScalarField right-hand-side (RHS) expressions, and add the *rescaled* $T^{\\mu\\nu}$ source terms \\[Back to [top](#toc)\\]\n$$\\label{bssnrhs}$$\n\n`BSSN.BSSN_RHSs()` sets up the RHSs 
assuming a spacetime vacuum: $T^{\\mu\\nu}=0$. (This might seem weird, but remember that, for example, *spacetimes containing only single or binary black holes are vacuum spacetimes*.) Here, using the [`BSSN.BSSN_stress_energy_source_terms`](../edit/BSSN/BSSN_stress_energy_source_terms.py) ([**tutorial**](Tutorial-BSSN_stress_energy_source_terms.ipynb)) NRPy+ module, we add the $T^{\\mu\\nu}$ source terms to these equations.", "_____no_output_____" ] ], [ [ "import time\nimport BSSN.BSSN_RHSs as rhs\nimport BSSN.BSSN_gauge_RHSs as gaugerhs\npar.set_parval_from_str(\"BSSN.BSSN_gauge_RHSs::LapseEvolutionOption\", LapseCondition)\npar.set_parval_from_str(\"BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption\", ShiftCondition)\n\nprint(\"Generating symbolic expressions for BSSN RHSs...\")\nstart = time.time()\n# Enable rfm_precompute infrastructure, which results in\n# BSSN RHSs that are free of transcendental functions,\n# even in curvilinear coordinates, so long as\n# ConformalFactor is set to \"W\" (default).\ncmd.mkdir(os.path.join(Ccodesdir,\"rfm_files/\"))\npar.set_parval_from_str(\"reference_metric::enable_rfm_precompute\",\"True\")\npar.set_parval_from_str(\"reference_metric::rfm_precompute_Ccode_outdir\",os.path.join(Ccodesdir,\"rfm_files/\"))\n\n# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:\nimport BSSN.BSSN_quantities as Bq\npar.set_parval_from_str(\"BSSN.BSSN_quantities::LeaveRicciSymbolic\",\"True\")\n\nrhs.BSSN_RHSs()\n\n# Evaluate the Scalar Field RHSs\nimport ScalarField.ScalarField_RHSs as sfrhs\nsfrhs.ScalarField_RHSs()\n\n# Compute ScalarField T^{\\mu\\nu}\n# Compute the scalar field energy-momentum tensor\nimport ScalarField.ScalarField_Tmunu as sfTmunu\nsfTmunu.ScalarField_Tmunu()\nT4UU = sfTmunu.T4UU\n\nimport BSSN.BSSN_stress_energy_source_terms as Bsest\nBsest.BSSN_source_terms_for_BSSN_RHSs(T4UU)\nrhs.trK_rhs += Bsest.sourceterm_trK_rhs\nfor i in range(DIM):\n # Needed for Gamma-driving shift RHSs:\n rhs.Lambdabar_rhsU[i] += Bsest.sourceterm_Lambdabar_rhsU[i]\n # Needed for BSSN RHSs:\n rhs.lambda_rhsU[i] += Bsest.sourceterm_lambda_rhsU[i]\n for j in range(DIM):\n rhs.a_rhsDD[i][j] += Bsest.sourceterm_a_rhsDD[i][j]\n\ngaugerhs.BSSN_gauge_RHSs()\n\n# We use betaU as our upwinding control vector:\nBq.BSSN_basic_tensors()\nbetaU = Bq.betaU\n\nimport BSSN.Enforce_Detgammabar_Constraint as EGC\nenforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()\n\n# Next compute Ricci tensor\npar.set_parval_from_str(\"BSSN.BSSN_quantities::LeaveRicciSymbolic\",\"False\")\nBq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()\n\n# Now register the Hamiltonian as a gridfunction.\nH = gri.register_gridfunctions(\"AUX\",\"H\")\n\n# Then define the Hamiltonian constraint and output the optimized C code.\nimport BSSN.BSSN_constraints as bssncon\nbssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)\nBsest.BSSN_source_terms_for_BSSN_constraints(T4UU)\nbssncon.H += Bsest.sourceterm_H\n\n# Add Kreiss-Oliger dissipation\ndiss_strength = par.Cparameters(\"REAL\",\"ScalarFieldCollapse\",[\"diss_strength\"],0.1)\n\nalpha_dKOD = ixp.declarerank1(\"alpha_dKOD\")\ncf_dKOD = ixp.declarerank1(\"cf_dKOD\")\ntrK_dKOD = ixp.declarerank1(\"trK_dKOD\")\nsf_dKOD = ixp.declarerank1(\"sf_dKOD\")\nsfM_dKOD = ixp.declarerank1(\"sfM_dKOD\")\nbetU_dKOD = ixp.declarerank2(\"betU_dKOD\",\"nosym\")\nvetU_dKOD = ixp.declarerank2(\"vetU_dKOD\",\"nosym\")\nlambdaU_dKOD = ixp.declarerank2(\"lambdaU_dKOD\",\"nosym\")\naDD_dKOD = 
ixp.declarerank3(\"aDD_dKOD\",\"sym01\")\nhDD_dKOD = ixp.declarerank3(\"hDD_dKOD\",\"sym01\")\n\nfor k in range(3):\n gaugerhs.alpha_rhs += diss_strength*alpha_dKOD[k]*rfm.ReU[k]\n rhs.cf_rhs += diss_strength* cf_dKOD[k]*rfm.ReU[k]\n rhs.trK_rhs += diss_strength* trK_dKOD[k]*rfm.ReU[k]\n sfrhs.sf_rhs += diss_strength* sf_dKOD[k]*rfm.ReU[k]\n sfrhs.sfM_rhs += diss_strength* sfM_dKOD[k]*rfm.ReU[k]\n for i in range(3):\n if \"2ndOrder\" in ShiftCondition:\n gaugerhs.bet_rhsU[i] += diss_strength* betU_dKOD[i][k]*rfm.ReU[k]\n gaugerhs.vet_rhsU[i] += diss_strength* vetU_dKOD[i][k]*rfm.ReU[k]\n rhs.lambda_rhsU[i] += diss_strength*lambdaU_dKOD[i][k]*rfm.ReU[k]\n for j in range(3):\n rhs.a_rhsDD[i][j] += diss_strength*aDD_dKOD[i][j][k]*rfm.ReU[k]\n rhs.h_rhsDD[i][j] += diss_strength*hDD_dKOD[i][j][k]*rfm.ReU[k]\n \n# Now that we are finished with all the rfm hatted\n# quantities in generic precomputed functional\n# form, let's restore them to their closed-\n# form expressions.\npar.set_parval_from_str(\"reference_metric::enable_rfm_precompute\",\"False\") # Reset to False to disable rfm_precompute.\nrfm.ref_metric__hatted_quantities()\nend = time.time()\nprint(\"(BENCH) Finished BSSN symbolic expressions in \"+str(end-start)+\" seconds.\")\n\ndef BSSN_plus_ScalarField_RHSs():\n print(\"Generating C code for BSSN RHSs in \"+par.parval_from_str(\"reference_metric::CoordSystem\")+\" coordinates.\")\n start = time.time()\n\n # Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs\n lhs_names = [ \"alpha\", \"cf\", \"trK\", \"sf\", \"sfM\" ]\n rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs, sfrhs.sf_rhs, sfrhs.sfM_rhs]\n for i in range(3):\n lhs_names.append( \"betU\"+str(i))\n rhs_exprs.append(gaugerhs.bet_rhsU[i])\n lhs_names.append( \"lambdaU\"+str(i))\n rhs_exprs.append(rhs.lambda_rhsU[i])\n lhs_names.append( \"vetU\"+str(i))\n rhs_exprs.append(gaugerhs.vet_rhsU[i])\n for j in range(i,3):\n lhs_names.append( \"aDD\"+str(i)+str(j))\n rhs_exprs.append(rhs.a_rhsDD[i][j])\n lhs_names.append( \"hDD\"+str(i)+str(j))\n rhs_exprs.append(rhs.h_rhsDD[i][j])\n\n # Sort the lhss list alphabetically, and rhss to match.\n # This ensures the RHSs are evaluated in the same order\n # they're allocated in memory:\n lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]\n\n # Declare the list of lhrh's\n BSSN_evol_rhss = []\n for var in range(len(lhs_names)):\n BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess(\"rhs_gfs\",lhs_names[var]),rhs=rhs_exprs[var]))\n\n # Set up the C function for the BSSN RHSs\n desc=\"Evaluate the BSSN RHSs\"\n name=\"rhs_eval\"\n outCfunction(\n outfile = os.path.join(Ccodesdir,name+\".h\"), desc=desc, name=name,\n params = \"\"\"rfm_struct *restrict rfmstruct,const paramstruct *restrict params,\n const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs\"\"\",\n body = fin.FD_outputC(\"returnstring\",BSSN_evol_rhss, params=\"outCverbose=False,SIMD_enable=True\",\n upwindcontrolvec=betaU).replace(\"IDX4\",\"IDX4S\"),\n loopopts = \"InteriorPoints,EnableSIMD,Enable_rfm_precompute\")\n end = time.time()\n print(\"(BENCH) Finished BSSN_RHS C codegen in \" + str(end - start) + \" seconds.\")\n\ndef Ricci():\n print(\"Generating C code for Ricci tensor in \"+par.parval_from_str(\"reference_metric::CoordSystem\")+\" coordinates.\")\n start = time.time()\n desc=\"Evaluate the Ricci tensor\"\n name=\"Ricci_eval\"\n outCfunction(\n outfile = os.path.join(Ccodesdir,name+\".h\"), desc=desc, 
name=name,\n params = \"\"\"rfm_struct *restrict rfmstruct,const paramstruct *restrict params,\n const REAL *restrict in_gfs,REAL *restrict auxevol_gfs\"\"\",\n body = fin.FD_outputC(\"returnstring\",\n [lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD00\"),rhs=Bq.RbarDD[0][0]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD01\"),rhs=Bq.RbarDD[0][1]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD02\"),rhs=Bq.RbarDD[0][2]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD11\"),rhs=Bq.RbarDD[1][1]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD12\"),rhs=Bq.RbarDD[1][2]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD22\"),rhs=Bq.RbarDD[2][2])],\n params=\"outCverbose=False,SIMD_enable=True\").replace(\"IDX4\",\"IDX4S\"),\n loopopts = \"InteriorPoints,EnableSIMD,Enable_rfm_precompute\")\n end = time.time()\n print(\"(BENCH) Finished Ricci C codegen in \" + str(end - start) + \" seconds.\")", "Generating symbolic expressions for BSSN RHSs...\n(BENCH) Finished BSSN symbolic expressions in 9.372083902359009 seconds.\n" ] ], [ [ "<a id='hamconstraint'></a>\n\n## Step 4.b: Output the Hamiltonian constraint \\[Back to [top](#toc)\\]\n$$\\label{hamconstraint}$$\n\nNext output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However it does not due to numerical (typically truncation and roundoff) error. We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation, and, ultimately determine whether errors are dominated by numerical finite differencing (truncation) error as expected.", "_____no_output_____" ] ], [ [ "def Hamiltonian():\n start = time.time()\n print(\"Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.\")\n # Set up the C function for the Hamiltonian RHS\n desc=\"Evaluate the Hamiltonian constraint\"\n name=\"Hamiltonian_constraint\"\n outCfunction(\n outfile = os.path.join(Ccodesdir,name+\".h\"), desc=desc, name=name,\n params = \"\"\"rfm_struct *restrict rfmstruct,const paramstruct *restrict params,\n REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs\"\"\",\n body = fin.FD_outputC(\"returnstring\",lhrh(lhs=gri.gfaccess(\"aux_gfs\", \"H\"), rhs=bssncon.H),\n params=\"outCverbose=False\").replace(\"IDX4\",\"IDX4S\"),\n loopopts = \"InteriorPoints,Enable_rfm_precompute\")\n\n end = time.time()\n print(\"(BENCH) Finished Hamiltonian C codegen in \" + str(end - start) + \" seconds.\")", "_____no_output_____" ] ], [ [ "<a id='enforce3metric'></a>\n\n## Step 4.c: Enforce conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint \\[Back to [top](#toc)\\]\n$$\\label{enforce3metric}$$\n\nThen enforce conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)\n\nApplying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint to be violated there. 
Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint:", "_____no_output_____" ] ], [ [ "def gammadet():\n start = time.time()\n print(\"Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.\")\n\n # Set up the C function for the det(gammahat) = det(gammabar)\n EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)\n end = time.time()\n print(\"(BENCH) Finished gamma constraint C codegen in \" + str(end - start) + \" seconds.\")", "_____no_output_____" ] ], [ [ "<a id='ccodegen'></a>\n\n## Step 4.d: Generate C code kernels for BSSN expressions, in parallel if possible \\[Back to [top](#toc)\\]\n$$\\label{ccodegen}$$", "_____no_output_____" ] ], [ [ "# Step 4.d: C code kernel generation\n# Step 4.d.i: Create a list of functions we wish to evaluate in parallel\nfuncs = [BSSN_plus_ScalarField_RHSs,Ricci,Hamiltonian,gammadet]\n\ntry:\n if os.name == 'nt':\n # It's a mess to get working in Windows, so we don't bother. :/\n # https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac\n raise Exception(\"Parallel codegen currently not available in Windows\")\n # Step 4.d.ii: Import the multiprocessing module.\n import multiprocessing\n\n # Step 4.d.iii: Define master function for parallelization.\n # Note that lambdifying this doesn't work in Python 3\n def master_func(arg):\n funcs[arg]()\n\n # Step 4.d.iv: Evaluate list of functions in parallel if possible;\n # otherwise fallback to serial evaluation:\n pool = multiprocessing.Pool()\n pool.map(master_func,range(len(funcs)))\nexcept:\n # Steps 4.d.iii-4.d.v, alternate: As fallback, evaluate functions in serial.\n for func in funcs:\n func()", "Generating C code for BSSN RHSs in Spherical coordinates.\nOutput C function rhs_eval() to file BSSN_ScalarFieldCollapse_Ccodes/rhs_eval.h\n(BENCH) Finished BSSN_RHS C codegen in 17.11899423599243 seconds.\nGenerating C code for Ricci tensor in Spherical coordinates.\nOutput C function Ricci_eval() to file BSSN_ScalarFieldCollapse_Ccodes/Ricci_eval.h\n(BENCH) Finished Ricci C codegen in 15.75356411933899 seconds.\nGenerating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.\nOutput C function Hamiltonian_constraint() to file BSSN_ScalarFieldCollapse_Ccodes/Hamiltonian_constraint.h\n(BENCH) Finished Hamiltonian C codegen in 30.016396045684814 seconds.\nGenerating optimized C code for gamma constraint. 
May take a while, depending on CoordSystem.\nOutput C function enforce_detgammabar_constraint() to file BSSN_ScalarFieldCollapse_Ccodes/enforce_detgammabar_constraint.h\n(BENCH) Finished gamma constraint C codegen in 0.08405900001525879 seconds.\n" ] ], [ [ "<a id='cparams_rfm_and_domainsize'></a>\n\n## Step 4.e: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \\[Back to [top](#toc)\\]\n$$\\label{cparams_rfm_and_domainsize}$$\n\nBased on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.\n\nThen we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above", "_____no_output_____" ] ], [ [ "# Step 4.e.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h\npar.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))\n\n# Step 4.e.ii: Set free_parameters.h\n# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic\n# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,\n# parameters set above.\nrfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,\"free_parameters.h\"),\n domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)\n\n# Step 4.e.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:\nrfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)\n\n# Step 4.e.iv: Generate xx_to_Cart.h, which contains xx_to_Cart() for\n# (the mapping from xx->Cartesian) for the chosen\n# CoordSystem:\nrfm.xx_to_Cart_h(\"xx_to_Cart\",\"./set_Cparameters.h\",os.path.join(Ccodesdir,\"xx_to_Cart.h\"))\n\n# Step 4.e.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h\npar.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))", "_____no_output_____" ] ], [ [ "<a id='bc_functs'></a>\n\n# Step 5: Set up boundary condition functions for chosen singular, curvilinear coordinate system \\[Back to [top](#toc)\\]\n$$\\label{bc_functs}$$\n\nNext apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb).", "_____no_output_____" ] ], [ [ "import CurviBoundaryConditions.CurviBoundaryConditions as cbcs\ncbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,\"boundary_conditions/\"),Cparamspath=os.path.join(\"../\"))", "Wrote to file \"BSSN_ScalarFieldCollapse_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h\"\nEvolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,\n alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,\n hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, sf:0,\n sfM:0, trK:0, vetU0:1, vetU1:2, vetU2:3 )\nAuxiliary parity: ( H:0 )\nAuxEvol parity: ( RbarDD00:4, RbarDD01:5, RbarDD02:6, RbarDD11:7,\n RbarDD12:8, RbarDD22:9 )\nWrote to file \"BSSN_ScalarFieldCollapse_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h\"\n" ] ], [ [ "<a id='main_ccode'></a>\n\n# Step 6: The main C code: `ScalarFieldCollapse_Playground.c` \\[Back to [top](#toc)\\]\n$$\\label{main_ccode}$$", "_____no_output_____" ] ], [ [ "# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),\n# and set the CFL_FACTOR (which can be overwritten at the command line)\n\nwith 
open(os.path.join(Ccodesdir,\"ScalarFieldCollapse_Playground_REAL__NGHOSTS__CFL_FACTOR.h\"), \"w\") as file:\n file.write(\"\"\"\n// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER\n#define NGHOSTS \"\"\"+str(int(FD_order/2)+1)+\"\"\"\n// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point\n// numbers are stored to at least ~16 significant digits\n#define REAL \"\"\"+REAL+\"\"\"\n// Part P0.c: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER\nREAL CFL_FACTOR = \"\"\"+str(CFL_FACTOR)+\"\"\"; // Set the CFL Factor. Can be overwritten at command line.\\n\"\"\")", "_____no_output_____" ], [ "%%writefile $Ccodesdir/ScalarFieldCollapse_Playground.c\n\n// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.\n#include \"ScalarFieldCollapse_Playground_REAL__NGHOSTS__CFL_FACTOR.h\"\n\n#include \"rfm_files/rfm_struct__declare.h\"\n\n#include \"declare_Cparameters_struct.h\"\n\n// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:\n#include \"SIMD/SIMD_intrinsics.h\"\n\n// Step P1: Import needed header files\n#include \"stdio.h\"\n#include \"stdlib.h\"\n#include \"math.h\"\n#include \"time.h\"\n#include \"stdint.h\" // Needed for Windows GCC 6.x compatibility\n#ifndef M_PI\n#define M_PI 3.141592653589793238462643383279502884L\n#endif\n#ifndef M_SQRT1_2\n#define M_SQRT1_2 0.707106781186547524400844362104849039L\n#endif\n#define wavespeed 1.0 // Set CFL-based \"wavespeed\" to 1.0.\n#define alpha_threshold (2e-3) // Value below which we rule gravitational collapse has happened\n\n// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of\n// data in a 1D array. In this case, consecutive values of \"i\"\n// (all other indices held to a fixed value) are consecutive in memory, where\n// consecutive values of \"j\" (fixing all other indices) are separated by\n// Nxx_plus_2NGHOSTS0 elements in memory. 
Similarly, consecutive values of\n// \"k\" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.\n#define IDX4S(g,i,j,k) \\\n( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )\n#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )\n#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )\n#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \\\n for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)\n#define LOOP_ALL_GFS_GPS(ii) _Pragma(\"omp parallel for\") \\\n for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)\n\n// Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart()\n#include \"boundary_conditions/gridfunction_defines.h\"\n\n// Step P4: Set xx_to_Cart(const paramstruct *restrict params,\n// REAL *restrict xx[3],\n// const int i0,const int i1,const int i2,\n// REAL xCart[3]),\n// which maps xx->Cartesian via\n// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}\n#include \"xx_to_Cart.h\"\n\n// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],\n// paramstruct *restrict params, REAL *restrict xx[3]),\n// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for\n// the chosen Eigen-CoordSystem if EigenCoord==1, or\n// CoordSystem if EigenCoord==0.\n#include \"set_Nxx_dxx_invdx_params__and__xx.h\"\n\n// Step P6: Include basic functions needed to impose curvilinear\n// parity and boundary conditions.\n#include \"boundary_conditions/CurviBC_include_Cfunctions.h\"\n\n// Step P7: Implement the algorithm for upwinding.\n// *NOTE*: This upwinding is backwards from\n// usual upwinding algorithms, because the\n// upwinding control vector in BSSN (the shift)\n// acts like a *negative* velocity.\n//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 
1.0 : 0.0\n\n// Step P8: Include function for enforcing detgammabar constraint.\n#include \"enforce_detgammabar_constraint.h\"\n\n// Step P9: Find the CFL-constrained timestep\n#include \"find_timestep.h\"\n\n// Step P10: Declare initial data input struct:\n// stores data from initial data solver,\n// so they can be put on the numerical grid.\ntypedef struct __ID_inputs {\n int interp_stencil_size;\n int numlines_in_file;\n REAL *r_arr,*sf_arr,*psi4_arr,*alpha_arr;\n} ID_inputs;\n\n// Part P11: Declare all functions for setting up ScalarField initial data.\n/* Routines to interpolate the ScalarField solution and convert to ADM & T^{munu}: */\n#include \"../ScalarField/ScalarField_interp.h\"\n#include \"ID_scalar_field_ADM_quantities.h\"\n#include \"ID_scalar_field_spherical.h\"\n#include \"ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2.h\"\n#include \"ID_scalarfield.h\"\n\n/* Next perform the basis conversion and compute all needed BSSN quantities */\n#include \"ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h\"\n#include \"ID_BSSN__ALL_BUT_LAMBDAs.h\"\n#include \"ID_BSSN_lambdas.h\"\n\n// Step P12: Set the generic driver function for setting up BSSN initial data\nvoid initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,\n const rfm_struct *restrict rfmstruct,\n REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {\n#include \"set_Cparameters.h\"\n\n // Step 1: Set up ScalarField initial data\n // Step 1.a: Read ScalarField initial data from data file\n // Open the data file:\n char filename[100];\n sprintf(filename,\"./SFID.txt\");\n FILE *fp = fopen(filename, \"r\");\n if (fp == NULL) {\n fprintf(stderr,\"ERROR: could not open file %s\\n\",filename);\n exit(1);\n }\n // Count the number of lines in the data file:\n int numlines_in_file = count_num_lines_in_file(fp);\n // Allocate space for all data arrays:\n REAL *r_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);\n REAL *sf_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);\n REAL *psi4_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);\n REAL *alpha_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);\n\n // Read from the data file, filling in arrays\n // read_datafile__set_arrays() may be found in ScalarField/ScalarField_interp.h\n if(read_datafile__set_arrays(fp,r_arr,sf_arr,psi4_arr,alpha_arr) == 1) {\n fprintf(stderr,\"ERROR WHEN READING FILE %s!\\n\",filename);\n exit(1);\n }\n fclose(fp);\n\n const int interp_stencil_size = 12;\n ID_inputs SF_in;\n SF_in.interp_stencil_size = interp_stencil_size;\n SF_in.numlines_in_file = numlines_in_file;\n SF_in.r_arr = r_arr;\n SF_in.sf_arr = sf_arr;\n SF_in.psi4_arr = psi4_arr;\n SF_in.alpha_arr = alpha_arr;\n\n // Step 1.b: Interpolate data from data file to set BSSN gridfunctions\n ID_scalarfield(params,xx,SF_in, in_gfs);\n ID_BSSN__ALL_BUT_LAMBDAs(params,xx,SF_in, in_gfs);\n apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);\n enforce_detgammabar_constraint(rfmstruct, params, in_gfs);\n ID_BSSN_lambdas(params, xx, in_gfs);\n apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);\n enforce_detgammabar_constraint(rfmstruct, params, in_gfs);\n\n free(r_arr);\n free(sf_arr);\n free(psi4_arr);\n free(alpha_arr);\n}\n\n// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)\n#include \"Hamiltonian_constraint.h\"\n\n// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs\n#include \"rhs_eval.h\"\n\n// Step P13: Declare Ricci_eval function, which 
evaluates Ricci tensor\n#include \"Ricci_eval.h\"\n\n//#include \"NRPyCritCol_regridding.h\"\n\nREAL rho_max = 0.0;\n\n// main() function:\n// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates\n// Step 1: Set up initial data to an exact solution\n// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.\n// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of\n// Lines timestepping algorithm, and output periodic simulation diagnostics\n// Step 3.a: Output 2D data file periodically, for visualization\n// Step 3.b: Step forward one timestep (t -> t+dt) in time using\n// chosen RK-like MoL timestepping algorithm\n// Step 3.c: If t=t_final, output conformal factor & Hamiltonian\n// constraint violation to 1D data file\n// Step 3.d: Progress indicator printing to stderr\n// Step 4: Free all allocated memory\nint main(int argc, const char *argv[]) {\n paramstruct params;\n#include \"set_Cparameters_default.h\"\n\n // Step 0a: Read command-line input, error out if nonconformant\n if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < 2 || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {\n fprintf(stderr,\"Error: Expected three command-line arguments: ./ScalarFieldCollapse_Playground Nx0 Nx1 Nx2,\\n\");\n fprintf(stderr,\"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\\n\");\n fprintf(stderr,\"Nx[] MUST BE larger than NGHOSTS (= %d)\\n\",NGHOSTS);\n exit(1);\n }\n if(argc == 5) {\n CFL_FACTOR = strtod(argv[4],NULL);\n if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {\n fprintf(stderr,\"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\\n\",CFL_FACTOR);\n fprintf(stderr,\" This will generally only be stable if the simulation is purely axisymmetric\\n\");\n fprintf(stderr,\" However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\\n\",atoi(argv[3]));\n }\n }\n // Step 0b: Set up numerical grid structure, first in space...\n const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };\n if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {\n fprintf(stderr,\"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\\n\");\n fprintf(stderr,\" For example, in case of angular directions, proper symmetry zones will not exist.\\n\");\n exit(1);\n }\n\n // Step 0c: Set free parameters, overwriting Cparameters defaults\n // by hand or with command-line input, as desired.\n#include \"free_parameters.h\"\n\n // Step 0d: Uniform coordinate grids are stored to *xx[3]\n REAL *xx[3];\n // Step 0d.i: Set bcstruct\n bc_struct bcstruct;\n {\n int EigenCoord = 1;\n // Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets\n // params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the\n // chosen Eigen-CoordSystem.\n set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);\n // Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot\n#include \"set_Cparameters-nopointer.h\"\n const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;\n // Step 0e: Find ghostzone mappings; set up bcstruct\n#include \"boundary_conditions/driver_bcstruct.h\"\n // Step 0e.i: Free allocated space for xx[][] array\n for(int i=0;i<3;i++) free(xx[i]);\n }\n\n // Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets\n // params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the\n // chosen (non-Eigen) CoordSystem.\n int EigenCoord = 0;\n 
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);\n\n // Step 0g: Set all C parameters \"blah\" for params.blah, including\n // Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.\n#include \"set_Cparameters-nopointer.h\"\n const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;\n\n // Step 0h: Time coordinate parameters\n REAL t_final = 16.0; /* Final time is set so that at t=t_final,\n * data at the origin have not been corrupted\n * by the approximate outer boundary condition */\n\n // Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor\n REAL dt = find_timestep(&params, xx);\n //fprintf(stderr,\"# Timestep set to = %e\\n\",(double)dt);\n int N_final = (int)(t_final / dt + 0.5); // The number of points in time.\n // Add 0.5 to account for C rounding down\n // typecasts to integers.\n int output_every_N = 20;//(int)((REAL)N_final/800.0);\n if(output_every_N == 0) output_every_N = 1;\n\n // Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.\n // This is a limitation of the RK method. You are always welcome to declare & allocate\n // additional gridfunctions by hand.\n if(NUM_AUX_GFS > NUM_EVOL_GFS) {\n fprintf(stderr,\"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\\n\");\n fprintf(stderr,\" or allocate (malloc) by hand storage for *diagnostic_output_gfs. \\n\");\n exit(1);\n }\n\n // Step 0k: Allocate memory for gridfunctions\n#include \"MoLtimestepping/RK_Allocate_Memory.h\"\n REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);\n\n // Step 0l: Set up precomputed reference metric arrays\n // Step 0l.i: Allocate space for precomputed reference metric arrays.\n#include \"rfm_files/rfm_struct__malloc.h\"\n\n // Step 0l.ii: Define precomputed reference metric arrays.\n {\n #include \"set_Cparameters-nopointer.h\"\n #include \"rfm_files/rfm_struct__define.h\"\n }\n\n // Step 1: Set up initial data to an exact solution\n initial_data(&params,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);\n\n // Step 1b: Apply boundary conditions, as initial data\n // are sometimes ill-defined in ghost zones.\n // E.g., spherical initial data might not be\n // properly defined at points where r=-1.\n apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);\n enforce_detgammabar_constraint(&rfmstruct, &params, y_n_gfs);\n\n // Step 2: Start the timer, for keeping track of how fast the simulation is progressing.\n#ifdef __linux__ // Use high-precision timer in Linux.\n struct timespec start, end;\n clock_gettime(CLOCK_REALTIME, &start);\n#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs\n // http://www.cplusplus.com/reference/ctime/time/\n time_t start_timer,end_timer;\n time(&start_timer); // Resolution of one second...\n#endif\n\n // Step 3: Integrate the initial data forward in time using the chosen RK-like Method of\n // Lines timestepping algorithm, and output periodic simulation diagnostics\n for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.\n\n // Step 3.a: Output 2D data file periodically, for visualization\n if(n%output_every_N == 0) {\n // Evaluate Hamiltonian constraint violation\n Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);\n\n char filename[100];\n sprintf(filename,\"out%d-%08d.txt\",Nxx[0],n);\n const int i1mid=Nxx_plus_2NGHOSTS1/2;\n const int 
i2mid=Nxx_plus_2NGHOSTS2/2;\n FILE *fp = fopen(filename, \"w\");\n for( int i0=NGHOSTS;i0<Nxx_plus_2NGHOSTS0-NGHOSTS;i0++) {\n const int idx = IDX3S(i0,i1mid,i2mid);\n const REAL xx0 = xx[0][i0];\n REAL xCart[3];\n xx_to_Cart(&params,xx,i0,i1mid,i2mid,xCart);\n const REAL rr = sqrt( xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2] );\n fprintf(fp,\"%e %e %e %e %e %e %e\\n\",xx0,rr,\n y_n_gfs[IDX4ptS(SFGF,idx)],y_n_gfs[IDX4ptS(SFMGF,idx)],\n y_n_gfs[IDX4ptS(ALPHAGF,idx)],y_n_gfs[IDX4ptS(CFGF,idx)],\n log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));\n }\n fclose(fp);\n }\n\n // Step 3.b: Step forward one timestep (t -> t+dt) in time using\n // chosen RK-like MoL timestepping algorithm\n#include \"MoLtimestepping/RK_MoL.h\"\n\n // Step 3.c: If t=t_final, output conformal factor & Hamiltonian\n // constraint violation to 2D data file\n if(n==N_final-1) {\n // Evaluate Hamiltonian constraint violation\n Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);\n char filename[100];\n sprintf(filename,\"out%d.txt\",Nxx[0]);\n FILE *out1D = fopen(filename, \"w\");\n const int i1mid=Nxx_plus_2NGHOSTS1/2;\n const int i2mid=Nxx_plus_2NGHOSTS2/2;\n for(int i0=NGHOSTS;i0<Nxx_plus_2NGHOSTS0-NGHOSTS;i0++) {\n REAL xCart[3];\n xx_to_Cart(&params,xx,i0,i1mid,i2mid,xCart);\n const REAL rr = sqrt( xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2] );\n int idx = IDX3S(i0,i1mid,i2mid);\n fprintf(out1D,\"%e %e\\n\",rr,log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));\n }\n fclose(out1D);\n }\n\n // Step 3.d: Progress indicator printing to stderr\n\n // Step 3.d.i: Measure average time per iteration\n#ifdef __linux__ // Use high-precision timer in Linux.\n clock_gettime(CLOCK_REALTIME, &end);\n const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;\n#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs\n time(&end_timer); // Resolution of one second...\n REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.\n#endif\n const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;\n\n const int iterations_remaining = N_final - n;\n const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;\n\n const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4\n const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);\n \n\n // Step 3.d.ii: Output simulation progress to stderr\n if(n%10 == 0) {\n fprintf(stderr,\"%c[2K\", 27); // Clear the line\n fprintf(stderr,\"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\\r\", // \\r is carriage return, move cursor to the beginning of the line\n n, n*dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),\n (double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);\n fflush(stderr); // Flush the stderr buffer\n } // End progress indicator if(n % 10 == 0)\n } // End main loop to progress forward in time.\n fprintf(stderr,\"\\n\"); // Clear the final line of output from progress indicator.\n\n // Step 4: Free all allocated memory\n#include \"rfm_files/rfm_struct__freemem.h\"\n#include \"boundary_conditions/bcstruct_freemem.h\"\n#include \"MoLtimestepping/RK_Free_Memory.h\"\n free(auxevol_gfs);\n for(int i=0;i<3;i++) free(xx[i]);\n\n return 0;\n}", "Writing 
BSSN_ScalarFieldCollapse_Ccodes/ScalarFieldCollapse_Playground.c\n" ], [ "import os\nimport time\nimport cmdline_helper as cmd\n\nprint(\"Now compiling, should take ~10 seconds...\\n\")\nstart = time.time()\ncmd.C_compile(os.path.join(Ccodesdir,\"ScalarFieldCollapse_Playground.c\"),\n os.path.join(outdir,\"ScalarFieldCollapse_Playground\"),compile_mode=\"optimized\")\nend = time.time()\nprint(\"(BENCH) Finished in \"+str(end-start)+\" seconds.\\n\")\n\n# Change to output directory\nos.chdir(outdir)\n# Clean up existing output files\ncmd.delete_existing_files(\"out*.txt\")\ncmd.delete_existing_files(\"out*.png\")\n# Run executable\n\nprint(os.getcwd())\nprint(\"Now running, should take ~20 seconds...\\n\")\nstart = time.time()\ncmd.Execute(\"ScalarFieldCollapse_Playground\", \"640 2 2 \"+str(CFL_FACTOR),\"out640.txt\")\nend = time.time()\nprint(\"(BENCH) Finished in \"+str(end-start)+\" seconds.\\n\")\n\n# Return to root directory\nos.chdir(os.path.join(\"../../\"))", "Now compiling, should take ~10 seconds...\n\nCompiling executable...\n(EXEC): Executing `gcc -std=gnu99 -Ofast -fopenmp -march=native -funroll-loops BSSN_ScalarFieldCollapse_Ccodes/ScalarFieldCollapse_Playground.c -o BSSN_ScalarFieldCollapse_Ccodes/output/ScalarFieldCollapse_Playground -lm`...\n(BENCH): Finished executing in 5.7074360847473145 seconds.\nFinished compilation.\n(BENCH) Finished in 5.729342937469482 seconds.\n\n/Users/werneck/nrpytutorial/BSSN_ScalarFieldCollapse_Ccodes/output\nNow running, should take ~20 seconds...\n\n(EXEC): Executing `./ScalarFieldCollapse_Playground 640 2 2 0.5`...\n\u001b[2KIt: 810 t=15.90 dt=1.96e-02 | 99.4%; ETA 0 s | t/h 4089.68 | gp/s 5.92e+0505e+14\n(BENCH): Finished executing in 14.172585248947144 seconds.\n(BENCH) Finished in 14.1827712059021 seconds.\n\n" ] ], [ [ "<a id='visualization'></a>\n\n# Step 7: Visualization \\[Back to [top](#toc)\\]\n$$\\label{visualization}$$\n\n<a id='install_download'></a>\n\n## Step 7.a: Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded \\[Back to [top](#toc)\\]\n$$\\label{install_download}$$ \n\nNote that if you are not running this within `mybinder`, but on a Windows system, `ffmpeg` must be installed using a separate package (on [this site](http://ffmpeg.org/)), or if running Jupyter within Anaconda, use the command: `conda install -c conda-forge ffmpeg`.", "_____no_output_____" ] ], [ [ "!pip install scipy > /dev/null\n\ncheck_for_ffmpeg = !which ffmpeg >/dev/null && echo $?\nif check_for_ffmpeg != ['0']:\n print(\"Couldn't find ffmpeg, so I'll download it.\")\n # Courtesy https://johnvansickle.com/ffmpeg/\n !wget http://astro.phys.wvu.edu/zetienne/ffmpeg-static-amd64-johnvansickle.tar.xz\n !tar Jxf ffmpeg-static-amd64-johnvansickle.tar.xz\n print(\"Copying ffmpeg to ~/.local/bin/. Assumes ~/.local/bin is in the PATH.\")\n !mkdir ~/.local/bin/\n !cp ffmpeg-static-amd64-johnvansickle/ffmpeg ~/.local/bin/\n print(\"If this doesn't work, then install ffmpeg yourself. 
It should work fine on mybinder.\")", "_____no_output_____" ] ], [ [ "<a id='movie_dynamics'></a>\n\n## Step 7.b: Dynamics of the solution \\[Back to [top](#toc)\\]\n$$\\label{movie_dynamics}$$\n\n<a id='genimages'></a>\n\n### Step 7.b.i: Generate images for visualization animation \\[Back to [top](#toc)\\]\n$$\\label{genimages}$$ \n\nHere we loop through the data files output by the executable compiled and run in [the previous step](#mainc), generating a [png](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image for each data file.\n\n**Special thanks to Terrence Pierre Jacques. His work with the first versions of these scripts greatly contributed to the scripts as they exist below.**", "_____no_output_____" ] ], [ [ "## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ##\nimport numpy as np\nfrom scipy.interpolate import griddata\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import savefig\n\nimport glob\nimport sys\nfrom matplotlib import animation\n\nglobby = glob.glob(os.path.join(outdir,'out640-00*.txt'))\nfile_list = []\nfor x in sorted(globby):\n file_list.append(x)\n\nfor filename in file_list:\n fig = plt.figure(figsize=(8,6))\n x,r,sf,sfM,alpha,cf,logH = np.loadtxt(filename).T #Transposed for easier unpacking\n\n ax = fig.add_subplot(321)\n ax2 = fig.add_subplot(322)\n ax3 = fig.add_subplot(323)\n ax4 = fig.add_subplot(324)\n ax5 = fig.add_subplot(325)\n\n ax.set_title(\"Scalar field\")\n ax.set_ylabel(r\"$\\varphi(t,r)$\")\n ax.set_xlim(0,20)\n ax.set_ylim(-0.6,0.6)\n #ax.set_yticks([-0.005,-0.0025,0,0.0025,0.005])\n ax.plot(r,sf,'k-')\n ax.grid()\n #ax.legend()\n\n ax2.set_title(\"Scalar field conjugate momentum\")\n ax2.set_ylabel(r\"$\\Pi(t,r)$\")\n ax2.set_xlim(0,20)\n ax2.set_ylim(-1,1)\n #ax2.set_yticks([-0.005,-0.0025,0,0.0025,0.005])\n ax2.plot(r,sfM,'b-')\n ax2.grid()\n #ax2.legend()\n\n ax3.set_title(\"Lapse function\")\n ax3.set_ylabel(r\"$\\alpha(t,r)$\")\n ax3.set_xlim(0,20)\n ax3.set_ylim(0,1.02)\n #ax3.set_yticks([-0.005,-0.0025,0,0.0025,0.005])\n ax3.plot(r,alpha,'r-')\n ax3.grid()\n #ax3.legend()\n\n ax4.set_title(\"Conformal factor\")\n ax4.set_xlabel(r\"$r$\")\n ax4.set_ylabel(r\"$W(t,r)$\")\n ax4.set_xlim(0,20)\n ax4.set_ylim(0,1.02)\n #ax3.set_yticks([-0.005,-0.0025,0,0.0025,0.005])\n ax4.plot(r,cf,'g-',label=(\"$p = 0.043149493$\"))\n ax4.grid()\n #ax4.legend()\n\n ax5.set_title(\"Hamiltonian constraint violation\")\n ax5.set_xlabel(r\"$r$\")\n ax5.set_ylabel(r\"$\\mathcal{H}(t,r)$\")\n ax5.set_xlim(0,20)\n ax5.set_ylim(-16,0)\n #ax3.set_yticks([-0.005,-0.0025,0,0.0025,0.005])\n ax5.plot(r,logH,'m-')\n ax5.grid()\n #ax5.legend()\n\n plt.tight_layout()\n savefig(filename+\".png\",dpi=150)\n plt.close(fig)\n sys.stdout.write(\"%c[2K\" % 27)\n sys.stdout.write(\"Processing file \"+filename+\"\\r\")\n sys.stdout.flush()", "\u001b[2KProcessing file BSSN_ScalarFieldCollapse_Ccodes/output/out640-00000800.txt\r" ] ], [ [ "<a id='genvideo'></a>\n\n### Step 7.b.ii: Generate visualization animation \\[Back to [top](#toc)\\]\n$$\\label{genvideo}$$ \n\nIn the following step, [ffmpeg](http://ffmpeg.org) is used to generate an [mp4](https://en.wikipedia.org/wiki/MPEG-4) video file, which can be played directly from this Jupyter notebook.", "_____no_output_____" ] ], [ [ "## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##\n\n# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame\n# 
https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation\nfrom IPython.display import HTML\nimport matplotlib.image as mgimg\n\nfig = plt.figure(frameon=False)\nax = fig.add_axes([0, 0, 1, 1])\nax.axis('off')\n\nmyimages = []\n\nfor i in range(len(file_list)):\n img = mgimg.imread(file_list[i]+\".png\")\n imgplot = plt.imshow(img)\n myimages.append([imgplot])\n\nani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)\nplt.close()\nani.save(os.path.join(outdir,'ScalarField_Collapse.mp4'), fps=5, dpi=150)", "_____no_output_____" ], [ "## VISUALIZATION ANIMATION, PART 3: Display movie as embedded HTML5 (see next cell) ##\n\n# https://stackoverflow.com/questions/18019477/how-can-i-play-a-local-video-in-my-ipython-notebook", "_____no_output_____" ], [ "# Embed video based on suggestion:\n# https://stackoverflow.com/questions/39900173/jupyter-notebook-html-cell-magic-with-python-variable\nHTML(\"\"\"\n<video width=\"800\" height=\"600\" controls>\n <source src=\\\"\"\"\"+os.path.join(outdir,\"ScalarField_Collapse.mp4\")+\"\"\"\\\" type=\"video/mp4\">\n</video>\n\"\"\")", "_____no_output_____" ] ], [ [ "<a id='convergence'></a>\n\n## Step 7.c: Convergence of constraint violation \\[Back to [top](#toc)\\]\n$$\\label{convergence}$$", "_____no_output_____" ] ], [ [ "from IPython.display import Image\n\nos.chdir(outdir)\n\ncmd.delete_existing_files(\"out320*.txt\")\ncmd.Execute(\"ScalarFieldCollapse_Playground\", \"320 2 2 \"+str(CFL_FACTOR),\"out320.txt\")\n\nos.chdir(os.path.join(\"..\",\"..\"))\n\noutfig = os.path.join(outdir,\"ScalarFieldCollapse_H_convergence.png\")\n\nfig = plt.figure()\n\nr_640,H_640 = np.loadtxt(os.path.join(outdir,\"out640.txt\")).T\nr_320,H_320 = np.loadtxt(os.path.join(outdir,\"out320.txt\")).T\n\nplt.title(\"Plot demonstrating 4th order\\nconvergence of constraint violations\")\nplt.xlabel(r\"$r$\")\nplt.ylabel(r\"$\\log_{10}|\\mathcal{H}|$\")\nplt.xlim(0,16)\nplt.plot(r_640,H_640,label=r\"$N_{r} = 640$\")\nplt.plot(r_320,H_320+4*np.log10(320.0/640.0),label=r\"$N_{r} = 320$, mult by $(320/640)^{4}$\")\nplt.legend()\n\nplt.tight_layout()\nplt.savefig(outfig,dpi=150)\nplt.close(fig)\nImage(outfig)", "(EXEC): Executing `./ScalarFieldCollapse_Playground 320 2 2 0.5`...\n\u001b[2KIt: 400 t=15.71 dt=3.93e-02 | 98.3%; ETA 0 s | t/h 14137.17 | gp/s 5.12e+055e+14\n(BENCH): Finished executing in 3.873671293258667 seconds.\n" ] ], [ [ "<a id='output_to_pdf'></a>\n\n# Step 8: Output this module as $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{output_to_pdf}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)", "_____no_output_____" ] ], [ [ "import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\ncmd.output_Jupyter_notebook_to_LaTeXed_PDF(\"Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse\")", "Created Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.tex,\n and compiled LaTeX file to PDF file Tutorial-Start_to_Finish-\n BSSNCurvilinear-ScalarField_Collapse.pdf\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e763b7d79755571b19df360410de721cc8e00ec2
34,190
ipynb
Jupyter Notebook
Chess.ipynb
avishkar2001/DataScienceCoolStuff
5b7400bc5b3018fe0e9903f67787ec6221af9ca1
[ "MIT" ]
null
null
null
Chess.ipynb
avishkar2001/DataScienceCoolStuff
5b7400bc5b3018fe0e9903f67787ec6221af9ca1
[ "MIT" ]
null
null
null
Chess.ipynb
avishkar2001/DataScienceCoolStuff
5b7400bc5b3018fe0e9903f67787ec6221af9ca1
[ "MIT" ]
1
2021-01-13T05:26:14.000Z
2021-01-13T05:26:14.000Z
432.78481
32,256
0.9327
[ [ [ "To create a chessboard with the Python programming language, I will use two Python libraries; Matplotlib for visualization, and NumPy for building an algorithm which will help us to create and visualize a chessboard.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib.colors import LogNorm\n\ndx, dy = 0.015, 0.05\nx = np.arange(-4.0, 4.0, dx)\ny = np.arange(-4.0, 4.0, dy)\nX, Y = np.meshgrid(x, y)\nextent = np.min(x), np.max(x), np.min(y), np.max(y)\nz1 = np.add.outer(range(8), range(8)) % 2\nplt.imshow(z1, cmap=\"binary_r\", interpolation=\"nearest\", extent=extent, alpha=1)\n\ndef chess(x, y):\n return (1 - x / 2 + x ** 5 + y ** 6) * np.exp(-(x ** 2 + y ** 2))\nz2 = chess(X, Y)\nplt.imshow(z2, alpha=0.7, interpolation=\"bilinear\", extent=extent)\nplt.title(\"Chess Board with Python\")\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
e763cc59de7c68b94ea90ea32dab9b493e39ed59
1,652
ipynb
Jupyter Notebook
notebooks/build_database.ipynb
gdsa-upc/2017-EZficy
4b44dc23ccfca44f66726664cf15b189ea9fcd52
[ "MIT" ]
null
null
null
notebooks/build_database.ipynb
gdsa-upc/2017-EZficy
4b44dc23ccfca44f66726664cf15b189ea9fcd52
[ "MIT" ]
null
null
null
notebooks/build_database.ipynb
gdsa-upc/2017-EZficy
4b44dc23ccfca44f66726664cf15b189ea9fcd52
[ "MIT" ]
null
null
null
23.267606
83
0.470944
[ [ [ "import os\nfrom params import get_params\n\ndef build_database(params):\n\n # List images\n image_names = os.listdir(os.path.join(params['root'],\n params['database'],params['split'],'images'))\n\n # File to be saved\n file = open(os.path.join(params['root'],params['root_save'],\n params['image_lists'],\n params['split'] + '.txt'),'w')\n\n # Save image list to disk\n for imname in image_names:\n file.write(imname + \"\\n\")\n file.close()\n\nif __name__==\"__main__\":\n\n params = get_params()\n\n for split in ['train','val','test']:\n params['split'] = split\n build_database(params)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
e763cd182eebc0468a6e4ef2e34518cea16b10e1
44,426
ipynb
Jupyter Notebook
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
9e972a106306f0b3d3f01c08fee7305d3851aa98
[ "MIT" ]
null
null
null
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
9e972a106306f0b3d3f01c08fee7305d3851aa98
[ "MIT" ]
null
null
null
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
9e972a106306f0b3d3f01c08fee7305d3851aa98
[ "MIT" ]
null
null
null
40.68315
9,765
0.605996
[ [ [ "<a href=\"https://cognitiveclass.ai\"><img src = \"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png\" width = 400> </a>\n\n<h1 align=center><font size = 5>Pre-Trained Models</font></h1>", "_____no_output_____" ], [ "## Introduction\n", "_____no_output_____" ], [ "In this lab, you will learn how to leverage pre-trained models to build image classifiers instead of building a model from scratch.", "_____no_output_____" ], [ "## Table of Contents\n\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n\n<font size = 3> \n \n1. <a href=\"#item31\">Import Libraries and Packages</a>\n2. <a href=\"#item32\">Download Data</a> \n3. <a href=\"#item33\">Define Global Constants</a> \n4. <a href=\"#item34\">Construct ImageDataGenerator Instances</a> \n5. <a href=\"#item35\">Compile and Fit Model</a>\n\n</font>\n \n</div>", "_____no_output_____" ], [ " ", "_____no_output_____" ], [ "<a id='item31'></a>", "_____no_output_____" ], [ "## Import Libraries and Packages", "_____no_output_____" ], [ "Let's start the lab by importing the libraries that we will be using in this lab.", "_____no_output_____" ], [ "First, we will import the ImageDataGenerator module since we will be leveraging it to train our model in batches.", "_____no_output_____" ] ], [ [ "from keras.preprocessing.image import ImageDataGenerator", "Using TensorFlow backend.\n" ] ], [ [ "In this lab, we will be using the Keras library to build an image classifier, so let's download the Keras library.", "_____no_output_____" ] ], [ [ "import keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense", "_____no_output_____" ] ], [ [ "Finally, we will be leveraging the ResNet50 model to build our classifier, so let's download it as well.", "_____no_output_____" ] ], [ [ "from keras.applications import ResNet50\nfrom keras.applications.resnet50 import preprocess_input", "_____no_output_____" ] ], [ [ "<a id='item32'></a>", "_____no_output_____" ], [ "## Download Data", "_____no_output_____" ], [ "For your convenience, I have placed the data on a server which you can retrieve easily using the **wget** command. So let's run the following line of code to get the data. Given the large size of the image dataset, it might take some time depending on your internet speed.", "_____no_output_____" ] ], [ [ "## get the data\n!wget https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0321EN/data/concrete_data_week3.zip", "--2020-05-05 14:05:56-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0321EN/data/concrete_data_week3.zip\r\nResolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196\r\nConnecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.\r\nHTTP request sent, awaiting response... 200 OK\r\nLength: 261482368 (249M) [application/zip]\r\nSaving to: ‘concrete_data_week3.zip.2’\r\n\r\nconcrete_data_week3 100%[===================>] 249.37M 7.18MB/s in 38s \r\n\r\n2020-05-05 14:06:35 (6.57 MB/s) - ‘concrete_data_week3.zip.2’ saved [261482368/261482368]\r\n\r\n" ] ], [ [ "And now if you check the left directory pane, you should see the zipped file *concrete_data_week3.zip* appear. So, let's go ahead and unzip the file to access the images. 
Given the large number of images in the dataset, this might take a couple of minutes, so please be patient, and wait until the code finishes running.", "_____no_output_____" ] ], [ [ "!unzip -n concrete_data_week3.zip", "Archive: concrete_data_week3.zip\r\n" ] ], [ [ "Now, you should see the folder *concrete_data_week3* appear in the left pane. If you open this folder by double-clicking on it, you will find that it contains two folders: *train* and *valid*. And if you explore these folders, you will find that each contains two subfolders: *positive* and *negative*. These are the same folders that we saw in the labs in the previous modules of this course, where *negative* is the negative class and it represents the concrete images with no cracks and *positive* is the positive class and it represents the concrete images with cracks.", "_____no_output_____" ], [ "**Important Note**: There are thousands and thousands of images in each folder, so please don't attempt to double click on the *negative* and *positive* folders. This may consume all of your memory and you may end up with a **50*** error. So please **DO NOT DO IT**.", "_____no_output_____" ], [ "<a id='item33'></a>", "_____no_output_____" ], [ "## Define Global Constants", "_____no_output_____" ], [ "Here, we will define constants that we will be using throughout the rest of the lab. \n\n1. We are obviously dealing with two classes, so *num_classes* is 2. \n2. The ResNet50 model was built and trained using images of size (224 x 224). Therefore, we will have to resize our images from (227 x 227) to (224 x 224).\n3. We will training and validating the model using batches of 100 images.", "_____no_output_____" ] ], [ [ "num_classes = 2\n\nimage_resize = 224\n\nbatch_size_training = 100\nbatch_size_validation = 100", "_____no_output_____" ] ], [ [ "<a id='item34'></a>", "_____no_output_____" ], [ "## Construct ImageDataGenerator Instances", "_____no_output_____" ], [ "In order to instantiate an ImageDataGenerator instance, we will set the **preprocessing_function** argument to *preprocess_input* which we imported from **keras.applications.resnet50** in order to preprocess our images the same way the images used to train ResNet50 model were processed.", "_____no_output_____" ] ], [ [ "data_generator = ImageDataGenerator(\n preprocessing_function=preprocess_input,\n)", "_____no_output_____" ] ], [ [ "Next, we will use the *flow_from_directory* method to get the training images as follows:", "_____no_output_____" ] ], [ [ "train_generator = data_generator.flow_from_directory(\n 'concrete_data_week3/train',\n target_size=(image_resize, image_resize),\n batch_size=batch_size_training,\n class_mode='categorical')", "Found 30001 images belonging to 2 classes.\n" ] ], [ [ "**Your Turn**: Use the *flow_from_directory* method to get the validation images and assign the result to **validation_generator**.", "_____no_output_____" ] ], [ [ "## Type your answer here\nvalidation_generator = data_generator.flow_from_directory(\n 'concrete_data_week3/valid',\n target_size=(image_resize, image_resize),\n batch_size=batch_size_validation,\n class_mode='categorical')\n\n\n\n", "Found 10001 images belonging to 2 classes.\n" ] ], [ [ "Double-click __here__ for the solution.\n<!-- The correct answer is:\nvalidation_generator = data_generator.flow_from_directory(\n 'concrete_data_week3/valid',\n target_size=(image_resize, image_resize),\n batch_size=batch_size_validation,\n class_mode='categorical')\n-->\n\n", "_____no_output_____" ], [ "<a id='item35'></a>", 
"_____no_output_____" ], [ "## Build, Compile and Fit Model", "_____no_output_____" ], [ "In this section, we will start building our model. We will use the Sequential model class from Keras.", "_____no_output_____" ] ], [ [ "model = Sequential()", "_____no_output_____" ] ], [ [ "Next, we will add the ResNet50 pre-trained model to out model. However, note that we don't want to include the top layer or the output layer of the pre-trained model. We actually want to define our own output layer and train it so that it is optimized for our image dataset. In order to leave out the output layer of the pre-trained model, we will use the argument *include_top* and set it to **False**.", "_____no_output_____" ] ], [ [ "model.add(ResNet50(\n include_top=False,\n pooling='avg',\n weights='imagenet',\n ))", "_____no_output_____" ] ], [ [ "Then, we will define our output layer as a **Dense** layer, that consists of two nodes and uses the **Softmax** function as the activation function.", "_____no_output_____" ] ], [ [ "model.add(Dense(num_classes, activation='softmax'))", "_____no_output_____" ] ], [ [ "You can access the model's layers using the *layers* attribute of our model object. ", "_____no_output_____" ] ], [ [ "model.layers", "_____no_output_____" ] ], [ [ "You can see that our model is composed of two sets of layers. The first set is the layers pertaining to ResNet50 and the second set is a single layer, which is our Dense layer that we defined above.", "_____no_output_____" ], [ "You can access the ResNet50 layers by running the following:", "_____no_output_____" ] ], [ [ "model.layers[0].layers", "_____no_output_____" ] ], [ [ "Since the ResNet50 model has already been trained, then we want to tell our model not to bother with training the ResNet part, but to train only our dense output layer. To do that, we run the following.", "_____no_output_____" ] ], [ [ "model.layers[0].trainable = False", "_____no_output_____" ] ], [ [ "And now using the *summary* attribute of the model, we can see how many parameters we will need to optimize in order to train the output layer.", "_____no_output_____" ] ], [ [ "model.summary()", "Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nresnet50 (Model) (None, 2048) 23587712 \n_________________________________________________________________\ndense_1 (Dense) (None, 2) 4098 \n=================================================================\nTotal params: 23,591,810\nTrainable params: 4,098\nNon-trainable params: 23,587,712\n_________________________________________________________________\n" ] ], [ [ "Next we compile our model using the **adam** optimizer.", "_____no_output_____" ] ], [ [ "model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ] ], [ [ "Before we are able to start the training process, with an ImageDataGenerator, we will need to define how many steps compose an epoch. Typically, that is the number of images divided by the batch size. Therefore, we define our steps per epoch as follows:", "_____no_output_____" ] ], [ [ "steps_per_epoch_training = len(train_generator)\nsteps_per_epoch_validation = len(validation_generator)\nnum_epochs = 2", "_____no_output_____" ] ], [ [ "Finally, we are ready to start training our model. 
Unlike conventional deep learning training, where data is not streamed from a directory, with an ImageDataGenerator, where data is augmented in batches, we use the **fit_generator** method.", "_____no_output_____" ] ], [ [ "fit_history = model.fit_generator(\n train_generator,\n steps_per_epoch=steps_per_epoch_training,\n epochs=num_epochs,\n validation_data=validation_generator,\n validation_steps=steps_per_epoch_validation,\n verbose=1,\n)", "Epoch 1/2\n 52/301 [====>.........................] - ETA: 41:44 - loss: 0.1047 - accuracy: 0.9704
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b" ] ], [ [ "Now that the model is trained, you are ready to start using it to classify images.", "_____no_output_____" ], [ "Since training can take a long time when building deep learning models, it is always a good idea to save your model once the training is complete if you believe you will be using the model again later. You will be using this model in the next module, so go ahead and save your model.", "_____no_output_____" ] ], [ [ "model.save('classifier_resnet_model.h5')", "_____no_output_____" ] ], [ [ "Now, you should see the model file *classifier_resnet_model.h5* apprear in the left directory pane.", "_____no_output_____" ], [ "### Thank you for completing this lab!\n\nThis notebook was created by Alex Aklson. I hope you found this lab interesting and educational.", "_____no_output_____" ], [ "This notebook is part of a course on **Coursera** called *AI Capstone Project with Deep Learning*. If you accessed this notebook outside the course, you can take this course online by clicking [here](https://cocl.us/DL0321EN_Coursera_Week3_LAB1).", "_____no_output_____" ], [ "<hr>\n\nCopyright &copy; 2020 [IBM Developer Skills Network](https://cognitiveclass.ai/?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). 
This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
e763cdd02d223d43600cefc81e837486415e1876
4,383
ipynb
Jupyter Notebook
Chapter02/Chapter2_Keras_sequential_models_TF2_alpha.ipynb
chetannitk/Tensorflow-2.0-Quick-Start-Guide
8f5c81e616b478eaa2d6af259f0bd2d1ed518d8a
[ "MIT" ]
126
2018-06-28T04:50:51.000Z
2022-03-11T02:10:11.000Z
Chapter02/Chapter2_Keras_sequential_models_TF2_alpha.ipynb
tjankovic/Tensorflow-2.0-Quick-Start-Guide
4509c97e94412a48a0aba02304d9d878e70437a0
[ "MIT" ]
1
2020-10-08T11:55:10.000Z
2020-10-08T11:55:10.000Z
Chapter02/Chapter2_Keras_sequential_models_TF2_alpha.ipynb
tjankovic/Tensorflow-2.0-Quick-Start-Guide
4509c97e94412a48a0aba02304d9d878e70437a0
[ "MIT" ]
84
2019-03-15T06:00:56.000Z
2022-01-26T17:08:47.000Z
20.871429
110
0.538672
[ [ [ "import tensorflow as tf\nfrom tensorflow.keras import backend as K\n\n", "_____no_output_____" ], [ "print(tf.keras.__version__)", "_____no_output_____" ], [ "const = K.constant([[42,24],[11,99]], dtype=tf.float16, shape=[2,2])", "_____no_output_____" ], [ "const", "_____no_output_____" ] ], [ [ "### Acquire data", "_____no_output_____" ] ], [ [ "mnist = tf.keras.datasets.mnist\n(train_x,train_y), (test_x, test_y) = mnist.load_data()\n\nbatch_size = 32 # 32 is default but specify anyway\nepochs=10", "_____no_output_____" ] ], [ [ "### Normalise data", "_____no_output_____" ] ], [ [ "train_x, test_x = tf.cast(train_x/255.0, tf.float32), tf.cast(test_x/255.0, tf.float32)\ntrain_y, test_y = tf.cast(train_y,tf.int64),tf.cast(test_y,tf.int64)\n", "_____no_output_____" ] ], [ [ "### Sequential Model #1", "_____no_output_____" ] ], [ [ "model1 = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(512,activation=tf.nn.relu),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10,activation=tf.nn.softmax)\n])", "_____no_output_____" ], [ "optimiser = tf.keras.optimizers.Adam()\nmodel1.compile (optimizer= optimiser, loss='sparse_categorical_crossentropy', metrics = ['accuracy'])", "_____no_output_____" ], [ "model1.fit(train_x, train_y, batch_size=batch_size, epochs=epochs)", "_____no_output_____" ], [ "model1.evaluate(test_x, test_y)", "_____no_output_____" ] ], [ [ "### Sequential Model #2", "_____no_output_____" ] ], [ [ "model2 = tf.keras.models.Sequential();\nmodel2.add(tf.keras.layers.Flatten())\nmodel2.add(tf.keras.layers.Dense(512, activation='relu'))\nmodel2.add(tf.keras.layers.Dropout(0.2))\nmodel2.add(tf.keras.layers.Dense(10,activation=tf.nn.softmax))", "_____no_output_____" ], [ "optimiser = tf.keras.optimizers.Adam()\nmodel2.compile (optimizer= optimiser, loss='sparse_categorical_crossentropy', metrics = ['accuracy'])\nmodel2.fit(train_x, train_y, batch_size = batch_size, epochs=epochs)\nmodel2.evaluate(test_x, test_y)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e763d04567d1aca6544295b8d8099dd5fa5a4250
37,484
ipynb
Jupyter Notebook
quantile_regression/Keras Quantile Model.ipynb
Alex2Yang97/internship_explore_work
bfd7e84b89583dce15be661dc760cb69965d1933
[ "MIT" ]
null
null
null
quantile_regression/Keras Quantile Model.ipynb
Alex2Yang97/internship_explore_work
bfd7e84b89583dce15be661dc760cb69965d1933
[ "MIT" ]
null
null
null
quantile_regression/Keras Quantile Model.ipynb
Alex2Yang97/internship_explore_work
bfd7e84b89583dce15be661dc760cb69965d1933
[ "MIT" ]
null
null
null
88.197647
15,400
0.803623
[ [ [ "https://github.com/sachinruk/KerasQuantileModel", "_____no_output_____" ], [ "# Deep Quantile Regression\n\nOne area that Deep Learning has not explored extensively is the uncertainty in estimates. However, as far as decision making goes, most people actually require quantiles as opposed to true uncertainty in an estimate. eg. For a given age the weight of an individual will vary. What would be interesting is the (for arguments sake) the 10th and 90th percentile. The uncertainty of _the estimate_ of an individuals weight is less interesting.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\nimport tensorflow.keras.backend as K\n\n%matplotlib inline\n\nmcycle = pd.read_csv('mcycle',delimiter='\\t')", "_____no_output_____" ] ], [ [ "Standardise the inputs and outputs so that it is easier to train. I have't saved the mean and standard deviations, but you should.", "_____no_output_____" ] ], [ [ "mcycle.times = (mcycle.times - mcycle.times.mean())/mcycle.times.std()\nmcycle.accel = (mcycle.accel - mcycle.accel.mean())/mcycle.accel.std()", "_____no_output_____" ], [ "mcycle", "_____no_output_____" ], [ "model = Sequential()\nmodel.add(Dense(units=10, input_dim=1,activation='relu'))\nmodel.add(Dense(units=10, input_dim=1,activation='relu'))\nmodel.add(Dense(1))\nmodel.compile(loss='mae', optimizer='adadelta')\nmodel.fit(mcycle.times.values, mcycle.accel.values, epochs=2000, batch_size=32, verbose=0)\nmodel.evaluate(mcycle.times.values, mcycle.accel.values)", "3/3 [==============================] - 0s 332us/step - loss: 0.8030\n" ], [ "model.summary()", "Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_3 (Dense) (None, 10) 20 \n_________________________________________________________________\ndense_4 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_5 (Dense) (None, 1) 11 \n=================================================================\nTotal params: 141\nTrainable params: 141\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "t_test = np.linspace(mcycle.times.min(),mcycle.times.max(),200)\ny_test = model.predict(t_test)\n\nplt.scatter(mcycle.times, mcycle.accel)\nplt.plot(t_test, y_test,'r')\nplt.show()", "_____no_output_____" ] ], [ [ "## Quantiles 0.1, 0.5, 0.9\n\nThe loss for an individual data point is defined as:\n$$\n\\begin{align}\n\\mathcal{L}(\\xi_i|\\alpha)=\\begin{cases}\n\\alpha \\xi_i &\\text{if }\\xi_i\\ge 0, \\\\\n(\\alpha-1) \\xi_i &\\text{if }\\xi_i< 0.\n\\end{cases}\n\\end{align}\n$$\nwhere $\\alpha$ is the required quantile and $\\xi_i = y_i - f(\\mathbf{x}_i)$ and, $f(\\mathbf{x}_i)$ is the predicted (quantile) model. The final overall loss is defines as:\n$$\\mathcal{L}(\\mathbf{y},\\mathbf{f}|\\alpha)=\\frac{1}{N} \\sum_{i=1}^N \\mathcal{L}(y_i-f(\\mathbf{x}_i)|\\alpha)$$\n\nThe following function defines the loss function for a quantile model.\n\n**Note**: The following 4 lines is ALL that you change in comparison to a normal Deep Learning method, i.e. 
The loss function is all that changes.", "_____no_output_____" ] ], [ [ "def tilted_loss(q,y,f):\n e = (y-f)\n return K.mean(K.maximum(q*e, (q-1)*e), axis=-1)", "_____no_output_____" ], [ "def mcycleModel():\n model = Sequential()\n model.add(Dense(units=32, input_dim=1,activation='relu'))\n model.add(Dense(units=32,activation='relu'))\n model.add(Dense(1))\n \n return model", "_____no_output_____" ], [ "qs = [0.1, 0.5, 0.9]\n\nt_test = np.linspace(mcycle.times.min(),mcycle.times.max(),200)\nplt.scatter(mcycle.times,mcycle.accel)\n\nfor q in qs:\n model = mcycleModel()\n model.compile(loss=lambda y,f: tilted_loss(q,y,f), optimizer='adadelta')\n model.fit(mcycle.times.values, mcycle.accel.values, epochs=2000, batch_size=32, verbose=0)\n \n # Predict the quantile\n y_test = model.predict(t_test)\n plt.plot(t_test, y_test, label=q) # plot out this quantile\n\nplt.legend() \nplt.show() ", "_____no_output_____" ] ], [ [ "## Final Notes\n\n1. Note that the quantile 0.5 is the same as median, which you can attain by minimising Mean Absolute Error, which you can attain in Keras regardless with `loss='mae'`. \n2. Uncertainty and quantiles are **not** the same thing. But most of the time you care about quantiles and not uncertainty.\n3. If you really do want uncertainty with deep nets checkout http://mlg.eng.cam.ac.uk/yarin/blog_3d801aa532c1ce.html", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
e763d0ffa3486068e86a63dd08a243c608c2188c
196,106
ipynb
Jupyter Notebook
.ipynb_checkpoints/4.4-overfitting-and-underfitting-checkpoint.ipynb
gaborstefanics/deep-learning-with-python-notebooks
63df3f62e1814a0a5c3cec849dc931aa6f288610
[ "MIT" ]
null
null
null
.ipynb_checkpoints/4.4-overfitting-and-underfitting-checkpoint.ipynb
gaborstefanics/deep-learning-with-python-notebooks
63df3f62e1814a0a5c3cec849dc931aa6f288610
[ "MIT" ]
null
null
null
.ipynb_checkpoints/4.4-overfitting-and-underfitting-checkpoint.ipynb
gaborstefanics/deep-learning-with-python-notebooks
63df3f62e1814a0a5c3cec849dc931aa6f288610
[ "MIT" ]
null
null
null
135.059229
14,008
0.810684
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e763dd238014e593b7426d0f28a94dc31c0fabf4
52,260
ipynb
Jupyter Notebook
temp_analysis_bonus_2_starter.ipynb
mattkenney9/SQL-Alchemy-Challenge
dc9ab27a4427014e4f8968afda80e9fca4ab5224
[ "ADSL" ]
null
null
null
temp_analysis_bonus_2_starter.ipynb
mattkenney9/SQL-Alchemy-Challenge
dc9ab27a4427014e4f8968afda80e9fca4ab5224
[ "ADSL" ]
null
null
null
temp_analysis_bonus_2_starter.ipynb
mattkenney9/SQL-Alchemy-Challenge
dc9ab27a4427014e4f8968afda80e9fca4ab5224
[ "ADSL" ]
null
null
null
97.318436
26,060
0.830233
[ [ [ "%matplotlib inline\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nimport datetime as dt", "_____no_output_____" ] ], [ [ "## Reflect Tables into SQLALchemy ORM", "_____no_output_____" ] ], [ [ "# Python SQL toolkit and Object Relational Mapper\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, func", "_____no_output_____" ], [ "# create engine to hawaii.sqlite\nengine = create_engine(\"sqlite:///Resources/hawaii.sqlite\")", "_____no_output_____" ], [ "# reflect an existing database into a new model\nBase = automap_base()\n# reflect the tables\nBase.prepare(engine, reflect = True)", "_____no_output_____" ], [ "# View all of the classes that automap found\nBase.classes.keys()", "_____no_output_____" ], [ "# Save references to each table\nMeasurement = Base.classes.measurement\nStation = Base.classes.station", "_____no_output_____" ], [ "# Create our session (link) from Python to the DB\nsession = Session(engine)", "_____no_output_____" ] ], [ [ "## Bonus Challenge Assignment: Temperature Analysis II", "_____no_output_____" ] ], [ [ "# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' \n# and return the minimum, maximum, and average temperatures for that range of dates\ndef calc_temps(start_date, end_date):\n \"\"\"TMIN, TAVG, and TMAX for a list of dates.\n \n Args:\n start_date (string): A date string in the format %Y-%m-%d\n end_date (string): A date string in the format %Y-%m-%d\n \n Returns:\n TMIN, TAVE, and TMAX\n \"\"\"\n \n return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\\\n filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()\n\n# For example\nprint(calc_temps('2012-02-28', '2012-03-05'))", "[(62.0, 69.57142857142857, 74.0)]\n" ], [ "# Use the function `calc_temps` to calculate the tmin, tavg, and tmax \n# for a year in the data set\n\nprint(calc_temps('2013-01-01', '2013-12-31'))\n\nyear_2013 = calc_temps('2013-01-01', '2013-12-31')", "[(53.0, 72.67865168539326, 86.0)]\n" ], [ "min_2013 = year_2013[0][0]\navg_2013 = year_2013[0][1]\nmax_2013 = year_2013[0][2]\ndist = (max_2013 - min_2013)/ 2\n\n#create dataFrame to plot \nsummary_df = pd.DataFrame({\"min\":[year_2013[0][0]], \"avg\":[year_2013[0][1]], \"max\":[year_2013[0][2]]})", "_____no_output_____" ], [ "# Plot the results from your previous query as a bar chart. 
\n# Use \"Trip Avg Temp\" as your Title\n# Use the average temperature for bar height (y value)\n# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)\n\nplt.figure(figsize = (3,6))\nsummary_df[\"avg\"].plot(kind = \"bar\", color = \"salmon\", alpha = .5, grid = False)\nplt.xticks(np.arange(1), labels = \" \")\nplt.ylim(0, 100)\nplt.grid(axis = \"y\")\nplt.errorbar(x = 0, y = avg_2013, yerr = dist, ecolor = \"black\")\nplt.title(\"Trip Avg Temp\")\nplt.ylabel(\"Temp (F)\")\n\nplt.show()", "_____no_output_____" ] ], [ [ "### Daily Rainfall Average", "_____no_output_____" ] ], [ [ "# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's \n# matching dates.\n# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation\n\nsession.query(Measurement.station, func.sum(Measurement.prcp), Station.name, Station.latitude, Station.longitude, Station.elevation)\\\n .filter(Measurement.date >= '2013-01-01')\\\n .filter(Measurement.date <= '2013-12-31')\\\n .group_by(Measurement.station).order_by(func.sum(Measurement.prcp).desc())\\\n .all()", "C:\\Users\\Matt\\anaconda\\envs\\PythonData\\lib\\site-packages\\sqlalchemy\\sql\\compiler.py:362: SAWarning: SELECT statement has a cartesian product between FROM element(s) \"measurement\" and FROM element \"station\". Apply join condition(s) between each element to resolve.\n util.warn(message)\n" ], [ "# Use this function to calculate the daily normals \n# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)\n\ndef daily_normals(date):\n \"\"\"Daily Normals.\n \n Args:\n date (str): A date string in the format '%m-%d'\n \n Returns:\n A list of tuples containing the daily normals, tmin, tavg, and tmax\n \n \"\"\"\n \n sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]\n return session.query(*sel).filter(func.strftime(\"%m-%d\", Measurement.date) == date).all()\n\n# For example\ndaily_normals(\"01-01\")", "_____no_output_____" ], [ "# calculate the daily normals for your trip\n# push each tuple of calculations into a list called `normals`\n\n# Set the start and end date of the trip\nstart_date = '2017-08-01'\nend_date = '2017-08-07'\n\n# Use the start and end date to create a range of dates\ndr_raw = pd.date_range(dt.date(2017, 8, 1), periods = 7).to_list()\n\n# Strip off the year and save a list of strings in the format %m-%d\ndr = [x.strftime(\"%m-%d\") for x in dr_raw]\n\n# Use the `daily_normals` function to calculate the normals for each date string \n# and append the results to a list called `normals`.\n\nnormals = []\n\nfor i in range(len(dr)):\n normals.append(daily_normals(dr[i]))\n \nprint(normals)", "[[(67.0, 75.54, 83.0)], [(68.0, 75.60377358490567, 84.0)], [(70.0, 76.61111111111111, 85.0)], [(69.0, 76.71153846153847, 84.0)], [(69.0, 76.14814814814815, 82.0)], [(67.0, 76.25, 83.0)], [(71.0, 77.15686274509804, 83.0)]]\n" ], [ "#use list compresion to create list of tuples, making index simpler\n\nout = [item for x in normals for item in x]\n\n#create seperate list for each descriptive statistic\nmins = []\navgs = []\nmaxs = []\n\nfor i in range(len(out)):\n mins.append(out[i][0])\n avgs.append(out[i][1])\n maxs.append(out[i][2])", "_____no_output_____" ], [ "# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index\n\ndr_full = [x.strftime(\"%Y-%m-%d\") for x in dr_raw]\n\ndf = 
pd.DataFrame({\"tmin\":mins, \"tavg\":avgs, \"tmax\":maxs}, index = dr_full)\ndisplay(df)", "_____no_output_____" ], [ "# Plot the daily normals as an area plot with `stacked=False`\n\ndf.plot(kind = \"area\", stacked = False, rot = 45)\nplt.ylim(0, 100)\nplt.ylabel(\"Temperature\")\nplt.legend(loc = (.05, .02))\nplt.show()", "_____no_output_____" ] ], [ [ "## Close Session", "_____no_output_____" ] ], [ [ "session.close()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e763e66468622551669a03a47f2dca2a07777246
282,131
ipynb
Jupyter Notebook
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
f529a86490110fc6a30d1af6d37c0add2517244f
[ "CC-BY-4.0" ]
3
2019-05-30T15:15:50.000Z
2021-09-23T01:01:08.000Z
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
f529a86490110fc6a30d1af6d37c0add2517244f
[ "CC-BY-4.0" ]
null
null
null
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
f529a86490110fc6a30d1af6d37c0add2517244f
[ "CC-BY-4.0" ]
null
null
null
292.363731
36,748
0.91874
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nfrom lifelines import CoxPHFitter\nfrom lifelines import KaplanMeierFitter\nfrom lifelines.statistics import multivariate_logrank_test\nfrom lifelines.statistics import proportional_hazard_test\nfrom lifelines.plotting import add_at_risk_counts", "_____no_output_____" ], [ "print_plots = False\n\nif print_plots:\n output_path = '~/output'\n import os\n os.makedirs(output_path, exist_ok=True)", "_____no_output_____" ] ], [ [ "***\n# Load Stanford Data\nExported from R", "_____no_output_____" ] ], [ [ "dfp = pd.read_csv('~/stanford.csv')", "_____no_output_____" ], [ "dfp.head(5)", "_____no_output_____" ] ], [ [ "***\n# [Kaplan-Meier Model](https://lifelines.readthedocs.io/en/latest/Quickstart.html#kaplan-meier-nelson-aalen-and-parametric-models)", "_____no_output_____" ] ], [ [ "def km_plot_by_factor(factor, dfp, err=True, tablim=2):\n kmf= []\n fig = plt.figure(figsize=(12, 8))\n ax = plt.axes()\n for i,v in enumerate(sorted(dfp[factor].unique())[::-1]):\n # Create a fitter\n kmf.append(KaplanMeierFitter())\n kmf[i].fit(dfp.query(f'agecat == \"{v}\"')['time'], event_observed=dfp.query(f'agecat == \"{v}\"')['status'], label=v) \n if err:\n kmf[i].plot(ax=ax)\n else:\n kmf[i].survival_function_.plot(ax=ax)\n\n add_at_risk_counts(*kmf[:tablim])\n stat_test = multivariate_logrank_test(dfp['time'], dfp[factor], dfp['status'])\n ax.set_title(f'Logrank Test p-value: {stat_test.p_value:.4g}')\n ax.set_ylabel('Survival')\n ax.set_xlabel('Time')\n fig.tight_layout()\n if print_plots:\n fig.savefig(f'{output_path}/km_by_{factor}.pdf')", "_____no_output_____" ], [ "km_plot_by_factor('agecat', dfp)", "_____no_output_____" ] ], [ [ "***\n# [Cox Model](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#fitting-the-regression)", "_____no_output_____" ], [ "## Fit Model", "_____no_output_____" ] ], [ [ "covariates = ['age']\n\ncph = CoxPHFitter()\ncph.fit(dfp[['time', 'status']+covariates], duration_col='time', event_col='status')", "_____no_output_____" ], [ "cph.print_summary(model='Cox Age', decimals=4)", "_____no_output_____" ], [ "cph.log_likelihood_ratio_test()", "_____no_output_____" ] ], [ [ "## [Plot the hazard ratios](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#plotting-the-coefficients)", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(12, 8))\nax = cph.plot(hazard_ratios=True)\nfig.tight_layout()\nif print_plots:\n fig.savefig(f'{output_path}/cox_hrs.pdf')", "_____no_output_____" ] ], [ [ "## [Plot the effect of varying a covariate](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#plotting-the-effect-of-varying-a-covariate)\n[Note that the mean age is being used as the baseline value](https://github.com/CamDavidsonPilon/lifelines/issues/543).", "_____no_output_____" ] ], [ [ "for covariate in covariates:\n values = sorted(list(dfp[covariate].unique()))\n if len(values) > 3:\n values = np.around(np.linspace(dfp[covariate].min(), dfp[covariate].max(), 4), 2)\n ax = cph.plot_partial_effects_on_outcome(covariates=covariate, values=values, cmap='coolwarm')\n ax.set_title(f'Effects of varying {covariate}')\n ax.set_ylabel('Survival')\n ax.set_xlabel('Time')\n fig = ax.figure\n fig.set_figwidth(12)\n fig.set_figheight(8)\n fig.tight_layout()\n if print_plots:\n fig.savefig(f'{output_path}/cox_survival_varying_{covariate}.pdf')", "_____no_output_____" ] ], [ [ "## Check Assumptions and Fit", "_____no_output_____" ], [ 
"### [Test the proportional hazard assumptions](https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Proportional%20hazard%20assumption.html)\nWould produce Schoenfeld residual plots if needed", "_____no_output_____" ] ], [ [ "cph.check_assumptions(dfp[['time', 'status']+covariates], p_value_threshold=0.01, show_plots=True)", "Proportional hazard assumption looks okay.\n" ], [ "proportional_hazard_test(cph, dfp[['time', 'status']+covariates], time_transform='rank')", "_____no_output_____" ] ], [ [ "### [Create Linear Predictors for Residual Plots](https://lifelines.readthedocs.io/en/latest/fitters/regression/CoxPHFitter.html#lifelines.fitters.coxph_fitter.SemiParametricPHFitter.predict_log_partial_hazard)\nAnd setup plotting", "_____no_output_____" ] ], [ [ "dfp_linear_predictors = cph.predict_log_partial_hazard(dfp[covariates])", "_____no_output_____" ], [ "def plot_residuals(dfp, residual_type, x_axis='time', x_axis_label='Time', y_axis_label_suffix=' Residuals'):\n fig = plt.figure(figsize=(12, 8))\n ax = plt.axes()\n\n for status in [True, False]:\n if status:\n label = 'Event'\n c = 'tab:blue'\n else:\n label = 'Censored'\n c = 'tab:orange'\n\n ax.scatter(dfp.query(f'status == {status}')[x_axis], dfp.query(f'status == {status}')[residual_type], label=label, s=50, c=c)\n\n ax.set_ylabel(f\"{residual_type.replace('scaled_schoenfeld_', 'scaled schoenfeld ').title()}{y_axis_label_suffix}\")\n ax.set_xlabel(x_axis_label)\n ax.legend()\n\n fig.tight_layout()\n if print_plots:\n fig.savefig(f'{output_path}/cox_{residual_type}_residuals_vs_{x_axis}.pdf')", "_____no_output_____" ], [ "dfp_residuals = dfp[['time', 'status']+covariates].copy()\ndfp_residuals['Observation'] = dfp_residuals.index\ndfp_residuals['linear_predictors'] = dfp_linear_predictors\ndfp_residuals['martingale'] = cph.compute_residuals(dfp[['time', 'status']+covariates], 'martingale')['martingale']\ndfp_residuals['deviance'] = cph.compute_residuals(dfp[['time', 'status']+covariates], 'deviance')['deviance']\nfor covariate in covariates:\n dfp_residuals[f'scaled_schoenfeld_{covariate}'] = cph.compute_residuals(dfp[['time', 'status']+covariates], 'scaled_schoenfeld')[covariate]\n dfp_residuals[f'dfbeta_{covariate}'] = cph.compute_residuals(dfp[['time', 'status']+covariates], 'delta_beta')[covariate]", "_____no_output_____" ] ], [ [ "### Scaled Schoenfeld", "_____no_output_____" ] ], [ [ "for covariate in covariates:\n plot_residuals(dfp_residuals, f'scaled_schoenfeld_{covariate}')", "_____no_output_____" ] ], [ [ "### [Martingale Residuals](https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Cox%20residuals.html#Martingale-residuals)", "_____no_output_____" ] ], [ [ "plot_residuals(dfp_residuals, 'martingale')\nplot_residuals(dfp_residuals, 'martingale', x_axis='linear_predictors', x_axis_label='Linear Predictors')", "_____no_output_____" ] ], [ [ "### [Deviance Residuals](https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Cox%20residuals.html#Deviance-residuals)", "_____no_output_____" ] ], [ [ "plot_residuals(dfp_residuals, 'deviance')\nplot_residuals(dfp_residuals, 'deviance', x_axis='linear_predictors', x_axis_label='Linear Predictors')", "_____no_output_____" ] ], [ [ "### dfbeta", "_____no_output_____" ] ], [ [ "for covariate in covariates:\n plot_residuals(dfp_residuals, f'dfbeta_{covariate}', x_axis='Observation', x_axis_label='Observation', y_axis_label_suffix='')", "_____no_output_____" ] ], [ [ "***\n### You may want to also look into these additional lifelines abilities:\n* [Sample size 
determination](https://lifelines.readthedocs.io/en/latest/Examples.html#sample-size-determination-under-a-coxph-model)\n* [Model probability calibration](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#model-probability-calibration)\n* [Cross validation](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#cross-validation)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e763ec9dd291755bb0edaae6c98266e0946761e2
139,661
ipynb
Jupyter Notebook
Iris.ipynb
keshav11/iris
3a0ce27d056e2fdb5dbe2dcc8e4fde6f1d9cf2fc
[ "MIT" ]
null
null
null
Iris.ipynb
keshav11/iris
3a0ce27d056e2fdb5dbe2dcc8e4fde6f1d9cf2fc
[ "MIT" ]
null
null
null
Iris.ipynb
keshav11/iris
3a0ce27d056e2fdb5dbe2dcc8e4fde6f1d9cf2fc
[ "MIT" ]
null
null
null
250.28853
69,360
0.896872
[ [ [ "### Imports", "_____no_output_____" ] ], [ [ "import os\nimport pandas as pd\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.cross_validation import train_test_split\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Loading data", "_____no_output_____" ] ], [ [ "data_path = 'data'\ncol_names = ['sepal length in cm', 'sepal width in cm', 'petal length in cm', 'petal width in cm', 'class']\niris_data = pd.read_csv(os.path.join(data_path, 'iris.data'), header=None, names=col_names)\niris_data.head()", "_____no_output_____" ], [ "iris_data.describe()", "_____no_output_____" ], [ "iris_data.plot()\nplt.show()", "_____no_output_____" ] ], [ [ "### Encoding\nLabel encoding and one hot encoding", "_____no_output_____" ] ], [ [ "X = iris_data.values[:,0:4]\nY = iris_data.values[:,4]\n# label encoder\nlabel_encoder = LabelEncoder()\nlabel_encoder = label_encoder.fit(Y)\nlabel_encoded_class = label_encoder.transform(Y)\nlabel_encoded_class", "_____no_output_____" ], [ "# one hot encoder\ncol_mat = label_encoded_class.reshape(-1,1)\none_hot_encoder = OneHotEncoder()\none_hot_encoder = one_hot_encoder.fit(col_mat)\none_hot_encoded_class = one_hot_encoder.transform(col_mat)\none_hot_encoded_class.toarray()[0:10]", "_____no_output_____" ], [ "plt.scatter(X[:,0], X[:,1], c=label_encoded_class)\nplt.xlabel(col_names[0])\nplt.ylabel(col_names[1])\nplt.show()", "_____no_output_____" ], [ "plt.scatter(X[:,2], X[:,3], c=label_encoded_class)\nplt.xlabel(col_names[2])\nplt.ylabel(col_names[3])\nplt.show()", "_____no_output_____" ] ], [ [ "### Classifiers", "_____no_output_____" ] ], [ [ "# partition data\nx_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=11)", "_____no_output_____" ], [ "# using svm\nfrom sklearn.svm import SVC\n\nsvm = SVC(kernel='linear')\nsvm.fit(x_train, y_train)\ns = svm.score(x_test, y_test)\nprint('svm:', s)\n", "svm: 0.98\n" ], [ "# KNN\nfrom sklearn.neighbors import KNeighborsClassifier\nnearest_n = KNeighborsClassifier(n_neighbors=5)\nnearest_n.fit(x_train, y_train)\ns = nearest_n.score(x_test, y_test)\nprint('knn:', s)\n\n", "knn: 0.98\n" ], [ "# decision tree classifier\nfrom sklearn.tree import DecisionTreeClassifier\ndec_tree = DecisionTreeClassifier()\ndec_tree.fit(x_train, y_train)\ns = dec_tree.score(x_test, y_test)\nprint('decision tree classifier:', s)", "decision tree classifier: 0.92\n" ], [ "# random forest classifier\nfrom sklearn.ensemble import RandomForestClassifier\nrandom_forest = RandomForestClassifier(max_depth=5)\nrandom_forest.fit(x_train, y_train)\ns = random_forest.score(x_test, y_test)\nprint('random forest classifier:', s)", "random forest classifier: 0.94\n" ], [ "# Gaussing Naive Bayes\nfrom sklearn.naive_bayes import GaussianNB\ngauss_nb = GaussianNB()\ngauss_nb.fit(x_train, y_train)\ns = gauss_nb.score(x_test, y_test)\nprint('Gaussing Naive Bayes:', s)", "Gaussing Naive Bayes: 0.88\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e7641a5602e1bda795d67f8ee72a9767fc63be54
209,993
ipynb
Jupyter Notebook
model1.ipynb
Debdutta0507/Deep-Single-Shot-Musical-Instrument-IdentificationUsing-Time-Frequency-Localized-Features
965a05f9e428a5cd43378277c8f796c97d0234b6
[ "MIT" ]
null
null
null
model1.ipynb
Debdutta0507/Deep-Single-Shot-Musical-Instrument-IdentificationUsing-Time-Frequency-Localized-Features
965a05f9e428a5cd43378277c8f796c97d0234b6
[ "MIT" ]
null
null
null
model1.ipynb
Debdutta0507/Deep-Single-Shot-Musical-Instrument-IdentificationUsing-Time-Frequency-Localized-Features
965a05f9e428a5cd43378277c8f796c97d0234b6
[ "MIT" ]
null
null
null
244.747086
157,242
0.866781
[ [ [ "<a href=\"https://colab.research.google.com/github/Debdutta0507/Deep-Single-Shot-Musical-Instrument-IdentificationUsing-Time-Frequency-Localized-Features/blob/master/model1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "\nimport tensorflow as tf \ntf.test.gpu_device_name()\n\n", "_____no_output_____" ], [ "from google.colab import drive\ndrive.mount('/content/drive')", "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n" ], [ "\nimport os \n \nif 'COLAB_TPU_ADDR' not in os.environ: \n print('Not connected to TPU') \nelse: \n print(\"Connected to TPU\")", "Not connected to TPU\n" ], [ "from keras.layers import Input, Conv2D, Lambda, merge, Dense, Flatten,MaxPooling2D\nfrom keras.models import Model, Sequential\nfrom keras.layers import merge\nfrom keras.regularizers import l2\nfrom keras import backend as K\nfrom keras.optimizers import SGD,Adam\nfrom keras.losses import binary_crossentropy\nimport numpy.random as rng\nimport numpy as np\nimport os\nimport time\nimport dill as pickle\nimport matplotlib.pyplot as plt\nfrom sklearn.utils import shuffle\nfrom keras.layers import Lambda\nimport matplotlib.pyplot as plt\n\n\ndef initialize_weights(shape,dtype=None):\n \"\"\"Initialize weights as in paper\"\"\"\n values = rng.normal(loc=0,scale=1e-2,size=shape)\n return K.variable(values,dtype=dtype)\n#//TODO: figure out how to initialize layer biases in keras.\ndef initialize_bias(shape,dtype=None):\n \"\"\"Initialize bias as in paper\"\"\"\n values=rng.normal(loc=0.5,scale=1e-2,size=shape)\n return K.variable(values,dtype=dtype)\n\n\ndef get_siamese_model(input_shape):\n \"\"\"\n Model architecture\n \"\"\"\n \n # Define the tensors for the two input images\n input_shape = (224, 224, 3)\n left_input = Input(input_shape)\n right_input = Input(input_shape)\n \n # Convolutional Neural Network\n model = Sequential()\n \n model.add(Conv2D(64, (10,10), activation='relu', input_shape=input_shape,\n kernel_initializer=initialize_weights, kernel_regularizer=l2(2e-4),name='conv1'))\n model.add(MaxPooling2D())\n \n model.add(Conv2D(128, (7,7), activation='relu',\n kernel_initializer=initialize_weights,\n bias_initializer=initialize_bias, kernel_regularizer=l2(2e-4),name='conv2'))\n model.add(MaxPooling2D())\n\n model.add(Conv2D(128, (4,4), activation='relu',\n kernel_initializer=initialize_weights,\n bias_initializer=initialize_bias, kernel_regularizer=l2(2e-4),name='conv3'))\n model.add(MaxPooling2D())\n \n model.add(Conv2D(256, (4,4), activation='relu', kernel_initializer=initialize_weights,\n bias_initializer=initialize_bias, kernel_regularizer=l2(2e-4),name='conv4'))\n\n model.add(Flatten())\n \n model.add(Dense(4096, activation='sigmoid',\n kernel_regularizer=l2(1e-3),\n kernel_initializer=initialize_weights,bias_initializer=initialize_bias,name='dense'))\n \n # Generate the encodings (feature vectors) for the two images\n encoded_l = model(left_input)\n encoded_r = model(right_input)\n \n # Add a customized layer to compute the absolute difference between the encodings\n L1_layer = Lambda(lambda tensors:K.abs(tensors[0] - tensors[1]))\n L1_distance = L1_layer([encoded_l, encoded_r])\n \n # Add a dense layer with a sigmoid unit to generate the similarity score\n prediction = Dense(1,activation='sigmoid',bias_initializer=initialize_bias)(L1_distance)\n \n # Connect the inputs with the outputs\n 
siamese_net = Model(inputs=[left_input,right_input],outputs=prediction)\n \n # return the model\n return siamese_net\n\nsiamese_net=get_siamese_model(3)\n\noptimizer = Adam(0.00006)\n#//TODO: get layerwise learning rates and momentum annealing scheme described in paperworking\nsiamese_net.compile(loss=\"binary_crossentropy\",optimizer=optimizer)\n\nsiamese_net.count_params()\nsiamese_net.summary()", "Model: \"functional_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\ninput_2 (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nsequential (Sequential) (None, 4096) 420642112 input_1[0][0] \n input_2[0][0] \n__________________________________________________________________________________________________\nlambda (Lambda) (None, 4096) 0 sequential[0][0] \n sequential[1][0] \n__________________________________________________________________________________________________\ndense (Dense) (None, 1) 4097 lambda[0][0] \n==================================================================================================\nTotal params: 420,646,209\nTrainable params: 420,646,209\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ], [ "from keras.utils import plot_model\nplot_model(siamese_net,to_file='model.png',show_shapes=True,show_layer_names=True,expand_nested=True)", "_____no_output_____" ], [ "PATH=\"/content/drive/My Drive/Preprocess_1\"\nwith open(os.path.join(PATH, \"train.pickle\"), \"rb\") as f:\n (X,c) = pickle.load(f)\n\nwith open(os.path.join(PATH, \"val.pickle\"), \"rb\") as f:\n (Xval,cval) = pickle.load(f)\n \nprint(\"training alphabets\")\nprint(c.keys())\nprint(\"validation alphabets:\")\nprint(cval.keys())", "training alphabets\ndict_keys(['Fluta', 'Celllo', 'Saxofone Alto', 'Clarinet', 'Saxsoprano', 'Oboe', 'Trombon', 'Contrabanjo', 'Guitar'])\nvalidation alphabets:\ndict_keys(['Trompa', 'Tuba', 'Trompeta', 'Viola', 'Violin'])\n" ], [ "class Siamese_Loader:\n \"\"\"For loading batches and testing tasks to a siamese net\"\"\"\n def __init__(self, path, data_subsets = [\"train\", \"val\"]):\n self.data = {}\n self.categories = {}\n self.info = {}\n \n for name in data_subsets:\n file_path = os.path.join(path, name + \".pickle\")\n print(\"loading data from {}\".format(file_path))\n with open(file_path,\"rb\") as f:\n (X,c) = pickle.load(f)\n self.data[name] = X\n self.categories[name] = c\n\n def get_batch(self,batch_size,s=\"train\"):\n \"\"\"Create batch of n pairs, half same class, half different class\"\"\"\n X=self.data[s]\n n_classes, n_examples, w, h ,d= X.shape\n\n #randomly sample several classes to use in the batch\n categories = rng.choice(n_classes,size=(batch_size,),replace=False) #makes an array of the training set with no repetation of class\n #initialize 2 empty arrays for the input image batch\n pairs=[np.zeros((batch_size, w, h,d)) for i in range(2)]\n #initialize vector for the targets, and make one half of it '1's, so 2nd half of batch has same class\n targets=np.zeros((batch_size,))\n targets[batch_size//2:] = 1\n for i in range(batch_size):\n category = 
categories[i]\n idx_1 = rng.randint(0, n_examples)\n pairs[0][i,:,:,:] = X[category, idx_1].reshape(w, h, d)\n idx_2 = rng.randint(0, n_examples)\n #pick images of same class for 1st half, different for 2nd\n if i >= batch_size // 2:\n category_2 = category \n else: \n #add a random number to the category modulo n classes to ensure 2nd image has\n # ..different category\n category_2 = (category + rng.randint(1,n_classes)) % n_classes\n pairs[1][i,:,:,:] = X[category_2,idx_2].reshape(w, h,d)\n return pairs, targets\n \n def generate(self, batch_size, s=\"train\"):\n \"\"\"a generator for batches, so model.fit_generator can be used. \"\"\"\n while True:\n pairs, targets = self.get_batch(batch_size,s)\n yield (pairs, targets) \n\n def make_oneshot_task(self,N,s=\"val\",language=None):\n \"\"\"Create pairs of test image, support set for testing N way one-shot learning. \"\"\"\n X=self.data[s]\n n_classes, n_examples, w, h,d = X.shape\n indices = rng.randint(0,n_examples,size=(N,))\n if language is not None:\n low, high = self.categories[s][language]\n if N > high - low:\n raise ValueError(\"This language ({}) has less than {} letters\".format(language, N))\n categories = rng.choice(range(low,high),size=(N,),replace=False)\n \n else:#if no language specified just pick a bunch of random letters\n categories = rng.choice(range(n_classes),size=(N,),replace=False) \n true_category = categories[0]\n ex1, ex2 = rng.choice(n_examples,replace=False,size=(2,))\n test_image = np.asarray([X[true_category,ex1,:,:,:]]*N).reshape(N, w, h,d)\n support_set = X[categories,indices,:,:,:]\n support_set[0,:,:,:] = X[true_category,ex2]\n support_set = support_set.reshape(N, w, h,d)\n targets = np.zeros((N,))\n targets[0] = 1\n targets, test_image, support_set = shuffle(targets, test_image, support_set)\n pairs = [test_image,support_set]\n\n return pairs, targets\n \n def test_oneshot(self,model,N,k,s=\"val\",verbose=0):\n \"\"\"Test average N way oneshot learning accuracy of a siamese neural net over k one-shot tasks\"\"\"\n n_correct = 0\n if verbose:\n print(\"Evaluating model on {} random {} way one-shot learning tasks ...\".format(k,N))\n for i in range(k):\n inputs, targets = self.make_oneshot_task(N,s)\n probs = model.predict(inputs)\n if np.argmax(probs) == np.argmax(targets):\n n_correct+=1\n percent_correct = (100.0*n_correct / k)\n if verbose:\n print(\"Got an average of {}% {} way one-shot learning accuracy\".format(percent_correct,N))\n return percent_correct\n \n def train(self, model, epochs, verbosity):\n model.fit_generator(self.generate(batch_size),\n \n )\nPATH=\"/content/drive/My Drive/Preprocess_1\"\n \nloader = Siamese_Loader(PATH)", "loading data from /content/drive/My Drive/Preprocess_1/train.pickle\nloading data from /content/drive/My Drive/Preprocess_1/val.pickle\n" ], [ "print(\"!\")\nevaluate_every = 100 # interval for evaluating on one-shot tasks\nloss_every=50 # interval for printing loss (iterations)\nbatch_size =9\nn_iter = 10000\nN_way = 2# how many classes for testing one-shot tasks>\nn_val = 250#how mahy one-shot tasks to validate on?\nbest = 30\nx = list()\ny = list()\nz=list()\np=list()\nweights_path = os.path.join(PATH, \"weights\")\nprint(\"Starting training process!\")\nprint(\"--------------------------\")\nt_start = time.time()\nfor i in range(1, n_iter):\n (inputs,targets)=loader.get_batch(batch_size)\n loss=siamese_net.train_on_batch(inputs,targets)\n x.append(loss)\n y.append(i)\n print(\"\\n ------------- \\n\")\n print(\"Loss: {0}\".format(loss)) \n if i % 
evaluate_every == 0:\n print(\"evaluating\")\n print(\"Time for {0} iterations: {1}\".format(i, time.time()-t_start))\n val_acc = loader.test_oneshot(siamese_net,N_way,n_val,verbose=True)\n z.append(val_acc)\n p.append(i)\n if val_acc >= best:\n print(\"saving\")\n print(\"Current best: {0}, previous best: {1}\".format(val_acc, best))\n print(\"Saving weights to: {0} \\n\".format(weights_path))\n siamese_net.save_weights(weights_path)\n best=val_acc\n\n if i % loss_every == 0:\n print(\"iteration {}, training loss: {:.2f},\".format(i,loss))\nplt.scatter(y, x)\nplt.plot(y,x)\nplt.show()\nplt.scatter(z,p)\nplt.plot(z,p)\nplt.show()\n \n#siamese_net.load_weights(weights_path)\n", "!\nStarting training process!\n--------------------------\n" ], [ "#plt.scatter(y, x)\nplt.plot(y,x)\nplt.show()\nplt.scatter(p,z)\nplt.plot(p,z)\nplt.show()", "_____no_output_____" ], [ "weights_path = os.path.join(PATH, \"weights\")\nsiamese_net.load_weights(weights_path)", "_____no_output_____" ], [ "\ndef nearest_neighbour_correct(pairs,targets):\n \"\"\"returns 1 if nearest neighbour gets the correct answer for a one-shot task\n given by (pairs, targets)\"\"\"\n L2_distances = np.zeros_like(targets)\n for i in range(len(targets)):\n L2_distances[i] = np.sum(np.sqrt(pairs[0][i]**2 - pairs[1][i]**2))\n if np.argmin(L2_distances) == np.argmax(targets):\n return 1\n return 0\n\n\ndef test_nn_accuracy(N_ways,n_trials,loader):\n \"\"\"Returns accuracy of one shot \"\"\"\n print(\"Evaluating nearest neighbour on {} unique {} way one-shot learning tasks ...\".format(n_trials,N_ways))\n\n n_right = 0\n \n for i in range(n_trials):\n pairs,targets = loader.make_oneshot_task(N_ways,\"val\")\n correct = nearest_neighbour_correct(pairs,targets)\n n_right += correct\n return 100.0 * n_right / n_trials", "_____no_output_____" ], [ "iteration=1000\nN_way=2\nn_val=50\nshot_accuracy=[]\nnn_accuracy=[]\n\nfor i in range(1,200):\n one_shot= loader.test_oneshot(siamese_net,N_way,n_val,verbose=True)\n shot_accuracy.append(one_shot)\n nn=test_nn_accuracy(N_way,n_val,loader)\n nn_accuracy.append(nn)\nmax_one_shot=max(shot_accuracy)\nmax_nn=max(nn_accuracy)", "Evaluating model on 50 random 2 way one-shot learning tasks ...\nGot an average of 78.0% 2 way one-shot learning accuracy\nEvaluating nearest neighbour on 50 unique 2 way one-shot learning tasks ...\n" ], [ "from statistics import mean\nmax_one_shot=max(shot_accuracy)\nmax_nn=max(nn_accuracy)\nmean_one_shot=mean(shot_accuracy)\nmean_nn=mean(nn_accuracy)\nmax_one_shot\nmax_nn\n#mean_one_shot\nmean_nn", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7641f0c95fe6dc40b80e168370e6e6eef65b7cc
3,050
ipynb
Jupyter Notebook
KNN.ipynb
Sblbl/ML_implementations
02012ab50d06ad0270ff4d515515cb262698d03f
[ "MIT" ]
null
null
null
KNN.ipynb
Sblbl/ML_implementations
02012ab50d06ad0270ff4d515515cb262698d03f
[ "MIT" ]
null
null
null
KNN.ipynb
Sblbl/ML_implementations
02012ab50d06ad0270ff4d515515cb262698d03f
[ "MIT" ]
null
null
null
21.034483
87
0.465574
[ [ [ "# KNN", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ], [ "def euc(x, y):\n return np.linalg.norm(x - y)\ndef KNN(X, y, sample, k=3):\n distances = []\n # calculate every distance\n for i, x in enumerate(X):\n distances.append(euc(sample, x)) \n # get the k - smallest distances\n d_ord = distances\n d_ord.sort()\n neigh_dists = d_ord[:k]\n neighbours = []\n neigh_classes = []\n # get the neighbours of the sample\n for neigh_dist in neigh_dists:\n idx = distances.index(neigh_dist)\n neighbours.append(X[idx])\n neigh_classes.append(y[idx])\n print('Neighbours: ', neighbours)\n print('of classes: ', neigh_classes)", "_____no_output_____" ] ], [ [ "## Examples", "_____no_output_____" ] ], [ [ "X = np.array([\n [0.15, 0.35],\n [0.15, 0.28],\n [0.12, 0.2],\n [0.1, 0.32], \n [0.06, 0.25]\n])\ny = np.array([1, 2, 2, 3, 3])\nsample = np.array([0.1, 0.25])", "_____no_output_____" ], [ "KNN(X, y, sample, k=3)", "Neighbours: [array([0.15, 0.35]), array([0.15, 0.28]), array([0.12, 0.2 ])]\nof classes: [1, 2, 2]\n" ], [ "KNN(X, y, sample, k=1)", "Neighbours: [array([0.15, 0.35])]\nof classes: [1]\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e764249803f2ea6c4ee995f3225c07ddf7df0629
22,769
ipynb
Jupyter Notebook
wip/ray/train/pytorch-huggingface-clothing.ipynb
nitish-raj/data-science-on-aws
b760805d28f8375094ce83aee849de8b9d3382a2
[ "Apache-2.0" ]
null
null
null
wip/ray/train/pytorch-huggingface-clothing.ipynb
nitish-raj/data-science-on-aws
b760805d28f8375094ce83aee849de8b9d3382a2
[ "Apache-2.0" ]
null
null
null
wip/ray/train/pytorch-huggingface-clothing.ipynb
nitish-raj/data-science-on-aws
b760805d28f8375094ce83aee849de8b9d3382a2
[ "Apache-2.0" ]
null
null
null
41.099278
99
0.515965
[ [ [ "#\"\"\" Finetuning a 🤗 Transformers model for sequence classification.\"\"\"\nimport argparse\nimport logging\nimport math\nimport os\nimport random\nfrom typing import Dict, Any\n\nimport datasets\nimport ray\nimport transformers\nfrom accelerate import Accelerator\nfrom datasets import load_dataset, load_metric\nfrom ray.train import Trainer\nfrom torch.utils.data.dataloader import DataLoader\nfrom tqdm.auto import tqdm\nfrom transformers import (\n AdamW,\n AutoConfig,\n AutoModelForSequenceClassification,\n AutoTokenizer,\n DataCollatorWithPadding,\n PretrainedConfig,\n SchedulerType,\n default_data_collator,\n get_scheduler,\n set_seed,\n)\nfrom transformers.utils.versions import require_version\n\nlogging.basicConfig(level=logging.ERROR)\nlogger = logging.getLogger(__name__)\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description=\"Finetune a transformers model on a text classification task\"\n )\n parser.add_argument(\n \"-f\",\n type=str,\n default=None,\n help=\"Ignore this!\",\n ) \n parser.add_argument(\n \"--train_file\",\n type=str,\n default=\"data/train/part-algo-1-womens_clothing_ecommerce_reviews.csv\",\n help=\"A csv or a json file containing the training data.\",\n )\n parser.add_argument(\n \"--validation_file\",\n type=str,\n default=\"data/validation/part-algo-1-womens_clothing_ecommerce_reviews.csv\",\n help=\"A csv or a json file containing the validation data.\",\n )\n parser.add_argument(\n \"--max_length\",\n type=int,\n default=64,\n help=(\n \"The maximum total input sequence length after tokenization. \"\n \"Sequences longer than this will be truncated, sequences shorter \"\n \"will be padded if `--pad_to_max_lengh` is passed.\"\n ),\n )\n parser.add_argument(\n \"--pad_to_max_length\",\n action=\"store_true\",\n help=\"If passed, pad all samples to `max_length`. Otherwise, dynamic \"\n \"padding is used.\",\n )\n parser.add_argument(\n \"--model_name_or_path\",\n type=str,\n help=\"Path to pretrained model or model identifier from \"\n \"huggingface.co/models.\",\n default=\"roberta-base\",\n )\n parser.add_argument(\n \"--use_slow_tokenizer\",\n action=\"store_true\",\n help=\"If passed, will use a slow tokenizer (not backed by the 🤗 \"\n \"Tokenizers library).\",\n )\n parser.add_argument(\n \"--per_device_train_batch_size\",\n type=int,\n default=8,\n help=\"Batch size (per device) for the training dataloader.\",\n )\n parser.add_argument(\n \"--per_device_eval_batch_size\",\n type=int,\n default=8,\n help=\"Batch size (per device) for the evaluation dataloader.\",\n )\n parser.add_argument(\n \"--learning_rate\",\n type=float,\n default=5e-5,\n help=\"Initial learning rate (after the potential warmup period) to use.\",\n )\n parser.add_argument(\n \"--weight_decay\", type=float, default=0.0, help=\"Weight decay to use.\"\n )\n parser.add_argument(\n \"--num_train_epochs\",\n type=int,\n default=1,\n help=\"Total number of training epochs to perform.\",\n )\n parser.add_argument(\n \"--max_train_steps\",\n type=int,\n default=50,\n help=\"Total number of training steps to perform. 
If provided, \"\n \"overrides num_train_epochs.\",\n )\n parser.add_argument(\n \"--gradient_accumulation_steps\",\n type=int,\n default=1,\n help=\"Number of updates steps to accumulate before performing a \"\n \"backward/update pass.\",\n )\n parser.add_argument(\n \"--lr_scheduler_type\",\n type=SchedulerType,\n default=\"linear\",\n help=\"The scheduler type to use.\",\n choices=[\n \"linear\",\n \"cosine\",\n \"cosine_with_restarts\",\n \"polynomial\",\n \"constant\",\n \"constant_with_warmup\",\n ],\n )\n parser.add_argument(\n \"--num_warmup_steps\",\n type=int,\n default=0,\n help=\"Number of steps for the warmup in the lr scheduler.\",\n )\n parser.add_argument(\n \"--output_dir\", type=str, default=None, help=\"Where to store the final model.\"\n )\n parser.add_argument(\n \"--seed\", type=int, default=None, help=\"A seed for reproducible training.\"\n )\n\n # Ray arguments.\n parser.add_argument(\n \"--start_local\", action=\"store_true\", help=\"Starts Ray on local machine.\"\n )\n parser.add_argument(\n \"--address\", \n type=str, \n default=\"127.0.0.1:6379\", \n help=\"Ray address to connect to.\"\n )\n parser.add_argument(\n \"--num_workers\", \n type=int, \n default=72, \n help=\"Number of workers to use.\"\n )\n parser.add_argument(\n \"--use_gpu\", action=\"store_true\", help=\"If training should be done on GPUs.\"\n )\n\n args = parser.parse_args()\n\n # Sanity checks\n if (\n# args.task_name is None\n args.train_file is None\n and args.validation_file is None\n ):\n raise ValueError(\"Need a training/validation file.\")\n else:\n if args.train_file is not None:\n extension = args.train_file.split(\".\")[-1]\n assert extension in [\n \"csv\",\n \"json\",\n ], \"`train_file` should be a csv or a json file.\"\n if args.validation_file is not None:\n extension = args.validation_file.split(\".\")[-1]\n assert extension in [\n \"csv\",\n \"json\",\n ], \"`validation_file` should be a csv or a json file.\"\n\n if args.output_dir is not None:\n os.makedirs(args.output_dir, exist_ok=True)\n\n return args\n\n\ndef train_func(config: Dict[str, Any]):\n args = config[\"args\"]\n # Initialize the accelerator. We will let the accelerator handle device\n # placement for us in this example.\n accelerator = Accelerator()\n # Make one log on every process with the configuration for debugging.\n logging.basicConfig(\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n datefmt=\"%m/%d/%Y %H:%M:%S\",\n level=logging.ERROR,\n )\n logger.info(accelerator.state)\n\n # Setup logging, we only want one process per machine to log things on\n # the screen. 
accelerator.is_local_main_process is only True for one\n # process per machine.\n logger.setLevel(\n logging.ERROR if accelerator.is_local_main_process else logging.ERROR\n )\n if accelerator.is_local_main_process:\n datasets.utils.logging.set_verbosity_error()\n transformers.utils.logging.set_verbosity_error()\n else:\n datasets.utils.logging.set_verbosity_error()\n transformers.utils.logging.set_verbosity_error()\n\n # If passed along, set the training seed now.\n if args.seed is not None:\n set_seed(args.seed)\n\n # Get the datasets: you can either provide your own CSV/JSON training and\n # evaluation files (see below) or specify a GLUE benchmark task (the\n # dataset will be downloaded automatically from the datasets Hub).\n\n # For CSV/JSON files, this script will use as labels the column called\n # 'label' and as pair of sentences the sentences in columns called\n # 'sentence1' and 'sentence2' if such column exists or the first two\n # columns not named label if at least two columns are provided.\n\n # If the CSVs/JSONs contain only one non-label column, the script does\n # single sentence classification on this single column. You can easily\n # tweak this behavior (see below)\n\n # In distributed training, the load_dataset function guarantee that only\n # one local process can concurrently download the dataset.\n# if args.task_name is not None:\n# # Downloading and loading a dataset from the hub.\n# raw_datasets = load_dataset(\"glue\", args.task_name)\n# else:\n # Loading the dataset from local csv or json file.\n data_files = {}\n if args.train_file is not None:\n data_files[\"train\"] = args.train_file\n if args.validation_file is not None:\n data_files[\"validation\"] = args.validation_file\n extension = (\n args.train_file if args.train_file is not None else args.valid_file\n ).split(\".\")[-1]\n\n raw_datasets = load_dataset(extension, data_files=data_files)\n\n label_list = raw_datasets[\"train\"].unique(\"sentiment\")\n label_list.sort() # Let's sort it for determinism\n num_labels = len(label_list)\n\n # Load pretrained model and tokenizer\n #\n # In distributed training, the .from_pretrained methods guarantee that\n # only one local process can concurrently download model & vocab.\n config = AutoConfig.from_pretrained(\n args.model_name_or_path, num_labels=num_labels, \n )\n tokenizer = AutoTokenizer.from_pretrained(\n args.model_name_or_path, use_fast=not args.use_slow_tokenizer\n )\n model = AutoModelForSequenceClassification.from_pretrained(\n args.model_name_or_path,\n config=config,\n )\n\n # Preprocessing the datasets\n sentence1_key, sentence2_key = \"review_body\", None\n\n # Some models have set the order of the labels to use,\n # so let's make sure we do use it.\n label_to_id = None\n label_to_id = {v: i for i, v in enumerate(label_list)}\n\n if label_to_id is not None:\n model.config.label2id = label_to_id\n model.config.id2label = {id: label for label, id in config.label2id.items()}\n\n padding = \"max_length\" if args.pad_to_max_length else False\n\n def preprocess_function(examples):\n # Tokenize the texts\n texts = (\n (examples[sentence1_key],)\n if sentence2_key is None\n else (examples[sentence1_key], examples[sentence2_key])\n )\n result = tokenizer(\n *texts, padding=padding, max_length=args.max_length, truncation=True\n )\n\n if \"sentiment\" in examples:\n if label_to_id is not None:\n # Map labels to IDs (not necessary for GLUE tasks)\n result[\"labels\"] = [\n label_to_id[l] for l in examples[\"sentiment\"] # noqa:E741\n ]\n else:\n # In all cases, 
rename the column to labels because the model\n # will expect that.\n result[\"labels\"] = examples[\"sentiment\"]\n\n return result\n\n processed_datasets = raw_datasets.map(\n preprocess_function,\n batched=True,\n remove_columns=raw_datasets[\"train\"].column_names,\n desc=\"Running tokenizer on dataset\",\n )\n\n train_dataset = processed_datasets[\"train\"]\n eval_dataset = processed_datasets[\"validation\"]\n\n # Log a few random samples from the training set:\n for index in random.sample(range(len(train_dataset)), 3):\n logger.info(f\"Sample {index} of the training set: {train_dataset[index]}.\")\n\n # DataLoaders creation:\n if args.pad_to_max_length:\n # If padding was already done ot max length, we use the default data\n # collator that will just convert everything to tensors.\n data_collator = default_data_collator\n else:\n # Otherwise, `DataCollatorWithPadding` will apply dynamic padding for\n # us (by padding to the maximum length of the samples passed). When\n # using mixed precision, we add `pad_to_multiple_of=8` to pad all\n # tensors to multiple of 8s, which will enable the use of Tensor\n # Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).\n data_collator = DataCollatorWithPadding(\n tokenizer, pad_to_multiple_of=(8 if accelerator.use_fp16 else None)\n )\n\n train_dataloader = DataLoader(\n train_dataset,\n shuffle=True,\n collate_fn=data_collator,\n batch_size=args.per_device_train_batch_size,\n )\n eval_dataloader = DataLoader(\n eval_dataset,\n collate_fn=data_collator,\n batch_size=args.per_device_eval_batch_size,\n )\n\n # Optimizer\n # Split weights in two groups, one with weight decay and the other not.\n no_decay = [\"bias\", \"LayerNorm.weight\"]\n optimizer_grouped_parameters = [\n {\n \"params\": [\n p\n for n, p in model.named_parameters()\n if not any(nd in n for nd in no_decay)\n ],\n \"weight_decay\": args.weight_decay,\n },\n {\n \"params\": [\n p\n for n, p in model.named_parameters()\n if any(nd in n for nd in no_decay)\n ],\n \"weight_decay\": 0.0,\n },\n ]\n optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate)\n\n # Prepare everything with our `accelerator`.\n model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n model, optimizer, train_dataloader, eval_dataloader\n )\n\n# model, optimizer, train_dataloader = accelerator.prepare(\n# model, optimizer, train_dataloader\n# )\n # Note -> the training dataloader needs to be prepared before we grab\n # his length below (cause its length will be shorter in multiprocess)\n\n # Scheduler and math around the number of training steps.\n num_update_steps_per_epoch = math.ceil(\n len(train_dataloader) / args.gradient_accumulation_steps\n )\n if args.max_train_steps is None:\n args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch\n else:\n args.num_train_epochs = math.ceil(\n args.max_train_steps / num_update_steps_per_epoch\n )\n\n lr_scheduler = get_scheduler(\n name=args.lr_scheduler_type,\n optimizer=optimizer,\n num_warmup_steps=args.num_warmup_steps,\n num_training_steps=args.max_train_steps,\n )\n\n # Get the metric function\n# if args.task_name is not None:\n# metric = load_metric(\"glue\", args.task_name)\n# else:\n metric = load_metric(\"accuracy\")\n\n # Train!\n total_batch_size = (\n args.per_device_train_batch_size\n * accelerator.num_processes\n * args.gradient_accumulation_steps\n )\n\n logger.info(\"***** Running training *****\")\n logger.info(f\" Num examples = {len(train_dataset)}\")\n logger.info(f\" Num epochs = 
{args.num_train_epochs}\")\n logger.info(\n f\" Instantaneous batch size per device =\"\n f\" {args.per_device_train_batch_size}\"\n )\n logger.info(\n f\" Total train batch size (w. parallel, distributed & accumulation) \"\n f\"= {total_batch_size}\"\n )\n logger.info(f\" Gradient Accumulation steps = {args.gradient_accumulation_steps}\")\n logger.info(f\" Total optimization steps = {args.max_train_steps}\")\n # Only show the progress bar once on each machine.\n progress_bar = tqdm(\n range(args.max_train_steps), disable=not accelerator.is_local_main_process\n )\n completed_steps = 0\n\n for epoch in range(args.num_train_epochs):\n model.train()\n for step, batch in enumerate(train_dataloader):\n outputs = model(**batch)\n loss = outputs.loss\n loss = loss / args.gradient_accumulation_steps\n accelerator.backward(loss)\n if (\n step % args.gradient_accumulation_steps == 0\n or step == len(train_dataloader) - 1\n ):\n optimizer.step()\n lr_scheduler.step()\n optimizer.zero_grad()\n progress_bar.update(1)\n completed_steps += 1\n\n if completed_steps >= args.max_train_steps:\n break\n\n model.eval()\n for step, batch in enumerate(eval_dataloader):\n outputs = model(**batch)\n predictions = (\n outputs.logits.argmax(dim=-1)\n# if not is_regression\n# else outputs.logits.squeeze()\n )\n metric.add_batch(\n predictions=accelerator.gather(predictions),\n references=accelerator.gather(batch[\"labels\"]),\n )\n\n eval_metric = metric.compute()\n logger.info(f\"epoch {epoch}: {eval_metric}\")\n\n if args.output_dir is not None:\n accelerator.wait_for_everyone()\n unwrapped_model = accelerator.unwrap_model(model)\n unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save)\n\n\ndef main():\n args = parse_args()\n config = {\"args\": args}\n\n if args.start_local or args.address or args.num_workers > 1 or args.use_gpu:\n if args.start_local:\n # Start a local Ray runtime.\n ray.init(num_cpus=args.num_workers)\n else:\n # Connect to a Ray cluster for distributed training.\n ray.init(address=args.address)\n trainer = Trainer(\"torch\", num_workers=args.num_workers, use_gpu=args.use_gpu,\n resources_per_worker={'CPU': 1, 'GPU': 0})\n trainer.start()\n trainer.run(train_func, config)\n else:\n # Run training locally.\n train_func(config)\n\n\nif __name__ == \"__main__\":\n main()\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
e76425e51d67aeeaaee691e5222d10fee4d269ff
666
ipynb
Jupyter Notebook
9_Metaprogramming/Chapter_9.ipynb
gonzalo-munillag/Python_Cookbook
4225ba4cac2c1ccbfbda82570edc692b53e26d5a
[ "MIT" ]
null
null
null
9_Metaprogramming/Chapter_9.ipynb
gonzalo-munillag/Python_Cookbook
4225ba4cac2c1ccbfbda82570edc692b53e26d5a
[ "MIT" ]
null
null
null
9_Metaprogramming/Chapter_9.ipynb
gonzalo-munillag/Python_Cookbook
4225ba4cac2c1ccbfbda82570edc692b53e26d5a
[ "MIT" ]
null
null
null
16.65
34
0.51952
[ [ [ "# Chapter 9: Metaprogramming", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
e76427106e3c97807866108a5b95bc9581360a55
391,155
ipynb
Jupyter Notebook
notebooks/EDA.ipynb
mikelkl/TF2-QA
3bca786d26565335df45538714532d6d3c070a2b
[ "MIT" ]
17
2020-01-29T10:31:07.000Z
2022-01-10T03:36:00.000Z
notebooks/EDA.ipynb
mikelkl/TF2-QA
3bca786d26565335df45538714532d6d3c070a2b
[ "MIT" ]
null
null
null
notebooks/EDA.ipynb
mikelkl/TF2-QA
3bca786d26565335df45538714532d6d3c070a2b
[ "MIT" ]
4
2021-01-27T15:42:45.000Z
2021-12-12T20:41:51.000Z
184.420085
45,808
0.893449
[ [ [ "# Abstract\n- Data Understanding\n - A long answer would be a longer section of text that answers the question - several sentences or a paragraph.\n - A short answer might be a sentence or phrase, or even in some cases a YES/NO.\n - The short answers are always contained within / a subset of one of the plausible long answers. \n - A given article can (and very often will) allow for both long and short answers, depending on the question.\n- Data Statistics\n - training data contains 307,373 examples. 152,148 have a long answer and 110,724 have a short answer.\n- Number of Long Answer Candidate\n - Some of data for long answers are swamped with a lot of candidates (7946 in maximum!)\n - n_long_candidates_train is long-tail distribution, range in [1, 7946]\n - n_long_candidates_train and n_long_candidates_test have similar distribution\n- Short answer\n - Short answers can be sets of spans in the document (106,926), or yes or no (3,798).\n - 63.47% 'NO ANSWERS' in short answer labels\n - one question can have multiple short answers within same long anwser\n - when yes-no answer is not 'None' (i.e. yes or no), short answer always be empty\n- Yes-no Answer\n - significant class imbalance in yes-no answer labels, 98% answer is 'None'.\n- Long Answer\n - long answer can either be 0(denoted by 'start_token': -1)or 1 \n - long answer must be selected from long_answer_candidates\n - 50.5% of 'NO ANSWERS' in long answer labels\n - when the start token and/or the end token are -1, yes-no answer is 'NONE'\n - yes-no answer 'NONE' does not always mean that the start token and/or the end token are -1\n - About 95% long answer's top_level is true\n- Text Word Counts\n - Words counts of question text range in [0, 30]\n - Words counts of d document text range in [0, 120000]\n - Words counts of short answers text is long-tail distribution, range in [1, 250]\n - Words counts of long answers text is long-tail distribution, range in [5, 123548]", "_____no_output_____" ], [ "# Preparation", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport json\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom tqdm import tqdm\nfrom IPython.core.display import HTML", "_____no_output_____" ], [ "DIR = '../input/tensorflow2-question-answering/'\nPATH_TRAIN = DIR + 'simplified-nq-train.jsonl'\nPATH_TEST = DIR + 'simplified-nq-test.jsonl'", "_____no_output_____" ] ], [ [ "## Number of samples in train & test dataset", "_____no_output_____" ] ], [ [ "!wc -l '../input/tensorflow2-question-answering/simplified-nq-train.jsonl'\n!wc -l '../input/tensorflow2-question-answering/simplified-nq-test.jsonl'", "307373 ../input/tensorflow2-question-answering/simplified-nq-train.jsonl\n345 ../input/tensorflow2-question-answering/simplified-nq-test.jsonl\n" ] ], [ [ "# Load .jsonl file iteratively", "_____no_output_____" ], [ "As we know, one of the most common way to convert .jsonl file into pd.DataFrame is `pd.read_json(FILENAME, orient='records', lines=True)`:", "_____no_output_____" ] ], [ [ "df_test = pd.read_json(PATH_TEST, orient='records', lines=True)\ndf_test.head()", "_____no_output_____" ] ], [ [ "However, since we have a **HUGE train dataset** for this competition, Kaggle Notebook RAM cannot afford this method.\nInstead, we probablly have to load the train dataset iteratively:", "_____no_output_____" ] ], [ [ "json_train = []\n\nwith open(PATH_TRAIN, 'rt') as f:\n for i in tqdm(f):\n json_train.append(json.loads(i.strip()))", "307373it [01:49, 2813.85it/s]\n" ], [ "df_train = 
pd.DataFrame(json_train)\ndf_train.head()", "_____no_output_____" ], [ "df_train.iloc[0].loc['long_answer_candidates'][:10]", "_____no_output_____" ], [ "df_train.iloc[9].loc['annotations']", "_____no_output_____" ] ], [ [ "# Data Visualization", "_____no_output_____" ], [ "## Obtain data", "_____no_output_____" ], [ "We must be cautious that <span style=\"color: red\">**\"short answer\" for this competition corresponds to \"yes-no answer\" in the original dataset**.</span> ", "_____no_output_____" ] ], [ [ "N_TRAIN = 307373\nn_long_candidates_train = np.zeros(N_TRAIN)\nt_long_train = np.zeros((N_TRAIN,2)) # [start_token, end_token]\nt_yesno_train = []\nt_short_train = []\ntop_level = []\n\nwith open(PATH_TRAIN, 'rt') as f:\n for i in tqdm(range(N_TRAIN), position=0, leave=True):\n dic = json.loads(f.readline())\n n_long_candidates_train[i] = len(dic['long_answer_candidates'])\n t_long_train[i,0] = dic['annotations'][0]['long_answer']['start_token']\n t_long_train[i,1] = dic['annotations'][0]['long_answer']['end_token']\n t_yesno_train.append(dic['annotations'][0]['yes_no_answer'])\n t_short_train.append(len(dic['annotations'][0]['short_answers']))\n long_answer_candidate_index = dic['annotations'][0]['long_answer']['candidate_index']\n if long_answer_candidate_index != -1:\n top_level.append(dic['long_answer_candidates'][long_answer_candidate_index]['top_level'])", " 14%|█▍ | 42839/307373 [00:10<01:06, 3949.25it/s]\n100%|█████████▉| 307056/307373 [01:16<00:00, 4065.00it/s][A\n100%|██████████| 307373/307373 [01:17<00:00, 3989.80it/s]\n" ], [ "N_TEST = 345\nn_long_candidates_test = np.zeros(N_TEST)", "_____no_output_____" ], [ "with open(PATH_TEST, 'rt') as f:\n for i in tqdm(range(N_TEST)):\n dic = json.loads(f.readline())\n n_long_candidates_test[i] = len(dic['long_answer_candidates'])", "100%|██████████| 345/345 [00:00<00:00, 2649.06it/s]\n" ] ], [ [ "## Visualization", "_____no_output_____" ] ], [ [ "plt.style.use('seaborn-darkgrid')\nplt.style.use('seaborn-poster')", "_____no_output_____" ] ], [ [ "### Take a glance at data", "_____no_output_____" ] ], [ [ "def show_example(example_id):\n example = df_train[df_train['example_id']==example_id]\n document_text = example['document_text'].values[0]\n question = example['question_text'].values[0]\n\n annotations = example['annotations'].values[0]\n la_start_token = annotations[0]['long_answer']['start_token']\n la_end_token = annotations[0]['long_answer']['end_token']\n long_answer = \" \".join(document_text.split(\" \")[la_start_token:la_end_token])\n short_answers = annotations[0]['short_answers']\n sa_list = []\n for sa in short_answers:\n sa_start_token = sa['start_token']\n sa_end_token = sa['end_token']\n short_answer = \" \".join(document_text.split(\" \")[sa_start_token:sa_end_token])\n sa_list.append(short_answer)\n \n document_text = document_text.replace(long_answer,'<LALALALA>')\n sa=False\n la=''\n \n # fix empty short_answers bug\n if short_answers:\n for sa in short_answers:\n sa_start_token = sa['start_token']\n sa_end_token = sa['end_token']\n for i,laword in enumerate(long_answer.split(\" \")):\n ind = i+la_start_token\n if ind==sa_start_token:\n la = la+' SASASASA'+laword\n elif ind==sa_end_token-1:\n la = la+' '+laword+'SESESESE'\n else:\n la = la+' '+laword\n else:\n la=long_answer\n# print(la)\n html = '<div style=\"font-weight: bold;font-size: 20px;color:#00239CFF\">Example Id</div><br/>'\n html = html + '<div>' + str(example_id) + '</div><hr>'\n html = html + '<div style=\"font-weight: bold;font-size: 
20px;color:#00239CFF\">Question</div><br/>'\n html = html + '<div>' + question + ' ?</div><hr>'\n html = html + '<div style=\"font-weight: bold;font-size: 20px;color:#00239CFF\">Document Text</div><br/>'\n \n if la_start_token==-1:\n html = html + '<div>There are no answers found in the document</div><hr>'\n else:\n la = la.replace('SASASASA','<span style=\"background-color:#C7D3D4FF; padding:5px\"><font color=\"#000\">')\n la = la.replace('SESESESE','</font></span>')\n document_text = document_text.replace('<LALALALA>','<div style=\"background-color:#603F83FF; padding:5px\"><font color=\"#fff\">'+la+'</font></div>')\n\n #for simplicity, trim words from end of the document\n html = html + '<div>' + \" \".join(document_text.split(\" \")[:la_end_token+200]) + ' </div>'\n display(HTML(html))", "_____no_output_____" ] ], [ [ "To keep scroll bar short, content from the document text is trimmed at the end.\n\n**Long answer is highlighted in dark blue, and short answers are in light blue.**", "_____no_output_____" ] ], [ [ "show_example(5328212470870865242)", "_____no_output_____" ] ], [ [ "### Long answer candidates", "_____no_output_____" ], [ "Some of data for long answers are swamped with a lot of candidates (**7946 in maximum!**):", "_____no_output_____" ] ], [ [ "pd.Series(n_long_candidates_train).describe()", "_____no_output_____" ], [ "pd.Series(n_long_candidates_test).describe()", "_____no_output_____" ] ], [ [ "n_long_candidates_train is long-tail distribution, range in [1, 7946]", "_____no_output_____" ] ], [ [ "plt.hist(n_long_candidates_train, bins=64, alpha=0.5, color='c', label='train')\nplt.xlabel('long answer candidates')\nplt.ylabel('samples')\nplt.title(\"Distribution of number of long answer candidates\")\nplt.legend()", "_____no_output_____" ] ], [ [ "n_long_candidates_train and n_long_candidates_test have similar distribution", "_____no_output_____" ] ], [ [ "plt.hist(n_long_candidates_train[n_long_candidates_train < np.max(n_long_candidates_test)], density=True, bins=64, alpha=0.5, color='c', label='train')\nplt.hist(n_long_candidates_test, density=True, bins=64, alpha=0.5, color='orange', label='test')\nplt.xlabel('long answer candidates')\nplt.ylabel('sample proportion')\nplt.title(\"Normalized distribution of number of long answer candidates\")\nplt.legend()", "_____no_output_____" ] ], [ [ "### Short answer labels", "_____no_output_____" ], [ "- 63.47% 'NO ANSWERS' in short answer labels\n- one question can have multiple short answers within same long anwser\n- when yes-no answer is not 'None' (i.e. yes or no), short answer always be empty", "_____no_output_____" ] ], [ [ "pd.Series(t_short_train).describe()", "_____no_output_____" ], [ "plt.hist(t_short_train, bins=range(max(t_short_train)), align='left', density=True, rwidth=0.6, color='lightseagreen', label='train')\nplt.xlabel('short answer')\nplt.ylabel('sample proportion')\nplt.title(\"Normalized distribution of number of short answer labels\")\nplt.legend()", "_____no_output_____" ] ], [ [ "a example with multiple short answers", "_____no_output_____" ] ], [ [ "show_example(-1413521544318030897)", "_____no_output_____" ] ], [ [ "when yes-no answer is not 'None' (i.e. 
yes or no), short answer always be empty", "_____no_output_____" ] ], [ [ "# no_answer_state[1,:] is the number of train data whose short answer is empty\n# no_answer_state[:,0] is the number of train data whose yes-no answer is 'YES' OR 'NO'\nt_short_train = np.array(t_short_train)\nyesno_answer_state = np.zeros((2,2))\nyesno_answer_state[1,1] = np.sum((t_short_train==0) * (np.array([ 1 if t=='NONE' else 0 for t in t_yesno_train ])))\nyesno_answer_state[1,0] = np.sum((t_short_train==0) * (np.array([ 0 if t=='NONE' else 1 for t in t_yesno_train ])))\nyesno_answer_state[0,1] = np.sum((t_short_train>0) * (np.array([ 1 if t=='NONE' else 0 for t in t_yesno_train ])))\nyesno_answer_state[0,0] = np.sum((t_short_train>0) * (np.array([ 0 if t=='NONE' else 1 for t in t_yesno_train ]))) ", "_____no_output_____" ], [ "yesno_answer_state", "_____no_output_____" ], [ "sns.heatmap(yesno_answer_state / N_TRAIN, annot=True, annot_kws={\"size\": 25}, fmt='.3f', cmap='Blues_r')", "_____no_output_____" ] ], [ [ "### Number of Yes-no answer labels", "_____no_output_____" ], [ "We can see significant class imbalance in yes-no answer labels.", "_____no_output_____" ] ], [ [ "plt.hist(t_yesno_train, bins=[0,1,2,3], align='left', density=True, rwidth=0.6, color='lightseagreen', label='train')\nplt.xlabel('yes-no answer')\nplt.ylabel('sample proportion')\nplt.title(\"Normalized distribution of yes-no answer labels\")\nplt.legend()", "_____no_output_____" ] ], [ [ "#### Example of NO", "_____no_output_____" ] ], [ [ "df_train[df_train.example_id==3817861884803470204]['annotations'].iloc[0]", "_____no_output_____" ] ], [ [ "#### Example of YES", "_____no_output_____" ] ], [ [ "df_train[df_train.example_id==5429746486027633157]['annotations'].iloc[0]", "_____no_output_____" ] ], [ [ "### Long answer labels", "_____no_output_____" ], [ "Description of start token labels:", "_____no_output_____" ] ], [ [ "pd.Series(t_long_train[:,0]).describe()", "_____no_output_____" ] ], [ [ "Desciption of end token labels:", "_____no_output_____" ] ], [ [ "pd.Series(t_long_train[:,1]).describe()", "_____no_output_____" ] ], [ [ "We can see below that nearly half of the long answers have start/end token -1. 
\nIn other words, there are a 50.5% of '**NO ANSWERS**' in long answer labels, not only in yes-no labels:", "_____no_output_____" ] ], [ [ "print('{0:.1f}% of start tokens are -1.'.format(np.sum(t_long_train[:,0] < 0) / N_TRAIN * 100))\nprint('{0:.1f}% of end tokens are -1.'.format(np.sum(t_long_train[:,1] < 0) / N_TRAIN * 100))", "50.5% of start tokens are -1.\n50.5% of end tokens are -1.\n" ] ], [ [ "**If the start token is -1, the corresponding end token is also -1**:", "_____no_output_____" ] ], [ [ "np.sum(t_long_train[:,0] * t_long_train[:,1] < 0)", "_____no_output_____" ] ], [ [ "The heatmap below tells us that:\n- when the start token and/or the end token are -1, yes-no answer is 'NONE'\n- yes-no answer 'NONE' does not always mean that the start token and/or the end token are -1", "_____no_output_____" ] ], [ [ "# no_answer_state[1,:] is the number of train data whose start token and end token are -1\n# no_answer_state[:,1] is the number of train data whose yes-no answer is 'NONE'\n\nno_answer_state = np.zeros((2,2))\nno_answer_state[1,1] = np.sum((t_long_train[:,0]==-1) * (np.array([ 1 if t=='NONE' else 0 for t in t_yesno_train ])))\nno_answer_state[1,0] = np.sum((t_long_train[:,0]==-1) * (np.array([ 0 if t=='NONE' else 1 for t in t_yesno_train ])))\nno_answer_state[0,1] = np.sum((t_long_train[:,0]>=0) * (np.array([ 1 if t=='NONE' else 0 for t in t_yesno_train ])))\nno_answer_state[0,0] = np.sum((t_long_train[:,0]>=0) * (np.array([ 0 if t=='NONE' else 1 for t in t_yesno_train ]))) ", "_____no_output_____" ], [ "no_answer_state", "_____no_output_____" ], [ "sns.heatmap(no_answer_state / N_TRAIN, annot=True, annot_kws={\"size\": 25}, fmt='.3f', vmin=0, vmax=1, cmap='Blues_r')", "_____no_output_____" ] ], [ [ "#### top_level\nAbout 95% long answer's top_level is true", "_____no_output_____" ] ], [ [ "series_top_level = pd.Series(top_level)\nseries_top_level.value_counts(normalize=True)", "_____no_output_____" ] ], [ [ "## Text Word Counts", "_____no_output_____" ], [ "Let us look into word counts of question texts & document texts.", "_____no_output_____" ], [ "### Obtain data", "_____no_output_____" ] ], [ [ "q_lens_train = np.zeros(N_TRAIN)\nd_lens_train = np.zeros(N_TRAIN)", "_____no_output_____" ], [ "[{'yes_no_answer': 'NONE',\n 'long_answer': {'start_token': 1952,\n 'candidate_index': 54,\n 'end_token': 2019},\n 'short_answers': [{'start_token': 1960, 'end_token': 1969}],\n 'annotation_id': 593165450220027640}]", "_____no_output_____" ], [ "short_answer_lens_train = []\nlong_answer_lens_train = []\nwith open(PATH_TRAIN, 'rt') as f:\n for i in tqdm(range(N_TRAIN), position=0, leave=True):\n dic = json.loads(f.readline())\n q_lens_train[i] = len(dic['question_text'].split())\n d_lens_train[i] = len(dic['document_text'].split())\n for annotation in dic[\"annotations\"]:\n long_answer_len = annotation[\"long_answer\"][\"end_token\"] - annotation[\"long_answer\"][\"start_token\"]\n if long_answer_len != 0:\n long_answer_lens_train.append(long_answer_len)\n for short_answer in annotation['short_answers']:\n short_answer_len = short_answer[\"end_token\"] - short_answer[\"start_token\"]\n short_answer_lens_train.append(short_answer_len)", "100%|██████████| 307373/307373 [04:01<00:00, 1271.82it/s]\n" ], [ "q_lens_test = np.zeros(N_TEST)\nd_lens_test = np.zeros(N_TEST)", "_____no_output_____" ], [ "with open(PATH_TEST, 'rt') as f:\n for i in tqdm(range(N_TEST)):\n dic = json.loads(f.readline())\n q_lens_test[i] = len(dic['question_text'].split())\n d_lens_test[i] = 
len(dic['document_text'].split())", "\n 0%| | 0/345 [00:00<?, ?it/s]\u001b[A\n 31%|███ | 107/345 [00:00<00:00, 1061.29it/s]\u001b[A\n 63%|██████▎ | 219/345 [00:00<00:00, 1075.95it/s]\u001b[A\n100%|██████████| 345/345 [00:00<00:00, 1030.25it/s]\u001b[A\n" ] ], [ [ "### Visualization", "_____no_output_____" ], [ "#### Word counts of question text", "_____no_output_____" ], [ "Words counts of question text range in [0, 30]", "_____no_output_____" ] ], [ [ "plt.hist(q_lens_train, density=True, bins=8, alpha=0.5, color='c', label='train')\nplt.hist(q_lens_test, density=True, bins=8, alpha=0.5, color='orange', label='test')\nplt.xlabel('question length')\nplt.ylabel('sample proportion')\nplt.legend()", "_____no_output_____" ] ], [ [ "#### Word counts of document text", "_____no_output_____" ], [ "Words counts of d document text range in [0, 120000]", "_____no_output_____" ] ], [ [ "plt.hist(d_lens_train, density=True, bins=64, alpha=0.5, color='c', label='train')\nplt.hist(d_lens_test, density=True, bins=64, alpha=0.5, color='orange', label='test')\nplt.xlabel('document length')\nplt.ylabel('sample proportion')\nplt.legend()", "_____no_output_____" ] ], [ [ "#### Word counts of short answers text\nWords counts of short answers text is long-tail distribution, range in [1, 250]", "_____no_output_____" ] ], [ [ "series_short = pd.Series(short_answer_lens_train)\nprint(series_short.describe())\nseries_short.value_counts(normalize=True).plot(kind='line', title=\"Normalized distribution of short answer length\")", "count 130233.000000\nmean 4.096942\nstd 5.972028\nmin 1.000000\n25% 2.000000\n50% 2.000000\n75% 4.000000\nmax 250.000000\ndtype: float64\n" ] ], [ [ "#### Word counts of long answers text\nWords counts of long answers text is long-tail distribution, range in [5, 123548]", "_____no_output_____" ] ], [ [ "series_long = pd.Series(long_answer_lens_train)\nprint(series_long.describe())\nseries_long.value_counts(normalize=True).plot(kind='line', title=\"Normalized distribution of long answer length\")", "count 152148.000000\nmean 384.289849\nstd 1496.184997\nmin 5.000000\n25% 75.000000\n50% 117.000000\n75% 192.000000\nmax 123548.000000\ndtype: float64\n" ] ], [ [ "# Feature Visualiztion", "_____no_output_____" ], [ "## Answer type labels", "_____no_output_____" ] ], [ [ "import tensorflow as tf", "_____no_output_____" ], [ "raw_dataset = tf.data.TFRecordDataset(\"../input/bertjointbaseline/nq-train.tfrecords-00000-of-00001\")", "_____no_output_____" ], [ "all_answer_types = []\nfor raw_record in tqdm(raw_dataset, desc=\"Parsing tfrecod\"):\n f = tf.train.Example.FromString(raw_record.numpy())\n all_answer_types.append(f.features.feature[\"answer_types\"].int64_list.value)", "Parsing tfrecod: 494670it [01:25, 5790.53it/s]\n" ], [ "id2label = [\"unk\", \"yes\", \"no\", \"short\", \"long\"]", "_____no_output_____" ], [ "s_at = pd.Series(all_answer_types).apply(lambda x: id2label[x[0]])", "_____no_output_____" ], [ "stats=s_at.value_counts(normalize=True)\nprint(stats)\nstats.plot(\"bar\")", "short 0.507807\nunk 0.341116\nlong 0.137655\nyes 0.008404\nno 0.005017\ndtype: float64\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e76427d01dfd6bc4232f248eee9e5098fdebd832
3,771
ipynb
Jupyter Notebook
notebooks/dev_summit_2019/Step 2 - Add Assignments From csv File.ipynb
STEFANIHUNT/workforce-scripts
993bb0d5a22b3d741c5e05cd1822f370f180b4df
[ "Apache-2.0" ]
74
2016-06-28T18:22:25.000Z
2021-12-01T22:06:11.000Z
notebooks/dev_summit_2019/Step 2 - Add Assignments From csv File.ipynb
airyadriana/workforce-scripts
85e5b1e706df91c9fa1a7a301e1288daf689b0d7
[ "Apache-2.0" ]
55
2016-08-05T19:34:08.000Z
2022-03-16T14:53:10.000Z
notebooks/dev_summit_2019/Step 2 - Add Assignments From csv File.ipynb
airyadriana/workforce-scripts
85e5b1e706df91c9fa1a7a301e1288daf689b0d7
[ "Apache-2.0" ]
64
2016-07-13T21:40:38.000Z
2022-01-14T06:09:32.000Z
25.308725
173
0.55529
[ [ [ "# Import Assignments From a CSV File¶\nIn this example, a CSV file containing the locations of potholes will be imported into a Workforce Project as new assignments.", "_____no_output_____" ], [ "### Import ArcGIS API for Python\nImport the `arcgis` library and some modules within it.", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom arcgis.gis import GIS\nfrom arcgis.apps import workforce\nfrom arcgis.geocoding import geocode", "_____no_output_____" ] ], [ [ "\n### Connect to Organization And Get The Project\nLet's connect to ArcGIS Online and find the new Project to add assignments to.", "_____no_output_____" ] ], [ [ "gis = GIS(\"https://arcgis.com\", \"workforce_scripts\")\nitem = gis.content.get(\"c765482bd0b9479b9104368da54df90d\")\nproject = workforce.Project(item)", "_____no_output_____" ] ], [ [ "\n### Load the CSV File¶\nLet's use the pandas library to read the CSV file and display the potholes.", "_____no_output_____" ] ], [ [ "df = pd.read_csv(\"assignments.csv\")\ndf", "_____no_output_____" ] ], [ [ "### Create An Assignment For Each Row\nFor each assignment, First geocode the address to get the x,y location in (WGS84 Web Mercator) of the assignment. Then supply additional attributes.\n\nFinally use the batch_add method to add multiple assignments at once (this is faster than using the add method since validation is performed once for all assignments).", "_____no_output_____" ] ], [ [ "assignments = []\nfor index, row in df.iterrows():\n geometry = geocode(f\"{row['Location']}\", out_sr=3857)[0][\"location\"]\n assignments.append(\n workforce.Assignment(\n project,\n geometry=geometry,\n location=row[\"Location\"],\n description=row[\"Description\"],\n priority=row[\"Priority\"],\n work_order_id=row[\"Work Order Id\"],\n assignment_type=\"Fill in Pothole\",\n status=\"unassigned\"\n )\n )\nproject.assignments.batch_add(assignments)", "_____no_output_____" ] ], [ [ "\n### Verify the assignments on the map\nLet's verify that the assignments were created.", "_____no_output_____" ] ], [ [ "webmap = gis.map(\"Palm Springs\", zoomlevel=14)\nwebmap.add_layer(project.assignments_layer)\nwebmap", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7642d70dff3fe6cbddb9700bab38de0224b2a7c
1,252
ipynb
Jupyter Notebook
homework/02 Sets/donow/najmabadi_shannon_2_donow.ipynb
barjacks/algorithms_mine
bc248ed9ebb88aed73c6e8da3d3b9553d9173cdd
[ "MIT" ]
null
null
null
homework/02 Sets/donow/najmabadi_shannon_2_donow.ipynb
barjacks/algorithms_mine
bc248ed9ebb88aed73c6e8da3d3b9553d9173cdd
[ "MIT" ]
null
null
null
homework/02 Sets/donow/najmabadi_shannon_2_donow.ipynb
barjacks/algorithms_mine
bc248ed9ebb88aed73c6e8da3d3b9553d9173cdd
[ "MIT" ]
null
null
null
16.918919
43
0.47524
[ [ [ "from math import pi", "_____no_output_____" ], [ "def volume_sphere(r): \n if r > 0:\n return (4/3) * pi * (r**3)\n else:\n return \"No radius\"\n", "_____no_output_____" ], [ "volume_sphere(2)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
e7643d61ca5f57d82d7aebf0cbc98d8140962a13
152,718
ipynb
Jupyter Notebook
_doc/notebooks/2016/pydata/js_pyecharts.ipynb
sdpython/jupytalk
34abdf128de24becb21a9f08f243c3a74dadbfd9
[ "MIT" ]
null
null
null
_doc/notebooks/2016/pydata/js_pyecharts.ipynb
sdpython/jupytalk
34abdf128de24becb21a9f08f243c3a74dadbfd9
[ "MIT" ]
16
2016-11-13T19:52:35.000Z
2021-12-29T10:59:41.000Z
_doc/notebooks/2016/pydata/js_pyecharts.ipynb
sdpython/jupytalk
34abdf128de24becb21a9f08f243c3a74dadbfd9
[ "MIT" ]
4
2016-09-10T10:44:50.000Z
2021-09-22T16:28:56.000Z
277.669091
130,758
0.875797
[ [ [ "# pyecharts\n\n[pyecharts](https://github.com/pyecharts/pyecharts) a wrapper for a new library [echarts](https://ecomfe.github.io/echarts-doc/public/en/index.html) made by [Baidu](https://www.baidu.com/).", "_____no_output_____" ], [ "[documentation](https://github.com/pyecharts/pyecharts) [source](https://github.com/pyecharts/pyecharts) [installation](https://github.com/pyecharts/pyecharts#pyecharts-1) [tutorial](https://github.com/pyecharts/pyecharts#basic-usage) [gallery](http://gallery.echartsjs.com/)", "_____no_output_____" ] ], [ [ "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "_____no_output_____" ], [ "from pyecharts import __version__\n__version__", "_____no_output_____" ] ], [ [ "## example", "_____no_output_____" ] ], [ [ "from pyecharts.charts import Bar\nfrom pyecharts import options as opts\n\nattr = [\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\", \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\"]\nv1 = [2.0, 4.9, 7.0, 23.2, 25.6, 76.7, 135.6, 162.2, 32.6, 20.0, 6.4, 3.3]\nv2 = [2.6, 5.9, 9.0, 26.4, 28.7, 70.7, 175.6, 182.2, 48.7, 18.8, 6.0, 2.3]\nbar = Bar().set_global_opts(title_opts=opts.TitleOpts(title=\"Bar charts\",\n subtitle=\"precipitation and evaporation one year\"))\nbar.add_xaxis(attr)\next = opts.MarkLineOpts(data=[opts.MarkLineItem(type_=\"average\")])\nbar.add_yaxis(\"precipitation\", v1, markline_opts=ext)\nbar.add_yaxis(\"evaporation\", v2, markline_opts=ext)\nbar.render_notebook()", "_____no_output_____" ] ], [ [ "After you install [pyecharts-snapshot](https://github.com/pyecharts/pyecharts-snapshot) and [phantom-js](http://phantomjs.org/download.html) (not needed anymore apparently).", "_____no_output_____" ] ], [ [ "bar.render(path=\"echart_render.html\")", "_____no_output_____" ], [ "from pyecharts_snapshot.main import make_a_snapshot\nawait make_a_snapshot(\"echart_render.html\", \"echart_render.png\")", "Generating file ...\n" ], [ "from IPython.display import Image\nImage(\"echart_render.png\", width='600')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7645efc3e28bdb473304e9918bbf8d583379eb3
19,561
ipynb
Jupyter Notebook
docs/tutorials/fit_textured_volume.ipynb
imlixinyang/pytorch3d
fa16f959eb938c6a2924c91a03896e389f5ff5ff
[ "BSD-3-Clause" ]
1
2021-03-12T02:49:17.000Z
2021-03-12T02:49:17.000Z
docs/tutorials/fit_textured_volume.ipynb
imlixinyang/pytorch3d
fa16f959eb938c6a2924c91a03896e389f5ff5ff
[ "BSD-3-Clause" ]
null
null
null
docs/tutorials/fit_textured_volume.ipynb
imlixinyang/pytorch3d
fa16f959eb938c6a2924c91a03896e389f5ff5ff
[ "BSD-3-Clause" ]
null
null
null
41.35518
373
0.596902
[ [ [ "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.", "_____no_output_____" ] ], [ [ "# Fit a volume via raymarching\n\nThis tutorial shows how to fit a volume given a set of views of a scene using differentiable volumetric rendering.\n\nMore specificially, this tutorial will explain how to:\n1. Create a differentiable volumetric renderer.\n2. Create a Volumetric model (including how to use the `Volumes` class).\n3. Fit the volume based on the images using the differentiable volumetric renderer. \n4. Visualize the predicted volume.", "_____no_output_____" ], [ "## 0. Install and Import modules\nEnsure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:", "_____no_output_____" ] ], [ [ "import os\nimport sys\nimport torch\nneed_pytorch3d=False\ntry:\n import pytorch3d\nexcept ModuleNotFoundError:\n need_pytorch3d=True\nif need_pytorch3d:\n if torch.__version__.startswith(\"1.7\") and sys.platform.startswith(\"linux\"):\n # We try to install PyTorch3D via a released wheel.\n version_str=\"\".join([\n f\"py3{sys.version_info.minor}_cu\",\n torch.version.cuda.replace(\".\",\"\"),\n f\"_pyt{torch.__version__[0:5:2]}\"\n ])\n !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n else:\n # We try to install PyTorch3D from source.\n !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n !tar xzf 1.10.0.tar.gz\n os.environ[\"CUB_HOME\"] = os.getcwd() + \"/cub-1.10.0\"\n !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'", "_____no_output_____" ], [ "import os\nimport sys\nimport time\nimport json\nimport glob\nimport torch\nimport math\nfrom tqdm.notebook import tqdm\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom PIL import Image\nfrom IPython import display\n\n# Data structures and functions for rendering\nfrom pytorch3d.structures import Volumes\nfrom pytorch3d.renderer import (\n FoVPerspectiveCameras, \n VolumeRenderer,\n NDCGridRaysampler,\n EmissionAbsorptionRaymarcher\n)\nfrom pytorch3d.transforms import so3_exponential_map\n\n# add path for demo utils functions \nsys.path.append(os.path.abspath(''))\nfrom utils.plot_image_grid import image_grid\nfrom utils.generate_cow_renders import generate_cow_renders\n\n# obtain the utilized device\nif torch.cuda.is_available():\n device = torch.device(\"cuda:0\")\n torch.cuda.set_device(device)\nelse:\n device = torch.device(\"cpu\")", "_____no_output_____" ] ], [ [ "## 1. Generate images of the scene and masks\n\nThe following cell generates our training data.\nIt renders the cow mesh from the `fit_textured_mesh.ipynb` tutorial from several viewpoints and returns:\n1. A batch of image and silhouette tensors that are produced by the cow mesh renderer.\n2. A set of cameras corresponding to each render.\n\nNote: For the purpose of this tutorial, which aims at explaining the details of volumetric rendering, we do not explain how the mesh rendering, implemented in the `generate_cow_renders` function, works. Please refer to `fit_textured_mesh.ipynb` for a detailed explanation of mesh rendering.", "_____no_output_____" ] ], [ [ "target_cameras, target_images, target_silhouettes = generate_cow_renders(num_views=40)\nprint(f'Generated {len(target_images)} images/silhouettes/cameras.')", "_____no_output_____" ] ], [ [ "## 2. 
Initialize the volumetric renderer\n\nThe following initializes a volumetric renderer that emits a ray from each pixel of a target image and samples a set of uniformly-spaced points along the ray. At each ray-point, the corresponding density and color value is obtained by querying the corresponding location in the volumetric model of the scene (the model is described & instantiated in a later cell).\n\nThe renderer is composed of a *raymarcher* and a *raysampler*.\n- The *raysampler* is responsible for emiting rays from image pixels and sampling the points along them. Here, we use the `NDCGridRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user).\n- The *raymarcher* takes the densities and colors sampled along each ray and renders each ray into a color and an opacity value of the ray's source pixel. Here we use the `EmissionAbsorptionRaymarcher` which implements the standard Emission-Absorption raymarching algorithm.", "_____no_output_____" ] ], [ [ "# render_size describes the size of both sides of the \n# rendered images in pixels. We set this to the same size\n# as the target images. I.e. we render at the same\n# size as the ground truth images.\nrender_size = target_images.shape[1]\n\n# Our rendered scene is centered around (0,0,0) \n# and is enclosed inside a bounding box\n# whose side is roughly equal to 3.0 (world units).\nvolume_extent_world = 3.0\n\n# 1) Instantiate the raysampler.\n# Here, NDCGridRaysampler generates a rectangular image\n# grid of rays whose coordinates follow the pytorch3d\n# coordinate conventions.\n# Since we use a volume of size 128^3, we sample n_pts_per_ray=150,\n# which roughly corresponds to a one ray-point per voxel.\n# We futher set the min_depth=0.1 since there is no surface within\n# 0.1 units of any camera plane.\nraysampler = NDCGridRaysampler(\n image_width=render_size,\n image_height=render_size,\n n_pts_per_ray=150,\n min_depth=0.1,\n max_depth=volume_extent_world,\n)\n\n\n# 2) Instantiate the raymarcher.\n# Here, we use the standard EmissionAbsorptionRaymarcher \n# which marches along each ray in order to render\n# each ray into a single 3D color vector \n# and an opacity scalar.\nraymarcher = EmissionAbsorptionRaymarcher()\n\n# Finally, instantiate the volumetric render\n# with the raysampler and raymarcher objects.\nrenderer = VolumeRenderer(\n raysampler=raysampler, raymarcher=raymarcher,\n)", "_____no_output_____" ] ], [ [ "## 3. Initialize the volumetric model\n\nNext we instantiate a volumetric model of the scene. This quantizes the 3D space to cubical voxels, where each voxel is described with a 3D vector representing the voxel's RGB color and a density scalar which describes the opacity of the voxel (ranging between [0-1], the higher the more opaque).\n\nIn order to ensure the range of densities and colors is between [0-1], we represent both volume colors and densities in the logarithmic space. During the forward function of the model, the log-space values are passed through the sigmoid function to bring the log-space values to the correct range.\n\nAdditionally, `VolumeModel` contains the renderer object. 
This object stays unaltered throughout the optimization.\n\nIn this cell we also define the `huber` loss function which computes the discrepancy between the rendered colors and masks.", "_____no_output_____" ] ], [ [ "class VolumeModel(torch.nn.Module):\n def __init__(self, renderer, volume_size=[64] * 3, voxel_size=0.1):\n super().__init__()\n # After evaluating torch.sigmoid(self.log_colors), we get \n # densities close to zero.\n self.log_densities = torch.nn.Parameter(-4.0 * torch.ones(1, *volume_size))\n # After evaluating torch.sigmoid(self.log_colors), we get \n # a neutral gray color everywhere.\n self.log_colors = torch.nn.Parameter(torch.zeros(3, *volume_size))\n self._voxel_size = voxel_size\n # Store the renderer module as well.\n self._renderer = renderer\n \n def forward(self, cameras):\n batch_size = cameras.R.shape[0]\n\n # Convert the log-space values to the densities/colors\n densities = torch.sigmoid(self.log_densities)\n colors = torch.sigmoid(self.log_colors)\n \n # Instantiate the Volumes object, making sure\n # the densities and colors are correctly\n # expanded batch_size-times.\n volumes = Volumes(\n densities = densities[None].expand(\n batch_size, *self.log_densities.shape),\n features = colors[None].expand(\n batch_size, *self.log_colors.shape),\n voxel_size=self._voxel_size,\n )\n \n # Given cameras and volumes, run the renderer\n # and return only the first output value \n # (the 2nd output is a representation of the sampled\n # rays which can be omitted for our purpose).\n return self._renderer(cameras=cameras, volumes=volumes)[0]\n \n# A helper function for evaluating the smooth L1 (huber) loss\n# between the rendered silhouettes and colors.\ndef huber(x, y, scaling=0.1):\n diff_sq = (x - y) ** 2\n loss = ((1 + diff_sq / (scaling**2)).clamp(1e-4).sqrt() - 1) * float(scaling)\n return loss", "_____no_output_____" ] ], [ [ "## 4. Fit the volume\n\nHere we carry out the volume fitting with differentiable rendering.\n\nIn order to fit the volume, we render it from the viewpoints of the `target_cameras`\nand compare the resulting renders with the observed `target_images` and `target_silhouettes`.\n\nThe comparison is done by evaluating the mean huber (smooth-l1) error between corresponding\npairs of `target_images`/`rendered_images` and `target_silhouettes`/`rendered_silhouettes`.", "_____no_output_____" ] ], [ [ "# First move all relevant variables to the correct device.\ntarget_cameras = target_cameras.to(device)\ntarget_images = target_images.to(device)\ntarget_silhouettes = target_silhouettes.to(device)\n\n# Instantiate the volumetric model.\n# We use a cubical volume with the size of \n# one side = 128. The size of each voxel of the volume \n# is set to volume_extent_world / volume_size s.t. the\n# volume represents the space enclosed in a 3D bounding box\n# centered at (0, 0, 0) with the size of each side equal to 3.\nvolume_size = 128\nvolume_model = VolumeModel(\n renderer,\n volume_size=[volume_size] * 3, \n voxel_size = volume_extent_world / volume_size,\n).to(device)\n\n# Instantiate the Adam optimizer. 
We set its master learning rate to 0.1.\nlr = 0.1\noptimizer = torch.optim.Adam(volume_model.parameters(), lr=lr)\n\n# We do 300 Adam iterations and sample 10 random images in each minibatch.\nbatch_size = 10\nn_iter = 300\nfor iteration in range(n_iter):\n\n # In case we reached the last 75% of iterations,\n # decrease the learning rate of the optimizer 10-fold.\n if iteration == round(n_iter * 0.75):\n print('Decreasing LR 10-fold ...')\n optimizer = torch.optim.Adam(\n volume_model.parameters(), lr=lr * 0.1\n )\n \n # Zero the optimizer gradient.\n optimizer.zero_grad()\n \n # Sample random batch indices.\n batch_idx = torch.randperm(len(target_cameras))[:batch_size]\n \n # Sample the minibatch of cameras.\n batch_cameras = FoVPerspectiveCameras(\n R = target_cameras.R[batch_idx], \n T = target_cameras.T[batch_idx], \n znear = target_cameras.znear[batch_idx],\n zfar = target_cameras.zfar[batch_idx],\n aspect_ratio = target_cameras.aspect_ratio[batch_idx],\n fov = target_cameras.fov[batch_idx],\n device = device,\n )\n \n # Evaluate the volumetric model.\n rendered_images, rendered_silhouettes = volume_model(\n batch_cameras\n ).split([3, 1], dim=-1)\n \n # Compute the silhoutte error as the mean huber\n # loss between the predicted masks and the\n # target silhouettes.\n sil_err = huber(\n rendered_silhouettes[..., 0], target_silhouettes[batch_idx],\n ).abs().mean()\n\n # Compute the color error as the mean huber\n # loss between the rendered colors and the\n # target ground truth images.\n color_err = huber(\n rendered_images, target_images[batch_idx],\n ).abs().mean()\n \n # The optimization loss is a simple\n # sum of the color and silhouette errors.\n loss = color_err + sil_err \n \n # Print the current values of the losses.\n if iteration % 10 == 0:\n print(\n f'Iteration {iteration:05d}:'\n + f' color_err = {float(color_err):1.2e}'\n + f' mask_err = {float(sil_err):1.2e}'\n )\n \n # Take the optimization step.\n loss.backward()\n optimizer.step()\n \n # Visualize the renders every 40 iterations.\n if iteration % 40 == 0:\n # Visualize only a single randomly selected element of the batch.\n im_show_idx = int(torch.randint(low=0, high=batch_size, size=(1,)))\n fig, ax = plt.subplots(2, 2, figsize=(10, 10))\n ax = ax.ravel()\n clamp_and_detach = lambda x: x.clamp(0.0, 1.0).cpu().detach().numpy()\n ax[0].imshow(clamp_and_detach(rendered_images[im_show_idx]))\n ax[1].imshow(clamp_and_detach(target_images[batch_idx[im_show_idx], ..., :3]))\n ax[2].imshow(clamp_and_detach(rendered_silhouettes[im_show_idx, ..., 0]))\n ax[3].imshow(clamp_and_detach(target_silhouettes[batch_idx[im_show_idx]]))\n for ax_, title_ in zip(\n ax, \n (\"rendered image\", \"target image\", \"rendered silhouette\", \"target silhouette\")\n ):\n ax_.grid(\"off\")\n ax_.axis(\"off\")\n ax_.set_title(title_)\n fig.canvas.draw(); fig.show()\n display.clear_output(wait=True)\n display.display(fig)", "_____no_output_____" ] ], [ [ "## 5. 
Visualizing the optimized volume\n\nFinally, we visualize the optimized volume by rendering from multiple viewpoints that rotate around the volume's y-axis.", "_____no_output_____" ] ], [ [ "def generate_rotating_volume(volume_model, n_frames = 50):\n logRs = torch.zeros(n_frames, 3, device=device)\n logRs[:, 1] = torch.linspace(0.0, 2.0 * 3.14, n_frames, device=device)\n Rs = so3_exponential_map(logRs)\n Ts = torch.zeros(n_frames, 3, device=device)\n Ts[:, 2] = 2.7\n frames = []\n print('Generating rotating volume ...')\n for R, T in zip(tqdm(Rs), Ts):\n camera = FoVPerspectiveCameras(\n R=R[None], \n T=T[None], \n znear = target_cameras.znear[0],\n zfar = target_cameras.zfar[0],\n aspect_ratio = target_cameras.aspect_ratio[0],\n fov = target_cameras.fov[0],\n device=device,\n )\n frames.append(volume_model(camera)[..., :3].clamp(0.0, 1.0))\n return torch.cat(frames)\n \nwith torch.no_grad():\n rotating_volume_frames = generate_rotating_volume(volume_model, n_frames=7*4)\n\nimage_grid(rotating_volume_frames.clamp(0., 1.).cpu().numpy(), rows=4, cols=7, rgb=True, fill=True)\nplt.show()", "_____no_output_____" ] ], [ [ "## 6. Conclusion\n\nIn this tutorial, we have shown how to optimize a 3D volumetric representation of a scene such that the renders of the volume from known viewpoints match the observed images for each viewpoint. The rendering was carried out using the PyTorch3D's volumetric renderer composed of an `NDCGridRaysampler` and an `EmissionAbsorptionRaymarcher`.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7647248d1bfb3a485bfd07538a9ee56ea56f8f4
7,331
ipynb
Jupyter Notebook
Chapter1_IDE_COLAB.ipynb
akara-kij/Python101
4dc3345750f144e3e3f5acec436546a892c8af6d
[ "MIT" ]
null
null
null
Chapter1_IDE_COLAB.ipynb
akara-kij/Python101
4dc3345750f144e3e3f5acec436546a892c8af6d
[ "MIT" ]
null
null
null
Chapter1_IDE_COLAB.ipynb
akara-kij/Python101
4dc3345750f144e3e3f5acec436546a892c8af6d
[ "MIT" ]
null
null
null
25.020478
157
0.447961
[ [ [ "\n# Python Google Collaboratory\n\nGoogle Colaboratory หรือ Colab คือ IDE ภาษา Python รูปแบบ Online สามารถเข้าใช้บริการผ่าน Google Account โดยไม่จำเป็นต้องติดตั้งโปรแกรมใดๆ บนเครื่อง", "_____no_output_____" ], [ "## สั่งให้ Cell ทำงาน\n\nไม่ว่า cell นั้นจะเป็น Markdown หรือ Code สามารถสั่งให้ทำงานด้วยคำสั่ง Shift+ Enter", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "## เพิ่ม Cell ด้านล่าง\n\nสามารถเพิ่ม Cell เพื่่อใช้สร้าง Code โดยการกด Ctrl ตามด้วย M และ B", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "## เปลี่ยน Cell สำหรับ Code ให้กลายเป็น Cell ข้อความ\n\nคลิก Cell สำหรับ Code ที่ต้องการแล้วกด Ctrl ตามด้วย M และ M", "_____no_output_____" ], [ "cell สำหรับแสดงผล", "_____no_output_____" ], [ "## ลบ Cell ที่ไม่ต้องการ\n\nเลือก Cell ที่ต้องการจากนั้นกด Ctrl ตามด้วย M และ D\n", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "## เริ่มเขียนโปรแกรมเป็นครั้งแรก Hello world\n\nคำสั่ง print สามารถใช้แสดงผลข้อความที่ต้องการ\n\n```print( Input String)``` ( หมายเหตุ ` สามาถพิมพ์ด้วยการกด Alt + 96 ส่วนเครื่องหมาย ~ ใช้ Alt + 126)\n\n**ตัวอย่างเช่น** : print(\"Hello World\")", "_____no_output_____" ] ], [ [ "print(\"Hello World\")\nprint(\"Hello World\")", "_____no_output_____" ] ], [ [ "แต่ถ้าหากต้องการ แสดงตัวอักษร ' หรือ \" สามารถกระทำได้ดังตัวอย่างต่อไปนี้\n* กรณี Single Quote ' : นำเครื่องหมาย Single Quote ให้อยู่ภายใต้เครื่องหมาย Double Quote สองอัน\n* กรณี Double Quote \" : นำเครื่องหมาย Double Quote ให้อยู่ภายใต้เครื่องหมาย Single Quote สองอัน\n", "_____no_output_____" ] ], [ [ "print(\" ต้องการแสดงเครื่่องหมาย Single Quote ' \")\nprint(\"แสดงเครื่องหมาย ' \")\nprint(' ในกรณีที่ต้องการแสดงเครื่องหมาย Double Quote \" ')\n\nprint(' เครื่องหมาย double quote \" ')", " ต้องการแสดงเครื่่องหมาย Single Quote ' \nแสดงเครื่องหมาย ' \n ในกรณีที่ต้องการแสดงเครื่องหมาย Double Quote \" \n เครื่องหมาย double quote \" \n" ] ], [ [ "## การใช้ Comment Line\n\n* Comment line เป็นการบอกให้ Python ทราบว่าบรรทัดที่ระบุนี้มิใช่ Code หากแต่เป็นข้อความที่โปรแกรมเมอร์ใส่ไว้เพื่อให้อธิบายรายละเอียดของ Code\n* ใช้เครื่องหมาย # เพื่อเปลี่ยนข้อความเป็น Comment Line", "_____no_output_____" ] ], [ [ "# ข้อความต่อไปนี้คือ Comment Line ดังนันภาษา Python จะข้าม Comment Line อันนี้ไป\n# ข้อความอธิบายโปรแกรม\nprint(\"Hello World\")", "_____no_output_____" ] ], [ [ "## การใช้ Comment Block\n\nสามารถใส่คำอธิบายที่ยาวหลายบรรทัดได้ โดยนำข้อความเหล่านั้นให้อยู่ภายใต้เครื่องหายต่อไปนี้\n* Single Quote จำนวน 3 อัน ตัวอย่างเช่น ''' ข้อความ '''\n* Double Quote จำนวน 3 อัน ตัวอย่างเช่น \"\"\" ข้อความ \"\"\"", "_____no_output_____" ] ], [ [ "'''\nตัวอย่าง Comment Block ภายใต้เครื่องหมาย Single Quote\n1. บรรทัดที่ 1\n2. บรรทัดที่ 2\n3. บรรทัดที่ 3\n'''\nprint(\"ตัวอย่างการสร้าง Comment Block ด้วย Single Quote\")", "_____no_output_____" ], [ "\"\"\"\nตัวอย่าง Comment Block ภายใต้เครื่องหมาย Double Quote\n1. บรรทัดที่ 1\n2. บรรทัดที่ 2\n3. บรรทัดที่ 3\n\"\"\"\nprint(\"ตัวอย่างการสร้าง Comment Block ด้วย Single Quote\")", "_____no_output_____" ] ], [ [ "## การเขียนคำสั่งยาวมากกว่า 1 บรรทัด\n\nหาก Code ที่ต้องการเขียนมีความยาวมากกว่าหนึ่งบรรทัด สามารถเขียนคำสั่งนั้นต่อในบรรทัดใหม่ได้โดยเชื่อมแต่ละบรรทัดด้วยเครื่องหมาย ```\\```", "_____no_output_____" ] ], [ [ "# ทดสอบคำสั่งที่ยาวมากกว่าหนึ่งบรรทัด\nprint( \\\n \"Hello Wolrld \" \\\n ) ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e7647e12f456657e4a4cb4dd8f948fce45a072d7
3,857
ipynb
Jupyter Notebook
Exam question 1 - The Double Hadamard (Input Ket one).ipynb
tomstevns/Elective-Quantum-Computing-Fall-2020
bed00fb98f11fc958ea211e24dab966f91218058
[ "MIT" ]
null
null
null
Exam question 1 - The Double Hadamard (Input Ket one).ipynb
tomstevns/Elective-Quantum-Computing-Fall-2020
bed00fb98f11fc958ea211e24dab966f91218058
[ "MIT" ]
null
null
null
Exam question 1 - The Double Hadamard (Input Ket one).ipynb
tomstevns/Elective-Quantum-Computing-Fall-2020
bed00fb98f11fc958ea211e24dab966f91218058
[ "MIT" ]
null
null
null
24.566879
134
0.541872
[ [ [ "# Full import of Qiskit library\nfrom qiskit import *", "_____no_output_____" ], [ "try:\n # Create a Quantum Register with 2 Qubits.\n qr = QuantumRegister(1)\n\n # Create a classical register with 2 bits\n cr = ClassicalRegister(1)\n\n # Create a Quantum Circuit containing our QR and CR. \n circuit = QuantumCircuit(qr,cr)\n\n # Prepare the method to draw our quantum program\n circuit.draw();\n\nexcept NameError:\n print(\"ERROR: There is either an error in your code - or you have not run the library import block above correctly\\n\"*10)", "_____no_output_____" ], [ "# Adding a single [X]-gate to the Quantum register will change the value from Ket [0> to [1>\ncircuit.x(qr[0])\n\n# Adding a single [H]-gate to one of the two Quantum Registers\n#TODO: ⚠️⚠️⚠️⚠️⚠️ Add a [H]-gate here (Tip: Looks like the [x]-gate we added before⚠️⚠️⚠️⚠️⚠️\ncircuit.h(qr[0])\n\n# Adding a barrier for visualizing purposes\ncircuit.barrier()\n\n# Prepare the method to draw our quantum program\ncircuit.draw();\n\n# Adding a single [H]-gate to one of the two Quantum Registers\n#TODO: ⚠️⚠️⚠️⚠️⚠️ Add a [H]-gate here (Tip: Looks like the [x]-gate we added before⚠️⚠️⚠️⚠️⚠️\ncircuit.h(qr[0])\n\n# Adding a barrier for visualizing purposes\ncircuit.barrier()\n\n\n# Adding the measurement operation to all Quantum Registers\ncircuit.measure(qr, cr);", "_____no_output_____" ], [ "circuit.draw(output='mpl')", "_____no_output_____" ], [ "# We load the backend to run our Quantum Program, as we did in the last example\nbackend = BasicAer.get_backend('qasm_simulator')\n\n# We execute the Quantum Programwe, but have added \"shots=24, memory=True\" to the execute() function. \njob = qiskit.execute(circuit, backend, shots=1024, memory=True)\n\n# Get the results from the job\nresult = job.result().get_counts(circuit)\n\n# A quick print out of our result\nprint(result)", "_____no_output_____" ], [ "from qiskit.tools.visualization import plot_histogram\n\nplot_histogram(result)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]