Dataset schema (column: dtype, observed range):

hexsha: stringlengths (40 to 40)
size: int64 (6 to 14.9M)
ext: stringclasses (1 value)
lang: stringclasses (1 value)
max_stars_repo_path: stringlengths (6 to 260)
max_stars_repo_name: stringlengths (6 to 119)
max_stars_repo_head_hexsha: stringlengths (40 to 41)
max_stars_repo_licenses: sequence
max_stars_count: int64 (1 to 191k)
max_stars_repo_stars_event_min_datetime: stringlengths (24 to 24)
max_stars_repo_stars_event_max_datetime: stringlengths (24 to 24)
max_issues_repo_path: stringlengths (6 to 260)
max_issues_repo_name: stringlengths (6 to 119)
max_issues_repo_head_hexsha: stringlengths (40 to 41)
max_issues_repo_licenses: sequence
max_issues_count: int64 (1 to 67k)
max_issues_repo_issues_event_min_datetime: stringlengths (24 to 24)
max_issues_repo_issues_event_max_datetime: stringlengths (24 to 24)
max_forks_repo_path: stringlengths (6 to 260)
max_forks_repo_name: stringlengths (6 to 119)
max_forks_repo_head_hexsha: stringlengths (40 to 41)
max_forks_repo_licenses: sequence
max_forks_count: int64 (1 to 105k)
max_forks_repo_forks_event_min_datetime: stringlengths (24 to 24)
max_forks_repo_forks_event_max_datetime: stringlengths (24 to 24)
avg_line_length: float64 (2 to 1.04M)
max_line_length: int64 (2 to 11.2M)
alphanum_fraction: float64 (0 to 1)
cells: sequence
cell_types: sequence
cell_type_groups: sequence
Record 1
hexsha: e7a5b0c3e9c190c45dfce46103a6df8e6c9558b2
size: 30,227
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: book/tutorials/lis/1_exploring_lis_output.ipynb
max_stars_repo_name: zachghiaccio/website
max_stars_repo_head_hexsha: b6f3f760ecba8700a5d989d94389ad044a59e214
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2021-07-12T18:30:47.000Z
max_stars_repo_stars_event_max_datetime: 2021-07-12T18:30:47.000Z
max_issues_repo_path: book/tutorials/lis/1_exploring_lis_output.ipynb
max_issues_repo_name: slopezon/website
max_issues_repo_head_hexsha: 50d47b7977fb5f8ac14f367ff806cf9187dfb268
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: book/tutorials/lis/1_exploring_lis_output.ipynb
max_forks_repo_name: slopezon/website
max_forks_repo_head_hexsha: 50d47b7977fb5f8ac14f367ff806cf9187dfb268
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 1
max_forks_repo_forks_event_min_datetime: 2021-07-09T16:39:19.000Z
max_forks_repo_forks_event_max_datetime: 2021-07-09T16:39:19.000Z
avg_line_length: 29.750984
max_line_length: 450
alphanum_fraction: 0.566414
cells, cell_types, cell_type_groups:
[ [ [ "# Visualizing and Comparing LIS Output", "_____no_output_____" ], [ "```{figure} ./images/nasa-lis-combined-logos.png\n---\nwidth: 300px\n---\n```", "_____no_output_____" ], [ "## LIS Output Primer\n\nLIS writes model state variables to disk at a frequency selected by the user (e.g., 6-hourly, daily, monthly). The LIS output we will be exploring was originally generated as *daily* NetCDF files, meaning one NetCDF was written per simulated day. We have converted these NetCDF files into a [Zarr](https://zarr.readthedocs.io/en/stable/) store for improved performance in the cloud.", "_____no_output_____" ], [ "## Import Libraries", "_____no_output_____" ] ], [ [ "# interface to Amazon S3 filesystem\nimport s3fs\n\n# interact with n-d arrays\nimport numpy as np\nimport xarray as xr\n\n# interact with tabular data (incl. spatial)\nimport pandas as pd\nimport geopandas as gpd\n\n# interactive plots\nimport holoviews as hv\nimport geoviews as gv\nimport hvplot.pandas\nimport hvplot.xarray\n\n# used to find nearest grid cell to a given location\nfrom scipy.spatial import distance\n\n# set bokeh as the holoviews plotting backend\nhv.extension('bokeh')", "_____no_output_____" ] ], [ [ "## Load the LIS Output\n\nThe `xarray` library makes working with labelled n-dimensional arrays easy and efficient. If you're familiar with the `pandas` library it should feel pretty familiar.\n\nHere we load the LIS output into an `xarray.Dataset` object:", "_____no_output_____" ] ], [ [ "# create S3 filesystem object\ns3 = s3fs.S3FileSystem(anon=False)\n\n# define the name of our S3 bucket\nbucket_name = 'eis-dh-hydro/SNOWEX-HACKWEEK'\n\n# define path to store on S3\nlis_output_s3_path = f's3://{bucket_name}/DA_SNODAS/SURFACEMODEL/LIS_HIST.d01.zarr/'\n\n# create key-value mapper for S3 object (required to read data stored on S3)\nlis_output_mapper = s3.get_mapper(lis_output_s3_path)\n\n# open the dataset\nlis_output_ds = xr.open_zarr(lis_output_mapper, consolidated=True)\n\n# drop some unneeded variables\nlis_output_ds = lis_output_ds.drop_vars(['_history', '_eis_source_path'])", "_____no_output_____" ] ], [ [ "## Explore the Data\n\nDisplay an interactive widget for inspecting the dataset by running a cell containing the variable name. Expand the dropdown menus and click on the document and database icons to inspect the variables and attributes.", "_____no_output_____" ] ], [ [ "lis_output_ds", "_____no_output_____" ] ], [ [ "### Accessing Attributes", "_____no_output_____" ], [ "Dataset attributes (metadata) are accessible via the `attrs` attribute:", "_____no_output_____" ] ], [ [ "lis_output_ds.attrs", "_____no_output_____" ] ], [ [ "### Accessing Variables\n\nVariables can be accessed using either **dot notation** or **square bracket notation**:", "_____no_output_____" ] ], [ [ "# dot notation\nlis_output_ds.SnowDepth_tavg", "_____no_output_____" ], [ "# square bracket notation\nlis_output_ds['SnowDepth_tavg']", "_____no_output_____" ] ], [ [ "#### Which syntax should I use?\nWhile both syntaxes perform the same function, the square-bracket syntax is useful when interacting with a dataset programmatically. 
For example, we can define a variable `varname` that stores the name of the variable in the dataset we want to access and then use that with the square-brackets notation:", "_____no_output_____" ] ], [ [ "varname = 'SnowDepth_tavg'\n\nlis_output_ds[varname]", "_____no_output_____" ] ], [ [ "The dot notation syntax will not work this way because `xarray` tries to find a variable in the dataset named `varname` instead of the value of the `varname` variable. When `xarray` can't find this variable, it throws an error:", "_____no_output_____" ] ], [ [ "# uncomment and run the code below to see the error\n\n# varname = 'SnowDepth_tavg'\n\n# lis_output_ds.varname", "_____no_output_____" ] ], [ [ "### Dimensions and Coordinate Variables\n\nThe dimensions and coordinate variable fields put the \"*labelled*\" in \"labelled n-dimensional arrays\":\n\n* **Dimensions:** labels for each dimension in the dataset (e.g., `time`)\n* **Coordinates:** labels for indexing along dimensions (e.g., `'2019-01-01'`)\n\nWe can use these labels to select, slice, and aggregate the dataset.", "_____no_output_____" ], [ "#### Selecting/Subsetting\n\n`xarray` provides two methods for selecting or subsetting along coordinate variables:\n\n* index selection: `ds.isel(time=0)`\n* value selection `ds.sel(time='2019-01-01')`\n\nFor example, we can select the first timestep from our dataset using index selection by passing the dimension name as a keyword argument:", "_____no_output_____" ] ], [ [ "# remember: python indexes start at 0\nlis_output_ds.isel(time=0)", "_____no_output_____" ] ], [ [ "Or we can use value selection to select based on the coordinate(s) (think \"labels\") of a given dimension:", "_____no_output_____" ] ], [ [ "lis_output_ds.sel(time='2018-01-01')", "_____no_output_____" ] ], [ [ "The `.sel()` approach also allows the use of shortcuts in some cases. For example, here we select all timesteps in the month of January 2018:", "_____no_output_____" ] ], [ [ "lis_output_ds.sel(time='2018-01')", "_____no_output_____" ] ], [ [ "Select a custom range of dates using Python's built-in `slice()` method:", "_____no_output_____" ] ], [ [ "lis_output_ds.sel(time=slice('2018-01-01', '2018-01-15'))", "_____no_output_____" ] ], [ [ "#### Latitude and Longitude\n\nYou may have noticed that latitude (`lat`) and longitude (`lon`) are listed as data variables, not coordinate variables. This dataset would be easier to work with if `lat` and `lon` were coordinate variables and dimensions. 
Here we define a helper function that reads the spatial information from the dataset attributes, generates arrays containing the `lat` and `lon` values, and appends them to the dataset:", "_____no_output_____" ] ], [ [ "def add_latlon_coords(dataset: xr.Dataset)->xr.Dataset:\n \"\"\"Adds lat/lon as dimensions and coordinates to an xarray.Dataset object.\"\"\"\n \n # get attributes from dataset\n attrs = dataset.attrs\n \n # get x, y resolutions\n dx = round(float(attrs['DX']), 3)\n dy = round(float(attrs['DY']), 3)\n \n # get grid cells in x, y dimensions\n ew_len = len(dataset['east_west'])\n ns_len = len(dataset['north_south'])\n \n # get lower-left lat and lon\n ll_lat = round(float(attrs['SOUTH_WEST_CORNER_LAT']), 3)\n ll_lon = round(float(attrs['SOUTH_WEST_CORNER_LON']), 3)\n \n # calculate upper-right lat and lon\n ur_lat = ll_lat + (dy * ns_len)\n ur_lon = ll_lon + (dx * ew_len)\n \n # define the new coordinates\n coords = {\n # create an arrays containing the lat/lon at each gridcell\n 'lat': np.linspace(ll_lat, ur_lat, ns_len, dtype=np.float32, endpoint=False),\n 'lon': np.linspace(ll_lon, ur_lon, ew_len, dtype=np.float32, endpoint=False)\n }\n \n lon_attrs = dataset.lon.attrs\n lat_attrs = dataset.lat.attrs\n \n # rename the original lat and lon variables\n dataset = dataset.rename({'lon':'orig_lon', 'lat':'orig_lat'})\n # rename the grid dimensions to lat and lon\n dataset = dataset.rename({'north_south': 'lat', 'east_west': 'lon'})\n # assign the coords above as coordinates\n dataset = dataset.assign_coords(coords)\n dataset.lon.attrs = lon_attrs\n dataset.lat.attrs = lat_attrs\n \n return dataset", "_____no_output_____" ] ], [ [ "Now that the function is defined, let's use it to append `lat` and `lon` coordinates to the LIS output:", "_____no_output_____" ] ], [ [ "lis_output_ds = add_latlon_coords(lis_output_ds)", "_____no_output_____" ] ], [ [ "Inspect the dataset:", "_____no_output_____" ] ], [ [ "lis_output_ds", "_____no_output_____" ] ], [ [ "Now `lat` and `lon` are listed as coordinate variables and have replaced the `north_south` and `east_west` dimensions. This will make it easier to spatially subset the dataset!", "_____no_output_____" ], [ "#### Basic Spatial Subsetting", "_____no_output_____" ], [ "We can use the `slice()` function we used above on the `lat` and `lon` dimensions to select data between a range of latitudes and longitudes:", "_____no_output_____" ] ], [ [ "lis_output_ds.sel(lat=slice(37, 41), lon=slice(-110, -101))", "_____no_output_____" ] ], [ [ "Notice how the sizes of the `lat` and `lon` dimensions have decreased.", "_____no_output_____" ], [ "#### Subset Across Multiple Dimensions", "_____no_output_____" ], [ "Select snow depth for Jan 2017 within a range of lat/lon:", "_____no_output_____" ] ], [ [ "# define a range of dates to select\nwy_2018_slice = slice('2017-10-01', '2018-09-30')\nlat_slice = slice(37, 41)\nlon_slice = slice(-109, -102)\n\n# select the snow depth and subset to wy_2018_slice\nsnd_CO_wy2018_ds = lis_output_ds['SnowDepth_tavg'].sel(time=wy_2018_slice, lat=lat_slice, lon=lon_slice)\n\n# inspect resulting dataset\nsnd_CO_wy2018_ds", "_____no_output_____" ] ], [ [ "### Plotting\n\nWe've imported two plotting libraries:\n\n* `matplotlib`: static plots\n* `hvplot`: interactive plots\n\nWe can make a quick `matplotlib`-based plot for the subsetted data using the `.plot()` function supplied by `xarray.Dataset` objects. 
For this example, we'll select one day and plot it:", "_____no_output_____" ] ], [ [ "# simple matplotlilb plot\nsnd_CO_wy2018_ds.sel(time='2018-01-01').plot()", "_____no_output_____" ] ], [ [ "Similarly we can make an interactive plot using the `hvplot` accessor and specifying a `quadmesh` plot type:", "_____no_output_____" ] ], [ [ "# hvplot based map\nsnd_CO_20180101_plot = snd_CO_wy2018_ds.sel(time='2018-01-01').hvplot.quadmesh(geo=True, rasterize=True, project=True,\n xlabel='lon', ylabel='lat', cmap='viridis',\n tiles='EsriImagery')\n\nsnd_CO_20180101_plot", "_____no_output_____" ] ], [ [ "Pan, zoom, and scroll around the map. Hover over the LIS data to see the data values.", "_____no_output_____" ], [ "If we try to plot more than one time-step `hvplot` will also provide a time-slider we can use to scrub back and forth in time:", "_____no_output_____" ] ], [ [ "snd_CO_wy2018_ds.sel(time='2018-01').hvplot.quadmesh(geo=True, rasterize=True, project=True,\n xlabel='lon', ylabel='lat', cmap='viridis',\n tiles='EsriImagery')", "_____no_output_____" ] ], [ [ "From here on out we will stick with `hvplot` for plotting.", "_____no_output_____" ], [ "#### Timeseries Plots\n\nWe can generate a timeseries for a given grid cell by selecting and calling the plot function:", "_____no_output_____" ] ], [ [ "# define point to take timeseries (note: must be present in coordinates of dataset)\nts_lon, ts_lat = (-105.65, 40.35)\n\n# plot timeseries (hvplot knows how to plot based on dataset's dimensionality!)\nsnd_CO_wy2018_ds.sel(lat=ts_lat, lon=ts_lon).hvplot(title=f'Snow Depth Timeseries @ Lon: {ts_lon}, Lat: {ts_lat}',\n xlabel='Date', ylabel='Snow Depth (m)') + \\\n snd_CO_20180101_plot * gv.Points([(ts_lon, ts_lat)]).opts(size=10, color='red')\n \n", "_____no_output_____" ] ], [ [ "In the next section we'll learn how to create a timeseries over a broader area.", "_____no_output_____" ], [ "## Aggregation\n\nWe can perform aggregation operations on the dataset such as `min()`, `max()`, `mean()`, and `sum()` by specifying the dimensions along which to perform the calculation.\n\nFor example we can calculate the mean and maximum snow depth at each grid cell over water year 2018 as follows:", "_____no_output_____" ] ], [ [ "# calculate the mean at each grid cell over the time dimension\nmean_snd_CO_wy2018_ds = snd_CO_wy2018_ds.mean(dim='time')\nmax_snd_CO_wy2018_ds = snd_CO_wy2018_ds.max(dim='time')\n\n# plot the mean and max snow depth\nmean_snd_CO_wy2018_ds.hvplot.quadmesh(geo=True, rasterize=True, project=True,\n xlabel='lon', ylabel='lat', cmap='viridis',\n tiles='EsriImagery', title='Mean Snow Depth - WY2018') + \\\n max_snd_CO_wy2018_ds.hvplot.quadmesh(geo=True, rasterize=True, project=True,\n xlabel='lon', ylabel='lat', cmap='viridis',\n tiles='EsriImagery', title='Max Snow Depth - WY2018')", "_____no_output_____" ] ], [ [ "### Area Average", "_____no_output_____" ] ], [ [ "# take area-averaged mean at each timestep\nmean_snd_CO_wy2018_ds = snd_CO_wy2018_ds.mean(['lat', 'lon'])\n\n# inspect the dataset\nmean_snd_CO_wy2018_ds", "_____no_output_____" ], [ "# plot timeseries (hvplot knows how to plot based on dataset's dimensionality!)\nmean_snd_CO_wy2018_ds.hvplot(title='Mean LIS Snow Depth for Colorado', xlabel='Date', ylabel='Snow Depth (m)')", "_____no_output_____" ] ], [ [ "## Comparing LIS Output\n\nNow that we're familiar with the LIS output, let's compare it to two other datasets: SNODAS (raster) and SNOTEL (point).", "_____no_output_____" ], [ "### LIS (raster) vs. 
SNODAS (raster)", "_____no_output_____" ], [ "First, we'll load the SNODAS dataset which we also have hosted on S3 as a Zarr store:", "_____no_output_____" ] ], [ [ "# load SNODAS dataset\n\n#snodas depth\nkey = \"SNODAS/snodas_snowdepth_20161001_20200930.zarr\" \nsnodas_depth_ds = xr.open_zarr(s3.get_mapper(f\"{bucket_name}/{key}\"), consolidated=True)\n\n# apply scale factor to convert to meters (0.001 per SNODAS user guide)\nsnodas_depth_ds = snodas_depth_ds * 0.001", "_____no_output_____" ] ], [ [ "Next we define a helper function to extract the (lon, lat) of the nearest grid cell to a given point:", "_____no_output_____" ] ], [ [ "def nearest_grid(ds, pt):\n \n \"\"\"\n Returns the nearest lon and lat to pt in a given Dataset (ds).\n \n pt : input point, tuple (longitude, latitude)\n output:\n lon, lat\n \"\"\"\n \n if all(coord in list(ds.coords) for coord in ['lat', 'lon']):\n df_loc = ds[['lon', 'lat']].to_dataframe().reset_index()\n else:\n df_loc = ds[['orig_lon', 'orig_lat']].isel(time=0).to_dataframe().reset_index()\n \n loc_valid = df_loc.dropna()\n pts = loc_valid[['lon', 'lat']].to_numpy()\n idx = distance.cdist([pt], pts).argmin()\n \n return loc_valid['lon'].iloc[idx], loc_valid['lat'].iloc[idx]", "_____no_output_____" ] ], [ [ "The next cell will look pretty similar to what we did earlier to plot a timeseries of a single point in the LIS data. The general steps are:\n\n* Extract the coordinates of the SNODAS grid cell nearest to our LIS grid cell (`ts_lon` and `ts_lat` from earlier)\n* Subset the SNODAS and LIS data to the grid cells and date ranges of interest\n* Create the plots!", "_____no_output_____" ] ], [ [ "# get lon, lat of snodas grid cell nearest to the LIS coordinates we used earlier\nsnodas_ts_lon, snodas_ts_lat = nearest_grid(snodas_depth_ds, (ts_lon, ts_lat))\n\n# define a date range to plot (shorter = quicker for demo)\nstart_date, end_date = ('2018-01-01', '2018-03-01')\nplot_daterange = slice(start_date, end_date)\n\n# select SNODAS grid cell and subset to plot_daterange\nsnodas_snd_subset_ds = snodas_depth_ds.sel(lon=snodas_ts_lon,\n lat=snodas_ts_lat,\n time=plot_daterange)\n\n# select LIS grid cell and subset to plot_daterange\nlis_snd_subset_ds = lis_output_ds['SnowDepth_tavg'].sel(lat=ts_lat,\n lon=ts_lon,\n time=plot_daterange)\n\n# create SNODAS snow depth plot\nsnodas_snd_plot = snodas_snd_subset_ds.hvplot(label='SNODAS')\n\n# create LIS snow depth plot\nlis_snd_plot = lis_snd_subset_ds.hvplot(label='LIS')\n\n# create SNODAS vs LIS snow depth plot\nlis_vs_snodas_snd_plot = (lis_snd_plot * snodas_snd_plot)\n\n# display the plot\nlis_vs_snodas_snd_plot.opts(title=f'Snow Depth @ Lon: {ts_lon}, Lat: {ts_lat}',\n legend_position='right',\n xlabel='Date',\n ylabel='Snow Depth (m)')", "_____no_output_____" ] ], [ [ "### LIS (raster) vs. SNODAS (raster) vs. SNOTEL (point)\n\nNow let's add SNOTEL point data to our plot.\n\nFirst, we're going to define some helper functions to load the SNOTEL data:", "_____no_output_____" ] ], [ [ "# load csv containing metadata for SNOTEL sites in a given state (e.g,. 
'colorado')\ndef load_site(state):\n \n # define the path to the file\n key = f\"SNOTEL/snotel_{state}.csv\"\n \n # load the csv into a pandas DataFrame\n df = pd.read_csv(s3.open(f's3://{bucket_name}/{key}', mode='r'))\n \n return df\n\n# load SNOTEL data for a specific site\ndef load_snotel_txt(state, var):\n \n # define the path to the file\n key = f\"SNOTEL/snotel_{state}{var}_20162020.txt\"\n \n # determine how many lines to skip in the file (they start with #)\n fh = s3.open(f\"{bucket_name}/{key}\")\n lines = fh.readlines()\n skips = sum(1 for ln in lines if ln.decode('ascii').startswith('#'))\n \n # load the data into a pandas DataFrame\n df = pd.read_csv(s3.open(f\"s3://{bucket_name}/{key}\"), skiprows=skips)\n \n # convert the Date column from strings to datetime objects\n df['Date'] = pd.to_datetime(df['Date'])\n return df", "_____no_output_____" ] ], [ [ "For the purposes of this tutorial let's load the SNOTEL data for sites in Colorado. We'll pick one site to plot in a few cells.", "_____no_output_____" ] ], [ [ "# load SNOTEL snow depth for Colorado into a dictionary\nsnotel_depth = {'CO': load_snotel_txt('CO', 'depth')}", "_____no_output_____" ] ], [ [ "We'll need another helper function to load the depth data:", "_____no_output_____" ] ], [ [ "# get snotel depth\ndef get_depth(state, site, start_date, end_date):\n \n # grab the depth for the given state (e.g., CO)\n df = snotel_depth[state]\n \n # define a date range mask\n mask = (df['Date'] >= start_date) & (df['Date'] <= end_date)\n \n # use mask to subset between time range\n df = df.loc[mask]\n \n # extract timeseries for the given site\n return pd.concat([df.Date, df.filter(like=site)], axis=1).set_index('Date')", "_____no_output_____" ] ], [ [ "Load the site metadata for Colorado:", "_____no_output_____" ] ], [ [ "co_sites = load_site('colorado')\n\n# peek at the first 5 rows\nco_sites.head()", "_____no_output_____" ] ], [ [ "The point we've been using so far in the tutorial actually corresponds to the coordinates for the Bear Lake SNOTEL site! Let's extract the site data for that point:", "_____no_output_____" ] ], [ [ "# get the depth data by passing the site name to the get_depth() function\nbear_lake_snd_df = get_depth('CO', 'Bear Lake (322)', start_date, end_date)\n\n# convert from cm to m\nbear_lake_snd_df = bear_lake_snd_df / 100", "_____no_output_____" ] ], [ [ "Now we're ready to plot:", "_____no_output_____" ] ], [ [ "# create SNOTEL plot\nbear_lake_plot = bear_lake_snd_df.hvplot(label='SNOTEL')\n\n# combine the SNOTEl plot with the LIS vs SNODAS plot\n(bear_lake_plot * lis_vs_snodas_snd_plot).opts(title=f'Snow Depth @ Lon: {ts_lon}, Lat: {ts_lat}', legend_position='right')", "_____no_output_____" ] ], [ [ "# Conclusion\n\nYou should now be more familiar with LIS data and how to interact with it in Python. The code in this notebook is a great jumping off point for developing more advanced comparisons and interactive widgets. For an example of what is possible, open the next notebook and run all the cells (Run > Run All Cells). After a few minutes, two interactive widgets will appear that allow you to explore and compare LIS output with SNODAS and SNOTEL data.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
Record 2
hexsha: e7a5be367e21239d0a150d0d29c009d23ddb572c
size: 75,351
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: code/03_run_yolo_train.ipynb
max_stars_repo_name: ccjaread/tianchi_tile_defect_detection
max_stars_repo_head_hexsha: 11741fd8211a725d009d09bdbac37ad35ec06908
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 12
max_stars_repo_stars_event_min_datetime: 2021-02-08T09:12:23.000Z
max_stars_repo_stars_event_max_datetime: 2021-05-25T14:49:40.000Z
max_issues_repo_path: code/03_run_yolo_train.ipynb
max_issues_repo_name: ccjaread/tianchi_tile_defect_detection
max_issues_repo_head_hexsha: 11741fd8211a725d009d09bdbac37ad35ec06908
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: 2
max_issues_repo_issues_event_min_datetime: 2021-02-03T15:12:37.000Z
max_issues_repo_issues_event_max_datetime: 2021-04-27T10:09:13.000Z
max_forks_repo_path: code/03_run_yolo_train.ipynb
max_forks_repo_name: ccjaread/tianchi_tile_defect_detection
max_forks_repo_head_hexsha: 11741fd8211a725d009d09bdbac37ad35ec06908
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 3
max_forks_repo_forks_event_min_datetime: 2021-02-20T01:37:16.000Z
max_forks_repo_forks_event_max_datetime: 2021-06-30T09:08:42.000Z
avg_line_length: 56.740211
max_line_length: 1,623
alphanum_fraction: 0.45945
cells, cell_types, cell_type_groups:
[ [ [ "import os\n\nos.chdir('../code/yolov5/')\n\nos.getcwd()", "_____no_output_____" ] ], [ [ "``` python\n!python train.py --img 160 --batch 4 --epochs 5 --data ./data/train_imgs_sliced_160.yaml --cfg ./models/yolov5s.yaml --weights '' \n```", "_____no_output_____" ], [ "``` python\n!python train.py --img 640 --batch 10 --epochs 100 --data ./data/train_imgs_sliced_640_val.yaml --cfg ./models/yolov5_tile.yaml --weights '' \n``` ", "_____no_output_____" ], [ "``` python\n%run train.py --img 320 --batch 5 --epochs 5 --data ./data/train_imgs_sliced_320.yaml --cfg ./models/yolov5s.yaml --weights '' \n```", "_____no_output_____" ], [ "%run train.py --img 320 --batch 5 --epochs 60 --data ./data/train_imgs_sliced_320_val.yaml --cfg ./models/yolov5st.yaml --weights ./runs/train/exp40/weights/last.pt", "_____no_output_____" ] ], [ [ "%run train.py --img 320 --batch 5 --epochs 60 --data ./data/train_imgs_sliced_320_val.yaml --weights ./runs/train/exp40/weights/last.pt", "Using torch 1.6.0+cu101 CUDA:0 (GeForce RTX 2060, 6144MB)\n\nNamespace(adam=False, batch_size=5, bucket='', cache_images=False, cfg='', data='./data/train_imgs_sliced_320_val.yaml', device='', epochs=60, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[320, 320], local_rank=-1, log_imgs=16, multi_scale=False, name='exp', noautoanchor=False, nosave=False, notest=False, project='runs/train', rect=False, resume=False, save_dir='runs\\\\train\\\\exp41', single_cls=False, sync_bn=False, total_batch_size=5, weights='./runs/train/exp40/weights/last.pt', workers=8, world_size=1)\nStart Tensorboard with \"tensorboard --logdir runs/train\", view at http://localhost:6006/\nHyperparameters {'lr0': 0.01, 'lrf': 0.2, 'momentum': 0.937, 'weight_decay': 0.0005, 'warmup_epochs': 3.0, 'warmup_momentum': 0.8, 'warmup_bias_lr': 0.1, 'box': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0, 'flipud': 0.0, 'fliplr': 0.5, 'mosaic': 1.0, 'mixup': 0.0}\n\n from n params module arguments \n 0 -1 1 3520 models.common.Focus [3, 32, 3] \n 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] \n 2 -1 1 19904 models.common.BottleneckCSP [64, 64, 1] \n 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] \n 4 -1 1 161152 models.common.BottleneckCSP [128, 128, 3] \n 5 -1 1 295424 models.common.Conv [128, 256, 3, 2] \n 6 -1 1 641792 models.common.BottleneckCSP [256, 256, 3] \n 7 -1 1 1180672 models.common.Conv [256, 512, 3, 2] \n 8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]] \n 9 -1 1 1248768 models.common.BottleneckCSP [512, 512, 1, False] \n 10 -1 1 131584 models.common.Conv [512, 256, 1, 1] \n 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n 12 [-1, 6] 1 0 models.common.Concat [1] \n 13 -1 1 378624 models.common.BottleneckCSP [512, 256, 1, False] \n 14 -1 1 33024 models.common.Conv [256, 128, 1, 1] \n 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n 16 [-1, 4] 1 0 models.common.Concat [1] \n 17 -1 1 95104 models.common.BottleneckCSP [256, 128, 1, False] \n 18 -1 1 147712 models.common.Conv [128, 128, 3, 2] \n 19 [-1, 14] 1 0 models.common.Concat [1] \n 20 -1 1 313088 models.common.BottleneckCSP [256, 256, 1, False] \n 21 -1 1 590336 models.common.Conv [256, 256, 3, 2] \n 22 [-1, 10] 1 0 models.common.Concat [1] \n 23 -1 1 1248768 models.common.BottleneckCSP [512, 512, 1, False] \n 24 
[17, 20, 23] 1 32364 models.yolo.Detect [7, [[9, 9, 14, 14, 23, 20], [61, 25, 34, 50, 55, 108], [124, 49, 237, 85, 111, 224]], [128, 256, 512]]\nModel Summary: 283 layers, 7271276 parameters, 7271276 gradients\n\nTransferred 370/370 items from ./runs/train/exp40/weights/last.pt\nOptimizer groups: 62 .bias, 70 conv.weight, 59 other\nScanning '..\\train_imgs_sliced_all_320_val6\\labels.cache' for images and labels... 28267 found, 0 missing, 0 empty, 0 c\nScanning '..\\train_imgs_sliced_all_320_val\\labels.cache' for images and labels... 14548 found, 0 missing, 0 empty, 0 co\nImage sizes 320 train, 320 test\nUsing 5 dataloader workers\nLogging results to runs\\train\\exp41\nStarting training for 60 epochs...\n\n Epoch gpu_mem box obj cls total targets img_size\n 0%| | 0/5654 [00:00<?, ?it/s]" ], [ "%run train.py --img 320 --batch 5 --epochs 60 --data ./data/train_imgs_sliced_320_val.yaml --cfg ./models/yolov5s_se.yaml --weights ''", "Using torch 1.6.0+cu101 CUDA:0 (GeForce RTX 2060, 6144MB)\n\nNamespace(adam=False, batch_size=5, bucket='', cache_images=False, cfg='./models/yolov5s_se.yaml', data='./data/train_imgs_sliced_320_val.yaml', device='', epochs=60, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[320, 320], local_rank=-1, log_imgs=16, multi_scale=False, name='exp', noautoanchor=False, nosave=False, notest=False, project='runs/train', rect=False, resume=False, save_dir='runs\\\\train\\\\exp48', single_cls=False, sync_bn=False, total_batch_size=5, weights=\"''\", workers=8, world_size=1)\nStart Tensorboard with \"tensorboard --logdir runs/train\", view at http://localhost:6006/\nHyperparameters {'lr0': 0.01, 'lrf': 0.2, 'momentum': 0.937, 'weight_decay': 0.0005, 'warmup_epochs': 3.0, 'warmup_momentum': 0.8, 'warmup_bias_lr': 0.1, 'box': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0, 'flipud': 0.0, 'fliplr': 0.5, 'mosaic': 1.0, 'mixup': 0.0}\n\n from n params module arguments \n 0 -1 1 3520 models.common.Focus [3, 32, 3] \n 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] \n 2 -1 1 19904 models.common.BottleneckCSP [64, 64, 1] \n 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] \n 4 -1 1 161152 models.common.BottleneckCSP [128, 128, 3] \n 5 -1 1 2048 models.common.SELayer [128, 16] \n 6 -1 1 295424 models.common.Conv [128, 256, 3, 2] \n 7 -1 1 641792 models.common.BottleneckCSP [256, 256, 3] \n 8 -1 1 8192 models.common.SELayer [256, 16] \n 9 -1 1 1180672 models.common.Conv [256, 512, 3, 2] \n 10 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]] \n 11 -1 1 1248768 models.common.BottleneckCSP [512, 512, 1, False] \n 12 -1 1 131584 models.common.Conv [512, 256, 1, 1] \n 13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n 14 [-1, 8] 1 0 models.common.Concat [1] \n 15 -1 1 378624 models.common.BottleneckCSP [512, 256, 1, False] \n 16 -1 1 33024 models.common.Conv [256, 128, 1, 1] \n 17 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n 18 [-1, 5] 1 0 models.common.Concat [1] \n 19 -1 1 95104 models.common.BottleneckCSP [256, 128, 1, False] \n 20 -1 1 147712 models.common.Conv [128, 128, 3, 2] \n 21 [-1, 16] 1 0 models.common.Concat [1] \n 22 -1 1 313088 models.common.BottleneckCSP [256, 256, 1, False] \n 23 -1 1 590336 models.common.Conv [256, 256, 3, 2] \n 24 [-1, 12] 1 0 models.common.Concat [1] \n 25 -1 1 1248768 
models.common.BottleneckCSP [512, 512, 1, False] \n 26 [19, 22, 25] 1 32364 models.yolo.Detect [7, [[9, 9, 14, 14, 23, 20], [61, 25, 34, 50, 55, 108], [124, 49, 237, 85, 111, 224]], [128, 256, 512]]\nModel Summary: 297 layers, 7281516 parameters, 7281516 gradients\n\nOptimizer groups: 62 .bias, 74 conv.weight, 59 other\nScanning '..\\train_imgs_sliced_all_320_val6\\labels.cache' for images and labels... 28267 found, 0 missing, 0 empty, 0 c\nScanning '..\\train_imgs_sliced_all_320_val\\labels.cache' for images and labels... 14548 found, 0 missing, 0 empty, 0 co\nImage sizes 320 train, 320 test\nUsing 5 dataloader workers\nLogging results to runs\\train\\exp48\nStarting training for 60 epochs...\n\n Epoch gpu_mem box obj cls total targets img_size\n 0%| | 0/5654 [00:00<?, ?it/s]" ], [ "from utils.plots import plot_results \nplot_results(save_dir='./runs/train/exp21') ", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ] ]
Record 3
hexsha: e7a5c0884283c2a49ba65e0dbe3be5e9f6c9c4a8
size: 192,040
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: bert4sentiment_pytorch.ipynb
max_stars_repo_name: nluninja/bert4sentiment_pytorch
max_stars_repo_head_hexsha: e35dccc65885468a7597ce6bbe16ecd656367ea3
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: bert4sentiment_pytorch.ipynb
max_issues_repo_name: nluninja/bert4sentiment_pytorch
max_issues_repo_head_hexsha: e35dccc65885468a7597ce6bbe16ecd656367ea3
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: bert4sentiment_pytorch.ipynb
max_forks_repo_name: nluninja/bert4sentiment_pytorch
max_forks_repo_head_hexsha: e35dccc65885468a7597ce6bbe16ecd656367ea3
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 78.996298
max_line_length: 44,436
alphanum_fraction: 0.813403
cells, cell_types, cell_type_groups:
[ [ [ "# bert4sentiment - an easy implementation with BERT with Hugggingface for sentiment analysis\n\nLet's build a Sentiment Classifier using the amazing Transformers library by Hugging Face!\n\nLoad the ber4sentiment environment. Type from the project folder type \n`conda env create -f configuration.yml` \n\nthis will create a conda _bert4sentiment_ environment. then type \n`conda activate bert4sentiment`\n\nand run the notebook\n\n`jupyter notebook`\n", "_____no_output_____" ] ], [ [ "%reload_ext watermark\n%watermark -v -p numpy,pandas,torch,transformers", "Python implementation: CPython\nPython version : 3.9.6\nIPython version : 7.25.0\n\nnumpy : 1.21.0\npandas : 1.3.0\ntorch : 1.9.0\ntransformers: 4.8.2\n\n" ], [ "#@title Setup & Config\nimport transformers\nfrom transformers import BertModel, BertTokenizer, AdamW, get_linear_schedule_with_warmup\nimport torch\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom pylab import rcParams\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, classification_report\nfrom collections import defaultdict\nfrom textwrap import wrap\n\nfrom torch import nn, optim\nfrom torch.utils.data import Dataset, DataLoader\nimport torch.nn.functional as F\n\n%matplotlib inline\n%config InlineBackend.figure_format='retina'\n\nsns.set(style='whitegrid', palette='muted', font_scale=1.2)\n\nHAPPY_COLORS_PALETTE = [\"#01BEFE\", \"#FFDD00\", \"#FF7D00\", \"#FF006D\", \"#ADFF02\", \"#8F00FF\"]\n\nsns.set_palette(sns.color_palette(HAPPY_COLORS_PALETTE))\n\nrcParams['figure.figsize'] = 12, 8\n\nRANDOM_SEED = 42\nnp.random.seed(RANDOM_SEED)\ntorch.manual_seed(RANDOM_SEED)\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\ndevice", "_____no_output_____" ] ], [ [ "## Data Exploration\n\nWe'll load the Google Play app reviews dataset, that we've put together in the previous part:", "_____no_output_____" ] ], [ [ "df = pd.read_csv(\"reviews.csv\")\ndf.head()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ] ], [ [ "We have about 16k examples. Let's check for missing values:", "_____no_output_____" ] ], [ [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 15746 entries, 0 to 15745\nData columns (total 11 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 userName 15746 non-null object\n 1 userImage 15746 non-null object\n 2 content 15746 non-null object\n 3 score 15746 non-null int64 \n 4 thumbsUpCount 15746 non-null int64 \n 5 reviewCreatedVersion 13533 non-null object\n 6 at 15746 non-null object\n 7 replyContent 7367 non-null object\n 8 repliedAt 7367 non-null object\n 9 sortOrder 15746 non-null object\n 10 appId 15746 non-null object\ndtypes: int64(2), object(9)\nmemory usage: 1.3+ MB\n" ] ], [ [ "Great, no missing values in the score and review texts! Do we have class imbalance?", "_____no_output_____" ] ], [ [ "sns.countplot(x=df.score)\nplt.xlabel('review score');", "_____no_output_____" ] ], [ [ "That's hugely imbalanced, but it's okay. 
We're going to convert the dataset into negative, neutral and positive sentiment:", "_____no_output_____" ] ], [ [ "def to_sentiment(rating):\n rating = int(rating)\n if rating <= 2:\n return 0\n elif rating == 3:\n return 1\n else: \n return 2\n\ndf['sentiment'] = df.score.apply(to_sentiment)", "_____no_output_____" ], [ "class_names = ['negative', 'neutral', 'positive']", "_____no_output_____" ], [ "ax = sns.countplot(x=df.sentiment)\nplt.xlabel('review sentiment')\nax.set_xticklabels(class_names);", "_____no_output_____" ] ], [ [ "The balance was (mostly) restored.", "_____no_output_____" ], [ "## Data Preprocessing\n\nWe have to prepare the data for the Transformers that means to: \n\n- Add special tokens to separate sentences and do classification\n- Pass sequences of constant length (introduce padding)\n- Create array of 0s (pad token) and 1s (real token) called *attention mask*\n", "_____no_output_____" ] ], [ [ "PRE_TRAINED_MODEL_NAME = 'bert-base-cased'", "_____no_output_____" ] ], [ [ "Let's load a pre-trained [BertTokenizer](https://huggingface.co/transformers/model_doc/bert.html#berttokenizer):", "_____no_output_____" ] ], [ [ "tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)", "_____no_output_____" ] ], [ [ "We'll use this text to understand the tokenization process:", "_____no_output_____" ] ], [ [ "sample_txt = 'When was I last outside? I am stuck at home for 2 weeks.'", "_____no_output_____" ] ], [ [ "Some basic operations can convert the text to tokens and tokens to unique integers (ids):", "_____no_output_____" ] ], [ [ "tokens = tokenizer.tokenize(sample_txt)\ntoken_ids = tokenizer.convert_tokens_to_ids(tokens)\n\nprint(f' Sentence: {sample_txt}')\nprint(f' Tokens: {tokens}')\nprint(f'Token IDs: {token_ids}')", " Sentence: When was I last outside? I am stuck at home for 2 weeks.\n Tokens: ['When', 'was', 'I', 'last', 'outside', '?', 'I', 'am', 'stuck', 'at', 'home', 'for', '2', 'weeks', '.']\nToken IDs: [1332, 1108, 146, 1314, 1796, 136, 146, 1821, 5342, 1120, 1313, 1111, 123, 2277, 119]\n" ] ], [ [ "### Special Tokens\n\n`[SEP]` - marker for ending of a sentence\n", "_____no_output_____" ] ], [ [ "tokenizer.sep_token, tokenizer.sep_token_id", "_____no_output_____" ] ], [ [ "`[CLS]` - we must add this token to the start of each sentence, so BERT knows we're doing classification", "_____no_output_____" ] ], [ [ "tokenizer.cls_token, tokenizer.cls_token_id", "_____no_output_____" ] ], [ [ "There is also a special token for padding:", "_____no_output_____" ] ], [ [ "tokenizer.pad_token, tokenizer.pad_token_id", "_____no_output_____" ] ], [ [ "BERT understands tokens that were in the training set. 
Everything else can be encoded using the `[UNK]` (unknown) token:", "_____no_output_____" ] ], [ [ "tokenizer.unk_token, tokenizer.unk_token_id", "_____no_output_____" ] ], [ [ "All of that work can be done using the [`encode_plus()`](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus) method:", "_____no_output_____" ] ], [ [ "encoding = tokenizer.encode_plus(\n sample_txt,\n max_length=32,\n add_special_tokens=True, # Add '[CLS]' and '[SEP]'\n return_token_type_ids=False,\n pad_to_max_length=True,\n return_attention_mask=True,\n return_tensors='pt', # Return PyTorch tensors\n truncation=True,\n)\n\nencoding.keys()", "/home/test/anaconda3/envs/bert4sentiment/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2126: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).\n warnings.warn(\n" ] ], [ [ "The token ids are now stored in a Tensor and padded to a length of 32:", "_____no_output_____" ] ], [ [ "print(len(encoding['input_ids'][0]))\nencoding['input_ids'][0]", "32\n" ] ], [ [ "The attention mask has the same length:", "_____no_output_____" ] ], [ [ "print(len(encoding['attention_mask'][0]))\nencoding['attention_mask']", "32\n" ] ], [ [ "We can inverse the tokenization to have a look at the special tokens:", "_____no_output_____" ] ], [ [ "tokenizer.convert_ids_to_tokens(encoding['input_ids'][0])", "_____no_output_____" ] ], [ [ "### Choosing Sequence Length\n\nBERT works with fixed-length sequences. We'll use a simple strategy to choose the max length. Let's store the token length of each review:", "_____no_output_____" ] ], [ [ "token_lens = []\n\nfor txt in df.content:\n tokens = tokenizer.encode(txt, max_length=512)\n token_lens.append(len(tokens))", "Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\n" ] ], [ [ "and plot the distribution:", "_____no_output_____" ] ], [ [ "sns.histplot(x=token_lens)\nplt.xlim([0, 256]);\nplt.xlabel('Token count');", "_____no_output_____" ] ], [ [ "Most of the reviews seem to contain less than 128 tokens, but we'll be on the safe side and choose a maximum length of 160.", "_____no_output_____" ] ], [ [ "MAX_LEN = 160", "_____no_output_____" ] ], [ [ "We have all building blocks required to create a PyTorch dataset. 
Let's do it:", "_____no_output_____" ] ], [ [ "class GPReviewDataset(Dataset):\n\n def __init__(self, reviews, targets, tokenizer, max_len):\n self.reviews = reviews\n self.targets = targets\n self.tokenizer = tokenizer\n self.max_len = max_len\n \n def __len__(self):\n return len(self.reviews)\n \n def __getitem__(self, item):\n review = str(self.reviews[item])\n target = self.targets[item]\n\n encoding = self.tokenizer.encode_plus(\n review,\n add_special_tokens=True,\n max_length=self.max_len,\n return_token_type_ids=False,\n pad_to_max_length=True,\n return_attention_mask=True,\n return_tensors='pt',\n )\n\n return {\n 'review_text': review,\n 'input_ids': encoding['input_ids'].flatten(),\n 'attention_mask': encoding['attention_mask'].flatten(),\n 'targets': torch.tensor(target, dtype=torch.long)\n }", "_____no_output_____" ] ], [ [ "The tokenizer is doing most of the heavy lifting for us. We also return the review texts, so it'll be easier to evaluate the predictions from our model. Let's split the data:", "_____no_output_____" ] ], [ [ "df_train, df_test = train_test_split(df, test_size=0.1, random_state=RANDOM_SEED)\ndf_val, df_test = train_test_split(df_test, test_size=0.5, random_state=RANDOM_SEED)", "_____no_output_____" ], [ "df_train.shape, df_val.shape, df_test.shape", "_____no_output_____" ] ], [ [ "We also need to create a couple of data loaders. Here's a helper function to do it:", "_____no_output_____" ] ], [ [ "def create_data_loader(df, tokenizer, max_len, batch_size):\n ds = GPReviewDataset(\n reviews=df.content.to_numpy(),\n targets=df.sentiment.to_numpy(),\n tokenizer=tokenizer,\n max_len=max_len\n )\n\n return DataLoader(\n ds,\n batch_size=batch_size,\n num_workers=4\n )", "_____no_output_____" ], [ "BATCH_SIZE = 16\n\ntrain_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)\nval_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)\ntest_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)", "_____no_output_____" ] ], [ [ "Let's have a look at an example batch from our training data loader:", "_____no_output_____" ] ], [ [ "data = next(iter(train_data_loader))\ndata.keys()", "/home/test/anaconda3/envs/bert4sentiment/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2126: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).\n warnings.warn(\n/home/test/anaconda3/envs/bert4sentiment/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2126: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 
512 for Bert).\n warnings.warn(\n/home/test/anaconda3/envs/bert4sentiment/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2126: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).\n warnings.warn(\n/home/test/anaconda3/envs/bert4sentiment/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2126: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).\n warnings.warn(\n" ], [ "print(data['input_ids'].shape)\nprint(data['attention_mask'].shape)\nprint(data['targets'].shape)", "torch.Size([16, 160])\ntorch.Size([16, 160])\ntorch.Size([16])\n" ] ], [ [ "## Sentiment Classification with BERT and Hugging Face", "_____no_output_____" ], [ "We'll use the basic [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) and build our sentiment classifier on top of it. Let's load the model:", "_____no_output_____" ] ], [ [ "bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict = False)\n", "Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight']\n- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n" ] ], [ [ "And try to use it on the encoding of our sample text:", "_____no_output_____" ] ], [ [ "last_hidden_state, pooled_output = bert_model(\n input_ids=encoding['input_ids'], \n attention_mask=encoding['attention_mask'], return_dict=False\n)", "_____no_output_____" ] ], [ [ "The `last_hidden_state` is a sequence of hidden states of the last layer of the model. Obtaining the `pooled_output` is done by applying the [BertPooler](https://github.com/huggingface/transformers/blob/edf0582c0be87b60f94f41c659ea779876efc7be/src/transformers/modeling_bert.py#L426) on `last_hidden_state`:", "_____no_output_____" ] ], [ [ "last_hidden_state.shape", "_____no_output_____" ] ], [ [ "We have the hidden state for each of our 32 tokens (the length of our example sequence). But why 768? This is the number of hidden units in the feedforward-networks. 
We can verify that by checking the config:", "_____no_output_____" ] ], [ [ "bert_model.config.hidden_size", "_____no_output_____" ] ], [ [ "\n\nYou can think of the `pooled_output` as a summary of the content, according to BERT. Albeit, you might try and do better. Let's look at the shape of the output:", "_____no_output_____" ] ], [ [ "pooled_output.shape", "_____no_output_____" ] ], [ [ "We can use all of this knowledge to create a classifier that uses the BERT model:", "_____no_output_____" ] ], [ [ "class SentimentClassifier(nn.Module):\n def __init__(self, n_classes):\n super(SentimentClassifier, self).__init__()\n self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)\n self.drop = nn.Dropout(p=0.3)\n self.out = nn.Linear(self.bert.config.hidden_size, n_classes)\n\n def forward(self, input_ids, attention_mask):\n bertOutput = self.bert(\n input_ids=input_ids,\n attention_mask=attention_mask\n )\n output = self.drop(bertOutput['pooler_output'])\n\n return self.out(output)", "_____no_output_____" ] ], [ [ "Our classifier delegates most of the heavy lifting to the BertModel. We use a dropout layer for some regularization and a fully-connected layer for our output. Note that we're returning the raw output of the last layer since that is required for the cross-entropy loss function in PyTorch to work.\n\nThis should work like any other PyTorch model. Let's create an instance and move it to the GPU:", "_____no_output_____" ] ], [ [ "model = SentimentClassifier(len(class_names))\nmodel = model.to(device)", "Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight']\n- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n" ] ], [ [ "We'll move the example batch of our training data to the GPU:", "_____no_output_____" ] ], [ [ "input_ids = data['input_ids'].to(device)\nattention_mask = data['attention_mask'].to(device)\n\nprint(input_ids.shape) # batch size x seq length\nprint(attention_mask.shape) # batch size x seq length", "torch.Size([16, 160])\ntorch.Size([16, 160])\n" ] ], [ [ "To get the predicted probabilities from our trained model, we'll apply the softmax function to the outputs:", "_____no_output_____" ] ], [ [ "F.softmax(model(input_ids, attention_mask), dim=1)", "_____no_output_____" ] ], [ [ "### Training", "_____no_output_____" ], [ " we'll use the [AdamW](https://huggingface.co/transformers/main_classes/optimizer_schedules.html#adamw) optimizer provided by Hugging Face that corrects weight decay.", "_____no_output_____" ] ], [ [ "EPOCHS = 100\n\noptimizer = AdamW(model.parameters(), lr=2e-5, correct_bias=False)\ntotal_steps = len(train_data_loader) * EPOCHS\n\nscheduler = get_linear_schedule_with_warmup(\n optimizer,\n num_warmup_steps=0,\n num_training_steps=total_steps\n)\n\nloss_fn = nn.CrossEntropyLoss().to(device)", "_____no_output_____" ] ], [ [ "How do we come up with all hyperparameters? The BERT authors have some recommendations for fine-tuning:\n\n- Batch size: 16, 32\n- Learning rate (Adam): 5e-5, 3e-5, 2e-5\n- Number of epochs: 2, 3, 4\n\nWe're going to ignore the number of epochs recommendation but stick with the rest. Note that increasing the batch size reduces the training time significantly, but gives you lower accuracy.\n\nLet's continue with writing a helper function for training our model for one epoch:", "_____no_output_____" ] ], [ [ "def train_epoch(\n model, \n data_loader, \n loss_fn, \n optimizer, \n device, \n scheduler, \n n_examples\n):\n model = model.train()\n\n losses = []\n correct_predictions = 0\n \n for d in data_loader:\n input_ids = d[\"input_ids\"].to(device)\n attention_mask = d[\"attention_mask\"].to(device)\n targets = d[\"targets\"].to(device)\n\n outputs = model(\n input_ids=input_ids,\n attention_mask=attention_mask\n )\n\n _, preds = torch.max(outputs, dim=1)\n loss = loss_fn(outputs, targets)\n\n correct_predictions += torch.sum(preds == targets)\n losses.append(loss.item())\n\n loss.backward()\n nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)\n optimizer.step()\n scheduler.step()\n optimizer.zero_grad()\n\n return correct_predictions.double() / n_examples, np.mean(losses)", "_____no_output_____" ] ], [ [ "Training the model should look familiar, except for two things. The scheduler gets called every time a batch is fed to the model. 
We're avoiding exploding gradients by clipping the gradients of the model using [clip_grad_norm_](https://pytorch.org/docs/stable/nn.html#clip-grad-norm).\n\nLet's write another one that helps us evaluate the model on a given data loader:", "_____no_output_____" ] ], [ [ "def eval_model(model, data_loader, loss_fn, device, n_examples):\n model = model.eval()\n\n losses = []\n correct_predictions = 0\n\n with torch.no_grad():\n for d in data_loader:\n input_ids = d[\"input_ids\"].to(device)\n attention_mask = d[\"attention_mask\"].to(device)\n targets = d[\"targets\"].to(device)\n\n outputs = model(\n input_ids=input_ids,\n attention_mask=attention_mask\n )\n _, preds = torch.max(outputs, dim=1)\n\n loss = loss_fn(outputs, targets)\n\n correct_predictions += torch.sum(preds == targets)\n losses.append(loss.item())\n\n return correct_predictions.double() / n_examples, np.mean(losses)", "_____no_output_____" ] ], [ [ "Using those two, we can write our training loop. We'll also store the training history:", "_____no_output_____" ] ], [ [ "%%time\n\nhistory = defaultdict(list)\nbest_accuracy = 0\n\nfor epoch in range(EPOCHS):\n\n print(f'Epoch {epoch + 1}/{EPOCHS}')\n print('-' * 10)\n\n train_acc, train_loss = train_epoch(\n model,\n train_data_loader, \n loss_fn, \n optimizer, \n device, \n scheduler, \n len(df_train)\n )\n\n print(f'Train loss {train_loss} accuracy {train_acc}')\n\n val_acc, val_loss = eval_model(\n model,\n val_data_loader,\n loss_fn, \n device, \n len(df_val)\n )\n\n print(f'Val loss {val_loss} accuracy {val_acc}')\n print()\n\n history['train_acc'].append(train_acc)\n history['train_loss'].append(train_loss)\n history['val_acc'].append(val_acc)\n history['val_loss'].append(val_loss)\n\n if val_acc > best_accuracy:\n torch.save(model.state_dict(), 'best_model_state.bin')\n best_accuracy = val_acc", "Epoch 1/100\n----------\n" ] ], [ [ "Note that we're storing the state of the best model, indicated by the highest validation accuracy.", "_____no_output_____" ], [ "Whoo, this took some time! We can look at the training vs validation accuracy:", "_____no_output_____" ] ], [ [ "plt.plot(history['train_acc'], label='train accuracy')\nplt.plot(history['val_acc'], label='validation accuracy')\n\nplt.title('Training history')\nplt.ylabel('Accuracy')\nplt.xlabel('Epoch')\nplt.legend()\nplt.ylim([0, 1]);", "_____no_output_____" ] ], [ [ "The training accuracy starts to approach 100% after 10 epochs or so. You might try to fine-tune the parameters a bit more, but this will be good enough for us.\n\n", "_____no_output_____" ] ], [ [ "# !gdown --id 1V8itWtowCYnb2Bc9KlK9SxGff9WwmogA\n\n#model = SentimentClassifier(len(class_names))\n#model.load_state_dict(torch.load('best_model_state.bin'))\n#model = model.to(device)", "_____no_output_____" ] ], [ [ "## Evaluation\n\nSo how good is our model on predicting sentiment? Let's start by calculating the accuracy on the test data:", "_____no_output_____" ] ], [ [ "test_acc, _ = eval_model(\n model,\n test_data_loader,\n loss_fn,\n device,\n len(df_test)\n)\n\ntest_acc.item()", "_____no_output_____" ] ], [ [ "The accuracy is about 1% lower on the test set. 
Our model seems to generalize well.\n\nWe'll define a helper function to get the predictions from our model:", "_____no_output_____" ] ], [ [ "def get_predictions(model, data_loader):\n model = model.eval()\n \n review_texts = []\n predictions = []\n prediction_probs = []\n real_values = []\n\n with torch.no_grad():\n for d in data_loader:\n\n texts = d[\"review_text\"]\n input_ids = d[\"input_ids\"].to(device)\n attention_mask = d[\"attention_mask\"].to(device)\n targets = d[\"targets\"].to(device)\n\n outputs = model(\n input_ids=input_ids,\n attention_mask=attention_mask\n )\n _, preds = torch.max(outputs, dim=1)\n\n probs = F.softmax(outputs, dim=1)\n\n review_texts.extend(texts)\n predictions.extend(preds)\n prediction_probs.extend(probs)\n real_values.extend(targets)\n\n predictions = torch.stack(predictions).cpu()\n prediction_probs = torch.stack(prediction_probs).cpu()\n real_values = torch.stack(real_values).cpu()\n return review_texts, predictions, prediction_probs, real_values", "_____no_output_____" ] ], [ [ "This is similar to the evaluation function, except that we're storing the text of the reviews and the predicted probabilities (by applying the softmax on the model outputs):", "_____no_output_____" ] ], [ [ "y_review_texts, y_pred, y_pred_probs, y_test = get_predictions(\n model,\n test_data_loader\n)", "_____no_output_____" ] ], [ [ "Let's have a look at the classification report", "_____no_output_____" ] ], [ [ "print(classification_report(y_test, y_pred, target_names=class_names))", "_____no_output_____" ] ], [ [ "Looks like it is really hard to classify neutral (3 stars) reviews. And I can tell you from experience, looking at many reviews, those are hard to classify.\n\nWe'll continue with the confusion matrix:", "_____no_output_____" ] ], [ [ "def show_confusion_matrix(confusion_matrix):\n hmap = sns.heatmap(confusion_matrix, annot=True, fmt=\"d\", cmap=\"Blues\")\n hmap.yaxis.set_ticklabels(hmap.yaxis.get_ticklabels(), rotation=0, ha='right')\n hmap.xaxis.set_ticklabels(hmap.xaxis.get_ticklabels(), rotation=30, ha='right')\n plt.ylabel('True sentiment')\n plt.xlabel('Predicted sentiment');\n\ncm = confusion_matrix(y_test, y_pred)\ndf_cm = pd.DataFrame(cm, index=class_names, columns=class_names)\nshow_confusion_matrix(df_cm)", "_____no_output_____" ] ], [ [ "This confirms that our model is having difficulty classifying neutral reviews. It mistakes those for negative and positive at a roughly equal frequency.\n\nThat's a good overview of the performance of our model. But let's have a look at an example from our test data:", "_____no_output_____" ] ], [ [ "idx = 2\n\nreview_text = y_review_texts[idx]\ntrue_sentiment = y_test[idx]\n", "_____no_output_____" ], [ "print(\"\\n\".join(wrap(review_text)))\nprint()\nprint(f'True sentiment: {class_names[true_sentiment]}')", "_____no_output_____" ] ], [ [ "Now we can look at the confidence of each sentiment of our model:", "_____no_output_____" ] ], [ [ "\npred_df = pd.DataFrame({\n 'class_names': class_names,\n 'values': y_pred_probs[idx].tolist() #converting tensor to numbers\n})\n", "_____no_output_____" ], [ "sns.barplot(x='values', y='class_names', data=pred_df, orient='h')\nplt.ylabel('sentiment')\nplt.xlabel('probability')\nplt.xlim([0, 1]);", "_____no_output_____" ] ], [ [ "### Predicting on Raw Text\n\nLet's use our model to predict the sentiment of some raw text:", "_____no_output_____" ] ], [ [ "review_text = \"I love completing my todos! 
Best app ever!!!\"", "_____no_output_____" ] ], [ [ "We have to use the tokenizer to encode the text:", "_____no_output_____" ] ], [ [ "encoded_review = tokenizer.encode_plus(\n review_text,\n max_length=MAX_LEN,\n add_special_tokens=True,\n return_token_type_ids=False,\n pad_to_max_length=True,\n return_attention_mask=True,\n return_tensors='pt',\n)", "_____no_output_____" ] ], [ [ "Let's get the predictions from our model:", "_____no_output_____" ] ], [ [ "input_ids = encoded_review['input_ids'].to(device)\nattention_mask = encoded_review['attention_mask'].to(device)\n\noutput = model(input_ids, attention_mask)\n_, prediction = torch.max(output, dim=1)\n\nprint(f'Review text: {review_text}')\nprint(f'Sentiment : {class_names[prediction]}')", "_____no_output_____" ] ], [ [ "\nNice job! You learned how to use BERT for sentiment analysis. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7a5c81875ec8fdbd3f1b77ca176eaa2d69376b5
193,815
ipynb
Jupyter Notebook
Notebooks/model Building2.ipynb
soumya997/Smart-G-Form
d02a2525fe5cc3fb9e14c4b9ab320ea687e53b75
[ "MIT" ]
3
2020-12-13T14:43:06.000Z
2021-03-23T17:27:30.000Z
Notebooks/model Building2.ipynb
soumya997/Smart-G-Form
d02a2525fe5cc3fb9e14c4b9ab320ea687e53b75
[ "MIT" ]
null
null
null
Notebooks/model Building2.ipynb
soumya997/Smart-G-Form
d02a2525fe5cc3fb9e14c4b9ab320ea687e53b75
[ "MIT" ]
1
2020-12-12T04:36:41.000Z
2020-12-12T04:36:41.000Z
193,815
193,815
0.853654
[ [ [ "# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load\n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the read-only \"../input/\" directory\n# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory\n\nimport os\nfor dirname, _, filenames in os.walk('/kaggle/input'):\n for filename in filenames:\n print(os.path.join(dirname, filename))\n\n# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using \"Save & Run All\" \n# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session", "/kaggle/input/asap-aes/valid_sample_submission_1_column.csv\n/kaggle/input/asap-aes/Training_Materials.zip\n/kaggle/input/asap-aes/training_set_rel3.xls\n/kaggle/input/asap-aes/valid_sample_submission_1_column_no_header.csv\n/kaggle/input/asap-aes/Essay_Set_Descriptions.zip\n/kaggle/input/asap-aes/training_set_rel3.xlsx\n/kaggle/input/asap-aes/valid_set.xls\n/kaggle/input/asap-aes/training_set_rel3.tsv\n/kaggle/input/asap-aes/valid_sample_submission_5_column.csv\n/kaggle/input/asap-aes/valid_set.xlsx\n/kaggle/input/asap-aes/valid_set.tsv\n/kaggle/input/asap-aes/test_set.tsv\n/kaggle/input/asap-aes/valid_sample_submission_2_column.csv\n" ], [ "import os\nimport pandas as pd\n\ndf = pd.read_csv(\"../input/asap-aes/training_set_rel3.tsv\", sep='\\t', encoding='ISO-8859-1')\ndf = df.dropna(axis=1)\ndf = df.drop(columns=['rater1_domain1', 'rater2_domain1'])\ndf = df.drop(columns=['essay_id', 'essay_set'])\ndf.head()", "_____no_output_____" ], [ "df['essay'][1]", "_____no_output_____" ], [ "import warnings\nwarnings.filterwarnings(\"ignore\") #Ignoring unnecessory warnings\n\nimport numpy as np #for large and multi-dimensional arrays\nimport pandas as pd #for data manipulation and analysis\nimport nltk #Natural language processing tool-kit\n\nfrom nltk.corpus import stopwords #Stopwords corpus\nfrom nltk.stem import PorterStemmer # Stemmer\n\nfrom sklearn.feature_extraction.text import CountVectorizer #For Bag of words\nfrom sklearn.feature_extraction.text import TfidfVectorizer #For TF-IDF\nfrom gensim.models import Word2Vec #For Word2Vec\n\nfrom tensorflow.keras.layers import Embedding\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.preprocessing.text import one_hot\nfrom tensorflow.keras.layers import LSTM\nfrom tensorflow.keras.layers import Dropout\nfrom tensorflow.keras.layers import Dense", "_____no_output_____" ], [ "import numpy as np\nimport nltk\nimport re\nfrom nltk.corpus import stopwords\nfrom gensim.models import Word2Vec\n\ndef essay_to_wordlist(essay_v, remove_stopwords):\n \"\"\"Remove the tagged labels and word tokenize the sentence.\"\"\"\n essay_v = re.sub(\"[^a-zA-Z]\", \" \", essay_v)\n words = essay_v.lower().split()\n if remove_stopwords:\n stops = set(stopwords.words(\"english\"))\n words = [w for w in words if not w in stops]\n return (words)\n\ndef essay_to_sentences(essay_v, remove_stopwords):\n \"\"\"Sentence tokenize the essay and call essay_to_wordlist() for word tokenization.\"\"\"\n tokenizer = 
nltk.data.load('tokenizers/punkt/english.pickle')\n raw_sentences = tokenizer.tokenize(essay_v.strip())\n sentences = []\n for raw_sentence in raw_sentences:\n if len(raw_sentence) > 0:\n sentences.append(essay_to_wordlist(raw_sentence, remove_stopwords))\n return sentences\n\ndef makeFeatureVec(words, model, num_features):\n \"\"\"Make Feature Vector from the words list of an Essay.\"\"\"\n featureVec = np.zeros((num_features,),dtype=\"float32\")\n num_words = 0.\n index2word_set = set(model.wv.index2word)\n for word in words:\n if word in index2word_set:\n num_words += 1\n featureVec = np.add(featureVec,model[word]) \n featureVec = np.divide(featureVec,num_words)\n return featureVec\n\ndef getAvgFeatureVecs(essays, model, num_features):\n \"\"\"Main function to generate the word vectors for word2vec model.\"\"\"\n counter = 0\n essayFeatureVecs = np.zeros((len(essays),num_features),dtype=\"float32\")\n for essay in essays:\n essayFeatureVecs[counter] = makeFeatureVec(essay, model, num_features)\n counter = counter + 1\n return essayFeatureVecs", "_____no_output_____" ], [ "import nltk\n\nnltk.download('stopwords')", "[nltk_data] Downloading package stopwords to /usr/share/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n" ], [ "from keras.layers import Embedding, LSTM, Dense, Dropout, Lambda, Flatten\nfrom keras.models import Sequential, load_model, model_from_config\nimport keras.backend as K\n\ndef get_model():\n \"\"\"Define the model.\"\"\"\n model = Sequential()\n model.add(LSTM(300, dropout=0.4, recurrent_dropout=0.4, input_shape=[1, 300], return_sequences=True))\n model.add(LSTM(64, recurrent_dropout=0.4))\n model.add(Dropout(0.5))\n model.add(Dense(1, activation='relu'))\n\n model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['accuracy','mae'])\n model.summary()\n\n return model", "_____no_output_____" ], [ "X=df\ny = X['domain1_score']", "_____no_output_____" ], [ "from sklearn.model_selection import KFold\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import cohen_kappa_score\n\ncv = KFold(n_splits = 5, shuffle = True)\nresults = []\ny_pred_list = []\n\ncount = 1\nfor traincv, testcv in cv.split(X):\n print(\"\\n--------Fold {}--------\\n\".format(count))\n X_test, X_train, y_test, y_train = X.iloc[testcv], X.iloc[traincv], y.iloc[testcv], y.iloc[traincv]\n \n train_essays = X_train['essay']\n test_essays = X_test['essay']\n \n sentences = []\n \n for essay in train_essays:\n # Obtaining all sentences from the training essays.\n sentences += essay_to_sentences(essay, remove_stopwords = True)\n \n # Initializing variables for word2vec model.\n num_features = 300 \n min_word_count = 40\n num_workers = 4\n context = 10\n downsampling = 1e-3\n\n print(\"Training Word2Vec Model...\")\n model = Word2Vec(sentences, workers=num_workers, size=num_features, min_count = min_word_count, window = context, sample = downsampling)\n\n model.init_sims(replace=True)\n model.wv.save_word2vec_format('word2vecmodel.bin', binary=True)\n\n clean_train_essays = []\n \n # Generate training and testing data word vectors.\n for essay_v in train_essays:\n clean_train_essays.append(essay_to_wordlist(essay_v, remove_stopwords=True))\n trainDataVecs = getAvgFeatureVecs(clean_train_essays, model, num_features)\n \n clean_test_essays = []\n for essay_v in test_essays:\n clean_test_essays.append(essay_to_wordlist( essay_v, remove_stopwords=True ))\n testDataVecs = getAvgFeatureVecs( clean_test_essays, model, num_features )\n \n trainDataVecs = 
np.array(trainDataVecs)\n testDataVecs = np.array(testDataVecs)\n # Reshaping train and test vectors to 3 dimensions. (1 represnts one timestep)\n trainDataVecs = np.reshape(trainDataVecs, (trainDataVecs.shape[0], 1, trainDataVecs.shape[1]))\n testDataVecs = np.reshape(testDataVecs, (testDataVecs.shape[0], 1, testDataVecs.shape[1]))\n \n lstm_model = get_model()\n lstm_model.fit(trainDataVecs, y_train, batch_size=64, epochs=2)\n #lstm_model.load_weights('./model_weights/final_lstm.h5')\n y_pred = lstm_model.predict(testDataVecs)\n \n # Save any one of the 5 models.\n if count == 5:\n lstm_model.save('./model_weights/final_lstm.h5')\n \n # Round y_pred to the nearest integer.\n y_pred = np.around(y_pred)\n \n # Evaluate the model on the evaluation metric. \"Quadratic mean averaged Kappa\"\n result = cohen_kappa_score(y_test.values,y_pred,weights='quadratic')\n print(\"Kappa Score: {}\".format(result))\n results.append(result)\n\n count += 1\n", "\n--------Fold 1--------\n\nTraining Word2Vec Model...\nModel: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm (LSTM) (None, 1, 300) 721200 \n_________________________________________________________________\nlstm_1 (LSTM) (None, 64) 93440 \n_________________________________________________________________\ndropout (Dropout) (None, 64) 0 \n_________________________________________________________________\ndense (Dense) (None, 1) 65 \n=================================================================\nTotal params: 814,705\nTrainable params: 814,705\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/2\n163/163 [==============================] - 2s 14ms/step - loss: 64.1702 - accuracy: 0.1120 - mae: 4.3562\nEpoch 2/2\n163/163 [==============================] - 2s 13ms/step - loss: 40.5542 - accuracy: 0.0803 - mae: 3.5957\nKappa Score: 0.7127520678030524\n\n--------Fold 2--------\n\nTraining Word2Vec Model...\nModel: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm_2 (LSTM) (None, 1, 300) 721200 \n_________________________________________________________________\nlstm_3 (LSTM) (None, 64) 93440 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 64) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 65 \n=================================================================\nTotal params: 814,705\nTrainable params: 814,705\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/2\n163/163 [==============================] - 2s 12ms/step - loss: 63.9632 - accuracy: 0.1127 - mae: 4.3208\nEpoch 2/2\n163/163 [==============================] - 2s 12ms/step - loss: 39.9875 - accuracy: 0.0761 - mae: 3.6013\nKappa Score: 0.7534653510180308\n\n--------Fold 3--------\n\nTraining Word2Vec Model...\nModel: \"sequential_2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm_4 (LSTM) (None, 1, 300) 721200 \n_________________________________________________________________\nlstm_5 (LSTM) (None, 64) 93440 \n_________________________________________________________________\ndropout_2 
(Dropout) (None, 64) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 1) 65 \n=================================================================\nTotal params: 814,705\nTrainable params: 814,705\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/2\n163/163 [==============================] - 2s 12ms/step - loss: 62.7267 - accuracy: 0.1150 - mae: 4.3239\nEpoch 2/2\n163/163 [==============================] - 2s 12ms/step - loss: 38.9225 - accuracy: 0.0753 - mae: 3.5816\nKappa Score: 0.760911157888906\n\n--------Fold 4--------\n\nTraining Word2Vec Model...\nModel: \"sequential_3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm_6 (LSTM) (None, 1, 300) 721200 \n_________________________________________________________________\nlstm_7 (LSTM) (None, 64) 93440 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 64) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 1) 65 \n=================================================================\nTotal params: 814,705\nTrainable params: 814,705\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/2\n163/163 [==============================] - 2s 15ms/step - loss: 63.2130 - accuracy: 0.1110 - mae: 4.3320\nEpoch 2/2\n163/163 [==============================] - 2s 12ms/step - loss: 40.5429 - accuracy: 0.0809 - mae: 3.6119\nKappa Score: 0.7429735456201619\n\n--------Fold 5--------\n\nTraining Word2Vec Model...\nModel: \"sequential_4\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm_8 (LSTM) (None, 1, 300) 721200 \n_________________________________________________________________\nlstm_9 (LSTM) (None, 64) 93440 \n_________________________________________________________________\ndropout_4 (Dropout) (None, 64) 0 \n_________________________________________________________________\ndense_4 (Dense) (None, 1) 65 \n=================================================================\nTotal params: 814,705\nTrainable params: 814,705\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/2\n163/163 [==============================] - 2s 12ms/step - loss: 63.6962 - accuracy: 0.1113 - mae: 4.3485\nEpoch 2/2\n163/163 [==============================] - 2s 12ms/step - loss: 40.5028 - accuracy: 0.0794 - mae: 3.6451\n" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 12976 entries, 0 to 12975\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 essay 12976 non-null object\n 1 domain1_score 12976 non-null int64 \ndtypes: int64(1), object(1)\nmemory usage: 202.9+ KB\n" ], [ "print(\"Average Kappa score after a 5-fold cross validation: \",np.around(np.array(results).mean(),decimals=4))", "Average Kappa score after a 5-fold cross validation: 0.9098\n" ], [ "demo = [\"Dear@CAPS1 @CAPS2, I believe that using computers will benefit us in many ways like talking and becoming friends will others through websites like facebook and mysace. Using computers can help us find coordibates, locations, and able ourselfs to millions of information. 
Also computers will benefit us by helping with jobs as in planning a house plan and typing a @NUM1 page report for one of our jobs in less than writing it. Now lets go into the wonder world of technology. Using a computer will help us in life by talking or making friends on line. Many people have myspace, facebooks, aim, these all benefit us by having conversations with one another. Many people believe computers are bad but how can you make friends if you can never talk to them? I am very fortunate for having a computer that can help with not only school work but my social life and how I make friends. Computers help us with finding our locations, coordibates and millions of information online. If we didn't go on the internet a lot we wouldn't know how to go onto websites that @MONTH1 help us with locations and coordinates like @LOCATION1. Would you rather use a computer or be in @LOCATION3. When your supposed to be vacationing in @LOCATION2. Million of information is found on the internet. You can as almost every question and a computer will have it. Would you rather easily draw up a house plan on the computers or take @NUM1 hours doing one by hand with ugly erazer marks all over it, you are garrenteed that to find a job with a drawing like that. Also when appling for a job many workers must write very long papers like a @NUM3 word essay on why this job fits you the most, and many people I know don't like writing @NUM3 words non-stopp for hours when it could take them I hav an a computer. That is why computers we needed a lot now adays. I hope this essay has impacted your descion on computers because they are great machines to work with. The other day I showed my mom how to use a computer and she said it was the greatest invention sense sliced bread! Now go out and buy a computer to help you chat online with friends, find locations and millions of information on one click of the button and help your self with getting a job with neat, prepared, printed work that your boss will love.\"]\ndemo_df = pd.DataFrame(demo,columns=['essay'])\ndemo_df.head()", "_____no_output_____" ], [ "type(demo_df['essay'])", "_____no_output_____" ], [ "content = \"Dear@CAPS1 @CAPS2, I believe that using computers will benefit us in many ways like talking and becoming friends will others through websites like facebook and mysace. Using computers can help us find coordibates, locations, and able ourselfs to millions of information. Also computers will benefit us by helping with jobs as in planning a house plan and typing a @NUM1 page report for one of our jobs in less than writing it. Now lets go into the wonder world of technology. Using a computer will help us in life by talking or making friends on line. Many people have myspace, facebooks, aim, these all benefit us by having conversations with one another. Many people believe computers are bad but how can you make friends if you can never talk to them? I am very fortunate for having a computer that can help with not only school work but my social life and how I make friends. Computers help us with finding our locations, coordibates and millions of information online. If we didn't go on the internet a lot we wouldn't know how to go onto websites that @MONTH1 help us with locations and coordinates like @LOCATION1. Would you rather use a computer or be in @LOCATION3. When your supposed to be vacationing in @LOCATION2. Million of information is found on the internet. You can as almost every question and a computer will have it. 
Would you rather easily draw up a house plan on the computers or take @NUM1 hours doing one by hand with ugly erazer marks all over it, you are garrenteed that to find a job with a drawing like that. Also when appling for a job many workers must write very long papers like a @NUM3 word essay on why this job fits you the most, and many people I know don't like writing @NUM3 words non-stopp for hours when it could take them I hav an a computer. That is why computers we needed a lot now adays. I hope this essay has impacted your descion on computers because they are great machines to work with. The other day I showed my mom how to use a computer and she said it was the greatest invention sense sliced bread! Now go out and buy a computer to help you chat online with friends, find locations and millions of information on one click of the button and help your self with getting a job with neat, prepared, printed work that your boss will love.\"", "_____no_output_____" ], [ "from gensim.models import Word2Vec\nfrom gensim.models import KeyedVectors\nnum_features = 300\n \nmodel = KeyedVectors.load_word2vec_format( \"./word2vecmodel.bin\", binary=True)\nclean_test_essays = []\nclean_test_essays.append(essay_to_wordlist( content, remove_stopwords=True ))\ntestDataVecs = getAvgFeatureVecs( clean_test_essays, model, num_features )\ntestDataVecs = np.array(testDataVecs)\ntestDataVecs = np.reshape(testDataVecs, (testDataVecs.shape[0], 1, testDataVecs.shape[1]))\n\n# lstm_model = get_model()\nlstm_model.load_weights(\"./final_lstm.h5\")\npreds = lstm_model.predict(testDataVecs)", "_____no_output_____" ], [ "int(np.around(preds))", "_____no_output_____" ], [ "# val_essays = demo_df['essay']\n# sentences = []\n\n# for essay in val_essays:\n# sentences += essay_to_sentences(essay, remove_stopwords = True)\n \n\n# num_features = 300 \n# min_word_count = 40\n# num_workers = 4\n# context = 10\n# downsampling = 1e-3\n\n\n# model = Word2Vec(sentences, workers=num_workers, size=num_features, min_count = min_word_count, window = context, sample = downsampling)\n\n# model.init_sims(replace=True)\n\n# clean_train_essays = []\n\n# # Generate training and testing data word vectors.\n# for essay_v in val_essays:\n# clean_train_essays.append(essay_to_wordlist(essay_v, remove_stopwords=True))\n# trainDataVecs = getAvgFeatureVecs(clean_train_essays, model, num_features)\n\n# trainDataVecs = np.array(trainDataVecs)\n# # Reshaping train and test vectors to 3 dimensions. 
(1 represnts one timestep)\n# trainDataVecs = np.reshape(trainDataVecs, (trainDataVecs.shape[0], 1, trainDataVecs.shape[1]))\n\n\n# y_pred = lstm_model.predict(trainDataVecs)\n\n\n\n# y_pred = np.around(y_pred)\n\n\n# # result = cohen_kappa_score(y_test.values,y_pred,weights='quadratic')\n# # print(\"Kappa Score: {}\".format(result))\n# print(y_pred)", "_____no_output_____" ], [ "y_test.T.shape", "_____no_output_____" ], [ "y_pred1 = y_pred.T.reshape(2595,)", "_____no_output_____" ], [ "y_pred1", "_____no_output_____" ], [ "y_test1 = y_test.T.reshape", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred1)", "_____no_output_____" ], [ "# Save a palette to a variable:\npalette = sns.color_palette(\"bright\")\n \n# Use palplot and pass in the variable:\nsns.palplot(palette)", "_____no_output_____" ], [ "import seaborn as sns\nimport pandas as pd\nimport matplotlib.pyplot as plt\narray =cm[0:10,0:10]\ndf_cm = pd.DataFrame(array)\nplt.figure(figsize = (20,20))\nsns.heatmap(df_cm,cmap = 'Blues',square=True, annot=True)", "_____no_output_____" ], [ "cm[0:10,0:10]", "_____no_output_____" ], [ "arr = np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])\nprint(arr.shape)\nprint(arr[1:4, 4:1])\n# print(arr)", "(2, 5)\n[]\n" ], [ "cm.shape", "_____no_output_____" ], [ "corr = cm\n\nmask = np.triu(np.ones_like(corr, dtype=np.bool))\n\n# Set up the matplotlib figure\nf, ax = plt.subplots(figsize=(11, 9))\n\n# Generate a custom diverging colormap\ncmap = sns.diverging_palette(220, 10, as_cmap=True)\n\n# Draw the heatmap with the mask and correct aspect ratio\nsns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,\n square=True, linewidths=.5, cbar_kws={\"shrink\": .5})\nplt.title('Fig:1',size=15)", "_____no_output_____" ], [ "from sklearn.metrics import f1_score\nf1_score(y_test, y_pred1, average='macro')", "_____no_output_____" ], [ "from sklearn.metrics import classification_report\ncr = classification_report(y_test, y_pred1)", "_____no_output_____" ], [ "cr", "_____no_output_____" ], [ "import seaborn as sns\nimport numpy as np\nfrom sklearn.metrics import precision_recall_fscore_support\nimport matplotlib.pyplot as plt\n\ndef plot_classification_report(y_tru, y_prd, figsize=(10, 10), ax=None):\n\n plt.figure(figsize=figsize)\n\n xticks = ['precision', 'recall', 'f1-score', 'support']\n yticks = list(np.unique(y_tru))\n yticks += ['avg']\n\n rep = np.array(precision_recall_fscore_support(y_tru, y_prd)).T\n avg = np.mean(rep, axis=0)\n avg[-1] = np.sum(rep[:, -1])\n rep = np.insert(rep, rep.shape[0], avg, axis=0)\n\n sns.heatmap(rep,\n annot=True, \n cbar=False, \n xticklabels=xticks, \n yticklabels=yticks,\n ax=ax)\n\nplot_classification_report(y_test, y_pred1)", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport numpy as np\nimport itertools\n\n\ndef plot_classification_report(classificationReport,\n title='Classification report',\n cmap='RdBu'):\n\n classificationReport = classificationReport.replace('\\n\\n', '\\n')\n classificationReport = classificationReport.replace(' / ', '/')\n lines = classificationReport.split('\\n')\n\n classes, plotMat, support, class_names = [], [], [], []\n for line in lines[1:]: # if you don't want avg/total result, then change [1:] into [1:-1]\n t = line.strip().split()\n if len(t) < 2:\n continue\n classes.append(t[0])\n v = [float(x) for x in t[1: len(t) - 1]]\n support.append(int(t[-1]))\n class_names.append(t[0])\n plotMat.append(v)\n\n plotMat = np.array(plotMat)\n xticklabels = ['Precision', 
'Recall', 'F1-score']\n yticklabels = ['{0} ({1})'.format(class_names[idx], sup)\n for idx, sup in enumerate(support)]\n\n plt.imshow(plotMat, interpolation='nearest', cmap=cmap, aspect='auto')\n plt.title(title)\n plt.colorbar()\n plt.xticks(np.arange(3), xticklabels, rotation=45)\n plt.yticks(np.arange(len(classes)), yticklabels)\n\n upper_thresh = plotMat.min() + (plotMat.max() - plotMat.min()) / 10 * 8\n lower_thresh = plotMat.min() + (plotMat.max() - plotMat.min()) / 10 * 2\n for i, j in itertools.product(range(plotMat.shape[0]), range(plotMat.shape[1])):\n plt.text(j, i, format(plotMat[i, j], '.2f'),\n horizontalalignment=\"center\",\n color=\"white\" if (plotMat[i, j] > upper_thresh or plotMat[i, j] < lower_thresh) else \"black\")\n\n plt.ylabel('Metrics')\n plt.xlabel('Classes')\n plt.tight_layout()\n\n\ndef main():\n\n sampleClassificationReport = cr\n plot_classification_report(sampleClassificationReport)\n plt.show()\n plt.close()\n\n\nif __name__ == '__main__':\n main()", "_____no_output_____" ], [ "from sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.metrics import classification_report,confusion_matrix,accuracy_score,plot_confusion_matrix", "_____no_output_____" ], [ "classifiers = {\n \"LogisiticRegression\": LogisticRegression(),\n \"KNearest\": KNeighborsClassifier(n_neighbors=1),\n \"Support Vector Classifier\": SVC(),\n \"DecisionTreeClassifier\": DecisionTreeClassifier(),\n \"MultinimialNB\": MultinomialNB()\n}\n", "_____no_output_____" ], [ "trainDataVecs.shape[2]", "_____no_output_____" ], [ "trainDataVecs1 = np.reshape(trainDataVecs,trainDataVecs.shape[0],trainDataVecs.shape[2])\ntrainDataVecs1.shape", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\n\nclassifier = KNeighborsClassifier()\n\nclassifier.fit(trainDataVecs, y_train)\ntraining_score = cross_val_score(classifier, train_vectors, df_train[\"domain1_score\"], cv=5)\nprint(\"Classifiers: \", classifier.__class__.__name__, \"Has a training score of\", round(training_score.mean(), 2) * 100, \"% accuracy score\")", "_____no_output_____" ], [ "np.unique(df['domain1_score'],return_counts=True)", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "from sklearn.utils import class_weight\nclass_weights = class_weight.compute_class_weight('balanced',\n np.unique(df['domain1_score']),\n train)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a5d1ba365f19eba72409f47f70bc15ef6f42cd
95,623
ipynb
Jupyter Notebook
NLP/01-Introduction_to_NLP-01-Introduction.ipynb
NathanielDake/NathanielDake.github.io
82b7013afa66328e06e51304b6af10e1ed648eb8
[ "MIT" ]
3
2018-03-30T06:28:21.000Z
2018-04-25T15:43:24.000Z
NLP/01-Introduction_to_NLP-01-Introduction.ipynb
NathanielDake/NathanielDake.github.io
82b7013afa66328e06e51304b6af10e1ed648eb8
[ "MIT" ]
null
null
null
NLP/01-Introduction_to_NLP-01-Introduction.ipynb
NathanielDake/NathanielDake.github.io
82b7013afa66328e06e51304b6af10e1ed648eb8
[ "MIT" ]
3
2018-02-07T22:21:33.000Z
2018-05-04T20:16:43.000Z
80.830938
26,300
0.756136
[ [ [ "# 1. Introduction to Natural Language Processing\nNatural Language Processing is certainly one of the most fascinating and exciting areas to be involved with at this point in time. It is a wonderful intersection of computer science, artificial intelligence, machine learning and linguistics. With the (somewhat) recent rise of Deep Learning, Natural Language Processing currently has a great deal of buzz surrounding it, and for good reason. The goal of this post is to do three things:\n\n1. Inspire the reader with the beauty of the problem of NLP\n2. Explain how machine learning techniques (i.e. something as simple as Logistic Regression) can be applied to text data.\n3. Prepare the reader for the next sections surrounding Deep Learning as it is applied to NLP.\n\nBefore we dive in, I would like to share the poem _Jabberwocky_ by Lewis Carrol, and an accompanying excerpt from the book \"_Godel, Escher, Bach_\", by Douglas Hofstadter.\n\n<img src=\"https://drive.google.com/uc?id=1ROLVf2p6xYyTqQ3fmeky0eSD6ZCdfJ3M\" width=\"300\">\n\nAnd now, the corresponding excerpt, _**Translations of Jabberwocky**_. \n\n> ### Translations of Jabberwocky<br>\nDouglas R. Hofstadter\nImagine native speakers of English, French, and German, all of whom have excellent command of their respective native languages, and all of whom enjoy wordplay in their own language. Would their symbol networks be similar on a local level, or on a global level? Or is it meaningful to ask such a question? The question becomes concrete when you look at the preceding translations of Lewis Carroll's famous \"Jabberwocky\".\n<br>\n<br>\n[The \"preceding translations\" were \"Jabberwocky\" (English, original), by Lewis Carroll, \"Le Jaseroque\", (French), by Frank L. Warrin, and \"Der Jammerwoch\" (German), by Robert Scott. --kl]\n<br>\n<br>\nI chose this example because it demonstrates, perhaps better than an example in ordinary prose, the problem of trying to find \"the same node\" in two different networks which are, on some level of analysis, extremely nonisomorphic. In ordinary language, the task of translation is more straightforward, since to each word or phrase in the original language, there can usually be found a corresponding word or phrase in the new language. By contrast, in a poem of this type, many \"words\" do not carry ordinary meaning, but act purely as exciters of nearby symbols. However, what is nearby in one language may be remote in another.\n<br>\n<br>\nThus, in the brain of a native speaker of English, \"slithy\" probably activates such symbols as \"slimy\", \"slither\", \"slippery\", \"lithe\", and \"sly\", to varying extents. Does \"lubricilleux\" do the corresponding thing in the brain of a Frenchman? What indeed would be \"the corresponding thing\"? Would it be to activate symbols which are the ordinary translations of those words? What if there is no word, real or fabricated, which will accomplish that? Or what if a word does exist, but it is very intellectual-sounding and Latinate (\"lubricilleux\"), rather than earthy and Anglo-Saxon (\"slithy\")? Perhaps \"huilasse\" would be better than \"lubricilleux\"? Or does the Latin origin of the word \"lubricilleux\" not make itself felt to a speaker of French in the way that it would if it were an English word (\"lubricilious\", perhaps)?\n<br>\n<br>\nAn interesting feature of the translation into French is the transposition into the present tense. 
To keep it in the past would make some unnatural turns of phrase necessary, and the present tense has a much fresher flavour in French than in the past. The translator sensed that this would be \"more appropriate\"--in some ill-defined yet compelling sense--and made the switch. Who can say whether remaining faithful to the English tense would have been better?\n<br>\n<br>\nIn the German version, the droll phrase \"er an-zu-denken-fing\" occurs; it does not correspond to any English original. It is a playful reversal of words, whose flavour vaguely resembles that of the English phrase \"he out-to-ponder set\", if I may hazard a reverse translation. Most likely this funny turnabout of words was inspired by the similar playful reversal in the English of one line earlier: \"So rested he by the Tumtum tree\". It corresponds, yet doesn't correspond.\n<br>\n<br>\nIncidentally, why did the Tumtum tree get changed into an \"arbre Té-té\" in French? Figure it out for yourself.\n<br>\n<br>\nThe word \"manxome\" in the original, whose \"x\" imbues it with many rich overtones, is weakly rendered in German by \"manchsam\", which back-translates into English as \"maniful\". The French \"manscant\" also lacks the manifold overtones of \"manxome\". There is no end to the interest of this kind of translation task.\n<br>\n<br>\nWhen confronted with such an example, one realizes that it is utterly impossible to make an exact translation. Yet even in this pathologically difficult case of translation, there seems to be some rough equivalence obtainable. Why is this so, if there really is no isomorphism between the brains of people who will read the different versions? The answer is that there is a kind of rough isomorphism, partly global, partly local, between the brains of all the readers of these three poems.\n\n\nNow, the purpose of sharing the above is because if you are reading these posts (and are anything like me), you may very well spend a large chunk of your time studying mathematics, computer science, machine learning, writing code, and so on. But, if you are new to NLP the appreciation for the beauty and deeper meaning surrounding language may not be on the forefront of your mind-that is understandable! But hopefully the passage and commentary above ignited some interest in the wonderfully complex and worthwhile problem of Natural Language Processing and Understanding.\n\n## 2. Spam Detection\nNow, especially at first, I don't want to dive into phonemes, morphemes, syntactical structure, and the like. We will leave those linguistic concepts for later on. The goal here is to quickly allow someone with an understanding of basic machine learning algorithms and techniques to implement them in the domain of NLP. \n\nWe will see that, at least at first, a lot of NLP deals with preprocessing data, which allows us to use algorithms that we already know. The question that most definitely arises is: How do we take a bunch of documents which are basically a bunch of text, and feed them into other machine learning algorithms where the input is usually a vector of numbers? \n\nWell, before we even get to that, let's take a preprocessed data set from the [uci archive](https://archive.ics.uci.edu/ml/datasets/Spambase) and perform a simple classification on it. The data has been processed in such a way that we can consider columns 1-48 to the be the input, and column 49 to the be label (1 = spam, 0 = not spam). \n\nThe input columns are considered the input, and they are a **word frequency measure**. 
This measure can be calculated via:\n\n$$\\text{Word Frequency Measure} = \\frac{\\text{# of times word appears in a document}}{\\text{Number of words in document}} * 100$$\n\nThis will result in a **Document Term matrix**, which is a matrix where _terms_ (words that appeared in the document) go along the columns, and _documents_ (emails in this case) go along the rows:\n\n| |word 1|word 2|word 3|word 4|word 5|word 6|word 7|word 8|\n|-------|------|------|------|------|------|------|------|------|\n|Email 1|||||||||\n|Email 2|||||||||\n|Email 3|||||||||\n|Email 4|||||||||\n|Email 5|||||||||\n\n### 2.1 Implementation in Code\nWe will now use `Scikit Learn` to show that we can use _any_ model on NLP data, as long as it has been preprocessed correctly. First, let's use scikit learns `NaiveBayes` classifier:", "_____no_output_____" ] ], [ [ "from sklearn.naive_bayes import MultinomialNB\nimport pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "data = pd.read_csv('../../data/nlp/spambase.data')\ndata.head()", "_____no_output_____" ], [ "data = data.values\nnp.random.shuffle(data) # randomly split data into train and test sets\n\nX = data[:, :48]\nY = data[:, -1]\n\nXtrain = X[:-100,]\nYtrain = Y[:-100,]\nXtest = X[-100:,]\nYtest = Y[-100:,]\n\nmodel = MultinomialNB()\nmodel.fit(Xtrain, Ytrain)\nprint (\"Classifcation Rate for NB: \", model.score(Xtest, Ytest))", "Classifcation Rate for NB: 0.87\n" ] ], [ [ "Excellent, a classification rate of 92%! Let's now look utilize `AdaBoost`:", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import AdaBoostClassifier\n\nmodel = AdaBoostClassifier()\nmodel.fit(Xtrain, Ytrain)\nprint (\"Classifcation Rate for Adaboost: \", model.score(Xtest, Ytest))", "Classifcation Rate for Adaboost: 0.94\n" ] ], [ [ "Great, a nice improvement, but more importantly, we have shown that we can take text data and that via correct preprocessing we are able to utilize it with standard machine learning API's. The next step is to dig into _how_ basic preprocessing is performed.", "_____no_output_____" ], [ "---\n\n# 3. Sentiment Analysis\nTo go through the basic preprocessing steps that are frequently used when performing machine learning on text data (often referred to an NLP pipeline) we are going to want to work on the problem of **sentiment analysis**. Sentiment is a measure of how positive or negative something is, and we are going to build a very simple sentiment analyzer to predict the sentiment of Amazon reviews. These are reviews, so they come with 5 star ratings, and we are going to look at the electronics category in particular. These are XML files, so we will need an XML parser. \n\n### 3.1 NLP Terminology \nBefore we begin, I would just like to quickly go over some basic NLP terminology that will come up frequently throughout this post.\n* **Corpus**: Collection of text\n* **Tokens**: Words and punctuation that make up the corpus. \n* **Type**: a distinct token. Ex. \"Run, Lola Run\" has four tokens (comma counts as one) and 3 types.\n* **Vocabulary**: The set of all types. \n* The google corpus (collection of text) has 1 trillion tokens, and only 13 million types. English only has 1 million dictionary words, but the google corpus includes types such as \"www.facebook.com\". \n\n### 3.2 Problem Overview\nNow, we are just going to be looking at the electronics category. We could use the 5 star targets to do regression, but instead we will just do classification since they are already marked \"positive\" and \"negative\". 
As I mentioned, we are going to be working with XML data, so we will need an XML parser, for which we will use `BeautifulSoup`. We will only look at the `review_text` attribute. To create our feature vector, we will count up the number of occurrences of each word and divide it by the total number of words. However, for that to work we will need two passes through the data:\n\n1. One to collect the total number of distinct words, so that we know the size of our feature vector (in other words, the vocabulary size), and possibly remove stop words like \"this\", \"is\", \"I\", \"to\", etc., to decrease the vocabulary size. The goal here is to know the index of each token.\n2. On the second pass, we will create the data vectors themselves, assigning a value to each index that corresponds to a word. \n\nOnce we have that, it is simply a matter of creating a classifier like the one we did for our spam detector! Here, we will use logistic regression, so we can interpret the weights! For example, if you see a word like horrible and it has a weight of minus 1, it is associated with negative reviews. With that started, let's begin!", "_____no_output_____" ], [ "## 3.3 Sentiment Analysis in Code", "_____no_output_____" ] ], [ [ "import nltk\nimport numpy as np\n\nfrom nltk.stem import WordNetLemmatizer\nfrom sklearn.linear_model import LogisticRegression\nfrom bs4 import BeautifulSoup\n\nwordnet_lemmatizer = WordNetLemmatizer() # this turns words into their base form \n\nstopwords = set(w.rstrip() for w in open('../../data/nlp/stopwords.txt')) # grab stop words \n\n# get positive reviews; we only want the review_text attribute\npositive_reviews = BeautifulSoup(open('../../data/nlp/electronics/positive.review').read(), \"lxml\") \npositive_reviews = positive_reviews.findAll('review_text') \n\nnegative_reviews = BeautifulSoup(open('../../data/nlp/electronics/negative.review').read(), \"lxml\")\nnegative_reviews = negative_reviews.findAll('review_text')", "_____no_output_____" ] ], [ [ "### 3.3.1 Class Imbalance\nThere are more positive than negative reviews, so we are going to shuffle the positive reviews and then cut off any extra that we may have so that they are both the same size.", "_____no_output_____" ] ], [ [ "np.random.shuffle(positive_reviews)\npositive_reviews = positive_reviews[:len(negative_reviews)]", "_____no_output_____" ] ], [ [ "### 3.3.2 Tokenizer function\nLet's now create a tokenizer function that can be used on our specific reviews.", "_____no_output_____" ] ], [ [ "def my_tokenizer(s):\n s = s.lower()\n tokens = nltk.tokenize.word_tokenize(s) # essentially string.split()\n tokens = [t for t in tokens if len(t) > 2] # get rid of short words\n tokens = [wordnet_lemmatizer.lemmatize(t) for t in tokens] # get words to base form\n tokens = [t for t in tokens if t not in stopwords] # remove stop words\n return tokens", "_____no_output_____" ] ], [ [ "### 3.3.3 Index each word\nWe now need to create an index for each of the words, so that each word has an index in the final data vector. However, to be able to do that we need to know the size of the final data vector, and to know that we need to know how big the vocabulary is. 
Remember, the **vocabulary** is just the set of all types!\n\nWe are essentially going to look at every individual review, tokenize them, and then add those tokens 1 by 1 to the map if they do not exist yet.", "_____no_output_____" ] ], [ [ "word_index_map = {} # our vocabulary - dictionary that will map words to dictionaries\ncurrent_index = 0 # counter increases whenever we see a new word\n\npositive_tokenized = []\nnegative_tokenized = []\n\n# --------- loop through positive reviews ---------\nfor review in positive_reviews: \n tokens = my_tokenizer(review.text) # converts single review into array of tokens (split function)\n positive_tokenized.append(tokens)\n for token in tokens: # loops through array of tokens for specific review\n if token not in word_index_map: # if the token is not in the map, add it\n word_index_map[token] = current_index \n current_index += 1 # increment current index\n \n# --------- loop through negative reviews ---------\nfor review in negative_reviews: \n tokens = my_tokenizer(review.text) \n negative_tokenized.append(tokens)\n for token in tokens: \n if token not in word_index_map: \n word_index_map[token] = current_index \n current_index += 1 ", "_____no_output_____" ] ], [ [ "And we can actually take a look at the contents of `word_index_map` by making use of the `random` module (part of the Python Standard Library):", "_____no_output_____" ] ], [ [ "import random\nprint(dict(random.sample(word_index_map.items(), 20)))", "{'tech-savvy': 5921, 'downloads': 2029, 're-acquire': 9930, 'megapixels': 7066, 'dual-amping': 6499, 'unsupported': 10981, 'configuration': 1183, '6000': 3246, 'obviously..': 9627, 'didn': 10133, 'eligible': 2440, 'lawn': 748, '50-pack': 1002, 'yearly..': 10956, '192.168.1.245': 2607, 'glad': 1844, 'occasionally': 1631, 'floppy': 7170, 'criminal': 4786, 'emptying': 4382}\n" ], [ "print('Vocabulary Size', len(word_index_map))", "Vocabulary Size 11088\n" ] ], [ [ "### 3.3.4 Convert tokens into vector\nNow that we have our tokens and vocabulary, we need to convert our tokens into a vector. Because we are going to shuffle our train and test sets again, we are going to want to put labels and vector into same array for now since it makes it easier to shuffle. \n\nNote, this function operates on **one** review. 
So the +1 is creating our label, and this function is basically designed to take our input vector from an english form to a numeric vector form.", "_____no_output_____" ] ], [ [ "def tokens_to_vector(tokens, label):\n xy_data = np.zeros(len(word_index_map) + 1) # equal to the vocab size + 1 for the label \n for t in tokens: # loop through every token\n i = word_index_map[t] # get index from word index map\n xy_data[i] += 1 # increment data at that index \n xy_data = xy_data / xy_data.sum() # divide entire array by total, so they add to 1\n xy_data[-1] = label # set last element to label\n return xy_data", "_____no_output_____" ] ], [ [ "Time to actually assign these tokens to vectors.", "_____no_output_____" ] ], [ [ "N = len(positive_tokenized) + len(negative_tokenized) # total number of examples \ndata = np.zeros((N, len(word_index_map) + 1)) # N examples x vocab size + 1 for label\ni = 0 # counter to keep track of sample\n\nfor tokens in positive_tokenized: # loop through postive tokenized reviews\n xy = tokens_to_vector(tokens, 1) # passing in 1 because these are pos reviews\n data[i,:] = xy # set data row to that of the input vector\n i += 1 # increment 1\n \nfor tokens in negative_tokenized: \n xy = tokens_to_vector(tokens, 0) \n data[i,:] = xy \n i += 1 ", "_____no_output_____" ], [ "print(data.shape)", "(2000, 11089)\n" ] ], [ [ "Our data is now 1000 rows of positively labeled reviews, followed by 1000 rows of negatively labeled reviews. We have `11089` columns, which is one more than our vocabulary size because we have a column for the label (positive or negative). Lets shuffle before getting our train and test set.", "_____no_output_____" ] ], [ [ "np.random.shuffle(data)\nX = data[:, :-1]\nY = data[:, -1]\n\nXtrain = X[:-100,]\nYtrain = Y[:-100,]\nXtest = X[-100:,]\nYtest = Y[-100:,]\n\n\nmodel = LogisticRegression()\nmodel.fit(Xtrain, Ytrain)\nprint(\"Classification Rate: \", model.score(Xtest, Ytest))", "Classification Rate: 0.7\n" ] ], [ [ "### 3.3.5 Classification Rate\nWe end up with a classification rate of 0.71, which is not ideal, but it is better than random guessing. \n\n### 3.3.6 Sentiment Analysis\nSomething interesting that we can do is look at the weights of each word, to see if that word has positive or negative sentiment. 
", "_____no_output_____" ] ], [ [ "threshold = 0.7 \nlarge_magnitude_weights = []\nfor word, index in word_index_map.items():\n weight = model.coef_[0][index]\n if weight > threshold or weight < -threshold:\n large_magnitude_weights.append((word, weight))\n\ndef sort_by_magnitude(sentiment_dict):\n return sentiment_dict[1]\n \nlarge_magnitude_weights.sort(reverse=True, key=sort_by_magnitude)\nprint(large_magnitude_weights)", "[('price', 2.808163204024058), ('easy', 1.7646511704661152), ('quality', 1.3716522244882545), ('excellent', 1.319811182219224), ('love', 1.237745876552362), ('you', 1.155006377913112), ('perfect', 1.0324004425098248), ('sound', 0.9780126530219685), ('highly', 0.9778749978617105), ('memory', 0.9398953342479317), ('little', 0.9262682823592787), ('fast', 0.905207610856845), ('speaker', 0.8965845758701319), ('ha', 0.8111001120921802), ('pretty', 0.7764302324793534), ('cable', 0.7712191036378001), (\"'ve\", 0.7170298751638035), ('week', -0.7194449455694366), ('returned', -0.7482471935264389), ('bad', -0.7542948554985326), ('poor', -0.7555447694156194), ('tried', -0.7892866982929136), ('buy', -0.8504195601103998), ('month', -0.8771148641617261), ('support', -0.9163137326943319), ('waste', -0.946863186564699), ('item', -0.9518247418299971), ('money', -1.1086664158434432), ('return', -1.1512973579906935), ('then', -1.2084513223482118), ('doe', -1.2197007105871698), ('wa', -1.6630639259918825), (\"n't\", -2.0687949024413546)]\n" ] ], [ [ "Clearly the above list is not perfect, _but_ it should give some insight on what is possible for us already. The logistic regression model was able to pick out `easy`, `quality`, and `excellent` as words that correlate to a positive response, and it was able to find `poor`, `returned`, and `waste` as words the correlate to a negative response. ", "_____no_output_____" ], [ "---\n\n# 4. NLTK Exploration \nBefore we move on any further, I wanted to take a minute to go over a few of the most useful tools for the `nltk` (Natural Language Toolkit) library. This library will encapsulate many NLP tasks for us.\n\n### 4.1 Parts of Speech (POS) Tagging\nParts of speech tagging is meant to do just what it sound like: tag each word with a given part of speech within a document. For example, in the following sentence:\n\n> \"Bob is great.\"\n\n`Bob` is a noun, `is` is a verb, and `great` is an adjective. We can utilize `nltk`'s POS tagger on that sentence and see the same result:", "_____no_output_____" ] ], [ [ "import nltk\nnltk.pos_tag(\"Bob is great\".split())", "_____no_output_____" ], [ "nltk.pos_tag(\"Machine learning is great\".split())", "_____no_output_____" ] ], [ [ "The second entry in the above tuples `NN`, `VBZ`, etc, represents the determined tag of the word. For a description of each tag, check out [this link](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html). ", "_____no_output_____" ], [ "### 4.2 Stemming and Lemmatization\nBoth the process of **stemming** and **lemmatization** are used in reducing words to a \"base\" form. This is very useful because a vocabulary can get very large, while certain words tend to have the same meaning. For example _dog_ and _dogs_, and _jump_ and _jumping_ both have similar meanings. The main difference between stemming and lemmatization is that stemming is a bit more basic. 
", "_____no_output_____" ] ], [ [ "porter_stemmer = nltk.stem.porter.PorterStemmer()\nprint(porter_stemmer.stem('dogs'))", "dog\n" ], [ "print(porter_stemmer.stem('wolves'))", "wolv\n" ], [ "lemmatizer = nltk.stem.WordNetLemmatizer()\nprint(lemmatizer.lemmatize('dogs'))", "dog\n" ], [ "print(lemmatizer.lemmatize('wolves'))", "wolf\n" ] ], [ [ "Both the stemmer and lemmatizer managed to get `dogs` correct, but only the lemmatizer managed to correctly convert `wolves` to base form. ", "_____no_output_____" ], [ "### 4.3 Named Entity Recognition \nFinally there is **Named Entity** recognition. Entities refer to nouns such as:\n* \"Albert Einstein\" - a person\n* \"Apple\" - an organization", "_____no_output_____" ] ], [ [ "s = \"Albert Einstein was born on March 14, 1879\"\ntags = nltk.pos_tag(s.split())\nprint(tags)", "[('Albert', 'NNP'), ('Einstein', 'NNP'), ('was', 'VBD'), ('born', 'VBN'), ('on', 'IN'), ('March', 'NNP'), ('14,', 'CD'), ('1879', 'CD')]\n" ], [ "nltk.ne_chunk(tags)", "_____no_output_____" ], [ "s = \"Steve Jobs was the CEO of Apple Corp.\"\ntags = nltk.pos_tag(s.split())\nprint(tags)", "[('Steve', 'NNP'), ('Jobs', 'NNP'), ('was', 'VBD'), ('the', 'DT'), ('CEO', 'NNP'), ('of', 'IN'), ('Apple', 'NNP'), ('Corp.', 'NNP')]\n" ], [ "nltk.ne_chunk(tags)", "_____no_output_____" ] ], [ [ "---\n\n# 5. Latent Semantic Analysis\nWe will now take a moment to extend our semantic analysis example from before, instead now performing **Latent Semantic Analysis**. Latent semantic analysis is utilized to deal with the reality that we will often have _multiple_ words with the _same_ meaning, or on the other hand, _one_ word with _multiple_ meanings. These are referred to as _synonomy_ and _polysemy_ respectively. \n\nIn the case of synonyms here are a few basic examples:\n* \"Buy\" and \"Purchase\"\n* \"Big\" and \"Large\"\n* \"Quick\" and \"Speedy\"\n\nAnd in the case of polysemes:\n* \"Man\" (man as in human, and man as in a male opposed to a female)\n* \"Milk\" (can be a noun or a verb)\n\nIn order to solve this problem, we will need to introduce _Latent Variables_.\n\n## 5.1 Latent Variables\nThe easiest way to get your head around latent variables at first is via an example. Consider the words \"computer\", \"laptop\", and \"PC\"; these words are most likely seen together very often, meaning they are highly correlated. We can thinking a _latent_ or _hidden_ variable that is below representing them all, and we can call that $z$. We can mathematically define $z$ as:\n\n$$z = 0.7*computer \\; + 0.5*PC \\; + 0.6*laptop$$\n\nSo, we now have an idea of what a latent variable is, but what is the job of Latent Semantic Analysis? The entire goal of LSA is:\n\n1. To find the latent/hidden variables.\n2. Then, transform original data into these new variables. \n\nIdeally, after the above has been performed, the dimensionality of the new data will be much smaller than that of the original data set. It is important to note that LSA definitely helps solve the synonomy problem, by combining correlated variables. However, there are conflicting view points about whether or not it helps with polysemy. \n\n## 5.2 The Math Behind LSA\nAs we just discussed, the main goal when applying LSA is to deal with synonyms. For example, \"small\" and \"little\" would each make up their own unique variable, but in reality we know that they mean the same thing, so that is redundant. We could combine them into a single variable, reducing the dimensionality of our data set by one. 
So, to be clear the goal of LSA is:\n\n> **Goal of LSA**: Reduce redundancy.\n\n### 5.2.1 Redundancy in Numbers\nNow, machine learning at its core is always dealing with numbers, so what exactly do I mean by redundancy from a numerical standpoint? Take a look at the the plot below:\n\n<img src=\"https://drive.google.com/uc?id=1GK0COCtvumKXTI0nB0qHB_e696p2IE_J\" width=\"300\">\n\nWe can see clearly that there is a linear relationship between the dependent and independent variable. In other words, there is a linear relationship between lean body mass and muscle strength. So, we could say that one of these variables is redundant; if we know someones lean body mass, we can accurately predict their muscle strength, and vice versa. If we want a compact representation of attributes related to someones athletic performance, then we may only need to know one of these variables, since the other can be predicted from it. This advantage becomes more apparent as our dimensionality grows; if we could go from 1 million variables down to variables, that is a 200,000x's savings of space! Saving space is good, and hence reducing redundancy is good! \n\nNow the math behind LSA is rather complex and involves a good deal of linear algebra, and to be honest it would slightly bloat this notebook if I placed it here. Because of this, I have decided to move it to my mathematics section under linear algebra. With that said, LSA is essentially just the application of **Singular Value Decomposition** (SVD) to a term document matrix. I highly encourage you to go over my notebook explaining SVD and PCA before continuing, to have a better understanding of how the underlying mechanics work in the code we are about to implement. \n\nNow, we can begin by gaining a brief bit of intuition behind what LSA may look like in code. As usual, we are going to begin with an input matrix `X` of shape $NxD$, where $N$ is the number of samples and $D$ is the number of features. This will be passed into scikit learns svd model, `TruncatedSVD`, call the `fit`, `transform` function, and finally receive an output matrix `Z` of shape $Nx2$, or $Nxd$, where $d << D$. \n\n```\nmodel = TruncatedSVD()\nmodel.fit(X)\nZ = model.transform(X)\n# equivalent: Z = model.fit_transform(X)\n```\n\n## 5.3 LSA in Code", "_____no_output_____" ] ], [ [ "import nltk \nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom nltk.stem import WordNetLemmatizer\nfrom sklearn.decomposition import TruncatedSVD", "_____no_output_____" ] ], [ [ "### Process:\n* we start by pulling in all of the titles, and all of the stop words. Our titles will look like:\n```\n['Philosophy of Sex and Love A Reader',\n 'Readings in Judaism, Christianity, and Islam',\n 'Microprocessors Principles and Applications',\n 'Bernhard Edouard Fernow: Story of North American Forestry',\n 'Encyclopedia of Buddhism',...]\n```\n* we then define our tokenizer which will convert our list of strings into specific tokens, which will look like:\n```\n[['philosophy', 'sex', 'love', 'reader'],\n ['reading', 'judaism', 'christianity', 'islam'],\n ['microprocessor', 'principle'],\n ['bernhard', 'edouard', 'fernow', 'story', 'north', 'american', 'forestry'],\n ['encyclopedia', 'buddhism'],\n```\n* we then create our input matrix. This is going to be D x N, where D is the length of the total number of terms we are using (input features, 2070) and where N is the length of all tokens (2373, the total number of titles)\n* This is essentially the transpose of how our input matrix is generally setup. 
Usually we have our examples along the rows, and our input features along the columns, however, in NLP it is sometimes the opposite\n* we then loop through all tokens, and create a vector for each one (essentially, if a word occurs, its value in the vector is incremented by 1)\n* the final input matrix is fed into the SVD, where the X matrix is transformed into a Z matrix of only 2 dimensions", "_____no_output_____" ] ], [ [ "wordnet_lemmatizer = WordNetLemmatizer()\n\ntitles = [line.rstrip() for line in open('../../data/nlp/all_book_titles.txt')] # Load all book titles in to an array\n\nstopwords = set(w.rstrip() for w in open('../../data/nlp/stopwords.txt')) # loading stop words (irrelevant)\nstopwords = stopwords.union({\n 'introduction', 'edition', 'series', 'application',\n 'approach', 'card', 'access', 'package', 'plus', 'etext',\n 'brief', 'vol', 'fundamental', 'guide', 'essential', 'printed',\n 'third', 'second', 'fourth', }) # adding additional stop words \n \n\ndef my_tokenizer(s):\n s = s.lower()\n tokens = nltk.tokenize.word_tokenize(s) # essentially string.split()\n tokens = [t for t in tokens if len(t) > 2] # get rid of short words\n tokens = [wordnet_lemmatizer.lemmatize(t) for t in tokens] # get words to base form\n tokens = [t for t in tokens if t not in stopwords] # remove stop words\n tokens = [t for t in tokens if not any(c.isdigit() for c in t)] # get rid of any token that includes a number\n return tokens\n\n# Lets now figure out the index of each word, by going through the entire vocabularly \n# create a word-to-index map so that we can create our word-frequency vectors later\n# let's also save the tokenized versions so we don't have to tokenize again later\nword_index_map = {}\ncurrent_index = 0\nall_tokens = []\nall_titles = []\nindex_word_map = []\nerror_count = 0\n\nfor title in titles:\n try:\n title = title.encode('ascii', 'ignore').decode('utf-8') # this will throw exception if bad characters\n all_titles.append(title)\n tokens = my_tokenizer(title)\n all_tokens.append(tokens)\n for token in tokens:\n if token not in word_index_map:\n word_index_map[token] = current_index\n current_index += 1\n index_word_map.append(token)\n except Exception as e:\n print(e)\n print(title)\n error_count += 1 \n \n# now let's create our input matrices - just indicator variables for this example - works better than proportions\ndef tokens_to_vector(tokens):\n x = np.zeros(len(word_index_map))\n for t in tokens:\n i = word_index_map[t]\n x[i] = 1\n return x\n\n\nN = len(all_tokens) # nested list, has 2373 total entries, each which has several words\nD = len(word_index_map) # total number of words that we are working with (2070)\nX = np.zeros((D, N)) # terms will go along rows, documents along columns\ni = 0\nfor tokens in all_tokens:\n X[:,i] = tokens_to_vector(tokens)\n i += 1\n\ndef main():\n fig, ax = plt.subplots(figsize=(12,8))\n svd = TruncatedSVD()\n Z = svd.fit_transform(X)\n plt.scatter(Z[:,0], Z[:,1])\n for i in range(D):\n plt.annotate(s=index_word_map[i], xy=(Z[i,0], Z[i,1]))\n plt.show()\n\nif __name__ == '__main__':\n main()\n", "_____no_output_____" ] ], [ [ "## Keep in mind...\nWhat is important to remember here is that the main point of this process is to take a bunch of words (2070) be able to plot their relationship to eachother in a 2-d surface. To do this, their use in 2373 book titles are considered, and SVD is performed. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7a5ed6e668628aec3fb793804a887636bf18894
88,823
ipynb
Jupyter Notebook
Project_Plagiarism_Detection/3_Training_a_Model.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project2
cd073530b6d83537f41a74e5dda4207145bf2d87
[ "MIT" ]
null
null
null
Project_Plagiarism_Detection/3_Training_a_Model.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project2
cd073530b6d83537f41a74e5dda4207145bf2d87
[ "MIT" ]
null
null
null
Project_Plagiarism_Detection/3_Training_a_Model.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project2
cd073530b6d83537f41a74e5dda4207145bf2d87
[ "MIT" ]
null
null
null
46.773565
1,174
0.539579
[ [ [ "# Plagiarism Detection Model\n\nNow that you've created training and test data, you are ready to define and train a model. Your goal in this notebook, will be to train a binary classification model that learns to label an answer file as either plagiarized or not, based on the features you provide the model.\n\nThis task will be broken down into a few discrete steps:\n\n* Upload your data to S3.\n* Define a binary classification model and a training script.\n* Train your model and deploy it.\n* Evaluate your deployed classifier and answer some questions about your approach.\n\nTo complete this notebook, you'll have to complete all given exercises and answer all the questions in this notebook.\n> All your tasks will be clearly labeled **EXERCISE** and questions as **QUESTION**.\n\nIt will be up to you to explore different classification models and decide on a model that gives you the best performance for this dataset.\n\n---", "_____no_output_____" ], [ "## Load Data to S3\n\nIn the last notebook, you should have created two files: a `training.csv` and `test.csv` file with the features and class labels for the given corpus of plagiarized/non-plagiarized text data. \n\n>The below cells load in some AWS SageMaker libraries and creates a default bucket. After creating this bucket, you can upload your locally stored data to S3.\n\nSave your train and test `.csv` feature files, locally. To do this you can run the second notebook \"2_Plagiarism_Feature_Engineering\" in SageMaker or you can manually upload your files to this notebook using the upload icon in Jupyter Lab. Then you can upload local files to S3 by using `sagemaker_session.upload_data` and pointing directly to where the training data is saved.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport boto3\nimport sagemaker", "_____no_output_____" ], [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n# session and role\nsagemaker_session = sagemaker.Session()\nrole = sagemaker.get_execution_role()\n\n# create an S3 bucket\nbucket = sagemaker_session.default_bucket()", "_____no_output_____" ] ], [ [ "## EXERCISE: Upload your training data to S3\n\nSpecify the `data_dir` where you've saved your `train.csv` file. Decide on a descriptive `prefix` that defines where your data will be uploaded in the default S3 bucket. Finally, create a pointer to your training data by calling `sagemaker_session.upload_data` and passing in the required parameters. It may help to look at the [Session documentation](https://sagemaker.readthedocs.io/en/stable/session.html#sagemaker.session.Session.upload_data) or previous SageMaker code examples.\n\nYou are expected to upload your entire directory. Later, the training script will only access the `train.csv` file.", "_____no_output_____" ] ], [ [ "# should be the name of directory you created to save your features data\ndata_dir = 'plagiarism_data'\n\n# set prefix, a descriptive name for a directory \nprefix = 'sagemaker/plagiarism-detection'\n\n# upload all data to S3\ninput_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)", "_____no_output_____" ] ], [ [ "### Test cell\n\nTest that your data has been successfully uploaded. The below cell prints out the items in your S3 bucket and will throw an error if it is empty. You should see the contents of your `data_dir` and perhaps some checkpoints. 
If you see any other files listed, then you may have some old model files that you can delete via the S3 console (though, additional files shouldn't affect the performance of model developed in this notebook).", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n# confirm that data is in S3 bucket\nempty_check = []\nfor obj in boto3.resource('s3').Bucket(bucket).objects.all():\n empty_check.append(obj.key)\n print(obj.key)\n\nassert len(empty_check) !=0, 'S3 bucket is empty.'\nprint('Test passed!')", "Lambda/\nLambda/lambda_function.zip\nLambda/package.zip\nLambda/plagiarism_detection_func-8a2856a0-4389-44ac-a08f-24e25d799fca.zip\nLambda/sample-site-packages-2016-02-20.zip\nPanda_Layer.zip\nboston-update-endpoints/train.csv\nboston-update-endpoints/validation.csv\nboston-xgboost-HL/output/xgboost-2020-10-05-09-40-13-851/output/model.tar.gz\nboston-xgboost-HL/test.csv\nboston-xgboost-HL/train.csv\nboston-xgboost-HL/validation.csv\nboston-xgboost-LL/batch-bransform/test.csv.out\nboston-xgboost-LL/output/boston-xgboost-2020-10-05-08-45-40/output/model.tar.gz\nboston-xgboost-LL/test.csv\nboston-xgboost-LL/train.csv\nboston-xgboost-LL/validation.csv\ncounties/kmeans-2020-10-12-15-19-49-497/output/model.tar.gz\ncounties/kmeans-2020-10-12-16-12-40-795/output/model.tar.gz\ncounties/pca-2020-10-12-11-38-56-130/output/model.tar.gz\ncounties/pca-2020-10-12-12-36-11-407/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-16-25-53-808/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-18-26-50-429/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-18-57-22-416/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-19-27-34-058/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-19-52-03-257/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-20-12-26-687/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-20-33-19-916/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-20-40-04-004/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-13-21-00-43-603/output/model.tar.gz\nfraund_detection/linear-learner-2020-10-14-08-57-57-060/output/model.tar.gz\nlambda.zip\nlambda_panda_layer-835b6e94-93de-4e5d-abda-4586b6b7f69d.zip\nmoon-data/sagemaker-pytorch-2020-10-15-11-44-11-502/debug-output/training_job_end.ts\nmoon-data/sagemaker-pytorch-2020-10-15-11-44-11-502/output/model.tar.gz\nmoon-data/sagemaker-pytorch-2020-10-15-12-19-41-161/debug-output/training_job_end.ts\nmoon-data/sagemaker-pytorch-2020-10-15-12-19-41-161/output/model.tar.gz\nmoon-data/sagemaker-pytorch-2020-10-15-12-37-52-303/output/model.tar.gz\nmoon-data/sagemaker-pytorch-2020-10-15-12-55-13-731/debug-output/training_job_end.ts\nmoon-data/sagemaker-pytorch-2020-10-15-12-55-13-731/output/model.tar.gz\npython.zip\npython3.zip\nsagemaker-pytorch-2020-10-23-18-08-26-412/source/sourcedir.tar.gz\nsagemaker-pytorch-2020-10-23-18-14-07-463/source/sourcedir.tar.gz\nsagemaker-pytorch-2020-10-23-18-19-37-689/source/sourcedir.tar.gz\nsagemaker-record-sets/KMeans-2020-10-12-15-19-36-131/.amazon.manifest\nsagemaker-record-sets/KMeans-2020-10-12-15-19-36-131/matrix_0.pbr\nsagemaker-record-sets/LinearLearner-2020-10-13-16-12-30-860/.amazon.manifest\nsagemaker-record-sets/LinearLearner-2020-10-13-16-12-30-860/matrix_0.pbr\nsagemaker-record-sets/LinearLearner-2020-10-13-16-25-36-894/.amazon.manifest\nsagemaker-record-sets/LinearLearner-2020-10-13-16-25-36-894/matrix_0.pbr\nsagemaker-record-sets/LinearLearner
-2020-10-13-18-26-27-878/.amazon.manifest\nsagemaker-record-sets/LinearLearner-2020-10-13-18-26-27-878/matrix_0.pbr\nsagemaker-record-sets/PCA-2020-10-12-11-29-48-805/.amazon.manifest\nsagemaker-record-sets/PCA-2020-10-12-11-29-48-805/matrix_0.pbr\nsagemaker-record-sets/PCA-2020-10-12-12-36-08-957/.amazon.manifest\nsagemaker-record-sets/PCA-2020-10-12-12-36-08-957/matrix_0.pbr\nsagemaker-scikit-learn-2020-10-20-13-20-22-809/debug-output/training_job_end.ts\nsagemaker-scikit-learn-2020-10-20-13-20-22-809/output/model.tar.gz\nsagemaker-scikit-learn-2020-10-20-13-20-22-809/source/sourcedir.tar.gz\nsagemaker-scikit-learn-2020-10-23-17-25-30-271/source/sourcedir.tar.gz\nsagemaker-scikit-learn-2020-10-23-17-32-58-378/source/sourcedir.tar.gz\nsagemaker-scikit-learn-2020-10-23-17-40-45-279/source/sourcedir.tar.gz\nsagemaker-scikit-learn-2020-10-23-17-52-31-242/source/sourcedir.tar.gz\nsagemaker-scikit-learn-2020-10-23-18-02-22-277/debug-output/training_job_end.ts\nsagemaker-scikit-learn-2020-10-23-18-02-22-277/output/model.tar.gz\nsagemaker-scikit-learn-2020-10-23-18-02-22-277/source/sourcedir.tar.gz\nsagemaker/energy_consumption/forecasting-deepar-2020-10-15-20-57-08-845/output/model.tar.gz\nsagemaker/energy_consumption/test.json\nsagemaker/energy_consumption/train.json\nsagemaker/moon-data/train.csv\nsagemaker/plagiarism-detection/pytorch/output/sagemaker-pytorch-2020-10-19-20-24-35-558/debug-output/training_job_end.ts\nsagemaker/plagiarism-detection/pytorch/output/sagemaker-pytorch-2020-10-19-20-24-35-558/output/model.tar.gz\nsagemaker/plagiarism-detection/pytorch/output/sagemaker-pytorch-2020-10-19-20-47-37-049/debug-output/training_job_end.ts\nsagemaker/plagiarism-detection/pytorch/output/sagemaker-pytorch-2020-10-19-20-47-37-049/output/model.tar.gz\nsagemaker/plagiarism-detection/pytorch/output/sagemaker-pytorch-2020-10-19-21-08-53-391/debug-output/training_job_end.ts\nsagemaker/plagiarism-detection/pytorch/output/sagemaker-pytorch-2020-10-19-21-08-53-391/output/model.tar.gz\nsagemaker/plagiarism-detection/pytorch/output/sagemaker-pytorch-2020-10-23-18-19-37-689/debug-output/training_job_end.ts\nsagemaker/plagiarism-detection/pytorch/output/sagemaker-pytorch-2020-10-23-18-19-37-689/output/model.tar.gz\nsagemaker/plagiarism-detection/test.csv\nsagemaker/plagiarism-detection/train.csv\nsagemaker/sentiment_rnn/train.csv\nsagemaker/sentiment_rnn/word_dict.pkl\nsample-site-packages-2016-02-20.zip\nsentiment-web-app/output/xgboost-2020-10-07-14-49-14-975/output/model.tar.gz\nsentiment-web-app/test.csv\nsentiment-web-app/train.csv\nsentiment-web-app/validation.csv\nsklearn-build-lambda-master.zip\nsklearn.zip\nsklearn1.zip\nTest passed!\n" ] ], [ [ "---\n\n# Modeling\n\nNow that you've uploaded your training data, it's time to define and train a model!\n\nThe type of model you create is up to you. For a binary classification task, you can choose to go one of three routes:\n* Use a built-in classification algorithm, like LinearLearner.\n* Define a custom Scikit-learn classifier, a comparison of models can be found [here](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html).\n* Define a custom PyTorch neural network classifier. \n\nIt will be up to you to test out a variety of models and choose the best one. Your project will be graded on the accuracy of your final model. \n \n---\n\n## EXERCISE: Complete a training script \n\nTo implement a custom classifier, you'll need to complete a `train.py` script. 
You've been given the folders `source_sklearn` and `source_pytorch` which hold starting code for a custom Scikit-learn model and a PyTorch model, respectively. Each directory has a `train.py` training script. To complete this project **you only need to complete one of these scripts**; the script that is responsible for training your final model.\n\nA typical training script:\n* Loads training data from a specified directory\n* Parses any training & model hyperparameters (ex. nodes in a neural network, training epochs, etc.)\n* Instantiates a model of your design, with any specified hyperparams\n* Trains that model \n* Finally, saves the model so that it can be hosted/deployed, later\n\n### Defining and training a model\nMuch of the training script code is provided for you. Almost all of your work will be done in the `if __name__ == '__main__':` section. To complete a `train.py` file, you will:\n1. Import any extra libraries you need\n2. Define any additional model training hyperparameters using `parser.add_argument`\n2. Define a model in the `if __name__ == '__main__':` section\n3. Train the model in that same section\n\nBelow, you can use `!pygmentize` to display an existing `train.py` file. Read through the code; all of your tasks are marked with `TODO` comments. \n\n**Note: If you choose to create a custom PyTorch model, you will be responsible for defining the model in the `model.py` file,** and a `predict.py` file is provided. If you choose to use Scikit-learn, you only need a `train.py` file; you may import a classifier from the `sklearn` library.", "_____no_output_____" ] ], [ [ "# directory can be changed to: source_sklearn or source_pytorch\n!pygmentize source_sklearn/train.py", "\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36m__future__\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m print_function\r\n\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36margparse\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mos\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mpandas\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mpd\u001b[39;49;00m\r\n\r\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36msklearn\u001b[39;49;00m\u001b[04m\u001b[36m.\u001b[39;49;00m\u001b[04m\u001b[36mexternals\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m joblib\r\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36mskorch\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m NeuralNetRegressor\r\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36mmodel\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m BinaryClassifier\r\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36msklearn\u001b[39;49;00m\u001b[04m\u001b[36m.\u001b[39;49;00m\u001b[04m\u001b[36mmodel_selection\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m GridSearchCV\r\n\r\n\u001b[37m#from sklearn.svm import LinearSVC\u001b[39;49;00m\r\n\u001b[37m## TODO: Import any additional libraries you need to define a model\u001b[39;49;00m\r\n\r\n\u001b[37m#Begin Yanfei's first try#\u001b[39;49;00m\r\n\u001b[37m#from sklearn.svm import LinearSVC\u001b[39;49;00m\r\n\u001b[37m#End Yanfei's first try#\u001b[39;49;00m\r\n\r\n\u001b[37m#Begin Yanfei's second try#\u001b[39;49;00m\r\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36msklearn\u001b[39;49;00m\u001b[04m\u001b[36m.\u001b[39;49;00m\u001b[04m\u001b[36mlinear_model\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m LogisticRegression\r\n\u001b[37m#Begin Yanfei's second try#\u001b[39;49;00m\r\n\u001b[37m# Provided model load 
function\u001b[39;49;00m\r\n\u001b[34mdef\u001b[39;49;00m \u001b[32mmodel_fn\u001b[39;49;00m(model_dir):\r\n \u001b[33m\"\"\"Load model from the model_dir. This is the same model that is saved\u001b[39;49;00m\r\n\u001b[33m in the main if statement.\u001b[39;49;00m\r\n\u001b[33m \"\"\"\u001b[39;49;00m\r\n \u001b[36mprint\u001b[39;49;00m(\u001b[33m\"\u001b[39;49;00m\u001b[33mLoading model.\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\r\n \r\n \u001b[37m# load using joblib\u001b[39;49;00m\r\n model = joblib.load(os.path.join(model_dir, \u001b[33m\"\u001b[39;49;00m\u001b[33mmodel.joblib\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m))\r\n \u001b[36mprint\u001b[39;49;00m(\u001b[33m\"\u001b[39;49;00m\u001b[33mDone loading model.\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\r\n \r\n \u001b[34mreturn\u001b[39;49;00m model\r\n\r\n\r\n\u001b[37m## TODO: Complete the main code\u001b[39;49;00m\r\n\u001b[34mif\u001b[39;49;00m \u001b[31m__name__\u001b[39;49;00m == \u001b[33m'\u001b[39;49;00m\u001b[33m__main__\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m:\r\n \r\n \u001b[37m# All of the model parameters and training parameters are sent as arguments\u001b[39;49;00m\r\n \u001b[37m# when this script is executed, during a training job\u001b[39;49;00m\r\n \r\n \u001b[37m# Here we set up an argument parser to easily access the parameters\u001b[39;49;00m\r\n parser = argparse.ArgumentParser()\r\n\r\n \u001b[37m# SageMaker parameters, like the directories for training data and saving models; set automatically\u001b[39;49;00m\r\n \u001b[37m# Do not need to change\u001b[39;49;00m\r\n parser.add_argument(\u001b[33m'\u001b[39;49;00m\u001b[33m--output-data-dir\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, \u001b[36mtype\u001b[39;49;00m=\u001b[36mstr\u001b[39;49;00m, default=os.environ[\u001b[33m'\u001b[39;49;00m\u001b[33mSM_OUTPUT_DATA_DIR\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m])\r\n parser.add_argument(\u001b[33m'\u001b[39;49;00m\u001b[33m--model-dir\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, \u001b[36mtype\u001b[39;49;00m=\u001b[36mstr\u001b[39;49;00m, default=os.environ[\u001b[33m'\u001b[39;49;00m\u001b[33mSM_MODEL_DIR\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m])\r\n parser.add_argument(\u001b[33m'\u001b[39;49;00m\u001b[33m--data-dir\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, \u001b[36mtype\u001b[39;49;00m=\u001b[36mstr\u001b[39;49;00m, default=os.environ[\u001b[33m'\u001b[39;49;00m\u001b[33mSM_CHANNEL_TRAIN\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m])\r\n \r\n \u001b[37m## TODO: Add any additional arguments that you will need to pass into your model\u001b[39;49;00m\r\n parser.add_argument(\u001b[33m'\u001b[39;49;00m\u001b[33m--random_state\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, \u001b[36mtype\u001b[39;49;00m=\u001b[36mint\u001b[39;49;00m, default=\u001b[34m0\u001b[39;49;00m, metavar=\u001b[33m'\u001b[39;49;00m\u001b[33mN\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m,\r\n help=\u001b[33m'\u001b[39;49;00m\u001b[33mint, RandomState instance, default=0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n parser.add_argument(\u001b[33m'\u001b[39;49;00m\u001b[33m--solver\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, \u001b[36mtype\u001b[39;49;00m=\u001b[36mstr\u001b[39;49;00m, default=\u001b[33m'\u001b[39;49;00m\u001b[33mlbfgs\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, metavar=\u001b[33m'\u001b[39;49;00m\u001b[33mS\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m,\r\n help=\u001b[33m'\u001b[39;49;00m\u001b[33mPossible values: \u001b[39;49;00m\u001b[33m{\u001b[39;49;00m\u001b[33mnewton-cg, lbfgs, liblinear, sag, saga}, default is 
lbfgs\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n parser.add_argument(\u001b[33m'\u001b[39;49;00m\u001b[33m--multi_class\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, \u001b[36mtype\u001b[39;49;00m=\u001b[36mstr\u001b[39;49;00m, default=\u001b[33m'\u001b[39;49;00m\u001b[33movr\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, metavar=\u001b[33m'\u001b[39;49;00m\u001b[33mS\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m,\r\n help=\u001b[33m'\u001b[39;49;00m\u001b[33mPossible values: \u001b[39;49;00m\u001b[33m{\u001b[39;49;00m\u001b[33mauto, ovr, multinomial}, default is ovr\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n \u001b[37m# args holds all passed-in arguments\u001b[39;49;00m\r\n args = parser.parse_args()\r\n\r\n \u001b[37m# Read in csv training file\u001b[39;49;00m\r\n training_dir = args.data_dir\r\n train_data = pd.read_csv(os.path.join(training_dir, \u001b[33m\"\u001b[39;49;00m\u001b[33mtrain.csv\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m), header=\u001b[34mNone\u001b[39;49;00m, names=\u001b[34mNone\u001b[39;49;00m)\r\n\r\n \u001b[37m# Labels are in the first column\u001b[39;49;00m\r\n train_y = train_data.iloc[:,\u001b[34m0\u001b[39;49;00m]\r\n train_x = train_data.iloc[:,\u001b[34m1\u001b[39;49;00m:]\r\n \r\n \r\n \u001b[37m## --- Your code here --- ##\u001b[39;49;00m\r\n \r\n\r\n \u001b[37m## TODO: Define a model \u001b[39;49;00m\r\n \r\n \u001b[37m#Begin Yanfei's first try#\u001b[39;49;00m\r\n \u001b[37m#model=LinearSVC()\u001b[39;49;00m\r\n \u001b[37m#End Yanfei's first try#\u001b[39;49;00m\r\n net = NeuralNetRegressor(BinaryClassifier(args.input_features,args.hidden_dim,args.output_dim)\r\n , max_epochs=\u001b[34m100\u001b[39;49;00m\r\n , lr=\u001b[34m0.001\u001b[39;49;00m\r\n , verbose=\u001b[34m1\u001b[39;49;00m)\r\n \r\n \u001b[37m#Begin Yanfei's second try#\u001b[39;49;00m\r\n model = LogisticRegression(random_state=args.random_state, solver=args.solver, multi_class=args.multi_class)\r\n \u001b[37m#End Yanfei's first try#\u001b[39;49;00m\r\n params = {\r\n \u001b[33m'\u001b[39;49;00m\u001b[33mlr\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m: [\u001b[34m0.001\u001b[39;49;00m,\u001b[34m0.005\u001b[39;49;00m, \u001b[34m0.01\u001b[39;49;00m, \u001b[34m0.05\u001b[39;49;00m, \u001b[34m0.1\u001b[39;49;00m, \u001b[34m0.2\u001b[39;49;00m, \u001b[34m0.3\u001b[39;49;00m],\r\n \u001b[33m'\u001b[39;49;00m\u001b[33mmax_epochs\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m: \u001b[36mlist\u001b[39;49;00m(\u001b[36mrange\u001b[39;49;00m(\u001b[34m500\u001b[39;49;00m,\u001b[34m5500\u001b[39;49;00m, \u001b[34m500\u001b[39;49;00m))\r\n }\r\n\r\n \u001b[37m## TODO: Train the model\u001b[39;49;00m\r\n \u001b[37m# model.fit(train_x,train_y)\u001b[39;49;00m\r\n \u001b[37m#model = GridSearchCV(net, params, refit=False, scoring='r2', verbose=1, cv=10)\u001b[39;49;00m\r\n\r\n model.fit(train_x, train_y)\r\n \r\n \u001b[37m## --- End of your code --- ##\u001b[39;49;00m\r\n \r\n\r\n \u001b[37m# Save the trained model\u001b[39;49;00m\r\n joblib.dump(model, os.path.join(args.model_dir, \u001b[33m\"\u001b[39;49;00m\u001b[33mmodel.joblib\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m))\r\n" ] ], [ [ "### Provided code\n\nIf you read the code above, you can see that the starter code includes a few things:\n* Model loading (`model_fn`) and saving code\n* Getting SageMaker's default hyperparameters\n* Loading the training data by name, `train.csv` and extracting the features and labels, `train_x`, and `train_y`\n\nIf you'd like to read more about model saving with [joblib for 
sklearn](https://scikit-learn.org/stable/modules/model_persistence.html) or with [torch.save](https://pytorch.org/tutorials/beginner/saving_loading_models.html), click on the provided links.", "_____no_output_____" ], [ "---\n# Create an Estimator\n\nWhen a custom model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained; the `train.py` function you specified above. To run a custom training script in SageMaker, construct an estimator, and fill in the appropriate constructor arguments:\n\n* **entry_point**: The path to the Python script SageMaker runs for training and prediction.\n* **source_dir**: The path to the training script directory `source_sklearn` OR `source_pytorch`.\n* **entry_point**: The path to the Python script SageMaker runs for training and prediction.\n* **source_dir**: The path to the training script directory `train_sklearn` OR `train_pytorch`.\n* **entry_point**: The path to the Python script SageMaker runs for training.\n* **source_dir**: The path to the training script directory `train_sklearn` OR `train_pytorch`.\n* **role**: Role ARN, which was specified, above.\n* **train_instance_count**: The number of training instances (should be left at 1).\n* **train_instance_type**: The type of SageMaker instance for training. Note: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types.\n* **sagemaker_session**: The session used to train on Sagemaker.\n* **hyperparameters** (optional): A dictionary `{'name':value, ..}` passed to the train function as hyperparameters.\n\nNote: For a PyTorch model, there is another optional argument **framework_version**, which you can set to the latest version of PyTorch, `1.0`.\n\n## EXERCISE: Define a Scikit-learn or PyTorch estimator\n\nTo import your desired estimator, use one of the following lines:\n```\nfrom sagemaker.sklearn.estimator import SKLearn\n```\n```\nfrom sagemaker.pytorch import PyTorch\n```", "_____no_output_____" ] ], [ [ "# your import and estimator code, here\nfrom sagemaker.sklearn.estimator import SKLearn\n\n# specify an output path\nprefix = 'sagemaker/plagiarism-detection/output'\n\n# define location to store model artifacts\n\noutput_path='s3://{}/{}/'.format(bucket, prefix)\n\n# instantiate a pytorch estimator\nestimator = SKLearn(entry_point=\"train.py\",\n source_dir=\"source_sklearn\",\n role=role,\n train_instance_count=1,\n train_instance_type='ml.c4.xlarge'\n \n )", "This is not the latest supported version. If you would like to use version 0.23-1, please add framework_version=0.23-1 to your constructor.\n" ] ], [ [ "## EXERCISE: Train the estimator\n\nTrain your estimator on the training data stored in S3. This should create a training job that you can monitor in your SageMaker console.", "_____no_output_____" ] ], [ [ "%%time\n\n# Train your estimator on S3 training data\nestimator.fit({'train': input_data})\n", "'s3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.\n" ] ], [ [ "## EXERCISE: Deploy the trained model\n\nAfter training, deploy your model to create a `predictor`. If you're using a PyTorch model, you'll need to create a trained `PyTorchModel` that accepts the trained `<model>.model_data` as an input parameter and points to the provided `source_pytorch/predict.py` file as an entry point. 
\n\nTo deploy a trained model, you'll use `<model>.deploy`, which takes in two arguments:\n* **initial_instance_count**: The number of deployed instances (1).\n* **instance_type**: The type of SageMaker instance for deployment.\n\nNote: If you run into an instance error, it may be because you chose the wrong training or deployment instance_type. It may help to refer to your previous exercise code to see which types of instances we used.", "_____no_output_____" ] ], [ [ "%%time\n#from sagemaker.sklearn.model import SKLearnModel\n# uncomment, if needed\n# from sagemaker.pytorch import PyTorchModel\n\n#model=SKLearnModel(model_data=estimator.model_data,\n # role = role,\n # framework_version='0.23-1',\n # entry_point='train.py',\n # source_dir='source_sklearn')\n# deploy your model to create a predictor\npredictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')\n", "Parameter image will be renamed to image_uri in SageMaker Python SDK v2.\n" ] ], [ [ "---\n# Evaluating Your Model\n\nOnce your model is deployed, you can see how it performs when applied to our test data.\n\nThe provided cell below, reads in the test data, assuming it is stored locally in `data_dir` and named `test.csv`. The labels and features are extracted from the `.csv` file.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nimport os\n\n# read in test data, assuming it is stored locally\ntest_data = pd.read_csv(os.path.join(data_dir, \"test.csv\"), header=None, names=None)\n\n# labels are in the first column\ntest_y = test_data.iloc[:,0]\ntest_x = test_data.iloc[:,1:]", "_____no_output_____" ] ], [ [ "## EXERCISE: Determine the accuracy of your model\n\nUse your deployed `predictor` to generate predicted, class labels for the test data. Compare those to the *true* labels, `test_y`, and calculate the accuracy as a value between 0 and 1.0 that indicates the fraction of test data that your model classified correctly. 
You may use [sklearn.metrics](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics) for this calculation.\n\n**To pass this project, your model should get at least 90% test accuracy.**", "_____no_output_____" ] ], [ [ "import numpy as np\n# First: generate predicted, class labels\ntest_y_preds = np.squeeze(np.round(predictor.predict(test_x)))\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n# test that your model generates the correct number of labels\nassert len(test_y_preds)==len(test_y), 'Unexpected number of predictions.'\nprint('Test passed!')", "Test passed!\n" ], [ "# Second: calculate the test accuracy\naccuracy = None\n# calculate true positives, false positives, true negatives, false negatives\ntp = np.logical_and(test_y.values, test_y_preds).sum()\nfp = np.logical_and(1-test_y.values, test_y_preds).sum()\ntn = np.logical_and(1-test_y.values, 1-test_y_preds).sum()\nfn = np.logical_and(test_y.values, 1-test_y_preds).sum()\n \n # calculate binary classification metrics\nrecall = tp / (tp + fn)\nprecision = tp / (tp + fp)\naccuracy = (tp + tn) / (tp + fp + tn + fn)\nprint(f'tp:{tp} fp:{fp} tn:{tn} fn:{fn}')\nprint(accuracy)\n\n\n## print out the array of predicted and true labels, if you want\nprint('\\nPredicted class labels: ')\nprint(test_y_preds)\nprint('\\nTrue class labels: ')\nprint(test_y.values)", "tp:15 fp:1 tn:9 fn:0\n0.96\n\nPredicted class labels: \n[1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 1 1 1 1 0 0]\n\nTrue class labels: \n[1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 1 0 1 1 0 0]\n" ] ], [ [ "### Question 1: How many false positives and false negatives did your model produce, if any? And why do you think this is?", "_____no_output_____" ], [ "** Answer**: Case 1, when use selected_features ['c_11', 'lcs_word'], and Classfier \"LinearSVC\", we have one false negative. \n Case 2, when use selected_features ['c_11', 'lcs_word'], and Classfier \"LogisticRegression\", we have one false positive.\n Case 3, when use selected_features ['c_1','c_11', 'lcs_word'], and Classfier \"LogisticRegression\", we have one false positive.\n The accuary is always 0.96. Because of the testset is only 25, not big data and not large amount of features, the performance between different classfier is not observed yet. If the noise is higher, maybe the accuray will be less than 0.96. we can't Use SKLearn Classifier like \"LinearSVC\", \"LogisticRegression\" have better accuary compared to Pytorch self-defined binary classifier. \n", "_____no_output_____" ] ], [ [ "#Case 1 Check:\npredict_df = pd.concat([pd.DataFrame(test_x), pd.DataFrame(test_y_preds), pd.DataFrame(test_y)], axis=1)\npredict_df.columns=['c_11', 'lcs_word', 'predicted class','true class']\npredict_df", "_____no_output_____" ], [ "#Case 2 Check:\npredict_df = pd.concat([pd.DataFrame(test_x), pd.DataFrame(test_y_preds), pd.DataFrame(test_y)], axis=1)\npredict_df.columns=['c_11', 'lcs_word', 'predicted class','true class']\npredict_df", "_____no_output_____" ], [ "#Case 3 Check:\npredict_df = pd.concat([pd.DataFrame(test_x), pd.DataFrame(test_y_preds), pd.DataFrame(test_y)], axis=1)\npredict_df.columns=['c_1','c_11', 'lcs_word', 'predicted class','true class']\npredict_df", "_____no_output_____" ] ], [ [ "### Question 2: How did you decide on the type of model to use? ", "_____no_output_____" ], [ "** Answer**: If the problem need only binary output (is 0 or 1)/(is true or false), binary classifier is a good choice. 
When the noise in the training data is minimized, then we can also use simpler classifier for good performance. Otherwise, we need use more complex classifiers like SVM or Neural Network for massive dataset and huge size of features.\n\n", "_____no_output_____" ] ], [ [ "a=pd.DataFrame(np.array([[0.765306, 0.394366, 0.621711]]),columns=[1, 2, 3])\n\ntest_file.getvalue()", "_____no_output_____" ], [ "import boto3\nimport io\nfrom io import StringIO\ntest_file = io.StringIO()\ncheck_data=test_x.iloc[:2,1:] #data.iloc[:2,1:]\ncheck_data.to_csv(test_file,header = None, index = None)\nruntime = boto3.Session().client('sagemaker-runtime')\nresponse = runtime.invoke_endpoint(EndpointName = predictor.endpoint, # The name of the endpoint we created\n ContentType = 'text/csv', # The data format that is expected\n Body ='0.0,0.7914438502673797,0.8207547169811321\\n0.0,0.0,0\\n' ) #test_file.getvalue() )\na=response['Body'].read().decode('utf-8')\neval(a)[0]", "_____no_output_____" ] ], [ [ "----\n## EXERCISE: Clean up Resources\n\nAfter you're done evaluating your model, **delete your model endpoint**. You can do this with a call to `.delete_endpoint()`. You need to show, in this notebook, that the endpoint was deleted. Any other resources, you may delete from the AWS console, and you will find more instructions on cleaning up all your resources, below.", "_____no_output_____" ] ], [ [ "# uncomment and fill in the line below!\n# <name_of_deployed_predictor>.delete_endpoint()\ndef delete_endpoint(predictor):\n try:\n boto3.client('sagemaker').delete_endpoint(EndpointName=predictor.endpoint)\n print('Deleted {}'.format(predictor.endpoint))\n except:\n print('Already deleted: {}'.format(predictor.endpoint))\ndelete_endpoint(predictor)", "Deleted sagemaker-scikit-learn-2020-10-23-18-36-19-083\n" ] ], [ [ "### Deleting S3 bucket\n\nWhen you are *completely* done with training and testing models, you can also delete your entire S3 bucket. If you do this before you are done training your model, you'll have to recreate your S3 bucket and upload your training data again.", "_____no_output_____" ] ], [ [ "# deleting bucket, uncomment lines below\n\nbucket_to_delete = boto3.resource('s3').Bucket(bucket)\nbucket_to_delete.objects.all().delete()", "_____no_output_____" ] ], [ [ "### Deleting all your models and instances\n\nWhen you are _completely_ done with this project and do **not** ever want to revisit this notebook, you can choose to delete all of your SageMaker notebook instances and models by following [these instructions](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html). Before you delete this notebook instance, I recommend at least downloading a copy and saving it, locally.", "_____no_output_____" ], [ "---\n## Further Directions\n\nThere are many ways to improve or add on to this project to expand your learning or make this more of a unique project for you. A few ideas are listed below:\n* Train a classifier to predict the *category* (1-3) of plagiarism and not just plagiarized (1) or not (0).\n* Utilize a different and larger dataset to see if this model can be extended to other types of plagiarism.\n* Use language or character-level analysis to find different (and more) similarity features.\n* Write a complete pipeline function that accepts a source text and submitted text file, and classifies the submitted text as plagiarized or not.\n* Use API Gateway and a lambda function to deploy your model to a web application.\n\nThese are all just options for extending your work. 
If you've completed all the exercises in this notebook, you've completed a real-world application, and can proceed to submit your project. Great job!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7a5f7be4039212d7c0b7191a740c4aa2fbb82c4
19,283
ipynb
Jupyter Notebook
Python_Class/Class_7.ipynb
rickchen123/Portfolio
3b87b455be187e16b6b8f0151a6aae1466fd342b
[ "MIT" ]
null
null
null
Python_Class/Class_7.ipynb
rickchen123/Portfolio
3b87b455be187e16b6b8f0151a6aae1466fd342b
[ "MIT" ]
null
null
null
Python_Class/Class_7.ipynb
rickchen123/Portfolio
3b87b455be187e16b6b8f0151a6aae1466fd342b
[ "MIT" ]
null
null
null
27.626074
1,433
0.534616
[ [ [ "## Exercise", "_____no_output_____" ], [ "check if x >10, return T/F", "_____no_output_____" ] ], [ [ "x = 12\nif x>10:\n print('True')\nelse:\n print('F')", "True\n" ] ], [ [ "define a function called square that returns the squared value of input x", "_____no_output_____" ] ], [ [ "def square(x):\n return(x**2)", "_____no_output_____" ], [ "square(12)", "_____no_output_____" ] ], [ [ "## Library Importing", "_____no_output_____" ] ], [ [ "##First way\nimport numpy", "_____no_output_____" ], [ "numpy.absolute(-7)", "_____no_output_____" ], [ "numpy.sqrt(8)", "_____no_output_____" ], [ "##Second way", "_____no_output_____" ], [ "from numpy import sqrt", "_____no_output_____" ], [ "sqrt(8)", "_____no_output_____" ], [ "absolute(-7)", "_____no_output_____" ], [ "import numpy as np", "_____no_output_____" ], [ "np.sqrt(8)", "_____no_output_____" ], [ "import random", "_____no_output_____" ], [ "random.randint(a= 0 ,b= 10 )", "_____no_output_____" ], [ "from random import randint", "_____no_output_____" ], [ "randint(0,10)", "_____no_output_____" ] ], [ [ "## Turtle", "_____no_output_____" ] ], [ [ "import turtle as t\nimport numpy as np\nimport random", "_____no_output_____" ], [ "##set up screen\nscreen = t.Screen()\n## set up background color\nscreen.bgcolor('lightgreen')\n## set screen title\nscreen.title(\"Rick's Program\")\n\n##set up a turtle\nrick = t.Turtle()\n\n## move forward\nrick.forward(100)\n# rick.fd(100)\n\n## move backward\n# rick.backward(100)\n# rick.bk(100)\n# rick.back(100)\n\n## move to the left\nrick.left(90)\nrick.forward(100)\n\n## move to the right\nrick.right(90)\nrick.forward(100)\n\n\nscreen.exitonclick()", "_____no_output_____" ] ], [ [ "Question: how do we draw a square?", "_____no_output_____" ] ], [ [ "for i in range(1,5):\n t.fd(90)\n t.left(90)\nt.exitonclick()", "_____no_output_____" ] ], [ [ "Question: how do we draw a circle?", "_____no_output_____" ] ], [ [ "t.circle(100)\nt.exitonclick()", "_____no_output_____" ] ], [ [ "Question: How to we draw this graph using turtle?", "_____no_output_____" ], [ "<img src=\"../tiltedsquares.png\">", "_____no_output_____" ] ], [ [ "##First Square\nt.left(20)\nfor i in range(1,5):\n t.fd(90)\n t.left(90)\n##Second Square\nt.left(20)\nfor i in range(1,5):\n t.fd(90)\n t.left(90)\n##Third Square\nt.left(20)\nfor i in range(1,5):\n t.fd(90)\n t.left(90)\n \nt.exitonclick()", "_____no_output_____" ], [ "for i in range(1,4):\n t.left(20)\n for i in range(1,5):\n t.fd(90)\n t.left(90)\nt.exitonclick()", "_____no_output_____" ], [ "## Adding motions into the screen\n##set up screen\nscreen = t.Screen()\n## set up background color\nscreen.bgcolor('lightgreen')\n## set screen title\nscreen.title(\"Rick's Program\")\n#set up a turtle\nrick = t.Turtle()\n#Change turtle shape\nrick.shape('turtle')\n## Change pen size\nrick.pensize(5)\nrick.forward(100)\n## Change pen color\nrick.pencolor('blue')\nrick.left(90)\nrick.forward(100)\n\n## change turtle color\nrick.color('red')\nrick.left(90)\nrick.forward(100)\n\n## Multiple Turtle\nrick2 = t.Turtle()\n# ##penup\n# rick2.penup()\n# rick2.bk(100)\n# ##pendown\n# rick2.pendown()\n# rick2.left(90)\n# rick2.fd(100)\n\n## Resize a turtle\n# rick.shapesize(2,2,0) #width, length, outline\n# rick.forward(100)\n# rick.shapesize(0.5,0.5,0)\n# rick.forward(100)\n\nrick2.penup()\nrick2.setposition(-100,-100)\n\n## hide turtle\nrick2.pendown()\nrick2.hideturtle()\nrick2.forward(100)\n\n## show turtle\nrick2.showturtle()\nrick2.left(90)\nrick2.forward(100)\n\n\nscreen.exitonclick()", 
"_____no_output_____" ] ], [ [ "### Game", "_____no_output_____" ] ], [ [ "screen = t.Screen()\n## set up background color\nscreen.bgcolor('lightgreen')\n## set screen title\nscreen.title(\"Rick's Program\")\n\n##Draw a Border\nborder = t.Turtle()\nborder.color('white')\nborder.penup()\nborder.setposition(-300,-300)\nborder.pendown()\nborder.pensize(3)\nfor side in range(4):\n border.fd(600)\n border.lt(90)\nborder.hideturtle()\n\n\n#set up player\nplayer = t.Turtle()\nplayer.color('blue')\nplayer.shape('triangle')\nplayer.penup()\n\n## set up Goal\ngoal = t.Turtle()\ngoal.color('red')\ngoal.shape('circle')\ngoal.penup()\ngoal.setpos(-100,100)\n\n\n\nspeed = 1\n## Define Function\ndef turnleft():\n player.lt(30)\n\ndef turnright():\n player.rt(30)\ndef speed5():\n global speed\n speed = 5\ndef speed1():\n global speed\n speed = 1\n\n##keyboard Binding\nt.listen()\nt.onkey(turnleft, 'Left')\nt.onkey(turnright, 'Right')\nt.onkeypress(speed5, 'Up')\nt.onkeyrelease(speed1, 'Up')\n\n\nwhile True:\n player.forward(speed)\n d = np.sqrt((player.xcor()-goal.xcor())**2\n +(player.ycor()-goal.ycor())**2)\n if d<20:\n goal.setpos(random.randint(-300,300)\n ,random.randint(-300,300))\n\n\n\n\n\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "raw", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "raw" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e7a600673292cf1ec0042e48a49bcabab8abb7f6
381,868
ipynb
Jupyter Notebook
doc/integrations/pytorch/Cortx-PyTroch Integration - 2, Loading Data from Cotrx-S3 and Train the model.ipynb
sarthakarora1208/cortx
bcd87c79b8743167b14af27b1bf8ff5cc48e99a3
[ "Apache-2.0" ]
552
2020-09-24T18:16:09.000Z
2022-03-25T06:21:55.000Z
doc/integrations/pytorch/Cortx-PyTroch Integration - 2, Loading Data from Cotrx-S3 and Train the model.ipynb
sarthakarora1208/cortx
bcd87c79b8743167b14af27b1bf8ff5cc48e99a3
[ "Apache-2.0" ]
722
2020-09-24T19:48:44.000Z
2022-03-31T17:42:41.000Z
doc/integrations/pytorch/Cortx-PyTroch Integration - 2, Loading Data from Cotrx-S3 and Train the model.ipynb
sarthakarora1208/cortx
bcd87c79b8743167b14af27b1bf8ff5cc48e99a3
[ "Apache-2.0" ]
442
2020-09-24T14:24:21.000Z
2022-03-25T10:40:16.000Z
289.952923
124,388
0.880239
[ [ [ "import boto3\nimport cv2\nimport io\nimport os\nimport s3fs\nimport torch\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nimport seaborn as sns\nimport torch.nn as nn\n\nfrom collections import OrderedDict\nfrom PIL import Image\nfrom torch import optim\nfrom torch.utils.data import Dataset, DataLoader \nfrom torchvision import datasets, models, transforms", "_____no_output_____" ] ], [ [ "#### s3 configuration", "_____no_output_____" ] ], [ [ "\ns3 = boto3.resource(\"s3\",\n endpoint_url = \"http://192.168.0.29\",\n aws_access_key_id=\"AKIAPo19vPR_TJaeVgleCiOSUw\",\n aws_secret_access_key=\"7cSWM1KCXvRpK4ICeDEAfuicEm+QQeuhqOi7cejZ\",\n region_name = 'eu-central-1',\n )\n\nkwargs = {'endpoint_url':\"http://192.168.0.29\",\n }\nclient = s3fs.S3FileSystem(key=\"AKIAPo19vPR_TJaeVgleCiOSUw\", \n secret=\"7cSWM1KCXvRpK4ICeDEAfuicEm+QQeuhqOi7cejZ\",\n use_ssl=False,\n \n client_kwargs=kwargs)\n", "_____no_output_____" ], [ "my_bucket = s3.Bucket(\"sample-dataset\")", "_____no_output_____" ], [ "map_labels = {\"Apple___Apple_scab\":0, \"Apple___Black_rot\":1, \n \"Apple___Cedar_apple_rust\":2, \"Apple___healthy\":3, \"Background_without_leaves\":4}", "_____no_output_____" ] ], [ [ "#### Create Custom Dataset to Load data\n- Pytorch do not have any existing Dataset Loader classes that fetch data from s3. Therefore we need to create a custom Dataset Loader that will fetch the data from Cortx-s3.", "_____no_output_____" ] ], [ [ "\nclass ImageDataset(Dataset):\n \n def __init__(self, path=\"s3://sample-dataset/sample_data/\", transform=None):\n self.path = path\n self.classes = [folder[\"name\"] for folder in client.listdir(path)][2:]\n self.files = []\n for directory in self.classes:\n self.files += [file for file in client.ls(directory)][1:]\n\n self.transform = transform\n\n def __len__(self):\n return len(self.files)\n \n\n def __getitem__(self, idx):\n img_name = self.files[idx]\n label = img_name.split(\"/\")[-2]\n label = map_labels[label]\n \n key = img_name.split(\"/\")\n key = \"/\".join(key[1:])\n \n img_name = my_bucket.Object(key).get().get('Body').read()\n \n image = cv2.imdecode(np.asarray(bytearray(img_name)), cv2.COLOR_BGR2RGB)\n image = Image.fromarray(image)\n if self.transform:\n image = self.transform(image)\n label = torch.tensor(label).long()\n return image, label", "_____no_output_____" ], [ "data_dir = client.glob(\"s3://sample-dataset/sample_data\")[0]\n\ntrain_transforms = transforms.Compose([transforms.RandomRotation(30),\n transforms.RandomResizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n transforms.Normalize([0.485, 0.456, 0.406],\n [0.229, 0.224, 0.225])])\n\nval_transforms = transforms.Compose([transforms.Resize(255),\n transforms.CenterCrop(224),\n transforms.ToTensor(),\n transforms.Normalize([0.485, 0.456, 0.406],\n [0.229, 0.224, 0.225])])\n\ntrain_dataset = ImageDataset(path=\"s3://sample-dataset/sample_data/train\", transform=train_transforms)\n\n \nvalid_dataset = ImageDataset(path=\"s3://sample-dataset/sample_data/val\",\n transform=val_transforms)\n \ntrain_loader = DataLoader(train_dataset, \n batch_size=16, \n shuffle=True, \n num_workers=0)\nval_loader = DataLoader(valid_dataset, \n batch_size=16, \n shuffle=False, \n num_workers=0)", "_____no_output_____" ], [ "dataiter = iter(train_loader)\nimages, labels = dataiter.next()\nprint(type(images))\nprint(images.shape)\nprint(labels.shape)", "<class 'torch.Tensor'>\ntorch.Size([16, 3, 224, 224])\ntorch.Size([16])\n" ], [ "def imshow(image, 
ax=None, normalize=True):\n if ax is None:\n fig, ax = plt.subplots()\n image = image.numpy().transpose((1, 2, 0))\n\n if normalize:\n # if the data loader has transform.normalize\n # undo preprocessing\n mean = np.array([0.485, 0.456, 0.406])\n std = np.array([0.229, 0.224, 0.225])\n image = std * image + mean\n image = np.clip(image, 0, 1)\n\n ax.imshow(image)\n ax.spines['top'].set_visible(False)\n ax.spines['right'].set_visible(False)\n ax.spines['left'].set_visible(False)\n ax.spines['bottom'].set_visible(False)\n ax.tick_params(axis='both', length=0)\n ax.set_xticklabels('')\n ax.set_yticklabels('')\n \n return ax", "_____no_output_____" ], [ "imshow(images[1]);\nimshow(images[8]);\nimshow(images[12]);", "_____no_output_____" ], [ "model = models.densenet201(pretrained=True)", "_____no_output_____" ], [ "for param in model.parameters():\n param.required_grad = False", "_____no_output_____" ], [ "# change the classifier\nfrom collections import OrderedDict\n\nclassifier = nn.Sequential(OrderedDict([\n ('fc1', nn.Linear(1920, 500)),\n ('relu1', nn.ReLU()),\n ('dropout1', nn.Dropout(p=0.2)),\n ('fc2', nn.Linear(500, 256)),\n ('relu2', nn.ReLU()),\n ('dropout2', nn.Dropout(p=0.2)),\n ('fc3', nn.Linear(256, 5)),\n ('output', nn.LogSoftmax(dim=1))\n \n]))\nmodel.classifier = classifier", "_____no_output_____" ], [ "# Train either on GPU or CPU\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")", "_____no_output_____" ], [ "\ncriterion = nn.NLLLoss()\n\noptimizer = optim.SGD(model.classifier.parameters(), lr = 0.01, momentum=0.9)\n\nmodel.to(device)", "_____no_output_____" ], [ "epochs = 1\n\nfor epoch in range(epochs):\n running_loss = 0\n for images, labels in train_loader:\n images, labels = images.to(device), labels.to(device)\n \n optimizer.zero_grad()\n \n outputs = model(images)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n \n else:\n validation_loss = 0\n accuracy = 0\n \n with torch.no_grad():\n model.eval()\n for images, labels in val_loader:\n images, labels = images.to(device), labels.to(device)\n outputs = model(images)\n loss = criterion(outputs, labels)\n validation_loss += loss.item()\n \n ps = torch.exp(outputs)\n top_p, top_class = ps.topk(1, dim=1)\n equals = top_class == labels.view(*top_class.shape)\n accuracy += torch.mean(equals.type(torch.FloatTensor))\n \n model.train()\n\n print(\"Epoch: {}/{}.. \".format(epoch+1, epochs),\n \"Training Loss: {:.3f}.. \".format(running_loss/len(train_loader)),\n \"Valid Loss: {:.3f}.. \".format(validation_loss/len(val_loader)),\n \"Valid Accuracy: {:.3f}\".format(accuracy/len(val_loader)))", "Epoch: 1/1.. Training Loss: 0.717.. Valid Loss: 0.176.. Valid Accuracy: 0.946\n" ], [ "#map classes to indexes\nmodel.class_to_idx = map_labels", "_____no_output_____" ], [ "# save model\n\ns3_client = boto3.client(\"s3\",\n endpoint_url = \"http://192.168.0.29\",\n aws_access_key_id=\"AKIAPo19vPR_TJaeVgleCiOSUw\",\n aws_secret_access_key=\"7cSWM1KCXvRpK4ICeDEAfuicEm+QQeuhqOi7cejZ\",\n )\n \nbuffer = io.BytesIO()\ntorch.save({\"state_dict\":model.state_dict(),\n \"class_to_idx\":model.class_to_idx}, buffer)\ns3_client.put_object(Bucket=\"saved-models\", Key='classifier.pth', Body=buffer.getvalue())", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a60ba88ee256c7a533b58917438db9bae4d872
8,005
ipynb
Jupyter Notebook
FlappyBird.ipynb
Jack-TBarnett/github-slideshow
5869e8bb464db54b4eabf8365a863d0f856fe8bc
[ "MIT" ]
null
null
null
FlappyBird.ipynb
Jack-TBarnett/github-slideshow
5869e8bb464db54b4eabf8365a863d0f856fe8bc
[ "MIT" ]
3
2020-08-15T23:02:54.000Z
2020-08-17T19:55:09.000Z
FlappyBird.ipynb
jzzy-jeff/github-slideshow
ff9cb9051c5a3ee9c2681132dd06a6ba59dacda6
[ "MIT" ]
null
null
null
33.919492
345
0.467958
[ [ [ "<a href=\"https://colab.research.google.com/github/jzzy-jeff/github-slideshow/blob/master/FlappyBird.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "pip install pygame", "Requirement already satisfied: pygame in /usr/local/lib/python3.6/dist-packages (2.0.1)\n" ], [ "pip install neat-python\r\n", "Requirement already satisfied: neat-python in /usr/local/lib/python3.6/dist-packages (0.92)\n" ], [ "import pygame\r\nimport neat\r\nimport time\r\nimport os\r\nimport random", "_____no_output_____" ], [ "#Window and object images\r\nWIN_WIDTH = 600\r\nWIN_HEIGHT = 800\r\n\r\nBIRD_IMGS = [pygame.transorm.scale2x(pygame.image.load(os.path.join(\"imgs\", \"bird1.png\"))), [pygame.transorm.scale2x(pygame.image.load(os.path.join(\"imgs\", \"bird2.png\"))), [pygame.transorm.scale2x(pygame.image.load(os.path.join(\"imgs\", \"bird3.png\")))]\r\nPIPE_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(\"imgs\", \"pipe.png\")))\r\nBASE_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(\"imgs\", \"base.png\")))\r\nBG_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(\"imgs\", \"bg.png\")))\r\n\r\n\r\n ", "_____no_output_____" ], [ "#Object Classes\r\nclass Bird:\r\n IMGS = BIRD_IMGS\r\n MAX_ROTATION = 25\r\n ROT_VEL = 20\r\n ANIMATION_TIME = 5\r\n\r\n def __init__(self, x, y):\r\n self.x = x\r\n self.y = y\r\n self.tilt = 0\r\n self.tick_count = 0\r\n self.vel = 0\r\n self,height = self.y\r\n self.img_count = 0\r\n self.img = self.IMGS[0]\r\n\r\n def jump(self):\r\n self.vel = -10.5\r\n self.tick_count = 0\r\n self.height = self.y\r\n\r\n def move(self):\r\n self.tick_count += 1\r\n \r\n d = self.vel*self.tick_count + 1.5*self.tick_count**2\r\n \r\n if d >= 16:\r\n d = 16\r\n\r\n if d < 0:\r\n d-=2.4\r\n\r\n self.y = self.y + de\r\n\r\n if d < 0 or self.y < self.height + 50:\r\n if self.tilt < self.MAX_ROTATION:\r\n self.tilt = MAX_ROTATION\r\n else:\r\n if self.tilt> -90:\r\n self.tilt -= self.ROT_VEL\r\n\r\n def draw(self, win):\r\n self.img_count +=1\r\n\r\n if self.img_count < self.ANIMATION_TIME:\r\n self.img = self.IMGS[0]\r\n elif self.img_count < self.ANIMATION_TIME*2:\r\n self.img = self.IMGS[1]\r\n elif self.img_count < self.ANIMATION_TIME*3:\r\n self.img = self.IMGS[2]\r\n elif self.img_count < self.ANIMATION_TIME*4:\r\n self.img = self.IMGS[1]\r\n elif self.img_count < self.ANIMATION_TIME*4+1:\r\n self.img = self.IMGS[0]\r\n self.img_count = 0\r\n\r\n if self.tilt <= -80:\r\n self.img = self.IMGS[1]\r\n self.img_count = self.ANIMATION_TIME*2\r\n\r\n rotated_image = pygame.transorm.rotate(self.img, self.tilt)\r\n new_rect = rotated_image.get_rect(center=self.img.get_rect(topleft = (self.x, self.y)).center)\r\n win.blit(rotated_image, new_rect.topleft)\r\n\r\n def get_mask(self):\r\n return pygame.mask.from_surface(self.img)\r\n \r\ndef draw_window(win, bird):\r\n win.blit(BG_IMG, (0,0))\r\n bird.draw(win)\r\n pygame.display.update()\r\n\r\ndef main():\r\n bird = Bird(200,200)\r\n win = pygame.display.set_mode((WIN_WIDTH, WIN_HEIGHT))\r\n run = True\r\n while run:\r\n for event in pygame.event.get():\r\n if event.type == pygame.QUIT:\r\n run == False\r\n\r\n draw_window(win, bird)\r\n\r\n pygame.quit()\r\n quit()\r\n\r\n main()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7a615e0b152b2d6fde0a63275fc033ff179358c
189,774
ipynb
Jupyter Notebook
III.NLP/III.NLP-with-NLTK-Short-Intro.ipynb
mgrani/LODA-lecture-notes-on-data-analysis
cb7bab9951288443349f388d3cad10a77efb2f03
[ "CC-BY-3.0" ]
36
2015-01-05T14:05:03.000Z
2021-07-16T16:59:07.000Z
III.NLP/III.NLP-with-NLTK-Short-Intro.ipynb
Ngxba/LODA-lecture-notes-on-data-analysis
cb7bab9951288443349f388d3cad10a77efb2f03
[ "CC-BY-3.0" ]
null
null
null
III.NLP/III.NLP-with-NLTK-Short-Intro.ipynb
Ngxba/LODA-lecture-notes-on-data-analysis
cb7bab9951288443349f388d3cad10a77efb2f03
[ "CC-BY-3.0" ]
18
2015-03-25T04:30:45.000Z
2022-02-03T16:18:43.000Z
45.055556
20,051
0.646917
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7a6280f1a94e92058c6e37d66d13d02869869ff
1,283
ipynb
Jupyter Notebook
00_core.ipynb
shgidi/nb_dev_test
6bc500d01892e2ac1b545ef6e062dccdc91c4560
[ "Apache-2.0" ]
null
null
null
00_core.ipynb
shgidi/nb_dev_test
6bc500d01892e2ac1b545ef6e062dccdc91c4560
[ "Apache-2.0" ]
2
2021-05-20T11:40:30.000Z
2022-02-26T06:08:00.000Z
00_core.ipynb
shgidi/nb_dev_test
6bc500d01892e2ac1b545ef6e062dccdc91c4560
[ "Apache-2.0" ]
null
null
null
17.106667
46
0.460639
[ [ [ "# default_exp core", "_____no_output_____" ], [ "from nbdev.showdoc import *", "_____no_output_____" ], [ "# export\ndef power(a,b):\n # a^b\n if b==0:\n return a\n elif b%2==1:\n return power(a,b-1)*a\n else:\n return power(a,b/2)*power(a,b/2)", "_____no_output_____" ], [ "from nbdev.export import *\nnotebook2script()", "Converted 00_core.ipynb.\nConverted 99_index.ipynb.\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e7a639479868b304ae76d3fc206be1c9018d3f51
7,646
ipynb
Jupyter Notebook
guide/predict_api.ipynb
adujardin/openpifpaf
4fa79162f5529f5b0de72e2312aab54d410bee3f
[ "CC-BY-2.0" ]
null
null
null
guide/predict_api.ipynb
adujardin/openpifpaf
4fa79162f5529f5b0de72e2312aab54d410bee3f
[ "CC-BY-2.0" ]
null
null
null
guide/predict_api.ipynb
adujardin/openpifpaf
4fa79162f5529f5b0de72e2312aab54d410bee3f
[ "CC-BY-2.0" ]
null
null
null
28.423792
365
0.606722
[ [ [ "# Prediction API\n\nProgrammatically use OpenPifPaf to run multi-person pose estimation on an image.\nThe API is for more advanced use cases. Please read {doc}`predict_cli` as well.", "_____no_output_____" ] ], [ [ "import io\nimport numpy as np\nimport openpifpaf\nimport PIL\nimport requests\nimport torch\n\n%matplotlib inline\nopenpifpaf.show.Canvas.show = True\n\ndevice = torch.device('cpu')\n# device = torch.device('cuda') # if cuda is available\n\nprint(openpifpaf.__version__)\nprint(torch.__version__)", "_____no_output_____" ] ], [ [ "## Load an Example Image\n\nImage credit: \"[Learning to surf](https://www.flickr.com/photos/fotologic/6038911779/in/photostream/)\" by fotologic which is licensed under [CC-BY-2.0].\n\n[CC-BY-2.0]: https://creativecommons.org/licenses/by/2.0/", "_____no_output_____" ] ], [ [ "image_response = requests.get('https://raw.githubusercontent.com/vita-epfl/openpifpaf/master/docs/coco/000000081988.jpg')\npil_im = PIL.Image.open(io.BytesIO(image_response.content)).convert('RGB')\nim = np.asarray(pil_im)\n\nwith openpifpaf.show.image_canvas(im) as ax:\n pass", "_____no_output_____" ] ], [ [ "## Load a Trained Neural Network", "_____no_output_____" ] ], [ [ "net_cpu, _ = openpifpaf.network.Factory(checkpoint='shufflenetv2k16', download_progress=False).factory()\nnet = net_cpu.to(device)\n\nopenpifpaf.decoder.utils.CifSeeds.threshold = 0.5\nopenpifpaf.decoder.utils.nms.Keypoints.keypoint_threshold = 0.2\nopenpifpaf.decoder.utils.nms.Keypoints.instance_threshold = 0.2\nprocessor = openpifpaf.decoder.factory([hn.meta for hn in net_cpu.head_nets])", "_____no_output_____" ] ], [ [ "## Preprocessing, Dataset\n\nSpecify the image preprocossing. Beyond the default transforms, we also use `CenterPadTight(16)` which adds padding to the image such that both the height and width are multiples of 16 plus 1. With this padding, the feature map covers the entire image. Without it, there would be a gap on the right and bottom of the image that the feature map does not cover.", "_____no_output_____" ] ], [ [ "preprocess = openpifpaf.transforms.Compose([\n openpifpaf.transforms.NormalizeAnnotations(),\n openpifpaf.transforms.CenterPadTight(16),\n openpifpaf.transforms.EVAL_TRANSFORM,\n])\ndata = openpifpaf.datasets.PilImageList([pil_im], preprocess=preprocess)", "_____no_output_____" ] ], [ [ "## Dataloader, Visualizer", "_____no_output_____" ] ], [ [ "loader = torch.utils.data.DataLoader(\n data, batch_size=1, pin_memory=True, \n collate_fn=openpifpaf.datasets.collate_images_anns_meta)\n\nannotation_painter = openpifpaf.show.AnnotationPainter()", "_____no_output_____" ] ], [ [ "## Prediction", "_____no_output_____" ] ], [ [ "for images_batch, _, __ in loader:\n predictions = processor.batch(net, images_batch, device=device)[0]\n with openpifpaf.show.image_canvas(im) as ax:\n annotation_painter.annotations(ax, predictions)", "_____no_output_____" ] ], [ [ "Each prediction in the `predictions` list above is of type `Annotation`. You can access the joint coordinates in the `data` attribute. It is a numpy array that contains the $x$ and $y$ coordinates and the confidence for every joint:", "_____no_output_____" ] ], [ [ "predictions[0].data", "_____no_output_____" ] ], [ [ "## Fields\n\nBelow are visualizations of the fields.\nWhen using the API here, the visualization types are individually enabled.\nThen, the index for every field to visualize must be specified. 
In the example below, the fifth CIF (left shoulder) and the fifth CAF (left shoulder to left hip) are activated.\n\nThese plots are also accessible from the command line: use `--debug-indices cif:5 caf:5` to select which joints and connections to visualize.", "_____no_output_____" ] ], [ [ "openpifpaf.visualizer.Base.set_all_indices(['cif,caf:5:confidence'])\n\nfor images_batch, _, __ in loader:\n predictions = processor.batch(net, images_batch, device=device)[0]", "_____no_output_____" ], [ "openpifpaf.visualizer.Base.set_all_indices(['cif,caf:5:regression'])\n\nfor images_batch, _, __ in loader:\n predictions = processor.batch(net, images_batch, device=device)[0]", "_____no_output_____" ] ], [ [ "From the CIF field, a high resolution accumulation (in the code it's called `CifHr`) is generated.\nThis is also the basis for the seeds. Both are shown below.", "_____no_output_____" ] ], [ [ "openpifpaf.visualizer.Base.set_all_indices(['cif:5:hr', 'seeds'])\n\nfor images_batch, _, __ in loader:\n predictions = processor.batch(net, images_batch, device=device)[0]", "_____no_output_____" ] ], [ [ "Starting from a seed, the poses are constructed. At every joint position, an occupancy map marks whether a previous pose was already constructed here. This reduces the number of poses that are constructed from multiple seeds for the same person. The final occupancy map is below:", "_____no_output_____" ] ], [ [ "openpifpaf.visualizer.Base.set_all_indices(['occupancy:5'])\n\nfor images_batch, _, __ in loader:\n predictions = processor.batch(net, images_batch, device=device)[0]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7a65592c051394ae1543c745f829b94da78c538
5,979
ipynb
Jupyter Notebook
0004_ubuntu_setup/0005_ubuntu_setup_ch.ipynb
junjiecai/jupyter_demos
8aa8a0320545c0ea09e05e94aea82bc8aa537750
[ "MIT" ]
3
2019-09-16T10:44:39.000Z
2021-09-04T18:55:52.000Z
0004_ubuntu_setup/0005_ubuntu_setup_ch.ipynb
junjiecai/jupyter_demos
8aa8a0320545c0ea09e05e94aea82bc8aa537750
[ "MIT" ]
null
null
null
0004_ubuntu_setup/0005_ubuntu_setup_ch.ipynb
junjiecai/jupyter_demos
8aa8a0320545c0ea09e05e94aea82bc8aa537750
[ "MIT" ]
2
2020-10-24T16:19:29.000Z
2021-09-04T18:55:57.000Z
26.811659
216
0.545911
[ [ [ "# Ubuntu 16装机过程\n## 解决网络问题\n### DNS Server\n默认的DNS Server可能是无效的。 如果不能上网, 试试在network connection的IPV4 setting里面添加有效的DNS Server,如114.114.114.114, 并且把Method设置成Automatic(DHCP) address only\n\n![dns server](dns.png)\n\n配置后需要运行'''sudo systemctl restart network-manager.service'''让设置生效\n\n### 翻墙\nshadowsocks安装和配置看[这里](https://github.com/shadowsocks/shadowsocks/wiki/Shadowsocks-%E4%BD%BF%E7%94%A8%E8%AF%B4%E6%98%8E\n)\n\n注意安装shadowsocks的时候,要用sudo安装\n```\nsudo to install shadowsocks\n```\n\n为了开机的时候能够自动启动ss,可以将\n\n```sslocal -c /etc/shadowsocks.json -d start```\n\n这一命令添加至\n\n```/etc/rc.local```\n\n这样每次开机就能自动启动sslocal。\n\n启动了shadowsocks, 还得配置firefox, [见这里](https://aiguge.xyz/firefox-shadowsocks-foxyproxy-standard/)\n\ngfwlist可以使用```https://aiguge.xyz/firefox-shadowsocks-foxyproxy-standard/```\n\n\n## 升级ubuntu到较新的状态\n```sudo apt-get dist-upgrade```\n\n## python必备库安装\nubuntu 16自带了python3.5, 可以不需要自己安装\n\n### 安装pip\n```apt-get install python-pip```\n\n### virtualenv\n由于Ubuntu默认使用的python是2.7版本的,为了避免不必要的麻烦,可以使用virtualenv创造一个python3.5的环境。先安装virtualenv\n```sudo apt-get install virtualenv```\n\n然后创建虚拟环境,我将虚拟环境的文件夹放在了home下, 如果想放在别的地方请自己调整命令的参数。\n```\nvirtualenv -p /usr/bin/python3.5 ~/venv\n```\n\n然后激活创建的virtualenv,新建一个teminal后,输入。\n```\nsource ~/venv/bin/activate\n```\n可以看到terminal前面出现了一个(venv)字样,提示已经进入了虚拟环境。\n\n这个命令太长,每次输入很麻烦。我们可以为这个命令创建一个简化的命令'venv'。在```~/.bash_aliases```文件中输入\n```\nalias venv='source ~/venv/bin/activate'\n```\n运行\n```\nsource ~/bashrc\n```\n让刚才的设置生效。之后进入terminal后只要输入```venv```就可以进入我们的虚拟环境。\n\n**之后安装python库的时候都是先进入```venv```虚拟环境后安装的,后面不在重复说明。\n\n\n\n\n\n### cx_Oracle\t\n```sudo apt-get install unzip python3.5-dev libaio-dev```\n[具体安装方式见](https://gist.github.com/kimus/10012910)\n\ninstantclient-basic-linux.x64-12.1.0.2.0.zip[下载](http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html)\n\ninstantclient-sdk-linux.x64-12.1.0.2.0.zip[下载](http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html)\n\n### psycopg2\n```sudo apt-get install python-psycopg2```\n\n```pip install psycopg2```\n\n\n## Postgresql-9.5安装\n[见这里](https://www.postgresql.org/download/linux/ubuntu/)\n\n### 添加数据库用户\npostgreSQL会自动给ubuntu和数据库本身创建一个名为postgres的用户名。\n\n首先修改postgres用户(ubuntu)的密码\n```\nsudo passwd postgres\n```\n\n切换到postgres用户(ubuntu),登陆数据库\n```\nsudo su postgres\npsql\n```\n然后建立一些普通的数据库用户,习惯上可以和ubunut的用户名保持一直。 注意这个账号如果要使用pl/python, 就必须得提供超级用户权限。\n\n```\ncreate user datascience superuser password 'xxxxxxx';\n```\n\n### 建立数据库\n可以根据需要为该账号建立数据库。\n```\ncreate database etl;\nalter database etl owner to datascience;\n\\q\n```\n\n之后如果要在非postgres用户状态下进入数据库\n\n```\npsql -U [dbuser] -d [db] -h 127.0.0.1 -p 5432\n```\n\n### plpython的安装\n```\nsudo apt-get install postgresql-plpython3\n```\n登陆想使用plpython3数据库后,运行\n```\nCREATE EXTENSION plpython3u;\n```\n\n如果要让plpython能够使用自己python库, 需要添加PYTHONPATH信息至etc/postgresql/9.5/main/environment, 如\n```\nPYTHONPATH='/home/cjj/jfds\n```\n\n## Wine\n如果要在ubuntu运行一些window程序的话,需要这个\n```\nsudo apt-get update\nsudo apt-get install wine\n```\n注意安装wine的界面要按tab才能切换到OK按钮\n\n\n## Git安装\n```sudo apt-get install git```\n\n\n## 安装常用软件\n### 搜狗拼音\n先用\n```\nsudo apt-get install -f\n```\n修复一些包依赖的问题。\n\n[download](http://pinyin.sogou.com/linux/help.php)很不错的中文输入法, 直接下载双击deb会失败,需要用```sudo dpkg -i xxx.deb```命令安装\n\n如果出现了下面的提示\n\nNo such key 'Gtk/IMModule' in schema 'org.gnome.settings-daemon.plugins.xsettings' as specified in override file '/usr/share/glib-2.0/schemas/50_sogoupinyin.gschema.override'; ignoring override for this key.\n\n并不用在意,这个不会影响搜狗输入法的使用。\n\n\n\n### download them 
all\n这个是firefox的插件, 可以有更好的下载体验\n\n### okular\n一个带笔记标注功能的pdf浏览器\n\n### startuml2\n很好的uml建模工具\n\n[下载依赖库](https://launchpad.net/ubuntu/+archive/primary/+files/libgcrypt11_1.5.3-2ubuntu4.2_amd64.deb\n\n```\nsudo dpkg -i libgcrypt11_1.5.3-2ubuntu4.2_amd64.deb\n\n```\n\n```\nsudo apt-get libpango1.0-0 libpangox-1.0-0\n```\n\n[官网](www.staruml.io)下载deb安装包安装\n```\nsudo dpkg -i StarUML-v2.7.0-64-bit.deb", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
e7a6615a1a08ebb4ffbd7a54d8d91acf28c7e165
13,196
ipynb
Jupyter Notebook
chapter_computational-performance/hybridize.ipynb
femj007/d2l-zh
faa3e6230fd30d56c0400fe610e7f8396fa25f8b
[ "Apache-2.0" ]
null
null
null
chapter_computational-performance/hybridize.ipynb
femj007/d2l-zh
faa3e6230fd30d56c0400fe610e7f8396fa25f8b
[ "Apache-2.0" ]
null
null
null
chapter_computational-performance/hybridize.ipynb
femj007/d2l-zh
faa3e6230fd30d56c0400fe610e7f8396fa25f8b
[ "Apache-2.0" ]
null
null
null
25.183206
399
0.536981
[ [ [ "# 命令式和符号式混合编程\n\n本书到目前为止一直都在使用命令式编程,它使用编程语句改变程序状态。考虑下面这段简单的命令式编程代码。", "_____no_output_____" ] ], [ [ "def add(a, b):\n return a + b\n\ndef fancy_func(a, b, c, d):\n e = add(a, b)\n f = add(c, d)\n g = add(e, f)\n return g\n\nfancy_func(1, 2, 3, 4)", "_____no_output_____" ] ], [ [ "和我们预期的一样,在运行语句`e = add(a, b)`时,Python会做加法运算并将结果存储在变量`e`,从而令程序的状态发生了改变。类似地,后面的两个语句`f = add(c, d)`和`g = add(e, f)`会依次做加法运算并存储变量。\n\n虽然使用命令式编程很方便,但它的运行可能会慢。一方面,即使`fancy_func`函数中的`add`是被重复调用的函数,Python也会逐一执行这三个函数调用语句。另一方面,我们需要保存变量`e`和`f`的值直到`fancy_func`中所有语句执行结束。这是因为在执行`e = add(a, b)`和`f = add(c, d)`这两个语句之后我们并不知道变量`e`和`f`是否会被程序的其他部分使用。\n\n与命令式编程不同,符号式编程通常在计算流程完全定义好后才被执行。多个深度学习框架,例如Theano和TensorFlow,都使用了符号式编程。通常,符号式编程的程序需要下面三个步骤:\n\n1. 定义计算流程;\n2. 把计算流程编译成可执行的程序;\n3. 给定输入,调用编译好的程序执行。\n\n下面我们用符号式编程重新实现本节开头给出的命令式编程代码。", "_____no_output_____" ] ], [ [ "def add_str():\n return '''\ndef add(a, b):\n return a + b\n'''\n\ndef fancy_func_str():\n return '''\ndef fancy_func(a, b, c, d):\n e = add(a, b)\n f = add(c, d)\n g = add(e, f)\n return g\n'''\n\ndef evoke_str():\n return add_str() + fancy_func_str() + '''\nprint(fancy_func(1, 2, 3, 4))\n'''\n\nprog = evoke_str()\nprint(prog)\ny = compile(prog, '', 'exec')\nexec(y)", "\ndef add(a, b):\n return a + b\n\ndef fancy_func(a, b, c, d):\n e = add(a, b)\n f = add(c, d)\n g = add(e, f)\n return g\n\nprint(fancy_func(1, 2, 3, 4))\n\n10\n" ] ], [ [ "以上定义的三个函数都仅以字符串的形式返回计算流程。最后,我们通过`compile`函数编译完整的计算流程并运行。由于在编译时系统能够完整地看到整个程序,因此有更多空间优化计算。例如,编译的时候可以将程序改写成`print((1 + 2) + (3 + 4))`,甚至直接改写成`print(10)`。这样不仅减少了函数调用,还节省了内存。\n\n对比这两种编程方式,我们可以看到\n\n* 命令式编程更方便。当我们在Python里使用命令式编程时,大部分代码编写起来都很直观。同时,命令式编程更容易排错。这是因为我们可以很方便地获取并打印所有的中间变量值,或者使用Python的排错工具。\n\n* 符号式编程更高效并更容易移植。一方面,在编译的时候系统容易做更多优化;另一方面,符号式编程可以将程序变成一个与Python无关的格式,从而可以使程序在非Python环境下运行,以避开Python解释器的性能问题。\n\n\n## 混合式编程取两者之长\n\n大部分的深度学习框架在命令式编程和符号式编程之间二选一。例如Theano和受其启发的后来者TensorFlow使用了符号式编程;Chainer和它的追随者PyTorch使用了命令式编程。开发人员在设计Gluon时思考了这个问题:有没有可能既得到命令式编程的好处,又享受符号式编程的优势?开发者们认为,用户应该用纯命令式编程进行开发和调试;当需要产品级别的计算性能和部署时,用户可以将大部分程序转换成符号式来运行。Gluon通过提供混合式编程做到了这一点。\n\n在混合式编程中,我们可以通过使用HybridBlock类或者HybridSequential类构建模型。默认情况下,它们和Block或者Sequential类一样依据命令式编程的方式执行。当我们调用`hybridize`函数后,Gluon会转换成依据符号式编程的方式执行。事实上,绝大多数模型都可以享受这样的混合式编程的执行方式。\n\n本节将通过实验展示混合式编程的魅力。\n\n## 使用HybridSequential类构造模型\n\n我们之前学习了如何使用Sequential类来串联多个层。为了使用混合式编程,下面我们将Sequential类替换成HybridSequential类。", "_____no_output_____" ] ], [ [ "from mxnet import nd, sym\nfrom mxnet.gluon import nn\nimport time\n\ndef get_net():\n net = nn.HybridSequential() # 这里创建HybridSequential实例\n net.add(nn.Dense(256, activation='relu'),\n nn.Dense(128, activation='relu'),\n nn.Dense(2))\n net.initialize()\n return net\n\nx = nd.random.normal(shape=(1, 512))\nnet = get_net()\nnet(x)", "_____no_output_____" ] ], [ [ "我们可以通过调用`hybridize`函数来编译和优化HybridSequential实例中串联层的计算。模型的计算结果不变。", "_____no_output_____" ] ], [ [ "net.hybridize()\nnet(x)", "_____no_output_____" ] ], [ [ "需要注意的是,只有继承HybridBlock类的层才会被优化计算。例如,HybridSequential类和Gluon提供的`Dense`类都是HybridBlock类的子类,它们都会被优化计算。如果一个层只是继承自Block类而不是HybridBlock类,那么它将不会被优化。\n\n\n### 计算性能\n\n我们比较调用`hybridize`函数前后的计算时间来展示符号式编程的性能提升。这里我们计时1000次`net`模型计算。在`net`调用`hybridize`函数前后,它分别依据命令式编程和符号式编程做模型计算。", "_____no_output_____" ] ], [ [ "def benchmark(net, x):\n start = time.time()\n for i in range(1000):\n _ = net(x)\n nd.waitall() # 等待所有计算完成方便计时\n return time.time() - start\n\nnet = get_net()\nprint('before hybridizing: %.4f sec' % (benchmark(net, x)))\nnet.hybridize()\nprint('after hybridizing: %.4f sec' % (benchmark(net, x)))", "before hybridizing: 0.3017 sec\n" ] ], [ [ 
"由上面结果可见,在一个HybridSequential实例调用`hybridize`函数后,它可以通过符号式编程提升计算性能。\n\n\n### 获取符号式程序\n\n在模型`net`根据输入计算模型输出后,例如`benchmark`函数中的`net(x)`,我们就可以通过`export`函数来保存符号式程序和模型参数到硬盘。", "_____no_output_____" ] ], [ [ "net.export('my_mlp')", "_____no_output_____" ] ], [ [ "此时生成的.json和.params文件分别为符号式程序和模型参数。它们可以被Python或MXNet支持的其他前端语言读取,例如C++、R、Scala、Perl和其它语言。这样,我们就可以很方便地使用其他前端语言或在其他设备上部署训练好的模型。同时,由于部署时使用的是基于符号式编程的程序,计算性能往往比基于命令式编程时更好。\n\n在MXNet中,符号式程序指的是Symbol类型的程序。我们知道,当给`net`提供NDArray类型的输入`x`后,`net(x)`会根据`x`直接计算模型输出并返回结果。对于调用过`hybridize`函数后的模型,我们还可以给它输入一个Symbol类型的变量,`net(x)`会返回Symbol类型的结果。", "_____no_output_____" ] ], [ [ "x = sym.var('data')\nnet(x)", "_____no_output_____" ] ], [ [ "## 使用HybridBlock类构造模型\n\n和Sequential类与Block类之间的关系一样,HybridSequential类是HybridBlock类的子类。跟Block实例需要实现`forward`函数不太一样的是,对于HybridBlock实例我们需要实现`hybrid_forward`函数。\n\n前面我们展示了调用`hybridize`函数后的模型可以获得更好的计算性能和可移植性。另一方面,调用`hybridize`函数后的模型会影响灵活性。为了解释这一点,我们先使用HybridBlock类构造模型。", "_____no_output_____" ] ], [ [ "class HybridNet(nn.HybridBlock):\n def __init__(self, **kwargs):\n super(HybridNet, self).__init__(**kwargs)\n self.hidden = nn.Dense(10)\n self.output = nn.Dense(2)\n\n def hybrid_forward(self, F, x):\n print('F: ', F)\n print('x: ', x)\n x = F.relu(self.hidden(x))\n print('hidden: ', x)\n return self.output(x)", "_____no_output_____" ] ], [ [ "在继承HybridBlock类时,我们需要在`hybrid_forward`函数中添加额外的输入`F`。我们知道,MXNet既有基于命令式编程的NDArray类,又有基于符号式编程的Symbol类。由于这两个类的函数基本一致,MXNet会根据输入来决定`F`使用NDArray或Symbol。\n\n下面创建了一个HybridBlock实例。可以看到默认下`F`使用NDArray。而且,我们打印出了输入`x`和使用ReLU激活函数的隐藏层的输出。", "_____no_output_____" ] ], [ [ "net = HybridNet()\nnet.initialize()\nx = nd.random.normal(shape=(1, 4))\nnet(x)", "F: <module 'mxnet.ndarray' from '/var/lib/jenkins/miniconda2/envs/d2l-zh-build/lib/python3.6/site-packages/mxnet/ndarray/__init__.py'>\nx: \n[[-0.12225834 0.5429998 -0.9469352 0.59643304]]\n<NDArray 1x4 @cpu(0)>\nhidden: \n[[0.11134676 0.04770704 0.05341475 0. 0.08091211 0.\n 0. 0.04143535 0. 0. ]]\n<NDArray 1x10 @cpu(0)>\n" ] ], [ [ "再运行一次前向计算会得到同样的结果。", "_____no_output_____" ] ], [ [ "net(x)", "F: <module 'mxnet.ndarray' from '/var/lib/jenkins/miniconda2/envs/d2l-zh-build/lib/python3.6/site-packages/mxnet/ndarray/__init__.py'>\nx: \n[[-0.12225834 0.5429998 -0.9469352 0.59643304]]\n<NDArray 1x4 @cpu(0)>\nhidden: \n[[0.11134676 0.04770704 0.05341475 0. 0.08091211 0.\n 0. 0.04143535 0. 0. 
]]\n<NDArray 1x10 @cpu(0)>\n" ] ], [ [ "接下来看看调用`hybridize`函数后会发生什么。", "_____no_output_____" ] ], [ [ "net.hybridize()\nnet(x)", "F: <module 'mxnet.symbol' from '/var/lib/jenkins/miniconda2/envs/d2l-zh-build/lib/python3.6/site-packages/mxnet/symbol/__init__.py'>\nx: <Symbol data>\nhidden: <Symbol hybridnet0_relu0>\n" ] ], [ [ "可以看到,`F`变成了Symbol。而且,虽然输入数据还是NDArray,但`hybrid_forward`函数里,相同输入和中间输出全部变成了Symbol类型。\n\n再运行一次前向计算看看。", "_____no_output_____" ] ], [ [ "net(x)", "_____no_output_____" ] ], [ [ "可以看到`hybrid_forward`函数里定义的三行打印语句都没有打印任何东西。这是因为上一次在调用`hybridize`函数后运行`net(x)`的时候,符号式程序已经得到。之后再运行`net(x)`的时候MXNet将不再访问Python代码,而是直接在C++后端执行符号式程序。这也是调用`hybridize`后模型计算性能会提升的一个原因。但它可能的问题在于我们损失了写程序的灵活性。在上面这个例子中,如果我们希望使用那三行打印语句调试代码,执行符号式程序时会跳过它们无法打印。此外,对于少数像`asnumpy`这样的Symbol所不支持的函数,以及像`a += b`和`a[:] = a + b`(需改写为`a = a + b`)这样的原地(in-place)操作,我们无法在`hybrid_forward`函数中使用并在调用`hybridize`函数后进行前向计算。\n\n\n## 小结\n\n* 命令式编程和符号式编程各有优劣。MXNet通过混合式编程取二者之长。\n* 通过HybridSequential类和HybridBlock类构建的模型可以调用`hybridize`函数将命令式程序转成符号式程序。我们建议大家使用这种方法获得计算性能的提升。\n\n\n## 练习\n\n* 在本节HybridNet类的`hybrid_forward`函数中第一行添加`x.asnumpy()`,运行本节全部代码,观察报错的位置和错误类型。\n* 如果在`hybrid_forward`函数中加入Python的`if`和`for`语句会怎么样?\n* 回顾前面几章中你感兴趣的模型,改用HybridBlock类或HybridSequential类实现。\n\n\n## 扫码直达[讨论区](https://discuss.gluon.ai/t/topic/1665)\n\n![](../img/qr_hybridize.svg)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7a665dad331c0e941a4116bb8689d1fee31cce4
417,726
ipynb
Jupyter Notebook
project_2/Project_2_code.ipynb
skylerl2/skylerl2.github.io
bfc3aee16c4188ae3a4b14e08c491e04423046c1
[ "MIT" ]
null
null
null
project_2/Project_2_code.ipynb
skylerl2/skylerl2.github.io
bfc3aee16c4188ae3a4b14e08c491e04423046c1
[ "MIT" ]
null
null
null
project_2/Project_2_code.ipynb
skylerl2/skylerl2.github.io
bfc3aee16c4188ae3a4b14e08c491e04423046c1
[ "MIT" ]
null
null
null
54.891721
34,304
0.388714
[ [ [ "from selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nimport time\nimport os\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nchromedriver = \"/Applications/chromedriver\"\nos.environ[\"webdriver.chrome.driver\"] = chromedriver", "_____no_output_____" ], [ "# Set url to Five Star Alliance page for New York City\nurl = 'https://www.fivestaralliance.com/luxury-hotels/271/north-america/united-states-northeast/new-york-ny/'\ndriver = webdriver.Chrome(chromedriver)\ndriver.get(url)\n# manually click \"View more Hotels\" (x3)", "_____no_output_____" ], [ "# Find links to hotels and save to list\nhotel_links=driver.find_elements_by_xpath(\"//a[@style='cursor: pointer;']\")", "_____no_output_____" ], [ "hotel_list=[]\nfor hotel in hotel_links:\n hotel_list.append(hotel.text)", "_____no_output_____" ], [ "# Check that hotel list pulled in correctly\nfor hotel in hotel_list:\n print (hotel)", "The Greenwich Hotel\nThe Soho Grand Hotel\nThe James New York - NoMad\nThe Chatwal New York\nMandarin Oriental, New York\nThe Pierre Hotel New York\nThe Wagner at the Battery\nThe Mark New York\nBlakely New York\nInterContinental The Barclay New York\nHotel Plaza Athenee New York\nLangham Place Fifth Avenue\nThe Carlyle\nGramercy Park Hotel\nInterContinental New York Times Square\nThe London NYC\nThe Beekman\nSofitel New York Hotel\nPark Hyatt New York\nThe Plaza Hotel\nThe Quin Hotel\nThe Lotte New York Palace\nGansevoort Meatpacking NYC\nThe Dominick\nTrump International Hotel & Tower New York\nThe Peninsula New York\nThe St Regis New York\nTopping Rose House\nLoews Regency Hotel\nThe Surrey New York\nThe Ritz-Carlton New York, Central Park\nJW Marriott Essex House New York\nBaccarat Hotel New York\nW New York Union Square\nThe Benjamin Hotel\nHotel 50 Bowery\nRoyalton Park Avenue\nThe NoMad Hotel\nThe High Line Hotel\nThe New York EDITION\nFour Seasons New York Downtown\nViceroy New York\nWesthouse Hotel New York\nThe Knickerbocker\nThe Broome Hotel\nHotel Americano\nMarmara Park Avenue\nThe Mercer Hotel\nRefinery Hotel New York\nNoMo Soho\nConrad New York\nRoyalton Hotel\nThe Standard High Line\nWestin New York Times Square\nHotel 48 Lex\nDream Downtown\nONE UN New York\nNolitan Hotel New York\nThe Iroquois New York\nThe Algonquin\nHotel 373 Fifth Avenue\nThe Westin New York Grand Central\nHyatt Regency Jersey City On the Hudson\nSoho House New York\nThe City Club Hotel\nThe Garden City Hotel\nThe Library Hotel\nArcher Hotel New York\nMillenium Hilton\nHamilton Park Hotel\nAndaz 5th Avenue\nW New York Times Square\nCastle Hotel and Spa\nThe Lowell\nParker New York\nFour Seasons New York\nThe James New York - SoHo\nW New York Downtown\nCrosby Street Hotel\nThe Maxwell New York\nAndaz Wall Street\nSt Giles Tuscany New York\nThe Marcel At Gramercy\nCassa Hotel 45th Street\nChambers Hotel\nThe Muse Hotel New York\nMillennium Broadway New York\nHotel Metro New York\n6 Columbus\nWarwick New York Hotel\nPark Lane Hotel New York\nEventi Hotel New York\nThe Kimberly Hotel\nSt Giles The Court New York\nSIXTY SoHo\nBryant Park\nHotel Mela New York\nInk48, A Kimpton Hotel\n70 Park Avenue Hotel\nDuane Street Hotel\nOmni Berkshire Place New York\nGild Hall\nThe Roger New York\nThe Kitano New York\nMorgans\nSmyth\nThe Roxy Hotel Tribeca\nOheka Castle Hotel and Estate\nW Hoboken\nDylan Hotel New York\nThe Premier Hotel New York\nThe Michelangelo Hotel New York\nThe Waldorf-Astoria\nWyndham Midtown 45\n" ], [ "# Remove hotels without ratings\nhotel_list.remove('The 
Beekman')\nhotel_list.remove('Baccarat Hotel New York')\nhotel_list.remove('The Algonquin')\nprint (hotel_list)", "['The Greenwich Hotel', 'The Soho Grand Hotel', 'The James New York - NoMad', 'The Chatwal New York', 'Mandarin Oriental, New York', 'The Pierre Hotel New York', 'The Wagner at the Battery', 'The Mark New York', 'Blakely New York', 'InterContinental The Barclay New York', 'Hotel Plaza Athenee New York', 'Langham Place Fifth Avenue', 'The Carlyle', 'Gramercy Park Hotel', 'InterContinental New York Times Square', 'The London NYC', 'Sofitel New York Hotel', 'Park Hyatt New York', 'The Plaza Hotel', 'The Quin Hotel', 'The Lotte New York Palace', 'Gansevoort Meatpacking NYC', 'The Dominick', 'Trump International Hotel & Tower New York', 'The Peninsula New York', 'The St Regis New York', 'Topping Rose House', 'Loews Regency Hotel', 'The Surrey New York', 'The Ritz-Carlton New York, Central Park', 'JW Marriott Essex House New York', 'W New York Union Square', 'The Benjamin Hotel', 'Hotel 50 Bowery', 'Royalton Park Avenue', 'The NoMad Hotel', 'The High Line Hotel', 'The New York EDITION', 'Four Seasons New York Downtown', 'Viceroy New York', 'Westhouse Hotel New York', 'The Knickerbocker', 'The Broome Hotel', 'Hotel Americano', 'Marmara Park Avenue', 'The Mercer Hotel', 'Refinery Hotel New York', 'NoMo Soho', 'Conrad New York', 'Royalton Hotel', 'The Standard High Line', 'Westin New York Times Square', 'Hotel 48 Lex', 'Dream Downtown', 'ONE UN New York', 'Nolitan Hotel New York', 'The Iroquois New York', 'Hotel 373 Fifth Avenue', 'The Westin New York Grand Central', 'Hyatt Regency Jersey City On the Hudson', 'Soho House New York', 'The City Club Hotel', 'The Garden City Hotel', 'The Library Hotel', 'Archer Hotel New York', 'Millenium Hilton', 'Hamilton Park Hotel', 'Andaz 5th Avenue', 'W New York Times Square', 'Castle Hotel and Spa', 'The Lowell', 'Parker New York', 'Four Seasons New York', 'The James New York - SoHo', 'W New York Downtown', 'Crosby Street Hotel', 'The Maxwell New York', 'Andaz Wall Street', 'St Giles Tuscany New York', 'The Marcel At Gramercy', 'Cassa Hotel 45th Street', 'Chambers Hotel', 'The Muse Hotel New York', 'Millennium Broadway New York', 'Hotel Metro New York', '6 Columbus', 'Warwick New York Hotel', 'Park Lane Hotel New York', 'Eventi Hotel New York', 'The Kimberly Hotel', 'St Giles The Court New York', 'SIXTY SoHo', 'Bryant Park', 'Hotel Mela New York', 'Ink48, A Kimpton Hotel', '70 Park Avenue Hotel', 'Duane Street Hotel', 'Omni Berkshire Place New York', 'Gild Hall', 'The Roger New York', 'The Kitano New York', 'Morgans', 'Smyth', 'The Roxy Hotel Tribeca', 'Oheka Castle Hotel and Estate', 'W Hoboken', 'Dylan Hotel New York', 'The Premier Hotel New York', 'The Michelangelo Hotel New York', 'The Waldorf-Astoria', 'Wyndham Midtown 45']\n" ], [ "# Initialize an empty dictionary for storing scraped hotel data\nHotel_Data_Dict={}", "_____no_output_____" ], [ "# Loop through hotel names to click on each and scrape ratings, reviews, amenities (& activities)\nfor hotel in hotel_list:\n Hotel_Data_Dict[hotel]=[]\n driver.find_element_by_link_text(hotel).send_keys(Keys.COMMAND,Keys.RETURN)\n time.sleep(1)\n driver.switch_to_window(driver.window_handles[-1])\n rating=driver.find_element_by_class_name('value')\n Hotel_Data_Dict[hotel].append(rating.text.strip())\n reviews=driver.find_element_by_xpath('//a[@id=\"trustyou_review\"]')\n Hotel_Data_Dict[hotel].append(reviews.text)\n amenities=driver.find_element_by_xpath('//li[@rel=\"tab4\"]')\n amenities.click()\n 
amenities_data=driver.find_elements_by_xpath('//div[@class=\"col-md-4\"]')\n for amenity in amenities_data:\n Hotel_Data_Dict[hotel].append(amenity.text)\n driver.close()\n driver.switch_to_window(driver.window_handles[0])", "/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:5: DeprecationWarning: use driver.switch_to.window instead\n \"\"\"\n/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:16: DeprecationWarning: use driver.switch_to.window instead\n app.launch_new_instance()\n" ], [ "# Check scraped hotel data and include count of hotels (should be 111)\nprint (len(Hotel_Data_Dict), Hotel_Data_Dict)", "111 {'The Greenwich Hotel': ['94', '271 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Pool Indoor\\n- Spa Facility', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Soho Grand Hotel': ['86', '2972 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The James New York - NoMad': ['83', '2296 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Chatwal New York': ['90', '334 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Spa Facility', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Mandarin Oriental, New York': ['89', '594 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Pool Indoor\\n- Spa Facility', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The Pierre Hotel New York': ['91', '3331 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Biking Touring\\n- Horseback Riding\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Tennis Courts Nearby\\n- Theatre & Museums'], 'The Wagner at the Battery': ['89', '424 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Golfing\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The Mark New York': ['92', '791 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Blakely New York': ['86', '3065 Reviews', 'Hotel Amenities\\n- Business Center\\n- Disabled Access\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 
'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'InterContinental The Barclay New York': ['88', '4210 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Dining\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Hotel Plaza Athenee New York': ['86', '1048 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Spa Facility', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Ice Skating\\n- Shopping\\n- Tennis Courts Nearby\\n- Theatre & Museums'], 'Langham Place Fifth Avenue': ['92', '2006 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Carlyle': ['86', '1796 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Gramercy Park Hotel': ['87', '1301 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed', 'Available Activities\\n- Dining\\n- Horseback Riding\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Tennis Courts Nearby\\n- Theatre & Museums'], 'InterContinental New York Times Square': ['83', '5147 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The London NYC': ['83', '2634 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Dining\\n- Shopping\\n- Theatre & Museums'], 'Sofitel New York Hotel': ['87', '10434 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Hiking\\n- Horseback Riding\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Park Hyatt New York': ['93', '931 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pool Indoor\\n- Spa Facility', 'Available Activities\\n- Ice Skating\\n- Jogging Running\\n- Shopping'], 'The Plaza Hotel': ['86', '1048 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- 
Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The Quin Hotel': ['88', '2191 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Lotte New York Palace': ['89', '4233 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Gansevoort Meatpacking NYC': ['84', '3278 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed\\n- Pool Outdoor\\n- Spa Facility', 'Available Activities\\n- Biking Touring\\n- Shopping\\n- Theatre & Museums'], 'The Dominick': ['85', '1377 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pool Outdoor\\n- Spa Facility', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Trump International Hotel & Tower New York': ['78', '2194 Reviews', 'Hotel Amenities\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pool Indoor\\n- Spa Facility', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Peninsula New York': ['92', '1321 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Pool Indoor\\n- Spa Facility', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The St Regis New York': ['93', '691 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Golfing\\n- Shopping\\n- Theatre & Museums'], 'Topping Rose House': ['77', '60 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet\\n- Pool Outdoor\\n- Spa Facility', 'Available Activities\\n- Beach\\n- Biking Touring\\n- Boating\\n- Fishing Fly\\n- Fishing Ocean\\n- Golfing\\n- Horseback Riding\\n- Sailing\\n- Shopping\\n- Tennis Courts Nearby\\n- Theatre & Museums\\n- Winery Tours'], 'Loews Regency Hotel': ['89', '1429 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Dining\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The Surrey New York': ['86', '1819 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Spa Facility', 'Available Activities\\n- Jogging Running\\n- 
Shopping\\n- Theatre & Museums'], 'The Ritz-Carlton New York, Central Park': ['89', '581 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Golf Driving Range\\n- Golfing\\n- Jogging Running\\n- Shopping'], 'JW Marriott Essex House New York': ['86', '1568 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Horseback Riding\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'W New York Union Square': ['86', '918 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Benjamin Hotel': ['85', '3251 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Dining\\n- Shopping\\n- Theatre & Museums'], 'Hotel 50 Bowery': ['91', '2115 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Royalton Park Avenue': ['85', '1901 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Pool Outdoor\\n- Spa Facility'], 'The NoMad Hotel': ['91', '1954 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Tennis Courts Nearby\\n- Theatre & Museums'], 'The High Line Hotel': ['87', '892 Reviews', 'Hotel Amenities\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining Nearby\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Dining'], 'The New York EDITION': ['90', '1591 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Spa Facility', 'Available Activities\\n- Dining\\n- Shopping\\n- Theatre & Museums'], 'Four Seasons New York Downtown': ['94', '833 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pool Indoor\\n- Spa Facility', 'Available Activities\\n- Boating\\n- Dining\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Viceroy New York': ['80', '2111 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting 
Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Pool Indoor', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Westhouse Hotel New York': ['82', '1410 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Spa Facility', 'Available Activities\\n- Dining\\n- Shopping\\n- Theatre & Museums'], 'The Knickerbocker': ['89', '3714 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Spa Facility', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Broome Hotel': ['91', '336 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Hotel Americano': ['80', '667 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet\\n- Pool Outdoor', 'Available Activities\\n- Biking Touring\\n- Shopping\\n- Theatre & Museums'], 'Marmara Park Avenue': ['86', '649 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Pool Indoor\\n- Spa Facility', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The Mercer Hotel': ['88', '555 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Refinery Hotel New York': ['89', '2359 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'NoMo Soho': ['83', '2992 Reviews', 'Hotel Amenities\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Conrad New York': ['93', '2925 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Royalton Hotel': ['83', '2016 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Standard High Line': ['84', '4550 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Westin New York Times Square': ['84', '4470 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility'], 'Hotel 48 Lex': ['84', 
'1366 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping'], 'Dream Downtown': ['78', '3537 Reviews', 'Hotel Amenities\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Pool Outdoor\\n- Spa Facility', 'Available Activities\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'ONE UN New York': ['84', '5541 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Tennis Courts On Site', 'Available Activities\\n- Theatre & Museums'], 'Nolitan Hotel New York': ['87', '1057 Reviews', 'Hotel Amenities\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Biking Touring\\n- Shopping\\n- Theatre & Museums'], 'The Iroquois New York': ['90', '1292 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Ice Skating\\n- Shopping\\n- Theatre & Museums'], 'Hotel 373 Fifth Avenue': ['82', '2454 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining On Site\\n- High Speed Internet\\n- Pets Allowed'], 'The Westin New York Grand Central': ['83', '3648 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Hyatt Regency Jersey City On the Hudson': ['88', '2752 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pool Indoor', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Soho House New York': ['90', '573 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet\\n- Pool Outdoor\\n- Spa Facility', 'Available Activities\\n- Dining\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The City Club Hotel': ['83', '1926 Reviews', 'Hotel Amenities\\n- Business Center\\n- Fine Dining On Site\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Garden City Hotel': ['92', '439 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Pool Outdoor', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Library Hotel': ['93', '939 Reviews', 'Hotel Amenities\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Archer Hotel New York': ['92', '4336 Reviews', 'Hotel Amenities\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available 
Activities\\n- Shopping\\n- Theatre & Museums'], 'Millenium Hilton': ['84', '5062 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pool Outdoor', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Hamilton Park Hotel': ['85', '771 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pool Outdoor', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Andaz 5th Avenue': ['88', '1279 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'W New York Times Square': ['81', '3415 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Castle Hotel and Spa': ['87', '98 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pool Outdoor\\n- Spa Facility\\n- Tennis Courts On Site', 'Available Activities\\n- Biking Mountain\\n- Boating\\n- Dining\\n- Ecological Tourism\\n- Hiking\\n- Jogging Running\\n- Tennis Courts Nearby\\n- Theatre & Museums\\n- Winery Tours'], 'The Lowell': ['93', '174 Reviews', 'Hotel Amenities\\n- Business Center\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Parker New York': ['83', '2069 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pool Outdoor\\n- Spa Facility', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Four Seasons New York': ['92', '772 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Ballooning Hot Air\\n- Biking Touring\\n- Boating\\n- Dining\\n- Golf Driving Range\\n- Golfing\\n- Horseback Riding\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The James New York - SoHo': ['82', '1216 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'W New York Downtown': ['84', '1116 Reviews', 'Hotel Amenities\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Crosby Street Hotel': ['93', '586 Reviews', 'Hotel Amenities\\n- Banquets & 
Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'The Maxwell New York': ['78', '2791 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Andaz Wall Street': ['88', '1670 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Jogging Running'], 'St Giles Tuscany New York': ['89', '1326 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Spa Facility', 'Available Activities\\n- Ecological Tourism\\n- Shopping\\n- Theatre & Museums'], 'The Marcel At Gramercy': ['83', '2562 Reviews', 'Hotel Amenities\\n- Fine Dining On Site\\n- High Speed Internet'], 'Cassa Hotel 45th Street': ['86', '2897 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Chambers Hotel': ['88', '1369 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed'], 'The Muse Hotel New York': ['90', '1687 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Millennium Broadway New York': ['76', '7293 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Ice Skating\\n- Shopping\\n- Theatre & Museums'], 'Hotel Metro New York': ['85', '2510 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fitness Center\\n- High Speed Internet'], '6 Columbus': ['78', '744 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- High Speed Internet'], 'Warwick New York Hotel': ['83', '5729 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet'], 'Park Lane Hotel New York': ['77', '8867 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed', 'Available Activities\\n- Ecological Tourism\\n- Shopping\\n- Theatre & Museums'], 'Eventi Hotel New York': ['88', '2164 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Fine Dining On Site\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed'], 'The Kimberly Hotel': ['93', '1905 Reviews', 'Hotel Amenities\\n- Banquets & 
Meetings\\n- Business Center\\n- Fine Dining On Site\\n- High Speed Internet'], 'St Giles The Court New York': ['77', '2562 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'SIXTY SoHo': ['88', '1126 Reviews', 'Hotel Amenities\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Bryant Park': ['91', '1621 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Hotel Mela New York': ['78', '3888 Reviews', 'Hotel Amenities\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Horseback Riding\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Ink48, A Kimpton Hotel': ['88', '2204 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- High Speed Internet\\n- Pets Allowed'], '70 Park Avenue Hotel': ['84', '3386 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Duane Street Hotel': ['85', '514 Reviews', 'Hotel Amenities\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet\\n- Pets Allowed'], 'Omni Berkshire Place New York': ['88', '1780 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Child Programs\\n- Pets Allowed', 'Available Activities\\n- Dining\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Gild Hall': ['90', '1408 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The Roger New York': ['84', '2206 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping'], 'The Kitano New York': ['87', '1687 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Shopping\\n- Theatre & Museums'], 'Morgans': ['84', '480 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Executive Retreat\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- 
Dining\\n- Ice Skating\\n- Shopping\\n- Theatre & Museums'], 'Smyth': ['87', '760 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'The Roxy Hotel Tribeca': ['89', '2661 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Dining\\n- Shopping\\n- Theatre & Museums'], 'Oheka Castle Hotel and Estate': ['90', '84 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Dining\\n- Golfing\\n- Hiking\\n- Jogging Running\\n- Tennis Courts Nearby\\n- Theatre & Museums\\n- Winery Tours'], 'W Hoboken': ['89', '554 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed\\n- Spa Facility', 'Available Activities\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Dylan Hotel New York': ['79', '2324 Reviews', 'Hotel Amenities\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Dining\\n- Shopping\\n- Theatre & Museums'], 'The Premier Hotel New York': ['82', '1591 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Tennis Courts Nearby\\n- Theatre & Museums'], 'The Michelangelo Hotel New York': ['88', '2589 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet', 'Available Activities\\n- Theatre & Museums'], 'The Waldorf-Astoria': ['84', '2063 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Meeting Space\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- High Speed Internet', 'Available Activities\\n- Biking Touring\\n- Dining\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums'], 'Wyndham Midtown 45': ['86', '849 Reviews', 'Hotel Amenities\\n- Banquets & Meetings\\n- Business Center\\n- Disabled Access\\n- Fine Dining On Site\\n- Fine Dining Nearby\\n- Fitness Center\\n- High Speed Internet\\n- Pets Allowed', 'Available Activities\\n- Dining\\n- Hiking\\n- Ice Skating\\n- Jogging Running\\n- Shopping\\n- Theatre & Museums']}\n" ], [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "_____no_output_____" ], [ "# Put scraped data into data frame\ndf=pd.DataFrame.from_dict(Hotel_Data_Dict, orient='index')", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "# Create new columns for each amenity/activity and code as 1 for having it, and 0 otherwise\n\namenities_list=['Banquets & Meetings','Business Center','Child Programs','Disabled Access',\n 'Executive Retreat','Fine Dining Nearby','Fine Dining On Site','Fitness Center',\n 'High Speed 
Internet','Meeting Space','Pets Allowed','Pool Indoor','Pool Outdoor',\n 'Spa Facility','Tennis Courts On Site']\n\nfor amenity in amenities_list:\n df[amenity]=0\n for hotel in hotel_list:\n if amenity in str(df.loc[hotel,2]):\n df.loc[hotel,amenity]=1\n else:\n df.loc[hotel,amenity]=0\n\nactivities_list=['Beach','Biking Touring','Boating','Dining','Ecological Tourism','Fishing Ocean',\n 'Fishing Fly','Golf Driving Range','Golfing','Hiking','Horseback Riding','Ice Skating',\n 'Jogging Running','Shopping','Tennis Courts Nearby','Theatre & Museums','Winery Tours']\n\nfor activity in activities_list:\n df[activity]=0\n for hotel in hotel_list:\n if activity in str(df.loc[hotel,3]):\n df.loc[hotel,activity]=1\n else:\n df.loc[hotel,activity]=0", "_____no_output_____" ], [ "# Strip text and whitespace from reviews, leaving only number\ndf[1]=df[1].str.replace(' Reviews','')", "_____no_output_____" ], [ "# Convert ratings and # of reviews to integer\ndf[0]=df[0].astype(int)\ndf[1]=df[1].astype(int)", "_____no_output_____" ], [ "df=df.rename(columns={0:'Rating',1:'Number of Reviews'})", "_____no_output_____" ], [ "# Delete scraped-text columns now that dummy variables have been created based on them\ndel df[2]\ndel df[3]", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df.to_pickle('my_df.pkl')\ndel df\n", "_____no_output_____" ], [ "df = pd.read_pickle('my_df.pkl')\ndf", "_____no_output_____" ], [ "df_corr=df.corr()", "_____no_output_____" ], [ "df_corr", "_____no_output_____" ], [ "df_corr['Rating'].sort_values(ascending=False)", "_____no_output_____" ], [ "import os\nprint(os.getcwd())", "_____no_output_____" ], [ "import csv\ndf_corr.to_csv('correlation_matrix.csv')", "_____no_output_____" ], [ "\"\"\"\"Look at csv to find multicollinearity. Beach, Fishing Ocean, and Fishing Fly are perfectly correlated, \nso dropping the latter two. 
Shopping and Theatre & Museums are highly correlated at 0.73, but IMO this isn't \nhigh enough to exclude one or the other, especially since they aren't the most noteworthy variables in the \nfinal analysis.", "_____no_output_____" ], [ "del df['Fishing Fly']\ndel df['Fishing Ocean']", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "plt.hist(df.Rating,bins=10)\nplt.title('Distribution of Hotel Ratings')\nplt.xlabel('Hotel Rating')\nplt.ylabel('Number of Hotels')\nplt.xticks([75,80,85,90,95])\nplt.yticks([0,5,10,15,20])\nplt.show", "_____no_output_____" ], [ "# Examine number of reviews by rating\n\nfig, ax = plt.subplots(1, 1, figsize=(10, 8))\n\nax.scatter(df.Rating, df['Number of Reviews'])\nax.set_xlabel('Hotel Rating')\nax.set_ylabel('Number of Reviews')\nax.set_title('Number of Reviews by Hotel Rating')", "_____no_output_____" ], [ "from sklearn.linear_model import LinearRegression\nfrom sklearn import metrics\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.cross_validation import KFold\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.cross_validation import cross_val_score\n\n# Create two new data frames, one with the feature variables and the other with the target column\ndfX=df.drop(columns=['Rating','Number of Reviews']) # Rating is the target, and we're not going to use # of reviews\n# since they are neither discretionary nor do they tell us about customer preferences\nX=dfX\ny=df['Rating']\n\n# Split data into training and holdout/test\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=43) # b/c 42 is overrated", "_____no_output_____" ], [ "# 3-fold cross-validation with our data\nmodel = LinearRegression()\nscores = cross_val_score(model, X_train, y_train, cv=3, scoring='mean_squared_error')\n\nprint(scores)", "[-56.7078362 -50.65953325 -52.6792722 ]\n" ], [ "##### from sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.linear_model import Ridge, Lasso\nfrom sklearn.preprocessing import StandardScaler\n\n# Scale X-axis data for Ridge and Lasso regressions\nssX = StandardScaler()\nX_train_scaled = ssX.fit_transform(X_train)", "_____no_output_____" ], [ "model = Ridge()\nparameters = {'alpha': [1e-3,1e-2,1e-1,1,1e1,1e2,1e3], 'fit_intercept': [True]}\ngrid = GridSearchCV(model, parameters, cv=3, scoring='mean_squared_error', n_jobs=1)\ngrid.fit(X_train_scaled, y_train)\ngrid.cv_results_", "_____no_output_____" ], [ "model = Lasso()\nparameters = {'alpha': [1e-3,1e-2,1e-1,1,1e1,1e2,1e3], 'fit_intercept': [True]}\ngrid = GridSearchCV(model, parameters, cv=3, scoring='mean_squared_error', n_jobs=1)\ngrid.fit(X_train_scaled, y_train)\ngrid.cv_results_", "_____no_output_____" ], [ "type(grid.best_estimator_)", "_____no_output_____" ], [ "# Check predicted values and residuals on best lasso, and compute sum of squared residuals.\nX_test_scaled = ssX.transform(X_test)\nbest_lasso = grid.best_estimator_\nlasso_pred = best_lasso.predict(X_test_scaled)\nresid_list = []\nSSR = 0\nfor true,pred in zip(y_test, lasso_pred):\n resid = true - pred\n resid_list.append(resid)\n SSR += resid**2\n print(\"pred, resid:\", str(pred) + \", $\"+ str(resid))\n \nprint(grid.best_params_, grid.best_score_)", "pred, resid: 86.77009813929656, $-1.7700981392965645\npred, resid: 86.77009813929656, $2.2299018607034355\npred, resid: 
85.58809516865138, $2.4119048313486218\npred, resid: 85.58809516865138, $-2.5880951686513782\npred, resid: 86.77009813929656, $5.2299018607034355\npred, resid: 85.58809516865138, $-2.5880951686513782\npred, resid: 85.58809516865138, $7.411904831348622\npred, resid: 86.77009813929656, $0.22990186070343555\npred, resid: 86.77009813929656, $-2.7700981392965645\npred, resid: 85.58809516865138, $-5.588095168651378\npred, resid: 86.77009813929656, $1.2299018607034355\npred, resid: 85.58809516865138, $2.4119048313486218\npred, resid: 85.58809516865138, $-2.5880951686513782\npred, resid: 85.58809516865138, $6.411904831348622\npred, resid: 85.58809516865138, $4.411904831348622\npred, resid: 85.58809516865138, $3.4119048313486218\npred, resid: 85.58809516865138, $-1.5880951686513782\npred, resid: 85.58809516865138, $2.4119048313486218\npred, resid: 86.77009813929656, $1.2299018607034355\npred, resid: 86.77009813929656, $-8.770098139296564\npred, resid: 85.58809516865138, $8.411904831348622\npred, resid: 85.58809516865138, $0.41190483134862177\npred, resid: 86.77009813929656, $0.22990186070343555\npred, resid: 86.77009813929656, $4.2299018607034355\npred, resid: 85.58809516865138, $0.41190483134862177\npred, resid: 85.58809516865138, $4.411904831348622\npred, resid: 85.58809516865138, $3.4119048313486218\npred, resid: 85.58809516865138, $1.4119048313486218\n{'alpha': 1, 'fit_intercept': True} -19.65991145906881\n" ], [ "mean_squared_error=SSR/len(resid_list)\nprint (mean_squared_error)", "15.84707025439869\n" ], [ "# With a decent MSE on the holdout (lower than our CV set!), let's try using this lasso model (lambda=1) as our \n# final model.\n\nfrom sklearn.linear_model import LassoCV\nfinal = LassoCV(alphas=[1],cv=4)\nfinal.fit(X, y)\nfinal.score(X, y)", "_____no_output_____" ], [ "# Lambda=1 is giving an R^2 of 0 on the full set. Looks like it may be collapsing the coefficients to baseline.\n# So let's go with lambda=0.1 instead.\n\nfrom sklearn.linear_model import LassoCV\nfinal = LassoCV(alphas=[0.1],cv=4)\nfinal.fit(X, y)\nfinal.score(X, y)", "_____no_output_____" ], [ "# Now we have something to work with. Time to examine the coefficients.\nlabeled_coef=list(zip(X.columns,final.coef_))\n\nfor e in labeled_coef:\n print (e)", "('Banquets & Meetings', 0.0)\n('Business Center', -0.0)\n('Child Programs', -0.0)\n('Disabled Access', 0.4625503144988509)\n('Executive Retreat', -0.0)\n('Fine Dining Nearby', -0.0)\n('Fine Dining On Site', 0.0)\n('Fitness Center', 0.7602233015751865)\n('High Speed Internet', 0.0)\n('Meeting Space', -0.0)\n('Pets Allowed', 0.6237839406670173)\n('Pool Indoor', 0.0)\n('Pool Outdoor', -1.181228854210307)\n('Spa Facility', 0.15021986607349797)\n('Tennis Courts On Site', -0.0)\n('Beach', -0.0)\n('Biking Touring', -0.12874010618420537)\n('Boating', 0.0)\n('Dining', 0.0)\n('Ecological Tourism', -0.0)\n('Golf Driving Range', 0.0)\n('Golfing', 0.0)\n('Hiking', 0.0)\n('Horseback Riding', -0.28365682430377764)\n('Ice Skating', 0.0)\n('Jogging Running', 1.6772959749304803)\n('Shopping', -0.0)\n('Tennis Courts Nearby', -0.0)\n('Theatre & Museums', -0.0)\n('Winery Tours', -0.0)\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a66b941894790338f259b4a8c203f6d48ab1cf
4,499
ipynb
Jupyter Notebook
notebooks/Comprehensions.ipynb
deepettas/advanced-python-workshop
2d7f783dc8b96b1e41a71bf9ef1937720c6f9105
[ "MIT" ]
null
null
null
notebooks/Comprehensions.ipynb
deepettas/advanced-python-workshop
2d7f783dc8b96b1e41a71bf9ef1937720c6f9105
[ "MIT" ]
4
2020-03-24T18:09:19.000Z
2021-08-23T20:34:07.000Z
notebooks/Comprehensions.ipynb
deepettas/advanced-python-workshop
2d7f783dc8b96b1e41a71bf9ef1937720c6f9105
[ "MIT" ]
null
null
null
21.526316
131
0.500556
[ [ [ "## Comprehensions:\n#### Documentation: https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html\n\n#### Situation: \n - We have one or more sources of iterable data.\n \n#### Need:\n - We want to do something with that data, and output it into a list, dictionary or generator format.\n \n#### Solution:\n - Python offers a cleaner/faster way of working without using traditional for loops.\n\n-----\n\n \n#### Example:\n - Lets take a traditional for loop", "_____no_output_____" ] ], [ [ "even_squares = []\nfor num in range(11):\n if num%2 == 0:\n even_squares.append(num * num)\n\neven_squares\n", "_____no_output_____" ], [ "# Can we do better than the above?\n\neven_squares = [num*num for num in range(11) if num%2 == 0]\n\neven_squares", "_____no_output_____" ] ], [ [ "### List comprehension Pattern:\n![alt text](https://miro.medium.com/max/1716/1*xUhlknsL6rR-s_DcVQK7kQ.png)\n##### [Figure reference](https://towardsdatascience.com/comprehending-the-concept-of-comprehensions-in-python-c9dafce5111)\n \n", "_____no_output_____" ], [ "### We can do the same with dictionaries, or generators:", "_____no_output_____" ] ], [ [ "first_names = ['Mark', 'Demmis', 'Elon', 'Jeff', 'Lex']\nlast_names = ['Zuckerberg','Hasabis', 'Musk','Bezos','Fridman']\n\nfull_names = {}\nfor first, last in zip(first_names, last_names):\n full_names[first] = last\n \nfull_names", "_____no_output_____" ], [ "full_names = {first: last for first, last in zip(first_names, last_names)}\n\nfull_names\n# len(full_names)", "_____no_output_____" ] ], [ [ "## How about a generator?\nLike a comprehension but waits, and yields each item out of the expression, one by one.", "_____no_output_____" ] ], [ [ "# even_squares was [0, 4, 16, 36, 64, 100] with the list comprehension\n\n# generator equivallent\neven_squares = (num*num for num in range(11) if num%2 == 0)", "_____no_output_____" ], [ "next(even_squares)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a671607fcb5a3c98e08fc28b8ff3e977727b6a
51,449
ipynb
Jupyter Notebook
notebooks/year_node_8layers.ipynb
wbgalvao/node
52090f25621b617af93468ad54c5d6d91163f9ab
[ "MIT" ]
420
2019-09-16T15:43:58.000Z
2022-03-31T01:17:35.000Z
notebooks/year_node_8layers.ipynb
gy0425/node
3bae6a8a63f0205683270b6d566d9cfa659403e4
[ "MIT" ]
1
2020-03-02T16:08:29.000Z
2020-03-02T16:08:29.000Z
notebooks/year_node_8layers.ipynb
gy0425/node
3bae6a8a63f0205683270b6d566d9cfa659403e4
[ "MIT" ]
58
2019-09-16T16:27:11.000Z
2022-02-09T00:36:17.000Z
173.814189
43,456
0.897802
[ [ [ "%load_ext autoreload\n%autoreload 2\n%env CUDA_VISIBLE_DEVICES=0\nimport os, sys\nimport time\nsys.path.insert(0, '..')\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport lib\nimport torch, torch.nn as nn\nimport torch.nn.functional as F\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\n\nexperiment_name = 'year_node_8layers'\nexperiment_name = '{}_{}.{:0>2d}.{:0>2d}_{:0>2d}:{:0>2d}'.format(experiment_name, *time.gmtime()[:5])\nprint(\"experiment:\", experiment_name)", "env: CUDA_VISIBLE_DEVICES=0\nexperiment: year_node_8layers_2019.08.27_17:11\n" ], [ "data = lib.Dataset(\"YEAR\", random_state=1337, quantile_transform=True, quantile_noise=1e-3)\nin_features = data.X_train.shape[1]\n\nmu, std = data.y_train.mean(), data.y_train.std()\nnormalize = lambda x: ((x - mu) / std).astype(np.float32)\ndata.y_train, data.y_valid, data.y_test = map(normalize, [data.y_train, data.y_valid, data.y_test])\n\nprint(\"mean = %.5f, std = %.5f\" % (mu, std))", "Downloading https://www.dropbox.com/s/l09pug0ywaqsy0e/YearPredictionMSD.txt?dl=1 > ./data/YEAR/data.csv\n" ], [ "model = nn.Sequential(\n lib.DenseBlock(in_features, 128, num_layers=8, tree_dim=3, depth=6, flatten_output=False,\n choice_function=lib.entmax15, bin_function=lib.entmoid15),\n lib.Lambda(lambda x: x[..., 0].mean(dim=-1)), # average first channels of every tree\n \n).to(device)\n\nwith torch.no_grad():\n res = model(torch.as_tensor(data.X_train[:5000], device=device))\n # trigger data-aware init\n \nif torch.cuda.device_count() > 1:\n model = nn.DataParallel(model)", "_____no_output_____" ], [ "from qhoptim.pyt import QHAdam\noptimizer_params = { 'nus':(0.7, 1.0), 'betas':(0.95, 0.998) }", "_____no_output_____" ], [ "trainer = lib.Trainer(\n model=model, loss_function=F.mse_loss,\n experiment_name=experiment_name,\n warm_start=False,\n Optimizer=QHAdam,\n optimizer_params=optimizer_params,\n verbose=True,\n n_last_checkpoints=5\n)", "_____no_output_____" ], [ "from tqdm import tqdm\nfrom IPython.display import clear_output\nloss_history, mse_history = [], []\nbest_mse = float('inf')\nbest_step_mse = 0\nearly_stopping_rounds = 5000\nreport_frequency = 100", "_____no_output_____" ], [ "for batch in lib.iterate_minibatches(data.X_train, data.y_train, batch_size=1024, \n shuffle=True, epochs=float('inf')):\n metrics = trainer.train_on_batch(*batch, device=device)\n \n loss_history.append(metrics['loss'])\n\n if trainer.step % report_frequency == 0:\n trainer.save_checkpoint()\n trainer.average_checkpoints(out_tag='avg')\n trainer.load_checkpoint(tag='avg')\n mse = trainer.evaluate_mse(\n data.X_valid, data.y_valid, device=device, batch_size=16384)\n\n if mse < best_mse:\n best_mse = mse\n best_step_mse = trainer.step\n trainer.save_checkpoint(tag='best_mse')\n mse_history.append(mse)\n \n trainer.load_checkpoint() # last\n trainer.remove_old_temp_checkpoints()\n\n clear_output(True)\n plt.figure(figsize=[18, 6])\n plt.subplot(1, 2, 1)\n plt.plot(loss_history)\n plt.title('Loss')\n plt.grid()\n plt.subplot(1, 2, 2)\n plt.plot(mse_history)\n plt.title('MSE')\n plt.grid()\n plt.show()\n print(\"Loss %.5f\" % (metrics['loss']))\n print(\"Val MSE: %0.5f\" % (mse))\n if trainer.step > best_step_mse + early_stopping_rounds:\n print('BREAK. 
There is no improvment for {} steps'.format(early_stopping_rounds))\n print(\"Best step: \", best_step_mse)\n print(\"Best Val MSE: %0.5f\" % (best_mse))\n break", "_____no_output_____" ], [ "trainer.load_checkpoint(tag='best_mse')\nmse = trainer.evaluate_mse(data.X_test, data.y_test, device=device)\nprint('Best step: ', trainer.step)\nprint(\"Test MSE: %0.5f\" % (mse))", "Loaded logs/year_node_8layers_2019.08.27_17:11/checkpoint_best_mse.pth\nBest step: 3400\nTest MSE: 0.63787\n" ], [ "mse * std ** 2", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a692e3fa4952a30e2ee8d5951486dcb8af7643
58,293
ipynb
Jupyter Notebook
perceptron_Iris.ipynb
skimaza/assist
a4ced84bf1e9df4907bee526377dc97001e53354
[ "MIT" ]
2
2021-09-25T01:38:27.000Z
2021-11-09T03:08:38.000Z
perceptron_Iris.ipynb
skimaza/assist
a4ced84bf1e9df4907bee526377dc97001e53354
[ "MIT" ]
null
null
null
perceptron_Iris.ipynb
skimaza/assist
a4ced84bf1e9df4907bee526377dc97001e53354
[ "MIT" ]
3
2021-09-25T01:38:29.000Z
2021-10-02T06:50:50.000Z
47.859606
11,434
0.577874
[ [ [ "<a href=\"https://colab.research.google.com/github/skimaza/assist/blob/main/perceptron_Iris.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# AI 전략경영MBA 경영자를 위한 딥러닝 원리의 이해\n# Perceptron 실습 예제\n# 붓꽃 분류 문제", "_____no_output_____" ], [ "The original code comes from Sebastian Reschka's blog (http://sebastianraschka.com/Articles/2015_singlelayer_neurons.html).<br/>\nSlightly modified for the lecture. -skimaza", "_____no_output_____" ], [ "# 라이브러리 import\n- numpy: number, 특히 다차원 배열을 다루는 라이브러리(패키지)\n- pandas: 데이터를 다양한 표 형태로 취급할 수 있는 패키지\n- matplotlib: 이미지와 그래프 표시", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "# Colab으로 배정된 가상머신 확인", "_____no_output_____" ], [ "### 현재 디렉토리(폴더)\n### '!'로 시작하는 명령은 가상머신의 명령을 실행하라는 의미", "_____no_output_____" ] ], [ [ "!pwd", "/content\n" ] ], [ [ "### 현재 디렉토리의 내용", "_____no_output_____" ] ], [ [ "!ls -l", "total 12\n-rw-r--r-- 1 root root 4551 Sep 22 01:24 iris.dat\ndrwxr-xr-x 1 root root 4096 Sep 16 13:40 sample_data\n" ] ], [ [ "### sample_data directory에는 Google Colab에서 기본으로 제공하는 데이터가 있음\n### (이번 특강에서 사용할 데이터는 아님)", "_____no_output_____" ] ], [ [ "!ls sample_data", "anscombe.json\t\t mnist_test.csv\ncalifornia_housing_test.csv mnist_train_small.csv\ncalifornia_housing_train.csv README.md\n" ] ], [ [ "# 예제 코드", "_____no_output_____" ] ], [ [ "weights = []\nerrors_log = []\nepochs = 20\neta = 0.01\n\nIRIS_DATA = \"iris.dat\" # Iris 데이터셋을 저장할 파일이름", "_____no_output_____" ] ], [ [ "### os는 운영체제 관련 기능, urllib는 인터넷으로 데이터를 다운로드받기 위한 패키지\n### 인터넷에서 Iris 데이터셋을 다운로드하여 IRIS_DATA 파일에 저장", "_____no_output_____" ] ], [ [ "import os\nfrom urllib.request import urlopen\n\nif not os.path.exists(IRIS_DATA):\n raw = urlopen('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data').read()\n with open(IRIS_DATA, \"wb\") as f:\n f.write(raw)", "_____no_output_____" ], [ "!ls -l", "total 12\n-rw-r--r-- 1 root root 4551 Sep 22 01:24 iris.dat\ndrwxr-xr-x 1 root root 4096 Sep 16 13:40 sample_data\n" ] ], [ [ "# pandas의 read_csv 명령을 사용하여 데이터를 pandas DataFrame 구조로 읽어들임", "_____no_output_____" ] ], [ [ "df = pd.read_csv(IRIS_DATA, header=None)", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "꽃받침 길이, 꽃받침 너비, 꽃잎 길이, 꽃잎 너비 (cm), 붓꽃 종류", "_____no_output_____" ] ], [ [ "df[4].values", "_____no_output_____" ], [ "df.iloc[0:100, 4]", "_____no_output_____" ], [ "df.iloc[0:100, 4].values", "_____no_output_____" ], [ "# setosa and versicolor\ny = np.asarray(df.iloc[0:100, 4].values)\ny = np.where(y == 'Iris-setosa', -1, 1)\n\n# sepal length and petal length\nX = np.asarray(df.iloc[0:100, [0,2]].values)", "_____no_output_____" ], [ "print(y)", "[-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1\n -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1\n -1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1]\n" ], [ "print(X)", "[[5.1 1.4]\n [4.9 1.4]\n [4.7 1.3]\n [4.6 1.5]\n [5. 1.4]\n [5.4 1.7]\n [4.6 1.4]\n [5. 1.5]\n [4.4 1.4]\n [4.9 1.5]\n [5.4 1.5]\n [4.8 1.6]\n [4.8 1.4]\n [4.3 1.1]\n [5.8 1.2]\n [5.7 1.5]\n [5.4 1.3]\n [5.1 1.4]\n [5.7 1.7]\n [5.1 1.5]\n [5.4 1.7]\n [5.1 1.5]\n [4.6 1. ]\n [5.1 1.7]\n [4.8 1.9]\n [5. 1.6]\n [5. 1.6]\n [5.2 1.5]\n [5.2 1.4]\n [4.7 1.6]\n [4.8 1.6]\n [5.4 1.5]\n [5.2 1.5]\n [5.5 1.4]\n [4.9 1.5]\n [5. 
1.2]\n [5.5 1.3]\n [4.9 1.5]\n [4.4 1.3]\n [5.1 1.5]\n [5. 1.3]\n [4.5 1.3]\n [4.4 1.3]\n [5. 1.6]\n [5.1 1.9]\n [4.8 1.4]\n [5.1 1.6]\n [4.6 1.4]\n [5.3 1.5]\n [5. 1.4]\n [7. 4.7]\n [6.4 4.5]\n [6.9 4.9]\n [5.5 4. ]\n [6.5 4.6]\n [5.7 4.5]\n [6.3 4.7]\n [4.9 3.3]\n [6.6 4.6]\n [5.2 3.9]\n [5. 3.5]\n [5.9 4.2]\n [6. 4. ]\n [6.1 4.7]\n [5.6 3.6]\n [6.7 4.4]\n [5.6 4.5]\n [5.8 4.1]\n [6.2 4.5]\n [5.6 3.9]\n [5.9 4.8]\n [6.1 4. ]\n [6.3 4.9]\n [6.1 4.7]\n [6.4 4.3]\n [6.6 4.4]\n [6.8 4.8]\n [6.7 5. ]\n [6. 4.5]\n [5.7 3.5]\n [5.5 3.8]\n [5.5 3.7]\n [5.8 3.9]\n [6. 5.1]\n [5.4 4.5]\n [6. 4.5]\n [6.7 4.7]\n [6.3 4.4]\n [5.6 4.1]\n [5.5 4. ]\n [5.5 4.4]\n [6.1 4.6]\n [5.8 4. ]\n [5. 3.3]\n [5.6 4.2]\n [5.7 4.2]\n [5.7 4.2]\n [6.2 4.3]\n [5.1 3. ]\n [5.7 4.1]]\n" ], [ "# Versicolor\npos = X[[y == 1]]\n# Setosa\nneg = X[[y == -1]]", "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n \n/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:4: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n after removing the cwd from sys.path.\n" ], [ "print(pos)", "[[7. 4.7]\n [6.4 4.5]\n [6.9 4.9]\n [5.5 4. ]\n [6.5 4.6]\n [5.7 4.5]\n [6.3 4.7]\n [4.9 3.3]\n [6.6 4.6]\n [5.2 3.9]\n [5. 3.5]\n [5.9 4.2]\n [6. 4. ]\n [6.1 4.7]\n [5.6 3.6]\n [6.7 4.4]\n [5.6 4.5]\n [5.8 4.1]\n [6.2 4.5]\n [5.6 3.9]\n [5.9 4.8]\n [6.1 4. ]\n [6.3 4.9]\n [6.1 4.7]\n [6.4 4.3]\n [6.6 4.4]\n [6.8 4.8]\n [6.7 5. ]\n [6. 4.5]\n [5.7 3.5]\n [5.5 3.8]\n [5.5 3.7]\n [5.8 3.9]\n [6. 5.1]\n [5.4 4.5]\n [6. 4.5]\n [6.7 4.7]\n [6.3 4.4]\n [5.6 4.1]\n [5.5 4. ]\n [5.5 4.4]\n [6.1 4.6]\n [5.8 4. ]\n [5. 3.3]\n [5.6 4.2]\n [5.7 4.2]\n [5.7 4.2]\n [6.2 4.3]\n [5.1 3. ]\n [5.7 4.1]]\n" ], [ "print(neg)", "[[5.1 1.4]\n [4.9 1.4]\n [4.7 1.3]\n [4.6 1.5]\n [5. 1.4]\n [5.4 1.7]\n [4.6 1.4]\n [5. 1.5]\n [4.4 1.4]\n [4.9 1.5]\n [5.4 1.5]\n [4.8 1.6]\n [4.8 1.4]\n [4.3 1.1]\n [5.8 1.2]\n [5.7 1.5]\n [5.4 1.3]\n [5.1 1.4]\n [5.7 1.7]\n [5.1 1.5]\n [5.4 1.7]\n [5.1 1.5]\n [4.6 1. ]\n [5.1 1.7]\n [4.8 1.9]\n [5. 1.6]\n [5. 1.6]\n [5.2 1.5]\n [5.2 1.4]\n [4.7 1.6]\n [4.8 1.6]\n [5.4 1.5]\n [5.2 1.5]\n [5.5 1.4]\n [4.9 1.5]\n [5. 1.2]\n [5.5 1.3]\n [4.9 1.5]\n [4.4 1.3]\n [5.1 1.5]\n [5. 1.3]\n [4.5 1.3]\n [4.4 1.3]\n [5. 1.6]\n [5.1 1.9]\n [4.8 1.4]\n [5.1 1.6]\n [4.6 1.4]\n [5.3 1.5]\n [5. 
1.4]]\n" ], [ "# versicolor with blue dots and setosa with red dots\nplt.scatter(pos[:,0], pos[:, 1], color='blue', label=\"pos\")\nplt.scatter(neg[:,0], neg[:, 1], color='red', label=\"neg\")\nplt.xlabel(\"x1\")\nplt.ylabel(\"x2\")\nplt.legend(loc=2, scatterpoints=1, fontsize=10)", "_____no_output_____" ], [ "def train(X, y, epochs=epochs, eta=eta):\n global weights\n global errors_log\n weights = np.zeros(1 + X.shape[1])\n print(\"Initial weights\", weights)\n errors_log = []\n\n for i in range(epochs):\n errors = 0\n print(\"EPOCHS\", i+1)\n for xi, target in zip(X, y):\n update = eta * (target - predict(xi))\n #print(xi, \"target\", target, \"sum\", net_input(xi), \"update\", update)\n if update != 0:\n weights[1:] += update * xi\n weights[0] += update\n print(\"Updated WEIGHTS\", weights)\n errors += int(update != 0.0)\n errors_log.append(errors)\n return\n\ndef net_input(X):\n global weights\n return np.dot(X, weights[1:]) + weights[0]\n\ndef predict(X):\n return np.where(net_input(X) > 0.0, 1, -1)", "_____no_output_____" ], [ "train(X, y)", "Initial weights [0. 0. 0.]\nEPOCHS 1\nUpdated WEIGHTS [0.02 0.14 0.094]\nEPOCHS 2\nUpdated WEIGHTS [0. 0.038 0.066]\nUpdated WEIGHTS [-0.02 -0.06 0.038]\nUpdated WEIGHTS [0. 0.08 0.132]\nEPOCHS 3\nUpdated WEIGHTS [-0.02 -0.022 0.104]\nUpdated WEIGHTS [-0.04 -0.12 0.076]\nUpdated WEIGHTS [-0.02 0.02 0.17]\nEPOCHS 4\nUpdated WEIGHTS [-0.04 -0.082 0.142]\nUpdated WEIGHTS [-0.02 0.032 0.212]\nEPOCHS 5\nUpdated WEIGHTS [-0.04 -0.07 0.184]\nEPOCHS 6\nEPOCHS 7\nEPOCHS 8\nEPOCHS 9\nEPOCHS 10\nEPOCHS 11\nEPOCHS 12\nEPOCHS 13\nEPOCHS 14\nEPOCHS 15\nEPOCHS 16\nEPOCHS 17\nEPOCHS 18\nEPOCHS 19\nEPOCHS 20\n" ], [ "print(errors_log)", "[1, 3, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n" ], [ "print(weights)", "[-0.04 -0.07 0.184]\n" ] ], [ [ "$w_{1}x_{1} + w_{2}x_{2} + w_{0} = 0$ \n$x_{2} = - \\frac{w_{1}}{w_{2}}x_{1} - \\frac{w_{0}}{w_{2}}$", "_____no_output_____" ] ], [ [ "\nfig = plt.figure()\nax = fig.add_subplot(111)\n\n# draw between 4 and 7 of x1\npoint_x = np.array([4, 7])\n# x2 = -(w0 + w1 * x1) / w2\npoint_y = np.array([- (weights[0] + weights[1] * 4) / weights[2], - (weights[0] + weights[1] * 7) / weights[2]])\nline, = ax.plot(point_x, point_y, 'b-', picker=5)\n\nax.scatter(pos[:,0], pos[:, 1], color='blue', label=\"pos\")\nax.scatter(neg[:,0], neg[:, 1], color='red', label=\"neg\")\nplt.xlabel(\"x1\")\nplt.ylabel(\"x2\")\nplt.legend(loc=2, scatterpoints=1, fontsize=10)\n\nplt.show()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a6931a1739e65df828c502a61691aa111569e7
211,973
ipynb
Jupyter Notebook
module2-baselines-validation/LS_DS_232_Baselines_Validation.ipynb
damerei/DS-Unit-2-Sprint-3-Classification-Validation
ac781afbda84d3804b7f252b5dc09f2c0148bdd8
[ "MIT" ]
null
null
null
module2-baselines-validation/LS_DS_232_Baselines_Validation.ipynb
damerei/DS-Unit-2-Sprint-3-Classification-Validation
ac781afbda84d3804b7f252b5dc09f2c0148bdd8
[ "MIT" ]
null
null
null
module2-baselines-validation/LS_DS_232_Baselines_Validation.ipynb
damerei/DS-Unit-2-Sprint-3-Classification-Validation
ac781afbda84d3804b7f252b5dc09f2c0148bdd8
[ "MIT" ]
null
null
null
112.095717
83,074
0.766701
[ [ [ "_Lambda School Data Science — Classification & Validation_ \n\n# Baselines & Validation\n\nObjectives\n- Train/Validate/Test split\n- Cross-Validation\n- Begin with baselines", "_____no_output_____" ], [ "## Weather data — mean baseline\n\nLet's try baselines for regression.\n\nYou can [get Past Weather by Zip Code from Climate.gov](https://www.climate.gov/maps-data/dataset/past-weather-zip-code-data-table). I downloaded the data for my town: Normal, Illinois.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd \n\nurl = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Sprint-3-Classification-Validation/master/module2-baselines-validation/weather-normal-il.csv'\nweather = pd.read_csv(url, parse_dates=['DATE']).set_index('DATE')\nweather['2014-05':'2019-05'].plot(y='TMAX')\nplt.title('Daily high temperature in Normal, IL');", "_____no_output_____" ] ], [ [ "Over the years, across the seasons, the average daily high temperature in my town is about 63 degrees.", "_____no_output_____" ] ], [ [ "weather['TMAX'].mean()", "_____no_output_____" ] ], [ [ "Remember from [the preread:](https://github.com/LambdaSchool/DS-Unit-2-Sprint-3-Classification-Validation/blob/master/module2-baselines-validation/model-validation-preread.md#what-does-baseline-mean) \"A baseline for regression can be the mean of the training labels.\"", "_____no_output_____" ], [ "If I predicted that every day, the high will be 63 degrees, I'd be off by about 19 degrees on average.", "_____no_output_____" ] ], [ [ "from sklearn.metrics import mean_absolute_error\npredicted = [weather['TMAX'].mean()] * len(weather) \nmean_absolute_error(weather['TMAX'], predicted)", "_____no_output_____" ] ], [ [ "But, we can get a better baseline here: \"A baseline for time-series regressions can be the value from the previous timestep.\"\n\n*Data Science for Business* explains, \n\n> Weather forecasters have two simple—but not simplistic—baseline models that they compare against. ***One (persistence) predicts that the weather tomorrow is going to be whatever it was today.*** The other (climatology) predicts whatever the average historical weather has been on this day from prior years. Each model performs considerably better than random guessing, and both are so easy to compute that they make natural baselines of comparison. Any new, more complex model must beat these.", "_____no_output_____" ], [ "Let's predict that the weather tomorrow is going to be whatever it was today. Which is another way of saying that the weather today is going to be whatever it was yesterday.\n\nWe can engineer this feature with one line of code, using the pandas [`shift`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html) function.\n\nThis new baseline is off by less than 6 degress on average.", "_____no_output_____" ] ], [ [ "weather['TMAX_yesterday'] = weather.TMAX.shift(1)\nweather = weather.dropna() # Drops the first date, because it doesn't have a \"yesterday\"\nmean_absolute_error(weather.TMAX, weather.TMAX_yesterday)", "_____no_output_____" ] ], [ [ "I applied this same concept for [my first submission to the Kaggle Instacart competition.](https://github.com/rrherr/springboard/blob/master/Kaggle%20Instacart%20first%20submission.ipynb)", "_____no_output_____" ], [ "## Bank Marketing — majority class baseline\n\nhttps://archive.ics.uci.edu/ml/datasets/Bank+Marketing\n\n>The data is related with direct marketing campaigns of a Portuguese banking institution. 
The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to access if the product (bank term deposit) would be ('yes') or not ('no') subscribed. \n\n>Output variable (desired target): \n>y - has the client subscribed a term deposit? (binary: 'yes','no')\n\n>bank-additional-full.csv with all examples (41188) and 20 inputs, ordered by date (from May 2008 to November 2010)", "_____no_output_____" ], [ "Get and read the data", "_____no_output_____" ] ], [ [ "!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip", "--2019-05-09 19:00:23-- https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip\nResolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.252\nConnecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.252|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 444572 (434K) [application/x-httpd-php]\nSaving to: ‘bank-additional.zip.1’\n\nbank-additional.zip 100%[===================>] 434.15K 2.43MB/s in 0.2s \n\n2019-05-09 19:00:23 (2.43 MB/s) - ‘bank-additional.zip.1’ saved [444572/444572]\n\n" ], [ "!unzip bank-additional.zip", "Archive: bank-additional.zip\nreplace bank-additional/.DS_Store? [y]es, [n]o, [A]ll, [N]one, [r]ename: " ], [ "bank = pd.read_csv('bank-additional/bank-additional-full.csv', sep=';')", "_____no_output_____" ] ], [ [ "Assign to X and y", "_____no_output_____" ] ], [ [ "X = bank.drop(columns='y')\ny = bank['y'] == 'yes'", "_____no_output_____" ] ], [ [ "## 3-way split: Train / Validation / Test ", "_____no_output_____" ], [ "We know how to do a _two-way split_, with the [**`sklearn.model_selection.train_test_split`**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function:", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.2, random_state=42, stratify=y)", "_____no_output_____" ] ], [ [ "How can we get from a two-way split, to a three-way split?\n\nWe can use the same function again, to split the training data into training and validation data.", "_____no_output_____" ] ], [ [ "X_train, X_val, y_train, y_val = train_test_split(\n X_train, y_train, test_size=0.3, random_state=42, stratify=y_train)", "_____no_output_____" ], [ "X_train.shape, X_val.shape, X_test.shape, y_train.shape, y_val.shape, y_test.shape", "_____no_output_____" ] ], [ [ "## Majority class baseline", "_____no_output_____" ], [ "Determine the majority class:", "_____no_output_____" ] ], [ [ "y_train.value_counts(normalize=True)", "_____no_output_____" ] ], [ [ "What if we guessed the majority class for every prediction?", "_____no_output_____" ] ], [ [ "majority_class = y_train.mode()[0]\ny_pred = [majority_class] * len(y_val)", "_____no_output_____" ] ], [ [ "#### [`sklearn.metrics.accuracy_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html)\n\nBaseline accuracy by guessing the majority class for every prediction:", "_____no_output_____" ] ], [ [ "from sklearn.metrics import accuracy_score\naccuracy_score(y_val, y_pred)", "_____no_output_____" ] ], [ [ "#### [`sklearn.metrics.roc_auc_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)\n\nBaseline \"ROC AUC\" score by guessing the majority class for every prediction:", "_____no_output_____" ] ], [ [ "from sklearn.metrics import 
roc_auc_score\nroc_auc_score(y_val, y_pred)", "_____no_output_____" ] ], [ [ "## Fast first models", "_____no_output_____" ], [ "### Ignore rows/columns with nulls", "_____no_output_____" ], [ "Does this dataset have nulls?", "_____no_output_____" ] ], [ [ "X_train.isnull().sum()", "_____no_output_____" ] ], [ [ "### Ignore nonnumeric features", "_____no_output_____" ], [ "Here are the numeric features:", "_____no_output_____" ] ], [ [ "X_train.describe(include='number')", "_____no_output_____" ] ], [ [ "Here are the nonnumeric features:", "_____no_output_____" ] ], [ [ "X_train.describe(exclude='number')", "_____no_output_____" ] ], [ [ "Just select the nonnumeric features:", "_____no_output_____" ] ], [ [ "X_train_numeric = X_train.select_dtypes('number')\nX_val_numeric = X_val.select_dtypes('number')", "_____no_output_____" ] ], [ [ "### Shallow trees are good for fast, first baselines, and to look for \"leakage\"", "_____no_output_____" ], [ "#### Shallow trees", "_____no_output_____" ], [ "After naive baselines, *Data Science for Business* suggests [\"decision stumps.\"](https://en.wikipedia.org/wiki/Decision_stump)\n\n> A slightly more complex alternative is a model that only considers a very small amount of feature information. ...\n\n> One example is to build a \"decision stump\"—a decision tree with only one internal node, the root node. A tree limited to one internal node simply means that the tree induction selects the single most informative feature to make a decision. In a well-known paper in machine learning, [Robert Holte (1993)](https://link.springer.com/article/10.1023/A:1022631118932) showed that ***decision stumps often produce quite good baseline performance*** ...\n\n> A decision stump is an example of the strategy of ***choosing the single most informative piece of information*** available and basing all decisions on it. In some cases most of the leverage may be coming from a single feature, and this method assesses whether and to what extent this is the case.\n\nTo fit a \"decision stump\" we could use a [`DecisionTreeClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) model with parameter `max_depth=1`.\n\nIn this case, we'll let our tree grow a little deeper, and use the parameter `max_depth=2`\n\nIn the previous code cell, we selected only the numeric features, to avoid data wrangling and save time. For now, we'll use only the numeric features.", "_____no_output_____" ], [ "#### Looking for leakage", "_____no_output_____" ], [ "[Xavier Amatriain recommends,](https://www.quora.com/What-are-some-best-practices-for-training-machine-learning-models/answer/Xavier-Amatriain)\n\n\"Make sure your training features do not contain data from the “future” (aka time traveling). While this might be easy and obvious in some cases, it can get tricky. ... If your test metric becomes really good all of the sudden, ask yourself what you might be doing wrong. Chances are you are time travelling or overfitting in some way.\"", "_____no_output_____" ], [ "We can test this with the [UCI repository's Bank Marketing dataset](https://archive.ics.uci.edu/ml/datasets/Bank+Marketing). It has a feature which leaks information from the future and should be dropped:\n\n>11 - duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. 
Thus, this input ... should be discarded if the intention is to have a realistic predictive model.", "_____no_output_____" ], [ "#### Let's train a shallow tree basline\n\n... without dropping the leaky `duration` feature.", "_____no_output_____" ] ], [ [ "\nfrom sklearn.tree import DecisionTreeClassifier\n\ntree = DecisionTreeClassifier(max_depth=2)\ntree.fit(X_train_numeric,y_train)\ny_pred_proba = tree.predict_proba(X_val_numeric)[:,1]\nroc_auc_score(y_val, y_pred_proba)", "_____no_output_____" ] ], [ [ "Then we can visualize the tree to see which feature(s) were the \"most informative\":", "_____no_output_____" ] ], [ [ "import graphviz\nfrom sklearn.tree import export_graphviz\n\ndot_data = export_graphviz(tree, out_file=None, feature_names=X_train_numeric.columns, \n class_names=['No', 'Yes'], filled=True, impurity=False, proportion=True)\n\ngraphviz.Source(dot_data)", "_____no_output_____" ] ], [ [ "This baseline has a ROC AUC score above 0.85, and it uses the `duration` feature, as well as `nr.employed`, a \"social and economic context attribute\" for \"number of employees - quarterly indicator.\"", "_____no_output_____" ], [ "#### Let's drop the `duration` feature", "_____no_output_____" ] ], [ [ "\nX_train = X_train.drop(columns='duration')\nX_val = X_val.drop(columns='duration')\nX_test = X_test.drop(columns='duration')\n\nX_train_numeric = X_train.select_dtypes('number')\nX_val_numeric = X_val_numeric.select_dtypes('number')", "_____no_output_____" ] ], [ [ "When the `duration` feature is dropped, then the ROC AUC score drops. Which is what we expect, it's not a bad thing in this situation!", "_____no_output_____" ] ], [ [ "tree = DecisionTreeClassifier(max_depth=2)\ntree.fit(X_train_numeric,y_train)\ny_pred_proba = tree.predict_proba(X_val_numeric)[:,1]\nroc_auc_score(y_val, y_pred_proba)", "_____no_output_____" ], [ "dot_data = export_graphviz(tree, out_file=None, feature_names=X_train_numeric.columns, \n class_names=['No', 'Yes'], filled=True, impurity=False, proportion=True)\n\ngraphviz.Source(dot_data)", "_____no_output_____" ] ], [ [ "### Logistic Regression\n\nLogistic Regression is another great option for fast, first baselines!", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\n\nmodel = LogisticRegression(solver='lbfgs', max_iter=1000)\nmodel.fit(X_train_numeric,y_train)\ny_pred_proba = model.predict_proba(X_val_numeric)[:,1]\nroc_auc_score(y_val,y_pred_proba)", "_____no_output_____" ] ], [ [ "### With Scaler\nhttps://scikit-learn.org/stable/modules/preprocessing.html", "_____no_output_____" ] ], [ [ "import warnings\nfrom sklearn.exceptions import DataConversionWarning\nwarnings.filterwarnings(action='ignore', category=DataConversionWarning)", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train_numeric)\nX_val_scaled = scaler.transform(X_val_numeric)\n\nmodel = LogisticRegression(solver='lbfgs', max_iter=1000)\nmodel.fit(X_train_scaled, y_train)\ny_pred_proba = model.predict_proba(X_val_scaled)[:,1]\nroc_auc_score(y_val,y_pred_proba)", "_____no_output_____" ] ], [ [ "### Same, as a pipeline", "_____no_output_____" ] ], [ [ "\nfrom sklearn.pipeline import make_pipeline\npipeline = make_pipeline(\n StandardScaler(),\n LogisticRegression(solver='lbfgs',max_iter=1000))\n\npipeline.fit(X_train_numeric,y_train)\ny_pred_proba = pipeline.predict_proba(X_val_numeric)[:,1]", "_____no_output_____" ] ], [ [ "### Encode \"low cardinality\" 
categoricals", "_____no_output_____" ], [ "[Cardinality](https://simple.wikipedia.org/wiki/Cardinality) means the number of unique values that a feature has:\n> In mathematics, the cardinality of a set means the number of its elements. For example, the set A = {2, 4, 6} contains 3 elements, and therefore A has a cardinality of 3. \n\nOne-hot encoding adds a dimension for each unique value of each categorical feature. So, it may not be a good choice for \"high cardinality\" categoricals that have dozens, hundreds, or thousands of unique values. \n\nIn this dataset, all the categoricals seem to be \"low cardinality\", so we can use one-hot encoding.", "_____no_output_____" ] ], [ [ "!pip install category_encoders\n\nimport category_encoders as ce\npipeline = make_pipeline(\n ce.OneHotEncoder(use_cat_names=True),\n StandardScaler(),\n LogisticRegression(solver='lbfgs', max_iter=1000))\n\npipeline.fit(X_train,y_train)\ny_pred_proba = pipeline.predict_proba(X_val)[:,1]\nroc_auc_score(y_val,y_pred_proba)", "Collecting category_encoders\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/6e/a1/f7a22f144f33be78afeb06bfa78478e8284a64263a3c09b1ef54e673841e/category_encoders-2.0.0-py2.py3-none-any.whl (87kB)\n\u001b[K |████████████████████████████████| 92kB 5.9MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.16.3)\nRequirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.20.3)\nRequirement already satisfied: statsmodels>=0.6.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.9.0)\nRequirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.5.1)\nRequirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.2.1)\nRequirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.24.2)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.4.1->category_encoders) (1.12.0)\nRequirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2.5.3)\nRequirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2018.9)\nInstalling collected packages: category-encoders\nSuccessfully installed category-encoders-2.0.0\n" ] ], [ [ "#### Install the [Category Encoders](https://github.com/scikit-learn-contrib/categorical-encoding) library\n\nIf you're running on Google Colab:\n\n```\n!pip install category_encoders\n```\n\nIf you're running locally with Anaconda:\n\n```\n!conda install -c conda-forge category_encoders\n```", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "# Baseline with cross-validation + independent test set\nA complete example, as an alternative to Train/Validate/Test\n\n\n#### scikit-learn documentation\n- [`sklearn.model_selection.cross_val_score`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html)\n- [ The `scoring` parameter: defining model evaluation rules](https://scikit-learn.org/stable/modules/model_evaluation.html#the-scoring-parameter-defining-model-evaluation-rules)", "_____no_output_____" ] ], [ [ "# Imports\n%matplotlib inline\nimport warnings\nimport category_encoders as ce\nimport matplotlib.pyplot as plt\nimport 
pandas as pd\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.exceptions import DataConversionWarning\nfrom sklearn.preprocessing import StandardScaler\nwarnings.filterwarnings(action='ignore', category=DataConversionWarning)\n\n# Load data\nbank = pd.read_csv('bank-additional/bank-additional-full.csv', sep=';')\n\n# Assign to X, y\nX = bank.drop(columns='y')\ny = bank['y'] == 'yes'\n\n# Drop leaky & random features\nX = X.drop(columns='duration')\n\n# Split Train, Test\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.2, random_state=42, stratify=y)\n\n# Make pipeline\npipeline = make_pipeline(\n ce.OneHotEncoder(use_cat_names=True), \n StandardScaler(), \n LogisticRegression(solver='lbfgs', max_iter=1000)\n)\n\n# Cross-validate with training data\nscores = cross_val_score(pipeline, X_train, y_train, scoring='roc_auc', cv=10, n_jobs=-1, verbose=10)", "[Parallel(n_jobs=-1)]: Using backend LokyBackend with 2 concurrent workers.\n[Parallel(n_jobs=-1)]: Done 1 tasks | elapsed: 3.9s\n[Parallel(n_jobs=-1)]: Done 4 tasks | elapsed: 6.7s\n[Parallel(n_jobs=-1)]: Done 10 out of 10 | elapsed: 12.8s finished\n" ] ], [ [ "This is the baseline score that more sophisticated models must beat. ", "_____no_output_____" ] ], [ [ "print('Cross-Validation ROC AUC scores:', scores)\nprint('Average:', scores.mean())", "Cross-Validation ROC AUC scores: [0.82042478 0.79227573 0.79162088 0.762977 0.78662274 0.78877613\n 0.76414311 0.79607284 0.80670867 0.77968487]\nAverage: 0.7889306746390174\n" ] ], [ [ "Is more effort justified? It depends. The blogpost [\"Always start with a stupid model\"](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa) explains,\n\n> Here is a very common story: a team wants to implement a model to predict something like the probability of a user clicking an ad. They start with a logistic regression and quickly (after some minor tuning) reach 90% accuracy.\n\n> From there, the question is: Should the team focus on getting the accuracy up to 95%, or should they solve other problems 90% of the way?\n\n> ***If a baseline does well, then you’ve saved yourself the headache of setting up a more complex model. If it does poorly, the kind of mistakes it makes are very instructive*** ...\n\nSo what else can we learn from this baseline? \n\n[\"Always start with a stupid model\"](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa) suggests to look at\n\n> **What type of signal your model picks up on.** Most baselines will allow you to extract ***feature importances***, revealing which aspects of the input are most predictive. 
Analyzing feature importance is a great way to realize how your model is making decisions, and what it might be missing.\n\nWe can do that:", "_____no_output_____" ] ], [ [ "# (Re)fit on training data\npipeline.fit(X_train, y_train)\n\n# Visualize coefficients\nplt.figure(figsize=(10,30))\nplt.title('Coefficients')\ncoefficients = pipeline.named_steps['logisticregression'].coef_[0]\nfeature_names = pipeline.named_steps['onehotencoder'].transform(X_train).columns\npd.Series(coefficients, feature_names).sort_values().plot.barh(color='gray');", "_____no_output_____" ] ], [ [ "[The post](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa) also recommends we consider, \n\n> **What signal your model is missing.** If there is a certain aspect of the data that seems intuitively important but that your model is ignoring, ***a good next step is to engineer a feature*** or pick a different model that could better leverage this particular aspect of your data.", "_____no_output_____" ], [ "### Look at your data (you still need to do it!)\n\nCautionary tales\n- [Exploring the ChestXray14 dataset: problems](https://lukeoakdenrayner.wordpress.com/2017/12/18/the-chestxray14-dataset-problems/)\n- [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide)\n\nIncomplete list of issues to address\n- Categoricals (text, dates/times, high cardinality)\n- Feature Engineering (extraction, interaction, transformations)\n- Missing Values\n- Outliers", "_____no_output_____" ], [ "# ASSIGNMENT options\n\n- **Replicate the lesson code.** [Do it \"the hard way\" or with the \"Benjamin Franklin method.\"](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit)\n- Apply the lesson to other datasets you've worked with before, and compare results.\n- Iterate and improve your **Bank Marketing** model. Engineer new features.\n- Get **weather** data for your own area and calculate both baselines. _\"One (persistence) predicts that the weather tomorrow is going to be whatever it was today. The other (climatology) predicts whatever the average historical weather has been on this day from prior years.\"_ What is the mean absolute error for each baseline? What if you average the two together? \n- [This example from scikit-learn documentation](https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html) demonstrates its improved `OneHotEncoder` and new `ColumnTransformer` objects, which can replace functionality from [third-party libraries](https://github.com/scikit-learn-contrib) like category_encoders and sklearn-pandas. Adapt this example, which uses Titanic data, to work with Bank Marketing or another dataset.\n- When would this notebook's pipelines fail? How could you fix them? Add more [preprocessing](https://scikit-learn.org/stable/modules/preprocessing.html) and [imputation](https://scikit-learn.org/stable/modules/impute.html) to your [pipelines](https://scikit-learn.org/stable/modules/compose.html) with scikit-learn.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e7a69c9c11c7a43aa1215881497db5a1296f4b8b
39,920
ipynb
Jupyter Notebook
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
c461aa215339a6816810dfef5a92a6e375f9bc66
[ "Apache-2.0" ]
11
2021-09-08T05:39:02.000Z
2022-03-25T14:35:22.000Z
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
c461aa215339a6816810dfef5a92a6e375f9bc66
[ "Apache-2.0" ]
118
2021-08-28T03:09:44.000Z
2022-03-31T00:38:44.000Z
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
c461aa215339a6816810dfef5a92a6e375f9bc66
[ "Apache-2.0" ]
110
2021-09-02T15:01:35.000Z
2022-03-31T12:32:48.000Z
39.022483
404
0.605461
[ [ [ "# Kubeflow pipelines\n\n**Learning Objectives:**\n 1. Learn how to deploy a Kubeflow cluster on GCP\n 1. Learn how to create a experiment in Kubeflow\n 1. Learn how to package you code into a Kubeflow pipeline\n 1. Learn how to run a Kubeflow pipeline in a repeatable and traceable way\n\n\n## Introduction\n\nIn this notebook, we will first setup a Kubeflow cluster on GCP.\nThen, we will create a Kubeflow experiment and a Kubflow pipeline from our taxifare machine learning code. At last, we will run the pipeline on the Kubeflow cluster, providing us with a reproducible and traceable way to execute machine learning code.", "_____no_output_____" ] ], [ [ "!pip3 install --user kfp --upgrade", "Requirement already satisfied: kfp in /home/jupyter/.local/lib/python3.7/site-packages (1.7.2)\nRequirement already satisfied: google-auth<2,>=1.6.1 in /opt/conda/lib/python3.7/site-packages (from kfp) (1.34.0)\nRequirement already satisfied: jsonschema<4,>=3.0.1 in /opt/conda/lib/python3.7/site-packages (from kfp) (3.2.0)\nRequirement already satisfied: PyYAML<6,>=5.3 in /opt/conda/lib/python3.7/site-packages (from kfp) (5.4.1)\nRequirement already satisfied: kfp-server-api<2.0.0,>=1.1.2 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (1.7.0)\nRequirement already satisfied: pydantic<2,>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from kfp) (1.8.2)\nRequirement already satisfied: kfp-pipeline-spec<0.2.0,>=0.1.9 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.1.9)\nRequirement already satisfied: cloudpickle<2,>=1.3.0 in /opt/conda/lib/python3.7/site-packages (from kfp) (1.6.0)\nRequirement already satisfied: requests-toolbelt<1,>=0.8.0 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.9.1)\nRequirement already satisfied: kubernetes<13,>=8.0.0 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (12.0.1)\nRequirement already satisfied: protobuf<4,>=3.13.0 in /opt/conda/lib/python3.7/site-packages (from kfp) (3.16.0)\nRequirement already satisfied: Deprecated<2,>=1.2.7 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (1.2.12)\nRequirement already satisfied: tabulate<1,>=0.8.6 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.8.9)\nRequirement already satisfied: google-cloud-storage<2,>=1.20.0 in /opt/conda/lib/python3.7/site-packages (from kfp) (1.41.1)\nRequirement already satisfied: google-api-python-client<2,>=1.7.8 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (1.12.8)\nRequirement already satisfied: click<8,>=7.1.1 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (7.1.2)\nRequirement already satisfied: fire<1,>=0.3.1 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.4.0)\nRequirement already satisfied: absl-py<=0.11,>=0.9 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.11.0)\nRequirement already satisfied: docstring-parser<1,>=0.7.3 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.10)\nRequirement already satisfied: strip-hints<1,>=0.1.8 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.1.10)\nRequirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from absl-py<=0.11,>=0.9->kfp) (1.16.0)\nRequirement already satisfied: wrapt<2,>=1.10 in /opt/conda/lib/python3.7/site-packages (from Deprecated<2,>=1.2.7->kfp) (1.12.1)\nRequirement already satisfied: termcolor in /opt/conda/lib/python3.7/site-packages (from fire<1,>=0.3.1->kfp) (1.1.0)\nRequirement already satisfied: 
httplib2<1dev,>=0.15.0 in /opt/conda/lib/python3.7/site-packages (from google-api-python-client<2,>=1.7.8->kfp) (0.19.1)\nRequirement already satisfied: google-auth-httplib2>=0.0.3 in /opt/conda/lib/python3.7/site-packages (from google-api-python-client<2,>=1.7.8->kfp) (0.1.0)\nRequirement already satisfied: google-api-core<2dev,>=1.21.0 in /opt/conda/lib/python3.7/site-packages (from google-api-python-client<2,>=1.7.8->kfp) (1.31.1)\nRequirement already satisfied: uritemplate<4dev,>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from google-api-python-client<2,>=1.7.8->kfp) (3.0.1)\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (2.25.1)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (1.53.0)\nRequirement already satisfied: setuptools>=40.3.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (49.6.0.post20210108)\nRequirement already satisfied: packaging>=14.3 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (21.0)\nRequirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (2021.1)\nRequirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.1->kfp) (4.7.2)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.1->kfp) (0.2.7)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.1->kfp) (4.2.2)\nRequirement already satisfied: google-resumable-media<3.0dev,>=1.3.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage<2,>=1.20.0->kfp) (1.3.2)\nRequirement already satisfied: google-cloud-core<3.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage<2,>=1.20.0->kfp) (1.7.2)\nRequirement already satisfied: google-crc32c<2.0dev,>=1.0 in /opt/conda/lib/python3.7/site-packages (from google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage<2,>=1.20.0->kfp) (1.1.2)\nRequirement already satisfied: cffi>=1.0.0 in /opt/conda/lib/python3.7/site-packages (from google-crc32c<2.0dev,>=1.0->google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage<2,>=1.20.0->kfp) (1.14.6)\nRequirement already satisfied: pycparser in /opt/conda/lib/python3.7/site-packages (from cffi>=1.0.0->google-crc32c<2.0dev,>=1.0->google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage<2,>=1.20.0->kfp) (2.20)\nRequirement already satisfied: pyparsing<3,>=2.4.2 in /opt/conda/lib/python3.7/site-packages (from httplib2<1dev,>=0.15.0->google-api-python-client<2,>=1.7.8->kfp) (2.4.7)\nRequirement already satisfied: pyrsistent>=0.14.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema<4,>=3.0.1->kfp) (0.17.3)\nRequirement already satisfied: importlib-metadata in /opt/conda/lib/python3.7/site-packages (from jsonschema<4,>=3.0.1->kfp) (4.6.3)\nRequirement already satisfied: attrs>=17.4.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema<4,>=3.0.1->kfp) (21.2.0)\nRequirement already satisfied: python-dateutil in /opt/conda/lib/python3.7/site-packages (from 
kfp-server-api<2.0.0,>=1.1.2->kfp) (2.8.2)\nRequirement already satisfied: urllib3>=1.15 in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=1.1.2->kfp) (1.26.6)\nRequirement already satisfied: certifi in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=1.1.2->kfp) (2021.5.30)\nRequirement already satisfied: requests-oauthlib in /opt/conda/lib/python3.7/site-packages (from kubernetes<13,>=8.0.0->kfp) (1.3.0)\nRequirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from kubernetes<13,>=8.0.0->kfp) (0.57.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.1->kfp) (0.4.8)\nRequirement already satisfied: typing-extensions>=3.7.4.3 in /opt/conda/lib/python3.7/site-packages (from pydantic<2,>=1.8.2->kfp) (3.10.0.0)\nRequirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (2.10)\nRequirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (4.0.0)\nRequirement already satisfied: wheel in /opt/conda/lib/python3.7/site-packages (from strip-hints<1,>=0.1.8->kfp) (0.36.2)\nRequirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->jsonschema<4,>=3.0.1->kfp) (3.5.0)\nRequirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from requests-oauthlib->kubernetes<13,>=8.0.0->kfp) (3.1.1)\n" ] ], [ [ "### Restart the kernel\n\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.", "_____no_output_____" ], [ "### Import libraries and define constants", "_____no_output_____" ] ], [ [ "from os import path\n\nimport kfp\nimport kfp.compiler as compiler\nimport kfp.components as comp\nimport kfp.dsl as dsl\nimport kfp.gcp as gcp\nimport kfp.notebook", "_____no_output_____" ] ], [ [ "## Setup a Kubeflow cluster on GCP", "_____no_output_____" ], [ "**TODO 1**", "_____no_output_____" ], [ "To deploy a [Kubeflow](https://www.kubeflow.org/) cluster\nin your GCP project, use the [AI Platform pipelines](https://console.cloud.google.com/ai-platform/pipelines):\n\n1. Go to [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines) in the GCP Console.\n1. Create a new instance\n2. Hit \"Configure\"\n3. Check the box \"Allow access to the following Cloud APIs\"\n1. Hit \"Create Cluster\"\n4. Hit \"Deploy\"\n\nWhen the cluster is ready, go back to the AI Platform pipelines page and click on \"SETTINGS\" entry for your cluster.\nThis will bring up a pop up with code snippets on how to access the cluster \nprogrammatically. \n\nCopy the \"host\" entry and set the \"HOST\" variable below with that.\n", "_____no_output_____" ] ], [ [ "HOST = \"\" # TODO: fill in the HOST information for the cluster", "_____no_output_____" ] ], [ [ "### Authenticate your KFP cluster with a Kubernetes secret\n\nIf you run pipelines that requires calling any GCP services, you need to set the application default credential to a pipeline step by mounting the proper GCP service account token as a Kubernetes secret.\n\nFirst point your kubectl current context to your cluster. 
Go back to your [Kubeflow cluster dashboard](https://console.cloud.google.com/ai-platform/pipelines/clusters) or navigate to `Navigation menu > AI Platform > Pipelines` and look to see the cluster name, zone and namespace for the pipeline you deployed above. It's likely called `cluster-1` if this is the first AI Pipelines you've created. ", "_____no_output_____" ] ], [ [ "# Change below if necessary\nPROJECT = !gcloud config get-value project # noqa: E999\nPROJECT = PROJECT[0]\nBUCKET = PROJECT # change if needed\nCLUSTER = \"cluster-1\" # change if needed\nZONE = \"us-central1-a\" # change if needed\nNAMESPACE = \"default\" # change if needed\n\n%env PROJECT=$PROJECT\n%env CLUSTER=$CLUSTER\n%env ZONE=$ZONE\n%env NAMESPACE=$NAMESPACE", "env: PROJECT=dsparing-sandbox\nenv: CLUSTER=cluster-1\nenv: ZONE=us-central1-a\nenv: NAMESPACE=default\n" ], [ "# Configure kubectl to connect with the cluster\n!gcloud container clusters get-credentials \"$CLUSTER\" --zone \"$ZONE\" --project \"$PROJECT\"", "Fetching cluster endpoint and auth data.\nkubeconfig entry generated for cluster-1.\n" ] ], [ [ "We'll create a service account called `kfpdemo` with the necessary IAM permissions for our cluster secret. We'll give this service account permissions for any GCP services it might need. This `taxifare` pipeline needs access to Cloud Storage, so we'll give it the `storage.admin` role and `ml.admin`. Open a Cloud Shell and copy/paste this code in the terminal there.\n\n```bash\nPROJECT=$(gcloud config get-value project)\n\n# Create service account\ngcloud iam service-accounts create kfpdemo \\\n --display-name kfpdemo --project $PROJECT\n\n# Grant permissions to the service account by binding roles\ngcloud projects add-iam-policy-binding $PROJECT \\\n --member=serviceAccount:kfpdemo@$PROJECT.iam.gserviceaccount.com \\\n --role=roles/storage.admin\n \ngcloud projects add-iam-policy-binding $PROJECT \\\n --member=serviceAccount:kfpdemo@$PROJECT.iam.gserviceaccount.com \\\n --role=roles/ml.admin \n```", "_____no_output_____" ], [ "Then, we'll create and download a key for this service account and store the service account credential as a Kubernetes secret called `user-gcp-sa` in the cluster.", "_____no_output_____" ] ], [ [ "%%bash\ngcloud iam service-accounts keys create application_default_credentials.json \\\n --iam-account kfpdemo@$PROJECT.iam.gserviceaccount.com", "_____no_output_____" ], [ "# Check that the key was downloaded correctly.\n!ls application_default_credentials.json", "application_default_credentials.json\n" ], [ "# Create a k8s secret. If already exists, override.\n!kubectl create secret generic user-gcp-sa \\\n --from-file=user-gcp-sa.json=application_default_credentials.json \\\n -n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -", "secret/user-gcp-sa configured\n" ] ], [ [ "## Create an experiment", "_____no_output_____" ], [ "**TODO 2**", "_____no_output_____" ], [ "We will start by creating a Kubeflow client to pilot the Kubeflow cluster:", "_____no_output_____" ] ], [ [ "client = kfp.Client(host=HOST)", "_____no_output_____" ] ], [ [ "Let's look at the experiments that are running on this cluster. 
Since you just launched it, you should see only a single \"Default\" experiment:", "_____no_output_____" ] ], [ [ "client.list_experiments()", "_____no_output_____" ] ], [ [ "Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline:", "_____no_output_____" ] ], [ [ "exp = client.create_experiment(name=\"taxifare\")", "_____no_output_____" ] ], [ [ "Let's make sure the experiment has been created correctly:", "_____no_output_____" ] ], [ [ "client.list_experiments()", "_____no_output_____" ] ], [ [ "## Packaging your code into Kubeflow components", "_____no_output_____" ], [ "We have packaged our taxifare ml pipeline into three components:\n* `./components/bq2gcs` that creates the training and evaluation data from BigQuery and exports it to GCS\n* `./components/trainjob` that launches the training container on AI-platform and exports the model\n* `./components/deploymodel` that deploys the trained model to AI-platform as a REST API\n\nEach of these components has been wrapped into a Docker container, in the same way we did with the taxifare training code in the previous lab.\n\nIf you inspect the code in these folders, you'll notice that the `main.py` or `main.sh` files contain the code we previously executed in the notebooks (loading the data to GCS from BQ, or launching a training job to AI-platform, etc.). The last line in the `Dockerfile` tells you that these files are executed when the container is run. \nSo we just packaged our ml code into light container images for reproducibility. \n\nWe have made it simple for you to build the container images and push them to the Google Cloud image registry gcr.io in your project:", "_____no_output_____" ] ], [ [ "# Builds the taxifare trainer container in case you skipped the optional part\n# of lab 1\n!taxifare/scripts/build.sh", "Sending build context to Docker daemon 157.2kB\nStep 1/4 : FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-5:latest\n ---> 19875ee1008d\nStep 2/4 : COPY . 
/code\n ---> Using cache\n ---> 9c12e76129be\nStep 3/4 : WORKDIR /code\n ---> Using cache\n ---> 0b77723599c9\nStep 4/4 : ENTRYPOINT [\"python3\", \"-m\", \"trainer.task\"]\n ---> Using cache\n ---> ec727c7b2e63\nSuccessfully built ec727c7b2e63\nSuccessfully tagged gcr.io/dsparing-sandbox/taxifare_training_container:latest\n" ], [ "# Pushes the taxifare trainer container to gcr/io\n!taxifare/scripts/push.sh", "Using default tag: latest\nThe push refers to repository [gcr.io/dsparing-sandbox/taxifare_training_container]\n\n\u001b[1B5a90caa8: Preparing \n\u001b[1B0326dd85: Preparing \n\u001b[1Bbe2a94be: Preparing \n\u001b[1B2667b401: Preparing \n\u001b[1B97d991c7: Preparing \n\u001b[1Bbdf9b557: Preparing \n\u001b[1Bdbc2b748: Preparing \n\u001b[1Bb8f29c2e: Preparing \n\u001b[1B7b2f7486: Preparing \n\u001b[1B506c54ba: Preparing \n\u001b[1B3dc8d38f: Preparing \n\u001b[1Bd7ce97e4: Preparing \n\u001b[1B97e5c777: Preparing \n\u001b[1B5dfd94f2: Preparing \n\u001b[1Bafebd1ec: Preparing \n\u001b[1Bbf18a086: Preparing \n\u001b[1B7318f223: Preparing \n\u001b[1B3d35a813: Preparing \n\u001b[1Ba1af4c10: Preparing \n\u001b[1B9b09744f: Layer already exists \u001b[18A\u001b[2K\u001b[14A\u001b[2K\u001b[10A\u001b[2K\u001b[8A\u001b[2K\u001b[3A\u001b[2K\u001b[2A\u001b[2Klatest: digest: sha256:90f6400640d2e3c16d7e3d5bbeb4974ef1e9c99155fa2a67032608bab735d002 size: 4507\n" ], [ "# Builds the KF component containers and push them to gcr/io\n!cd pipelines && make components", "make[1]: Entering directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/bq2gcs'\nrm: cannot remove './venv': No such file or directory\nOK\nSending build context to Docker daemon 21.5kB\nStep 1/6 : FROM google/cloud-sdk:latest\n ---> 915a516535e8\nStep 2/6 : RUN apt-get update && apt-get install --yes python3-pip\n ---> Using cache\n ---> 0f653294e07c\nStep 3/6 : COPY . /code\n ---> Using cache\n ---> 7d8f8d185c30\nStep 4/6 : WORKDIR /code\n ---> Using cache\n ---> 20d39822bdb4\nStep 5/6 : RUN pip3 install google-cloud-bigquery\n ---> Using cache\n ---> a1baf2091090\nStep 6/6 : ENTRYPOINT [\"python3\", \"./main.py\"]\n ---> Using cache\n ---> 05e9191c9619\nSuccessfully built 05e9191c9619\nSuccessfully tagged gcr.io/dsparing-sandbox/taxifare-bq2gcs:latest\nUsing default tag: latest\nThe push refers to repository [gcr.io/dsparing-sandbox/taxifare-bq2gcs]\n\n\u001b[1B038e5a12: Preparing \n\u001b[1B7ee6f1b4: Preparing \n\u001b[1Bced31aad: Preparing \n\u001b[1Be268b455: Preparing \n\u001b[1B15a7c280: Preparing \n\u001b[1B9ae3a881: Preparing \n\u001b[1B24ad8c63: Preparing \n\u001b[1Bd95c5384: Preparing \n\u001b[1Bd7a1159c: Preparing \n\u001b[1Bd1217615: Preparing \n\u001b[1Bc1bc2645: Layer already exists \u001b[6A\u001b[2K\u001b[5A\u001b[2K\u001b[1A\u001b[2Klatest: digest: sha256:2b9f25d58c03019983d785b6fd7910e61d886d9d84e96781aade0ae05a50e020 size: 2633\nmake[1]: Leaving directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/bq2gcs'\nmake[1]: Entering directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/trainjob'\nrm: cannot remove './venv': No such file or directory\nOK\nSending build context to Docker daemon 14.85kB\nStep 1/5 : FROM google/cloud-sdk:latest\n ---> 915a516535e8\nStep 2/5 : COPY . 
/code\n ---> Using cache\n ---> f598ddbd44a2\nStep 3/5 : WORKDIR /code\n ---> Using cache\n ---> 79f9d21c1bcf\nStep 4/5 : RUN pip3 install cloudml-hypertune\n ---> Using cache\n ---> 9edc5e05ae65\nStep 5/5 : ENTRYPOINT [\"./main.sh\"]\n ---> Using cache\n ---> ae939b43e795\nSuccessfully built ae939b43e795\nSuccessfully tagged gcr.io/dsparing-sandbox/taxifare-trainjob:latest\nUsing default tag: latest\nThe push refers to repository [gcr.io/dsparing-sandbox/taxifare-trainjob]\n\n\u001b[1B9e8dbe19: Preparing \n\u001b[1Bf0a19019: Preparing \n\u001b[1Be268b455: Preparing \n\u001b[1B15a7c280: Preparing \n\u001b[1B9ae3a881: Preparing \n\u001b[1B24ad8c63: Preparing \n\u001b[1Bd95c5384: Preparing \n\u001b[1Bd7a1159c: Preparing \n\u001b[1Bd1217615: Preparing \n\u001b[2Bd1217615: Layer already exists \u001b[6A\u001b[2K\u001b[5A\u001b[2Klatest: digest: sha256:4d28b7a097e81f911782a0101b395b2c3d56f83b7a55bddd8a3a83d2ab3e18b6 size: 2420\nmake[1]: Leaving directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/trainjob'\nmake[1]: Entering directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/deploymodel'\nrm: cannot remove './venv': No such file or directory\nOK\nSending build context to Docker daemon 14.85kB\nStep 1/4 : FROM google/cloud-sdk:latest\n ---> 915a516535e8\nStep 2/4 : COPY . /code\n ---> Using cache\n ---> 8ce1e25d9ba8\nStep 3/4 : WORKDIR /code\n ---> Using cache\n ---> 9b585070839b\nStep 4/4 : ENTRYPOINT [\"./main.sh\"]\n ---> Using cache\n ---> 0559c7f14028\nSuccessfully built 0559c7f14028\nSuccessfully tagged gcr.io/dsparing-sandbox/taxifare-deploymodel:latest\nUsing default tag: latest\nThe push refers to repository [gcr.io/dsparing-sandbox/taxifare-deploymodel]\n\n\u001b[1B539f9c38: Preparing \n\u001b[1Be268b455: Preparing \n\u001b[1B15a7c280: Preparing \n\u001b[1B9ae3a881: Preparing \n\u001b[1B24ad8c63: Preparing \n\u001b[1Bd95c5384: Preparing \n\u001b[1Bd7a1159c: Preparing \n\u001b[1Bd1217615: Preparing \n\u001b[1Bc1bc2645: Layer already exists \u001b[5A\u001b[2K\u001b[3A\u001b[2Klatest: digest: sha256:6028a2f793dc35ee033376f1c43294cfdb0e3128443b6dbea7836393d6f53fd2 size: 2211\nmake[1]: Leaving directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/deploymodel'\n" ] ], [ [ "Now that the container images are pushed to the [registry in your project](https://console.cloud.google.com/gcr), we need to create yaml files describing to Kubeflow how to use these containers. 
It boils down essentially to\n* describing what arguments Kubeflow needs to pass to the containers when it runs them\n* telling Kubeflow where to fetch the corresponding Docker images\n\nIn the cells below, we have three of these \"Kubeflow component description files\", one for each of our components.", "_____no_output_____" ], [ "**TODO 3**", "_____no_output_____" ], [ "**IMPORTANT: Modify the image URI in the cell \nbelow to reflect that you pushed the images into the gcr.io associated with your project.**", "_____no_output_____" ] ], [ [ "%%writefile bq2gcs.yaml\n\nname: bq2gcs\n \ndescription: |\n This component creates the training and\n validation datasets as BiqQuery tables and export\n them into a Google Cloud Storage bucket at\n gs://qwiklabs-gcp-00-568a75dfa3e1/taxifare/data.\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-bq2gcs\n args: [\"--bucket\", {inputValue: Input Bucket}]", "Overwriting bq2gcs.yaml\n" ], [ "%%writefile trainjob.yaml\n\nname: trainjob\n \ndescription: |\n This component trains a model to predict that taxi fare in NY.\n It takes as argument a GCS bucket and expects its training and\n eval data to be at gs://<BUCKET>/taxifare/data/ and will export\n the trained model at gs://<BUCKET>/taxifare/model/.\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-trainjob\n args: [{inputValue: Input Bucket}]", "Overwriting trainjob.yaml\n" ], [ "%%writefile deploymodel.yaml\n\nname: deploymodel\n \ndescription: |\n This component deploys a trained taxifare model on GCP as taxifare:dnn.\n It takes as argument a GCS bucket and expects the model to deploy \n to be found at gs://<BUCKET>/taxifare/model/export/savedmodel/\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-deploymodel\n args: [{inputValue: Input Bucket}]", "Overwriting deploymodel.yaml\n" ] ], [ [ "## Create a Kubeflow pipeline", "_____no_output_____" ], [ "The code below creates a kubeflow pipeline by decorating a regular function with the\n`@dsl.pipeline` decorator. 
Now the arguments of this decorated function will be\nthe input parameters of the Kubeflow pipeline.\n\nInside the function, we describe the pipeline by\n* loading the yaml component files we created above into a Kubeflow `op`\n* specifying the order into which the Kubeflow ops should be run", "_____no_output_____" ] ], [ [ "# TODO 3\nPIPELINE_TAR = \"taxifare.tar.gz\"\nBQ2GCS_YAML = \"./bq2gcs.yaml\"\nTRAINJOB_YAML = \"./trainjob.yaml\"\nDEPLOYMODEL_YAML = \"./deploymodel.yaml\"\n\n\[email protected](\n name=\"Taxifare\",\n description=\"Train a ml model to predict the taxi fare in NY\",\n)\ndef pipeline(gcs_bucket_name=\"<bucket where data and model will be exported>\"):\n\n bq2gcs_op = comp.load_component_from_file(BQ2GCS_YAML)\n bq2gcs = bq2gcs_op(\n input_bucket=gcs_bucket_name,\n )\n\n\n\"\"\"\n trainjob_op = comp.load_component_from_file(TRAINJOB_YAML)\n trainjob = trainjob_op(\n input_bucket=gcs_bucket_name,\n )\n\n deploymodel_op = comp.load_component_from_file(DEPLOYMODEL_YAML)\n deploymodel = deploymodel_op(\n input_bucket=gcs_bucket_name,\n )\n\n trainjob.after(bq2gcs)\n deploymodel.after(trainjob)\n\"\"\"", "_____no_output_____" ] ], [ [ "The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be either uploaded to the Kubeflow cluster from the UI, or programatically, as we will do below:", "_____no_output_____" ] ], [ [ "compiler.Compiler().compile(pipeline, PIPELINE_TAR)", "/home/jupyter/.local/lib/python3.7/site-packages/kfp/components/_components.py:175: FutureWarning: Container component must specify command to be compatible with KFP v2 compatible mode and emissary executor, which will be the default executor for KFP v2.https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/\n category=FutureWarning,\n" ], [ "ls $PIPELINE_TAR", "taxifare.tar.gz\n" ] ], [ [ "If you untar and uzip this pipeline artifact, you'll see that the compiler has transformed the\nPython description of the pipeline into yaml description!\n\nNow let's feed Kubeflow with our pipeline and run it using our client:", "_____no_output_____" ] ], [ [ "# TODO 4\nrun = client.run_pipeline(\n experiment_id=exp.id,\n job_name=\"taxifare\",\n pipeline_package_path=\"taxifare.tar.gz\",\n params={\n \"gcs_bucket_name\": BUCKET,\n },\n)", "_____no_output_____" ] ], [ [ "Have a look at the link to monitor the run. ", "_____no_output_____" ], [ "Now all the runs are nicely organized under the experiment in the UI, and new runs can be either manually launched or scheduled through the UI in a completely repeatable and traceable way!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
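The Kubeflow notebook in the row above loads each component YAML into an op and chains the ops, but its pipeline function leaves the `trainjob` and `deploymodel` steps inside a triple-quoted string, so only `bq2gcs` is actually wired up. As an illustrative sketch only (KFP v1 SDK; the YAML filenames, the `input_bucket` parameter name and the default `gcs_bucket_name` value are taken from that notebook), the fully chained pipeline would look roughly like this:

```python
# Hedged sketch: three container components chained with explicit ordering,
# then compiled to a tarball that can be uploaded to the Kubeflow cluster.
import kfp.compiler as compiler
import kfp.components as comp
import kfp.dsl as dsl

BQ2GCS_YAML = "./bq2gcs.yaml"          # component specs written by the notebook
TRAINJOB_YAML = "./trainjob.yaml"
DEPLOYMODEL_YAML = "./deploymodel.yaml"

@dsl.pipeline(name="Taxifare", description="Train and deploy the NY taxi fare model")
def pipeline(gcs_bucket_name="<bucket where data and model will be exported>"):
    bq2gcs = comp.load_component_from_file(BQ2GCS_YAML)(input_bucket=gcs_bucket_name)
    trainjob = comp.load_component_from_file(TRAINJOB_YAML)(input_bucket=gcs_bucket_name)
    deploymodel = comp.load_component_from_file(DEPLOYMODEL_YAML)(input_bucket=gcs_bucket_name)
    trainjob.after(bq2gcs)        # train only after the data export has finished
    deploymodel.after(trainjob)   # deploy only after training has finished

compiler.Compiler().compile(pipeline, "taxifare.tar.gz")
```

The `.after(...)` calls are what encode the step ordering in the compiled workflow; without them the three ops have no data dependency and could be scheduled in parallel.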
e7a69f74c0fc315ef38272eafbaa2c88cefb2730
42,413
ipynb
Jupyter Notebook
planet_experimental2.ipynb
amittal27/course-v3
fd637b698276b4aa7816cdbfbd3ce599460b0161
[ "Apache-2.0" ]
null
null
null
planet_experimental2.ipynb
amittal27/course-v3
fd637b698276b4aa7816cdbfbd3ce599460b0161
[ "Apache-2.0" ]
null
null
null
planet_experimental2.ipynb
amittal27/course-v3
fd637b698276b4aa7816cdbfbd3ce599460b0161
[ "Apache-2.0" ]
null
null
null
68.964228
2,359
0.614623
[ [ [ "<a href=\"https://colab.research.google.com/github/amittal27/course-v3/blob/master/planet_experimental2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "### **Setup**", "_____no_output_____" ] ], [ [ "%reload_ext autoreload\n%autoreload 2\n%matplotlib inline", "_____no_output_____" ], [ "from fastai.vision import *", "_____no_output_____" ] ], [ [ "### **Configure data**", "_____no_output_____" ] ], [ [ "path = Path.cwd()/'planet'\npath.mkdir(parents=True, exist_ok=True)\npath", "_____no_output_____" ] ], [ [ "### **Multiclassification**", "_____no_output_____" ] ], [ [ "# read the csv file using the pandas library (popular way of dealing with tabular data in python), print the first five rows\ndf = pd.read_csv(path/'train_classes.csv')\ndf.head()", "_____no_output_____" ], [ "# data augmentation\ntfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0) # do not want warp on the satellite images", "_____no_output_____" ], [ "# ensure we have the same validation set each time\nnp.random.seed(42)\n# get images, get labels\nsrc = (ImageList.from_csv(path, 'train_classes.csv', folder='train-jpg', suffix='.jpg')\n .split_by_rand_pct(0.2) # set aside 20% of the training set (the current set of images) for the validation set\n .label_from_df(label_delim=' '))", "_____no_output_____" ], [ "# apply transforms, construct dataset\ndata = (src.transform(tfms, size=128) # standardize images to be a little smaller, 128 x 128\n .databunch().normalize(imagenet_stats)) # use databunch to bind training and validation datasets", "_____no_output_____" ], [ "data.show_batch(rows=3, figsize=(12,9))", "_____no_output_____" ], [ "# base artchitecture\narch = models.resnet50", "_____no_output_____" ] ], [ [ "### **Metrics**\nmetrics to print out during training (NOTE: they do not impact how our model trains); just shows us how we're doing", "_____no_output_____" ] ], [ [ "# however, instead of picking just one of the classes in len(data.classes) as our prediction label, we want to pick out n of those classes\n# anything higher than a desired threshold will be assumed to be a label for the input image\n# accuracy uses argmax to find the category with the maximum probability of being represented by the image/data given\n# it compared it to the actual accuracy and took the average; this method can't be implemented when we have multiple labels per image\n# in this case, our threshold value is 0.2 (experimentally found to be pretty good)\n\n# a partial function takes in a function and a list of keywords & values and creates a new func that's exactly the same but with the arguments provided; new function was generated\nacc_02 = partial(accuracy_thresh, thresh=0.2) \ndata.c # number of outputs we want our model to create = len(data.classes); given one probability for each of these classes", "_____no_output_____" ], [ "f_score = partial(fbeta, thresh=0.2) # a metric used by Kaggle to weigh false positive and false negatives\nlearn = cnn_learner(data, arch, metrics=[acc_02, f_score])", "_____no_output_____" ], [ "# find a good learning rate\nlearn.lr_find()", "_____no_output_____" ], [ "# plot results\nlearn.recorder.plot()", "_____no_output_____" ], [ "# pick learning rate\nlr = 1e-2", "_____no_output_____" ], [ "# fit_one_cycle five times with that learning rate\nlearn.fit_one_cycle(5, slice(lr))", "_____no_output_____" ], [ "# save\nlearn.save('stage-1-rn50')", "_____no_output_____" 
], [ "learn.unfreeze()", "_____no_output_____" ], [ "learn.lr_find()\nlearn.recorder.plot()", "_____no_output_____" ], [ "# fitting with original dataset, though we could create a new databunch with just the misclassified instances\nlearn.fit_one_cycle(5, slice(1e-5, lr/5))", "_____no_output_____" ], [ "learn.save('stage-2-rn50')", "_____no_output_____" ] ], [ [ "### **Now, create a whole \"new\" dataset where the images are 256 x 256 to hopefully increase our fbeta metric score**\nno concern of overfitting then", "_____no_output_____" ] ], [ [ "# create new databunch with 256 x 256 images (higher res images)\ndata = (src.transform(tfms, size=256) # same transforms as before\n .databunch().normalize(imagenet_stats))", "_____no_output_____" ], [ "# start with our pre-trained model\nlearn.data = data # replace learner data with new databunch\ndata.train_ds[0][0].shape", "_____no_output_____" ], [ "# freeze to just train the last few layers\nlearn.lr_find()\nlearn.recorder.plot()", "_____no_output_____" ], [ "# new learning rate\nlr=1e-2/2", "_____no_output_____" ], [ "# just training the last few layers\nlearn.fit_one_cycle(5, slice(lr))", "_____no_output_____" ], [ "learn.recorder.plot_losses()", "_____no_output_____" ], [ "learn.save('stage-2-256-rn50')", "_____no_output_____" ], [ "learn.export()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
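The planet notebook in the row above uses fastai v1 progressive resizing: fit a ResNet-50 learner on 128x128 images, then swap a 256x256 DataBunch into the same learner and keep training. A minimal, hedged sketch of that pattern follows (it assumes `path`, `tfms`, `acc_02` and `f_score` are defined exactly as in that notebook; note that `.databunch()` is a method call and must come before `.normalize(...)`):

```python
# Illustrative sketch of the progressive-resizing workflow from the notebook above.
from fastai.vision import *

np.random.seed(42)
src = (ImageList.from_csv(path, 'train_classes.csv', folder='train-jpg', suffix='.jpg')
       .split_by_rand_pct(0.2)
       .label_from_df(label_delim=' '))

# Stage 1: train on small (128x128) images.
data_128 = src.transform(tfms, size=128).databunch().normalize(imagenet_stats)
learn = cnn_learner(data_128, models.resnet50, metrics=[acc_02, f_score])
learn.fit_one_cycle(5, slice(1e-2))

# Stage 2: same labelled source, larger (256x256) images, same trained weights.
data_256 = src.transform(tfms, size=256).databunch().normalize(imagenet_stats)
learn.data = data_256      # reuse the learner with higher-resolution inputs
learn.freeze()             # retrain only the head at the new resolution first
learn.fit_one_cycle(5, slice(1e-2 / 2))
```

Because both DataBunches are built from the same `src` split, the validation set stays identical across resolutions, which is what makes reassigning `learn.data` safe.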
e7a6a923f301df29a6a39b9312848850c61396ee
5,514
ipynb
Jupyter Notebook
Notebook/Methyl.ipynb
omicscodeathon/Human-Methylomics
dd14c93468622cd2c8f6b701a0d10a5fea9a8421
[ "MIT" ]
1
2021-10-02T20:02:26.000Z
2021-10-02T20:02:26.000Z
Notebook/Methyl.ipynb
omicscodeathon/Human-Methylomics
dd14c93468622cd2c8f6b701a0d10a5fea9a8421
[ "MIT" ]
null
null
null
Notebook/Methyl.ipynb
omicscodeathon/Human-Methylomics
dd14c93468622cd2c8f6b701a0d10a5fea9a8421
[ "MIT" ]
1
2021-10-09T12:40:31.000Z
2021-10-09T12:40:31.000Z
21.045802
116
0.513058
[ [ [ "#install.packages(\"BiocManager\")\n#install.packages(\"forcats\")\n#install.packages(\"stringr\")\n#install.packages(\"ggplot2\")\n#install.packages(\"ggrepel\")\n#install.packages(\"readr\")\n#install.packages(\"tidyr\")\n#install.packages(\"survminer\")\n#BiocManager::install(\"GEOquery\")\n#BiocManager::install(\"limma\")\n#BiocManager::install(\"pheatmap\")\n#BiocManager::install(\"org.Hs.eg.db\")\nSys.setenv(\"VROOM_CONNECTION_SIZE\" = 131072 * 2)\noptions(timeout = max(10000, getOption(\"timeout\")))\noptions(warn=0)", "_____no_output_____" ], [ "library(GEOquery)\nsetwd('Documents/')", "_____no_output_____" ], [ "ids = c('GSE190540','GSE147040','GSE174818','GSE152204')\ngse = c()\nfor (i in ids){\n gse = append (gse,getGEO(i,destdir = '.'))\n}", "_____no_output_____" ], [ "library(dplyr)", "_____no_output_____" ], [ "sampleInfo1 <- pData(gse[[1]])\nsampleInfo2 <- pData(gse[[2]])\nsampleInfo3 <- pData(gse[[3]])\nsampleInfo4 <- pData(gse[[4]])", "_____no_output_____" ], [ "sampleInfo1[,c('age:ch1','disease status:ch1','gender:ch1')] -> sampleInfo1", "_____no_output_____" ], [ "colnames(sampleInfo1) = c('Age','Status','Gender')", "_____no_output_____" ], [ "sampleInfo1[\"Status\"][sampleInfo1[\"Status\"] == \"Case\"] <- \"MCI\"", "_____no_output_____" ], [ "sampleInfo2[,c('age at death:ch1','current smoking status:ch1','sex (m, male; f, female):ch1')] -> sampleInfo2", "_____no_output_____" ], [ "colnames(sampleInfo2) = c('Age','Status','Gender')", "_____no_output_____" ], [ "sampleInfo3[,c('age:ch1','covid status:ch1','Sex:ch1')] -> sampleInfo3", "_____no_output_____" ], [ "colnames(sampleInfo3) = c('Age','Status','Gender')", "_____no_output_____" ], [ "sampleInfo3$Status = as.character(sampleInfo3$Status)", "_____no_output_____" ], [ "sampleInfo3[\"Status\"][sampleInfo3[\"Status\"] == '0'] <- \"Control\"", "_____no_output_____" ], [ "sampleInfo4[,c('epigenetic age:ch1','source_name_ch1','Sex:ch1')] -> sampleInfo4", "_____no_output_____" ], [ "colnames(sampleInfo4) = c('Age','Status','Gender')", "_____no_output_____" ], [ "sampleInfo4[\"Status\"][sampleInfo4[\"Status\"] == 'Case'] <- \"OAV\"", "_____no_output_____" ], [ "metadata <- rbind(sampleInfo1, sampleInfo2,sampleInfo3,sampleInfo4)", "_____no_output_____" ], [ "metadata$Age = as.numeric(metadata$Age)", "_____no_output_____" ], [ "metadata$Status=as.factor(metadata$Status)", "_____no_output_____" ], [ "summary(metadata)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
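The Methyl notebook in the row above is R: it fetches four GEO series with GEOquery, renames each study's phenotype columns to a shared `Age`/`Status`/`Gender` schema, recodes the status labels, and row-binds the tables. Purely as a hedged illustration of that harmonization step in Python, assuming the per-study phenotype tables were already loaded into pandas DataFrames (the DataFrame names and column labels below mirror that notebook's `pData()` output and are not an existing API):

```python
# Sketch only: rename per-study phenotype columns to a shared schema,
# recode status labels, concatenate, and cast types — the pandas analogue
# of the dplyr/rbind steps in the R notebook above.
import pandas as pd

def harmonize(df, colmap, status_recode=None):
    """Keep mapped columns, rename to Age/Status/Gender, optionally recode Status."""
    out = df[list(colmap)].rename(columns=colmap)
    if status_recode:
        out["Status"] = out["Status"].replace(status_recode)
    return out

frames = [
    harmonize(sample_info1,  # assumed DataFrame of the first study's phenotypes
              {"age:ch1": "Age", "disease status:ch1": "Status", "gender:ch1": "Gender"},
              {"Case": "MCI"}),
    harmonize(sample_info3,  # assumed DataFrame of the third study's phenotypes
              {"age:ch1": "Age", "covid status:ch1": "Status", "Sex:ch1": "Gender"},
              {"0": "Control"}),
]
metadata = pd.concat(frames)
metadata["Age"] = pd.to_numeric(metadata["Age"], errors="coerce")
metadata["Status"] = metadata["Status"].astype("category")
print(metadata.describe(include="all"))
```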
e7a6adcfa6698612aa72610ed39ebf4e771da13d
74,456
ipynb
Jupyter Notebook
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
1a49c5666317893ccd65f292043e7e1e5068cc6e
[ "MIT" ]
null
null
null
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
1a49c5666317893ccd65f292043e7e1e5068cc6e
[ "MIT" ]
null
null
null
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
1a49c5666317893ccd65f292043e7e1e5068cc6e
[ "MIT" ]
null
null
null
41.113197
5,696
0.566617
[ [ [ "# Convolutional Neural Networks: Step by Step\n\nWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. \n\nBy the end of this notebook, you'll be able to: \n\n* Explain the convolution operation\n* Apply two different types of pooling operation\n* Identify the components used in a convolutional neural network (padding, stride, filter, ...) and their purpose\n* Build a convolutional neural network \n\n**Notation**:\n- Superscript $[l]$ denotes an object of the $l^{th}$ layer. \n - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.\n\n\n- Superscript $(i)$ denotes an object from the $i^{th}$ example. \n - Example: $x^{(i)}$ is the $i^{th}$ training example input.\n \n \n- Subscript $i$ denotes the $i^{th}$ entry of a vector.\n - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.\n \n \n- $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. \n- $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. \n\nYou should be familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!", "_____no_output_____" ], [ "## Table of Contents\n\n- [1 - Packages](#1)\n- [2 - Outline of the Assignment](#2)\n- [3 - Convolutional Neural Networks](#3)\n - [3.1 - Zero-Padding](#3-1)\n - [Exercise 1 - zero_pad](#ex-1)\n - [3.2 - Single Step of Convolution](#3-2)\n - [Exercise 2 - conv_single_step](#ex-2)\n - [3.3 - Convolutional Neural Networks - Forward Pass](#3-3)\n - [Exercise 3 - conv_forward](#ex-3)\n- [4 - Pooling Layer](#4)\n - [4.1 - Forward Pooling](#4-1)\n - [Exercise 4 - pool_forward](#ex-4)\n- [5 - Backpropagation in Convolutional Neural Networks (OPTIONAL / UNGRADED)](#5)\n - [5.1 - Convolutional Layer Backward Pass](#5-1)\n - [5.1.1 - Computing dA](#5-1-1)\n - [5.1.2 - Computing dW](#5-1-2)\n - [5.1.3 - Computing db](#5-1-3)\n - [Exercise 5 - conv_backward](#ex-5)\n - [5.2 Pooling Layer - Backward Pass](#5-2)\n - [5.2.1 Max Pooling - Backward Pass](#5-2-1)\n - [Exercise 6 - create_mask_from_window](#ex-6)\n - [5.2.2 - Average Pooling - Backward Pass](#5-2-2)\n - [Exercise 7 - distribute_value](#ex-7)\n - [5.2.3 Putting it Together: Pooling Backward](#5-2-3)\n - [Exercise 8 - pool_backward](#ex-8)", "_____no_output_____" ], [ "<a name='1'></a>\n## 1 - Packages\n\nLet's first import all the packages that you will need during this assignment. \n- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.\n- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.\n- np.random.seed(1) is used to keep all the random function calls consistent. 
This helps to grade your work.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nfrom public_tests import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)", "_____no_output_____" ] ], [ [ "<a name='2'></a>\n## 2 - Outline of the Assignment\n\nYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions to walk you through the steps:\n\n- Convolution functions, including:\n - Zero Padding\n - Convolve window \n - Convolution forward\n - Convolution backward (optional)\n- Pooling functions, including:\n - Pooling forward\n - Create mask \n - Distribute value\n - Pooling backward (optional)\n \nThis notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:\n\n<img src=\"images/model.png\" style=\"width:800px;height:300px;\">\n\n**Note**: For every forward function, there is a corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. ", "_____no_output_____" ], [ "<a name='3'></a>\n## 3 - Convolutional Neural Networks\n\nAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. \n\n<img src=\"images/conv_nn.png\" style=\"width:350px;height:200px;\">\n\nIn this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. ", "_____no_output_____" ], [ "<a name='3-1'></a>\n### 3.1 - Zero-Padding\n\nZero-padding adds zeros around the border of an image:\n\n<img src=\"images/PAD.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> <font color='purple'> <b>Figure 1</b> </u><font color='purple'> : <b>Zero-Padding</b><br> Image (3 channels, RGB) with a padding of 2. </center></caption>\n\nThe main benefits of padding are:\n\n- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the \"same\" convolution, in which the height/width is exactly preserved after one layer. \n\n- It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.\n\n<a name='ex-1'></a>\n### Exercise 1 - zero_pad\nImplement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). 
Note if you want to pad the array \"a\" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:\n```python\na = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), mode='constant', constant_values = (0,0))\n```", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: zero_pad\n\ndef zero_pad(X, pad):\n \"\"\"\n Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, \n as illustrated in Figure 1.\n \n Argument:\n X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images\n pad -- integer, amount of padding around each image on vertical and horizontal dimensions\n \n Returns:\n X_pad -- padded image of shape (m, n_H + 2 * pad, n_W + 2 * pad, n_C)\n \"\"\"\n \n #(≈ 1 line)\n X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), mode='constant', constant_values = (0,0))\n # YOUR CODE STARTS HERE\n \n \n # YOUR CODE ENDS HERE\n \n return X_pad", "_____no_output_____" ], [ "np.random.seed(1)\nx = np.random.randn(4, 3, 3, 2)\nx_pad = zero_pad(x, 3)\nprint (\"x.shape =\\n\", x.shape)\nprint (\"x_pad.shape =\\n\", x_pad.shape)\nprint (\"x[1,1] =\\n\", x[1, 1])\nprint (\"x_pad[1,1] =\\n\", x_pad[1, 1])\n\nassert type(x_pad) == np.ndarray, \"Output must be a np array\"\nassert x_pad.shape == (4, 9, 9, 2), f\"Wrong shape: {x_pad.shape} != (4, 9, 9, 2)\"\nprint(x_pad[0, 0:2,:, 0])\nassert np.allclose(x_pad[0, 0:2,:, 0], [[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]], 1e-15), \"Rows are not padded with zeros\"\nassert np.allclose(x_pad[0, :, 7:9, 1].transpose(), [[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]], 1e-15), \"Columns are not padded with zeros\"\nassert np.allclose(x_pad[:, 3:6, 3:6, :], x, 1e-15), \"Internal values are different\"\n\nfig, axarr = plt.subplots(1, 2)\naxarr[0].set_title('x')\naxarr[0].imshow(x[0, :, :, 0])\naxarr[1].set_title('x_pad')\naxarr[1].imshow(x_pad[0, :, :, 0])\nzero_pad_test(zero_pad)", "x.shape =\n (4, 3, 3, 2)\nx_pad.shape =\n (4, 9, 9, 2)\nx[1,1] =\n [[ 0.90085595 -0.68372786]\n [-0.12289023 -0.93576943]\n [-0.26788808 0.53035547]]\nx_pad[1,1] =\n [[0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]]\n[[0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n\u001b[92m All tests passed.\n" ] ], [ [ "<a name='3-2'></a>\n### 3.2 - Single Step of Convolution \n\nIn this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: \n\n- Takes an input volume \n- Applies a filter at every position of the input\n- Outputs another volume (usually of different size)\n\n<img src=\"images/Convolution_schematic.gif\" style=\"width:500px;height:300px;\">\n<caption><center> <u> <font color='purple'> <b>Figure 2</b> </u><font color='purple'> : <b>Convolution operation</b><br> with a filter of 3x3 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>\n\nIn a computer vision application, each value in the matrix on the left corresponds to a single pixel value. You convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. 
\n\nLater in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. \n\n<a name='ex-2'></a>\n### Exercise 2 - conv_single_step\nImplement `conv_single_step()`. \n \n[Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html).", "_____no_output_____" ], [ "**Note**: The variable b will be passed in as a numpy array. If you add a scalar (a float or integer) to a numpy array, the result is a numpy array. In the special case of a numpy array containing a single value, you can cast it as a float to convert it to a scalar.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: conv_single_step\n\ndef conv_single_step(a_slice_prev, W, b):\n \"\"\"\n Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation \n of the previous layer.\n \n Arguments:\n a_slice_prev -- slice of input data of shape (f, f, n_C_prev)\n W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)\n b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)\n \n Returns:\n Z -- a scalar value, the result of convolving the sliding window (W, b) on a slice x of the input data\n \"\"\"\n\n #(≈ 3 lines of code)\n # Element-wise product between a_slice_prev and W. Do not add the bias yet.\n s = a_slice_prev * W\n # Sum over all entries of the volume s.\n Z = s.reshape(1,-1).sum()\n # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.\n Z = Z + float(b)\n # YOUR CODE STARTS HERE\n \n \n # YOUR CODE ENDS HERE\n\n return Z", "_____no_output_____" ], [ "np.random.seed(1)\na_slice_prev = np.random.randn(4, 4, 3)\nW = np.random.randn(4, 4, 3)\nb = np.random.randn(1, 1, 1)\n\nZ = conv_single_step(a_slice_prev, W, b)\nprint(\"Z =\", Z)\nconv_single_step_test(conv_single_step)\n\nassert (type(Z) == np.float64 or type(Z) == np.float32), \"You must cast the output to float\"\nassert np.isclose(Z, -6.999089450680221), \"Wrong value\"", "Z = -6.999089450680221\n\u001b[92m All tests passed.\n" ] ], [ [ "<a name='3-3'></a>\n### 3.3 - Convolutional Neural Networks - Forward Pass\n\nIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: \n\n<center>\n<video width=\"620\" height=\"440\" src=\"images/conv_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n\n<a name='ex-3'></a>\n### Exercise 3 - conv_forward\nImplement the function below to convolve the filters `W` on an input activation `A_prev`. \nThis function takes the following inputs:\n* `A_prev`, the activations output by the previous layer (for a batch of m inputs); \n* Weights are denoted by `W`. The filter window size is `f` by `f`.\n* The bias vector is `b`, where each filter has its own (single) bias. \n\nYou also have access to the hyperparameters dictionary, which contains the stride and the padding. \n\n**Hint**: \n1. To select a 2x2 slice at the upper left corner of a matrix \"a_prev\" (shape (5,5,3)), you would do:\n```python\na_slice_prev = a_prev[0:2,0:2,:]\n```\nNotice how this gives a 3D slice that has height 2, width 2, and depth 3. Depth is the number of channels. \nThis will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.\n\n2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. 
This figure may be helpful for you to find out how each of the corners can be defined using h, w, f and s in the code below.\n\n<img src=\"images/vert_horiz_kiank.png\" style=\"width:400px;height:300px;\">\n<caption><center> <u> <font color='purple'> <b>Figure 3</b> </u><font color='purple'> : <b>Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)</b> <br> This figure shows only a single channel. </center></caption>\n\n\n**Reminder**:\n \nThe formulas relating the output shape of the convolution to the input shape are:\n \n$$n_H = \\Bigl\\lfloor \\frac{n_{H_{prev}} - f + 2 \\times pad}{stride} \\Bigr\\rfloor +1$$\n$$n_W = \\Bigl\\lfloor \\frac{n_{W_{prev}} - f + 2 \\times pad}{stride} \\Bigr\\rfloor +1$$\n$$n_C = \\text{number of filters used in the convolution}$$\n \n\n\n\nFor this exercise, don't worry about vectorization! Just implement everything with for-loops.", "_____no_output_____" ], [ "#### Additional Hints (if you're stuck):\n\n\n* Use array slicing (e.g.`varname[0:1,:,3:5]`) for the following variables: \n `a_prev_pad` ,`W`, `b` \n - Copy the starter code of the function and run it outside of the defined function, in separate cells. \n - Check that the subset of each array is the size and dimension that you're expecting. \n* To decide how to get the `vert_start`, `vert_end`, `horiz_start`, `horiz_end`, remember that these are indices of the previous layer. \n - Draw an example of a previous padded layer (8 x 8, for instance), and the current (output layer) (2 x 2, for instance). \n - The output layer's indices are denoted by `h` and `w`. \n* Make sure that `a_slice_prev` has a height, width and depth.\n* Remember that `a_prev_pad` is a subset of `A_prev_pad`. \n - Think about which one should be used within the for loops.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: conv_forward\n\ndef conv_forward(A_prev, W, b, hparameters):\n '''\n Implements the forward propagation for a convolution function\n \n Arguments:\n A_prev -- output activations of the previous layer, \n numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)\n b -- Biases, numpy array of shape (1, 1, 1, n_C)\n hparameters -- python dictionary containing \"stride\" and \"pad\"\n \n Returns:\n Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache of values needed for the conv_backward() function\n '''\n \n # Retrieve dimensions from A_prev's shape (≈1 line) \n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve dimensions from W's shape (≈1 line)\n (f, f, n_C_prev, n_C) = W.shape\n \n # Retrieve information from \"hparameters\" (≈2 lines)\n stride = hparameters['stride']\n pad = hparameters['pad']\n \n # Compute the dimensions of the CONV output volume using the formula given above. \n # Hint: use int() to apply the 'floor' operation. (≈2 lines)\n n_H = int((n_H_prev+(2*pad) -f )/stride) + 1\n n_W = int((n_W_prev+(2*pad) -f )/stride) + 1\n \n # Initialize the output volume Z with zeros. 
(≈1 line)\n Z = np.zeros((m, n_H, n_W, n_C))\n \n # Create A_prev_pad by padding A_prev\n A_prev_pad = zero_pad(A_prev, pad)\n \n for i in range(m): # loop over the batch of training examples\n a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation\n for h in range(n_H): # loop over vertical axis of the output volume\n # Find the vertical start and end of the current \"slice\" (≈2 lines)\n vert_start = h * stride\n vert_end = h * stride+ f\n \n for w in range(n_W): # loop over horizontal axis of the output volume\n # Find the horizontal start and end of the current \"slice\" (≈2 lines)\n horiz_start = w * stride\n horiz_end = w * stride + f\n \n for c in range(n_C): # loop over channels (= #filters) of the output volume\n \n # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)\n a_slice_prev = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:]\n \n # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈3 line)\n weights = W[:,:,:,c]\n biases = b[:,:,:,c]\n Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c])\n # YOUR CODE STARTS HERE\n \n \n # YOUR CODE ENDS HERE\n \n # Save information in \"cache\" for the backprop\n cache = (A_prev, W, b, hparameters)\n \n return Z, cache", "_____no_output_____" ], [ "np.random.seed(1)\nA_prev = np.random.randn(2, 5, 7, 4)\nW = np.random.randn(3, 3, 4, 8)\nb = np.random.randn(1, 1, 1, 8)\nhparameters = {\"pad\" : 1,\n \"stride\": 2}\n\nZ, cache_conv = conv_forward(A_prev, W, b, hparameters)\nprint(\"Z's mean =\\n\", np.mean(Z))\nprint(\"Z[0,2,1] =\\n\", Z[0, 2, 1])\nprint(\"cache_conv[0][1][2][3] =\\n\", cache_conv[0][1][2][3])\n\nconv_forward_test(conv_forward)\n", "Z's mean =\n 0.5511276474566768\nZ[0,2,1] =\n [-2.17796037 8.07171329 -0.5772704 3.36286738 4.48113645 -2.89198428\n 10.99288867 3.03171932]\ncache_conv[0][1][2][3] =\n [-1.1191154 1.9560789 -0.3264995 -1.34267579]\n(2, 13, 15, 8)\n\u001b[92m All tests passed.\n" ] ], [ [ "Finally, a CONV layer should also contain an activation, in which case you would add the following line of code:\n\n```python\n# Convolve the window to get back one output neuron\nZ[i, h, w, c] = ...\n# Apply activation\nA[i, h, w, c] = activation(Z[i, h, w, c])\n```\n\nYou don't need to do it here, however. \n", "_____no_output_____" ], [ "<a name='4'></a>\n## 4 - Pooling Layer \n\nThe pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: \n\n- Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.\n\n- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.\n\n<table>\n<td>\n<img src=\"images/max_pool1.png\" style=\"width:500px;height:300px;\">\n<td>\n\n<td>\n<img src=\"images/a_pool.png\" style=\"width:500px;height:300px;\">\n<td>\n</table>\n\nThese pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the $f \\times f$ window you would compute a *max* or *average* over. \n\n<a name='4-1'></a>\n### 4.1 - Forward Pooling\nNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. \n\n<a name='ex-4'></a>\n### Exercise 4 - pool_forward\n\nImplement the forward pass of the pooling layer. 
Follow the hints in the comments below.\n\n**Reminder**:\nAs there's no padding, the formulas binding the output shape of the pooling to the input shape is:\n\n$$n_H = \\Bigl\\lfloor \\frac{n_{H_{prev}} - f}{stride} \\Bigr\\rfloor +1$$\n\n$$n_W = \\Bigl\\lfloor \\frac{n_{W_{prev}} - f}{stride} \\Bigr\\rfloor +1$$\n\n$$n_C = n_{C_{prev}}$$\n\n\n", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: pool_forward\n\ndef pool_forward(A_prev, hparameters, mode = \"max\"):\n \"\"\"\n Implements the forward pass of the pooling layer\n \n Arguments:\n A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n hparameters -- python dictionary containing \"f\" and \"stride\"\n mode -- the pooling mode you would like to use, defined as a string (\"max\" or \"average\")\n \n Returns:\n A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters \n \"\"\"\n \n # Retrieve dimensions from the input shape\n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve hyperparameters from \"hparameters\"\n f = hparameters[\"f\"]\n stride = hparameters[\"stride\"]\n \n # Define the dimensions of the output\n n_H = int(1 + (n_H_prev - f) / stride)\n n_W = int(1 + (n_W_prev - f) / stride)\n n_C = n_C_prev\n \n # Initialize output matrix A\n A = np.zeros((m, n_H, n_W, n_C)) \n \n for i in range(m): # loop over the training examples\n for h in range(n_H): # loop on the vertical axis of the output volume\n # Find the vertical start and end of the current \"slice\" (≈2 lines)\n vert_start = h * stride\n vert_end = h * stride+ f\n \n for w in range(n_W): # loop on the horizontal axis of the output volume\n #Find the vertical start and end of the current \"slice\" (≈2 lines)\n horiz_start = w * stride\n horiz_end = w * stride + f\n \n for c in range (n_C): # loop over the channels of the output volume\n \n # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)\n a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end,c]\n \n # Compute the pooling operation on the slice. \n # Use an if statement to differentiate the modes. 
\n # Use np.max and np.mean.\n if mode == \"max\":\n A[i, h, w, c] = np.max(a_prev_slice)\n elif mode == \"average\":\n A[i, h, w, c] = np.mean(a_prev_slice)\n \n # YOUR CODE STARTS HERE\n \n \n # YOUR CODE ENDS HERE\n \n # Store the input and hparameters in \"cache\" for pool_backward()\n cache = (A_prev, hparameters)\n \n # Making sure your output shape is correct\n assert(A.shape == (m, n_H, n_W, n_C))\n \n return A, cache", "_____no_output_____" ], [ "# Case 1: stride of 1\nnp.random.seed(1)\nA_prev = np.random.randn(2, 5, 5, 3)\nhparameters = {\"stride\" : 1, \"f\": 3}\n\nA, cache = pool_forward(A_prev, hparameters, mode = \"max\")\nprint(\"mode = max\")\nprint(\"A.shape = \" + str(A.shape))\nprint(\"A[1, 1] =\\n\", A[1, 1])\nprint()\nA, cache = pool_forward(A_prev, hparameters, mode = \"average\")\nprint(\"mode = average\")\nprint(\"A.shape = \" + str(A.shape))\nprint(\"A[1, 1] =\\n\", A[1, 1])\n\npool_forward_test(pool_forward)", "mode = max\nA.shape = (2, 3, 3, 3)\nA[1, 1] =\n [[1.96710175 0.84616065 1.27375593]\n [1.96710175 0.84616065 1.23616403]\n [1.62765075 1.12141771 1.2245077 ]]\n\nmode = average\nA.shape = (2, 3, 3, 3)\nA[1, 1] =\n [[ 0.44497696 -0.00261695 -0.31040307]\n [ 0.50811474 -0.23493734 -0.23961183]\n [ 0.11872677 0.17255229 -0.22112197]]\n\u001b[92m All tests passed.\n" ] ], [ [ "**Expected output**\n\n```\nmode = max\nA.shape = (2, 3, 3, 3)\nA[1, 1] =\n [[1.96710175 0.84616065 1.27375593]\n [1.96710175 0.84616065 1.23616403]\n [1.62765075 1.12141771 1.2245077 ]]\n\nmode = average\nA.shape = (2, 3, 3, 3)\nA[1, 1] =\n [[ 0.44497696 -0.00261695 -0.31040307]\n [ 0.50811474 -0.23493734 -0.23961183]\n [ 0.11872677 0.17255229 -0.22112197]]\n```", "_____no_output_____" ] ], [ [ "# Case 2: stride of 2\nnp.random.seed(1)\nA_prev = np.random.randn(2, 5, 5, 3)\nhparameters = {\"stride\" : 2, \"f\": 3}\n\nA, cache = pool_forward(A_prev, hparameters)\nprint(\"mode = max\")\nprint(\"A.shape = \" + str(A.shape))\nprint(\"A[0] =\\n\", A[0])\nprint()\n\nA, cache = pool_forward(A_prev, hparameters, mode = \"average\")\nprint(\"mode = average\")\nprint(\"A.shape = \" + str(A.shape))\nprint(\"A[1] =\\n\", A[1])", "mode = max\nA.shape = (2, 2, 2, 3)\nA[0] =\n [[[1.74481176 0.90159072 1.65980218]\n [1.74481176 1.6924546 1.65980218]]\n\n [[1.13162939 1.51981682 2.18557541]\n [1.13162939 1.6924546 2.18557541]]]\n\nmode = average\nA.shape = (2, 2, 2, 3)\nA[1] =\n [[[-0.17313416 0.32377198 -0.34317572]\n [ 0.02030094 0.14141479 -0.01231585]]\n\n [[ 0.42944926 0.08446996 -0.27290905]\n [ 0.15077452 0.28911175 0.00123239]]]\n" ] ], [ [ "**Expected Output:**\n \n```\nmode = max\nA.shape = (2, 2, 2, 3)\nA[0] =\n [[[1.74481176 0.90159072 1.65980218]\n [1.74481176 1.6924546 1.65980218]]\n\n [[1.13162939 1.51981682 2.18557541]\n [1.13162939 1.6924546 2.18557541]]]\n\nmode = average\nA.shape = (2, 2, 2, 3)\nA[1] =\n [[[-0.17313416 0.32377198 -0.34317572]\n [ 0.02030094 0.14141479 -0.01231585]]\n\n [[ 0.42944926 0.08446996 -0.27290905]\n [ 0.15077452 0.28911175 0.00123239]]]\n```", "_____no_output_____" ], [ "<font color='blue'>\n \n**What you should remember**:\n\n* A convolution extracts features from an input image by taking the dot product between the input data and a 3D array of weights (the filter). 
\n* The 2D output of the convolution is called the feature map\n* A convolution layer is where the filter slides over the image and computes the dot product \n * This transforms the input volume into an output volume of different size \n* Zero padding helps keep more information at the image borders, and is helpful for building deeper networks, because you can build a CONV layer without shrinking the height and width of the volumes\n* Pooling layers gradually reduce the height and width of the input by sliding a 2D window over each specified region, then summarizing the features in that region", "_____no_output_____" ], [ "**Congratulations**! You have now implemented the forward passes of all the layers of a convolutional network. Great work!\n\nThe remainder of this notebook is optional, and will not be graded. If you carry on, just remember to hit the Submit button to submit your work for grading first. ", "_____no_output_____" ], [ "<a name='5'></a>\n## 5 - Backpropagation in Convolutional Neural Networks (OPTIONAL / UNGRADED)\n\nIn modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. \n\nWhen in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and were not derived in lecture, but are briefly presented below.\n\n<a name='5-1'></a>\n### 5.1 - Convolutional Layer Backward Pass \n\nLet's start by implementing the backward pass for a CONV layer. \n\n<a name='5-1-1'></a>\n#### 5.1.1 - Computing dA:\nThis is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:\n\n$$dA \\mathrel{+}= \\sum _{h=0} ^{n_H} \\sum_{w=0} ^{n_W} W_c \\times dZ_{hw} \\tag{1}$$\n\nWhere $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, you multiply the the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, you are just adding the gradients of all the a_slices. \n\nIn code, inside the appropriate for-loops, this formula translates into:\n```python\nda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]\n```\n\n<a name='5-1-2'></a>\n#### 5.1.2 - Computing dW:\nThis is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:\n\n$$dW_c \\mathrel{+}= \\sum _{h=0} ^{n_H} \\sum_{w=0} ^ {n_W} a_{slice} \\times dZ_{hw} \\tag{2}$$\n\nWhere $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. 
Since it is the same $W$, we will just add up all such gradients to get $dW$. \n\nIn code, inside the appropriate for-loops, this formula translates into:\n```python\ndW[:,:,:,c] \\mathrel{+}= a_slice * dZ[i, h, w, c]\n```\n\n<a name='5-1-3'></a>\n#### 5.1.3 - Computing db:\n\nThis is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:\n\n$$db = \\sum_h \\sum_w dZ_{hw} \\tag{3}$$\n\nAs you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. \n\nIn code, inside the appropriate for-loops, this formula translates into:\n```python\ndb[:,:,:,c] += dZ[i, h, w, c]\n```\n\n<a name='ex-5'></a>\n### Exercise 5 - conv_backward\n\nImplement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above. ", "_____no_output_____" ] ], [ [ "def conv_backward(dZ, cache):\n \"\"\"\n Implement the backward propagation for a convolution function\n \n Arguments:\n dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache of values needed for the conv_backward(), output of conv_forward()\n \n Returns:\n dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),\n numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n dW -- gradient of the cost with respect to the weights of the conv layer (W)\n numpy array of shape (f, f, n_C_prev, n_C)\n db -- gradient of the cost with respect to the biases of the conv layer (b)\n numpy array of shape (1, 1, 1, n_C)\n \"\"\" \n \n \n # Retrieve information from \"cache\"\n (A_prev, W, b, hparameters) = cache\n # Retrieve dimensions from A_prev's shape\n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n # Retrieve dimensions from W's shape\n (f, f, n_C_prev, n_C) = W.shape\n \n # Retrieve information from \"hparameters\"\n stride = hparameters['stride']\n pad = hparameters['pad']\n \n # Retrieve dimensions from dZ's shape\n (m, n_H, n_W, n_C) = dZ.shape\n \n # Initialize dA_prev, dW, db with the correct shapes\n dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev)) \n dW = np.zeros((f, f, n_C_prev, n_C))\n db = np.zeros((1, 1, 1, n_C))\n \n # Pad A_prev and dA_prev\n A_prev_pad = zero_pad(A_prev, pad)\n dA_prev_pad = zero_pad(dA_prev, pad)\n \n for i in range(m): # loop over the training examples\n \n # select ith training example from A_prev_pad and dA_prev_pad\n a_prev_pad = A_prev_pad[i]\n da_prev_pad = dA_prev_pad[i]\n \n for h in range(n_H): # loop over vertical axis of the output volume\n for w in range(n_W): # loop over horizontal axis of the output volume\n for c in range(n_C): # loop over the channels of the output volume\n \n # Find the corners of the current \"slice\"\n vert_start = h * stride\n vert_end = h * stride + f\n horiz_start = w * stride\n horiz_end = w * stride + f\n\n # Use the corners to define the slice from a_prev_pad\n a_slice = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:]\n\n\n # Update gradients for the window and the filter's parameters using the code formulas given above\n da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]\n dW[:,:,:,c] += a_slice * dZ[i, h, w, c]\n db[:,:,:,c] += dZ[i, h, w, c]\n \n # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])\n 
dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]\n # YOUR CODE STARTS HERE\n \n \n # YOUR CODE ENDS HERE\n \n # Making sure your output shape is correct\n assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))\n \n return dA_prev, dW, db", "_____no_output_____" ], [ "# We'll run conv_forward to initialize the 'Z' and 'cache_conv\",\n# which we'll use to test the conv_backward function\nnp.random.seed(1)\nA_prev = np.random.randn(10, 4, 4, 3)\nW = np.random.randn(2, 2, 3, 8)\nb = np.random.randn(1, 1, 1, 8)\nhparameters = {\"pad\" : 2,\n \"stride\": 2}\nZ, cache_conv = conv_forward(A_prev, W, b, hparameters)\n\n# Test conv_backward\ndA, dW, db = conv_backward(Z, cache_conv)\n\nprint(\"dA_mean =\", np.mean(dA))\nprint(\"dW_mean =\", np.mean(dW))\nprint(\"db_mean =\", np.mean(db))\n\nassert type(dA) == np.ndarray, \"Output must be a np.ndarray\"\nassert type(dW) == np.ndarray, \"Output must be a np.ndarray\"\nassert type(db) == np.ndarray, \"Output must be a np.ndarray\"\nassert dA.shape == (10, 4, 4, 3), f\"Wrong shape for dA {dA.shape} != (10, 4, 4, 3)\"\nassert dW.shape == (2, 2, 3, 8), f\"Wrong shape for dW {dW.shape} != (2, 2, 3, 8)\"\nassert db.shape == (1, 1, 1, 8), f\"Wrong shape for db {db.shape} != (1, 1, 1, 8)\"\nassert np.isclose(np.mean(dA), 1.4524377), \"Wrong values for dA\"\nassert np.isclose(np.mean(dW), 1.7269914), \"Wrong values for dW\"\nassert np.isclose(np.mean(db), 7.8392325), \"Wrong values for db\"\n\nprint(\"\\033[92m All tests passed.\")", "dA_mean = 1.4524377775388075\ndW_mean = 1.7269914583139097\ndb_mean = 7.839232564616838\n\u001b[92m All tests passed.\n" ] ], [ [ "**Expected Output**:\n<table>\n <tr>\n <td>\n dA_mean\n </td>\n <td>\n 1.45243777754\n </td>\n </tr>\n <tr>\n <td>\n dW_mean\n </td>\n <td>\n 1.72699145831\n </td>\n </tr>\n <tr>\n <td>\n db_mean\n </td>\n <td>\n 7.83923256462\n </td>\n </tr>\n\n</table>\n", "_____no_output_____" ], [ "<a name='5-2'></a>\n## 5.2 Pooling Layer - Backward Pass\n\nNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. \n\n<a name='5-2-1'></a>\n### 5.2.1 Max Pooling - Backward Pass \n\nBefore jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: \n\n$$ X = \\begin{bmatrix}\n1 && 3 \\\\\n4 && 2\n\\end{bmatrix} \\quad \\rightarrow \\quad M =\\begin{bmatrix}\n0 && 0 \\\\\n1 && 0\n\\end{bmatrix}\\tag{4}$$\n\nAs you can see, this function creates a \"mask\" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling is similar to this, but uses a different mask. \n\n<a name='ex-6'></a>\n### Exercise 6 - create_mask_from_window\n\nImplement `create_mask_from_window()`. This function will be helpful for pooling backward. \nHints:\n- [np.max()]() may be helpful. 
It computes the maximum of an array.\n- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:\n```\nA[i,j] = True if X[i,j] = x\nA[i,j] = False if X[i,j] != x\n```\n- Here, you don't need to consider cases where there are several maxima in a matrix.", "_____no_output_____" ] ], [ [ "def create_mask_from_window(x):\n \"\"\"\n Creates a mask from an input matrix x, to identify the max entry of x.\n \n Arguments:\n x -- Array of shape (f, f)\n \n Returns:\n mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.\n \"\"\" \n # (≈1 line)\n mask = np.max(x) == x\n # YOUR CODE STARTS HERE\n \n \n # YOUR CODE ENDS HERE\n return mask", "_____no_output_____" ], [ "np.random.seed(1)\nx = np.random.randn(2, 3)\nmask = create_mask_from_window(x)\nprint('x = ', x)\nprint(\"mask = \", mask)\n\nx = np.array([[-1, 2, 3],\n [2, -3, 2],\n [1, 5, -2]])\n\ny = np.array([[False, False, False],\n [False, False, False],\n [False, True, False]])\nmask = create_mask_from_window(x)\n\nassert type(mask) == np.ndarray, \"Output must be a np.ndarray\"\nassert mask.shape == x.shape, \"Input and output shapes must match\"\nassert np.allclose(mask, y), \"Wrong output. The True value must be at position (2, 1)\"\n\nprint(\"\\033[92m All tests passed.\")", "x = [[ 1.62434536 -0.61175641 -0.52817175]\n [-1.07296862 0.86540763 -2.3015387 ]]\nmask = [[ True False False]\n [False False False]]\n\u001b[92m All tests passed.\n" ] ], [ [ "**Expected Output:** \n\n<table> \n<tr> \n<td>\n\n**x =**\n</td>\n\n<td>\n\n[[ 1.62434536 -0.61175641 -0.52817175] <br>\n [-1.07296862 0.86540763 -2.3015387 ]]\n\n </td>\n</tr>\n\n<tr> \n<td>\nmask =\n</td>\n<td>\n[[ True False False] <br>\n [False False False]]\n</td>\n</tr>\n\n\n</table>", "_____no_output_____" ], [ "Why keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will \"propagate\" the gradient back to this particular input value that had influenced the cost. ", "_____no_output_____" ], [ "<a name='5-2-2'></a>\n### 5.2.2 - Average Pooling - Backward Pass \n\nIn max pooling, for each input window, all the \"influence\" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.\n\nFor example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: \n$$ dZ = 1 \\quad \\rightarrow \\quad dZ =\\begin{bmatrix}\n1/4 && 1/4 \\\\\n1/4 && 1/4\n\\end{bmatrix}\\tag{5}$$\n\nThis implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. \n\n<a name='ex-7'></a>\n### Exercise 7 - distribute_value\n\nImplement the function below to equally distribute a value dz through a matrix of dimension shape. 
\n\n[Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)", "_____no_output_____" ] ], [ [ "def distribute_value(dz, shape):\n \"\"\"\n Distributes the input value in the matrix of dimension shape\n \n Arguments:\n dz -- input scalar\n shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz\n \n Returns:\n a -- Array of size (n_H, n_W) for which we distributed the value of dz\n \"\"\" \n # Retrieve dimensions from shape (≈1 line)\n (n_H, n_W) = shape\n \n # Compute the value to distribute on the matrix (≈1 line)\n average = dz / (n_H * n_W)\n \n # Create a matrix where every entry is the \"average\" value (≈1 line)\n a = np.ones(shape) * average\n # YOUR CODE STARTS HERE\n \n \n # YOUR CODE ENDS HERE\n return a", "_____no_output_____" ], [ "a = distribute_value(2, (2, 2))\nprint('distributed value =', a)\n\n\nassert type(a) == np.ndarray, \"Output must be a np.ndarray\"\nassert a.shape == (2, 2), f\"Wrong shape {a.shape} != (2, 2)\"\nassert np.sum(a) == 2, \"Values must sum to 2\"\n\na = distribute_value(100, (10, 10))\nassert type(a) == np.ndarray, \"Output must be a np.ndarray\"\nassert a.shape == (10, 10), f\"Wrong shape {a.shape} != (10, 10)\"\nassert np.sum(a) == 100, \"Values must sum to 100\"\n\nprint(\"\\033[92m All tests passed.\")", "distributed value = [[0.5 0.5]\n [0.5 0.5]]\n\u001b[92m All tests passed.\n" ] ], [ [ "**Expected Output**: \n\n<table> \n<tr> \n<td>\ndistributed_value =\n</td>\n<td>\n[[ 0.5 0.5]\n<br\\> \n[ 0.5 0.5]]\n</td>\n</tr>\n</table>", "_____no_output_____" ], [ "<a name='5-2-3'></a>\n### 5.2.3 Putting it Together: Pooling Backward \n\nYou now have everything you need to compute backward propagation on a pooling layer.\n\n<a name='ex-8'></a>\n### Exercise 8 - pool_backward\n\nImplement the `pool_backward` function in both modes (`\"max\"` and `\"average\"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. 
Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dA.", "_____no_output_____" ] ], [ [ "def pool_backward(dA, cache, mode = \"max\"):\n \"\"\"\n Implements the backward pass of the pooling layer\n \n Arguments:\n dA -- gradient of cost with respect to the output of the pooling layer, same shape as A\n cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters \n mode -- the pooling mode you would like to use, defined as a string (\"max\" or \"average\")\n \n Returns:\n dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev\n \"\"\"\n # Retrieve information from cache (≈1 line)\n (A_prev, hparameters) = cache\n \n # Retrieve hyperparameters from \"hparameters\" (≈2 lines)\n stride = hparameters['stride']\n f = hparameters['f']\n \n # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)\n m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape\n m, n_H, n_W, n_C = dA.shape\n \n # Initialize dA_prev with zeros (≈1 line)\n dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))\n \n for i in range(m): # loop over the training examples\n \n # select training example from A_prev (≈1 line)\n a_prev = A_prev[i]\n \n for h in range(n_H): # loop on the vertical axis\n for w in range(n_W): # loop on the horizontal axis\n for c in range(n_C): # loop over the channels (depth)\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h * stride\n vert_end = h * stride + f\n horiz_start = w * stride\n horiz_end = w * stride + f\n \n # Compute the backward propagation in both modes.\n if mode == \"max\":\n \n # Use the corners and \"c\" to define the current slice from a_prev (≈1 line)\n a_prev_slice = a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]\n \n # Create the mask from a_prev_slice (≈1 line)\n mask = create_mask_from_window(a_prev_slice)\n\n # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)\n dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i, h, w, c]\n \n elif mode == \"average\":\n \n # Get the value da from dA (≈1 line)\n da = dA[i, h, w, c]\n \n # Define the shape of the filter as fxf (≈1 line)\n shape = (f, f)\n\n # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. 
(≈1 line)\n dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)\n # YOUR CODE STARTS HERE\n \n \n # YOUR CODE ENDS HERE\n \n # Making sure your output shape is correct\n assert(dA_prev.shape == A_prev.shape)\n \n return dA_prev", "_____no_output_____" ], [ "np.random.seed(1)\nA_prev = np.random.randn(5, 5, 3, 2)\nhparameters = {\"stride\" : 1, \"f\": 2}\nA, cache = pool_forward(A_prev, hparameters)\nprint(A.shape)\nprint(cache[0].shape)\ndA = np.random.randn(5, 4, 2, 2)\n\ndA_prev1 = pool_backward(dA, cache, mode = \"max\")\nprint(\"mode = max\")\nprint('mean of dA = ', np.mean(dA))\nprint('dA_prev1[1,1] = ', dA_prev1[1, 1]) \nprint()\ndA_prev2 = pool_backward(dA, cache, mode = \"average\")\nprint(\"mode = average\")\nprint('mean of dA = ', np.mean(dA))\nprint('dA_prev2[1,1] = ', dA_prev2[1, 1]) \n\nassert type(dA_prev1) == np.ndarray, \"Wrong type\"\nassert dA_prev1.shape == (5, 5, 3, 2), f\"Wrong shape {dA_prev1.shape} != (5, 5, 3, 2)\"\nassert np.allclose(dA_prev1[1, 1], [[0, 0], \n [ 5.05844394, -1.68282702],\n [ 0, 0]]), \"Wrong values for mode max\"\nassert np.allclose(dA_prev2[1, 1], [[0.08485462, 0.2787552], \n [1.26461098, -0.25749373], \n [1.17975636, -0.53624893]]), \"Wrong values for mode average\"\nprint(\"\\033[92m All tests passed.\")", "(5, 4, 2, 2)\n(5, 5, 3, 2)\nmode = max\nmean of dA = 0.14571390272918056\ndA_prev1[1,1] = [[ 0. 0. ]\n [ 5.05844394 -1.68282702]\n [ 0. 0. ]]\n\nmode = average\nmean of dA = 0.14571390272918056\ndA_prev2[1,1] = [[ 0.08485462 0.2787552 ]\n [ 1.26461098 -0.25749373]\n [ 1.17975636 -0.53624893]]\n\u001b[92m All tests passed.\n" ] ], [ [ "**Expected Output**: \n\nmode = max:\n<table> \n<tr> \n<td>\n\n**mean of dA =**\n</td>\n\n<td>\n\n0.145713902729\n\n </td>\n</tr>\n\n<tr> \n<td>\ndA_prev[1,1] =\n</td>\n<td>\n[[ 0. 0. ] <br>\n [ 5.05844394 -1.68282702] <br>\n [ 0. 0. ]]\n</td>\n</tr>\n</table>\n\nmode = average\n<table> \n<tr> \n<td>\n\nmean of dA =\n</td>\n\n<td>\n\n0.145713902729\n\n </td>\n</tr>\n\n<tr> \n<td>\ndA_prev[1,1] =\n</td>\n<td>\n[[ 0.08485462 0.2787552 ] <br>\n [ 1.26461098 -0.25749373] <br>\n [ 1.17975636 -0.53624893]]\n</td>\n</tr>\n</table>", "_____no_output_____" ], [ "**Congratulations**! You've completed the assignment and its optional portion. You now understand how convolutional neural networks work, and have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow. Nicely done! See you there.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
e7a6b174340e91faf645511505197cd565ba65fd
42,823
ipynb
Jupyter Notebook
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
1c716ec29d7487dbc294a3b4e6a6486e4fbe4647
[ "MIT" ]
1
2019-06-04T16:40:17.000Z
2019-06-04T16:40:17.000Z
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
1c716ec29d7487dbc294a3b4e6a6486e4fbe4647
[ "MIT" ]
null
null
null
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
1c716ec29d7487dbc294a3b4e6a6486e4fbe4647
[ "MIT" ]
3
2019-04-11T23:20:11.000Z
2019-06-11T02:59:41.000Z
18.965013
387
0.437242
[ [ [ "Beginning Programming with Python\n===========================\n\n\nSession 1: Getting Started\n--------------------------\n\n* The Zen of Python\n* Variables\n* Comments\n* Strings\n* Numbers\n* Booleans\n* String Methods\n* Using Variables in Strings\n* Numerical Operations\n* Working with Numerical Data\n* Using the Math library\n* Setting up Anaconda Python\n * Anaconda\n * Good Youtube Video\n \n**Assignment**\n\n1. Install Anaconda Python\nhttps://www.youtube.com/watch?v=Z1Yd7upQsXY\n2. Create GitHub account\n3. Create Codewars account\n4. Create Reddit account", "_____no_output_____" ] ], [ [ "2 + 5", "_____no_output_____" ], [ "import this", "_____no_output_____" ] ], [ [ "Variables\n---------", "_____no_output_____" ] ], [ [ "phoebe = 8", "_____no_output_____" ], [ "print(phoebe)", "8\n" ], [ "print(id(phoebe))", "4473468240\n" ], [ "maru = 7", "_____no_output_____" ], [ "id(maru)", "_____no_output_____" ], [ "4fun = 3", "_____no_output_____" ], [ "fun4 = 3", "_____no_output_____" ], [ "m = 7", "_____no_output_____" ], [ "phoebe_age_in_years = 8", "_____no_output_____" ], [ "dog_name = \"Phoebe\"", "_____no_output_____" ], [ "print(dog_name)", "Phoebe\n" ], [ "type(dog_name)", "_____no_output_____" ], [ "type(phoebe)", "_____no_output_____" ], [ "dog_name + phoebe", "_____no_output_____" ], [ "dog_name + \" is a dog\"", "_____no_output_____" ] ], [ [ "Comments\n--------", "_____no_output_____" ] ], [ [ "# I'm attempting to calculate dog years for humans\ndog = 1222", "_____no_output_____" ], [ "# Here's something\n# ... And etc.\n# and more...", "_____no_output_____" ], [ "\"\"\"Calculate the age very simply.\nI'm using the standard algorithm...\n\"\"\"\nage = 10", "_____no_output_____" ] ], [ [ "Strings\n-------", "_____no_output_____" ] ], [ [ "4 + 4", "_____no_output_____" ], [ "\"4\" + \"4\"", "_____no_output_____" ], [ "len(\"robb\")", "_____no_output_____" ], [ "dir(\"robb\")", "_____no_output_____" ], [ "\"robb\".endswith('b')", "_____no_output_____" ], [ "\"robb\".endswith('x')", "_____no_output_____" ], [ "\"Phoebe\".lower()\n", "_____no_output_____" ], [ "\"Here's a meanlingless sentence.\".split()", "_____no_output_____" ], [ "\"phoebe\"", "_____no_output_____" ], [ "print(\"hey\")", "hey\n" ], [ "name = \"Ramona\"", "_____no_output_____" ], [ "f\"{name} said thanks!\"", "_____no_output_____" ], [ "name + \" said thanks!\"", "_____no_output_____" ], [ "\"%s said thanks!\", name", "_____no_output_____" ], [ "\"{} said thanks! 
{}\".format(name, \"hey\")", "_____no_output_____" ] ], [ [ "Numbers\n-------", "_____no_output_____" ] ], [ [ "type(5)", "_____no_output_____" ], [ "type(5.5)", "_____no_output_____" ], [ "account_balance = 100.1111111", "_____no_output_____" ], [ "5.5 + 5\n", "_____no_output_____" ], [ "5.5 + \"5\"", "_____no_output_____" ], [ "5.5 + float(\"5\")", "_____no_output_____" ], [ "float(\"0\")", "_____no_output_____" ], [ "type(0.0)", "_____no_output_____" ], [ "float(\"x\")", "_____no_output_____" ], [ "int(\"0\")", "_____no_output_____" ], [ "int(\"x\")", "_____no_output_____" ], [ "\"x\"", "_____no_output_____" ], [ "x", "_____no_output_____" ], [ "x = 1\n", "_____no_output_____" ], [ "x", "_____no_output_____" ], [ "id(x)", "_____no_output_____" ], [ "\"x\"", "_____no_output_____" ] ], [ [ "Booleans\n--------", "_____no_output_____" ] ], [ [ "1, -1, 0 ", "_____no_output_____" ], [ "type(1)", "_____no_output_____" ], [ "False", "_____no_output_____" ], [ "True", "_____no_output_____" ], [ "1 == 2", "_____no_output_____" ], [ "1 == 1", "_____no_output_____" ], [ "not (1 == 1)", "_____no_output_____" ], [ "not (True and False)", "_____no_output_____" ], [ "(not True) or (not False)", "_____no_output_____" ], [ "if 1 == 2:\n print(\"The world's gone mad\")", "_____no_output_____" ], [ "if 1 == 1:\n print(\"No problem with that\")", "No problem with that\n" ], [ "phoebe_is_old = phoebe > 12", "_____no_output_____" ], [ "phoebe_is_old\n", "_____no_output_____" ], [ "if phoebe_is_old:\n print(\"old\")", "_____no_output_____" ], [ "phoebe", "_____no_output_____" ], [ "o = phoebe > 12", "_____no_output_____" ], [ "O = maru > 12", "_____no_output_____" ], [ "Math.PI", "_____no_output_____" ], [ "import Math", "_____no_output_____" ], [ "import math", "_____no_output_____" ], [ "math.PI", "_____no_output_____" ], [ "dir(math)\n", "_____no_output_____" ], [ "GOOGLE_ID = 12354617234567234", "_____no_output_____" ], [ "GOOGLE_ID", "_____no_output_____" ], [ "GOOGLE_ID = 'fu'", "_____no_output_____" ], [ "GOOGLE_ID", "_____no_output_____" ] ], [ [ "Numerical Data and the Math Library\n-----------------------------------", "_____no_output_____" ] ], [ [ "round(1.23456)", "_____no_output_____" ], [ "round(1.5)", "_____no_output_____" ], [ "round(1.4)", "_____no_output_____" ], [ "help(round)", "Help on built-in function round in module builtins:\n\nround(number, ndigits=None)\n Round a number to a given precision in decimal digits.\n \n The return value is an integer if ndigits is omitted or None. Otherwise\n the return value has the same type as the number. 
ndigits may be negative.\n\n" ], [ "round(1.23456, 2)", "_____no_output_____" ], [ "1.23 * 1.07", "_____no_output_____" ], [ "round(1.23 * 1.07, 2)", "_____no_output_____" ], [ "round('x' + 'y', 2)", "_____no_output_____" ], [ "abs(3)", "_____no_output_____" ], [ "abs(-3)", "_____no_output_____" ], [ "2 % 3", "_____no_output_____" ], [ "3 % 2", "_____no_output_____" ], [ "3 % 2 == 0", "_____no_output_____" ], [ "10 % 2 == 0", "_____no_output_____" ], [ "10 % 3 == 0", "_____no_output_____" ], [ "15 % 3 == 0", "_____no_output_____" ], [ "not(10 % 2 == 0)", "_____no_output_____" ], [ "10 % 7 != 0", "_____no_output_____" ] ], [ [ "Using the Math Library\n----------------------", "_____no_output_____" ] ], [ [ "import math", "_____no_output_____" ], [ "x", "_____no_output_____" ], [ "math", "_____no_output_____" ], [ "dir(math)", "_____no_output_____" ], [ "math", "_____no_output_____" ], [ "pi", "_____no_output_____" ], [ "math.pi", "_____no_output_____" ], [ "math.factorial(4)", "_____no_output_____" ], [ "math.factorial(10)", "_____no_output_____" ], [ "help(math.ceil)", "Help on built-in function ceil in module math:\n\nceil(x, /)\n Return the ceiling of x as an Integral.\n \n This is the smallest integer >= x.\n\n" ], [ "math.ceil(1.1)", "_____no_output_____" ], [ "math.ceil", "_____no_output_____" ], [ "type(math.ceil)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a6b2b6de0073c42925e09a79079c4b327a764f
38,089
ipynb
Jupyter Notebook
Stage1_Invest_list.ipynb
maxiaotian520/Stock-Trading-Strategy
9248f6b44c4f7cd813a106c383c920c5268bdfc4
[ "Apache-2.0" ]
3
2020-11-30T18:43:21.000Z
2022-02-23T05:50:25.000Z
Stage1_Invest_list.ipynb
maxiaotian520/Stock-Trading-Strategy
9248f6b44c4f7cd813a106c383c920c5268bdfc4
[ "Apache-2.0" ]
10
2021-07-02T02:30:33.000Z
2021-07-06T04:27:53.000Z
Stage1_Invest_list.ipynb
maxiaotian520/Stock-Trading-Strategy
9248f6b44c4f7cd813a106c383c920c5268bdfc4
[ "Apache-2.0" ]
4
2020-11-26T17:27:58.000Z
2021-07-02T04:26:52.000Z
67.894831
18,792
0.722492
[ [ [ "#backtest \nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import precision_score, mean_squared_error, explained_variance_score, r2_score\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "def status_calc(stock, sp500, outperformance=10):\n #A simple function to classify whether a stock outperformed the S&P500\n #:param stock: stock price\n #:param sp500: S&P500 price\n #:param outperformance: stock is classified 1 if stock price > S&P500 price + outperformance\n #:return: true/false\n \n if outperformance < 0:\n raise ValueError(\"outperformance must be positive\")\n return stock - sp500 >= outperformance", "_____no_output_____" ], [ "# Build the dataset, and drop any rows with missing values\nbacktest_df = pd.read_csv(\"ketstats_to_train.csv\", index_col=\"calendardate\")\nbacktest_df.dropna(axis=0, how=\"any\", inplace=True)\n\nfeatures = backtest_df.columns[1:-4]\n#X = backtest_df[features].values\nX = pd.DataFrame(backtest_df[features])\n\n# The labels are generated by applying the status_calc to the dataframe.\n# '1' if a stock beats the S&P500 by more than x%, else '0'. Here x is the\n# outperformance parameter, which is set to 10 by default but can be redefined.\ny = pd.DataFrame(list(\n status_calc(\n backtest_df[\"stock_p_change\"], backtest_df[\"sp500_p_change\"], outperformance=10\n )\n ))\n\n# z is required for us to track returns\nz = np.array(backtest_df[[\"stock_p_change\", \"sp500_p_change\"]])\n\n# Generate the train set and test set by randomly splitting the dataset\nX_train, X_test, y_train, y_test, z_train, z_test = train_test_split(\n X, y, z, test_size=0.1\n)\n\n# Instantiate a RandomForestClassifier with 100 trees, then fit it to the training data\nclf = RandomForestClassifier(n_estimators=100, random_state=0)\nclf.fit(X_train, y_train)\n\n# Generate the predictions, then print test set accuracy and precision\ny_pred = clf.predict(X_test)\nprint(\"Classifier performance\\n\", \"=\" * 20)\nprint(f\"Accuracy score: {clf.score(X_test, y_test): .2f}\")\nprint(f\"Precision score: {precision_score(y_test, y_pred): .2f}\")\n\n# Because y_pred is an array of 1s and 0s, the number of positive predictions\n# is equal to the sum of the array\nnum_positive_predictions = sum(y_pred)\nif num_positive_predictions < 0:\n print(\"No stocks predicted!\")\n\n# Recall that z_test stores the change in stock price in column 0, and the\n# change in S&P500 price in column 1.\n# Whenever a stock is predicted to outperform (y_pred = 1), we 'buy' that stock\n# and simultaneously `buy` the index for comparison.\nstock_returns = 1 + z_test[y_pred, 0] / 100\nmarket_returns = 1 + z_test[y_pred, 1] / 100\n\n# Calculate the average growth for each stock we predicted 'buy'\n# and the corresponding index growth\navg_predicted_stock_growth = sum(stock_returns) / num_positive_predictions\nindex_growth = sum(market_returns) / num_positive_predictions\npercentage_stock_returns = 100 * (avg_predicted_stock_growth - 1)\npercentage_market_returns = 100 * (index_growth - 1)\ntotal_outperformance = percentage_stock_returns - percentage_market_returns\n\nprint(\"\\n Stock prediction performance report \\n\", \"=\" * 40)\nprint(f\"Total Trades:\", num_positive_predictions)\nprint(f\"Average return for stock predictions: {percentage_stock_returns: .1f} %\")\nprint(\n f\"Average market return in the same period: {percentage_market_returns: .1f}% \"\n)\nprint(\n f\"Compared 
to the index, our strategy earns {total_outperformance: .1f} percentage points more\"\n)", "<ipython-input-6-330fb1fcabdc>:28: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n clf.fit(X_train, y_train)\n" ], [ "#Feature Importance\nfeature_importances = pd.Series(clf.feature_importances_, index=X.columns)\nfeature_importances.sort_values(inplace=True)\nax = feature_importances.plot.barh()\nax.set(ylabel='Importance (Gini Coefficient)', title='Feature importances');\nax.set_title('Feature Importance (MDI)')\n\n#RMSE\nprint('Root Mean Squared Error:', \n np.sqrt(mean_squared_error(y_test, y_pred)))", "Root Mean Squared Error: 0.12698699833659\n" ], [ "# The percentage by which a stock has to beat the S&P500 to be considered a 'buy'\nOUTPERFORMANCE = 10\n\ndef build_data_set():\n \"\"\"\n Reads the keystats.csv file and prepares it for scikit-learn\n :return: X_train and y_train numpy arrays\n \"\"\"\n training_data = pd.read_csv(\"ketstats_to_train.csv\", index_col=\"calendardate\")\n training_data.dropna(axis=0, how=\"any\", inplace=True)\n features = training_data.columns[1:-4]\n\n X_train = training_data[features].values\n # Generate the labels: '1' if a stock beats the S&P500 by more than 10%, else '0'.\n y_train = list(\n status_calc(\n training_data[\"stock_p_change\"],\n training_data[\"sp500_p_change\"],\n OUTPERFORMANCE,\n )\n )\n\n return X_train, y_train\n\ndef predict_stocks():\n X_train, y_train = build_data_set()\n # Remove the random_state parameter to generate actual predictions\n clf = RandomForestClassifier(n_estimators=100, random_state=0)\n clf.fit(X_train, y_train)\n\n # Now we get the actual data from which we want to generate predictions.\n data = pd.read_csv(\"forward_sample.csv\", index_col=\"calendardate\")\n data.dropna(axis=0, how=\"any\", inplace=True)\n features = data.columns[1:-4]\n X_test = data[features].values\n z = data[\"ticker\"].values\n\n # Get the predicted tickers\n y_pred = clf.predict(X_test)\n if sum(y_pred) == 0:\n print(\"No stocks predicted!\")\n else:\n invest_list = z[y_pred].tolist()\n print(\n f\"{len(invest_list)} stocks predicted to outperform the S&P500 by more than {OUTPERFORMANCE}%:\"\n )\n print(\" \".join(invest_list))\n return invest_list", "_____no_output_____" ], [ "invest_list = predict_stocks()\ninvest_list", "7 stocks predicted to outperform the S&P500 by more than 10%:\nBE FMI STRP TWLO ZM SLP DKNG\n" ], [ "metadata = pd.read_csv(\"Tickers and Metadata.csv\")\ninvest_list_metadata = metadata[metadata.ticker.isin(invest_list)]\ninvest_list_metadata[[\"ticker\",\"name\",\"category\",\n \"sicindustry\",\"scalemarketcap\",\n \"scalerevenue\",\"location\"]]", "_____no_output_____" ], [ "pd.DataFrame(invest_list).to_csv(\"invest_list.csv\")", "_____no_output_____" ], [ "data = pd.read_csv(\"stock_prices.csv\")\ndata[['BE', 'FMI', 'STRP', 'TWLO', 'ZM', 'SLP', 'DKNG']].tail(10)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a6c92095c8bd1d5c00bda25baafcb25af7482c
16,246
ipynb
Jupyter Notebook
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
42db4ef5a7ee15ca2e9d841850b1bacb21e9917a
[ "MIT" ]
107
2020-05-03T23:39:13.000Z
2022-03-27T13:18:00.000Z
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
42db4ef5a7ee15ca2e9d841850b1bacb21e9917a
[ "MIT" ]
2
2021-05-13T13:38:44.000Z
2021-09-21T07:51:00.000Z
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
42db4ef5a7ee15ca2e9d841850b1bacb21e9917a
[ "MIT" ]
48
2020-05-09T05:56:55.000Z
2022-03-27T04:33:55.000Z
25.746434
483
0.535209
[ [ [ "import queue\nimport multiprocessing as mp\nimport concurrent.futures as cf\n\nfrom queue import Queue, SimpleQueue\nfrom concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor\n\nfrom datetime import datetime, timedelta\n\nimport requests", "_____no_output_____" ] ], [ [ "# `concurrent.futures`\n\nThis lesson has a strange name. `concurrent.futures` is the name of a (relative) modern package in the Python standard library. It's a package with a beautiful and Pythonic API that abstracts us from the low level mechanisms of concurrency.\n\n**`concurrent.futures` should be your default choice for concurrent programming as much as possible**\n\nIn this tutorial, we started from the low levels `threading` and `multiprocessing` because we wanted to explain the concepts behind concurrency, but `concurrent.futures` offers a much safer and intuitive API. Let's start with it.\n\n## Executors and futures\n\n#### Executors\nExecutors are the entry points of `cf`. They are similar to `multiprocessing.Pool`s. Once an executor has been instantiated, we can `submit` jobs, or even `map` tasks, similar to `multiprocessin.Pool.map`. `concurrent.futures.Executor` is an abstract class. `cf` includes two concrete classes: `ThreadPoolExecutor` and `ProcessPoolExecutor`. This means that we can keep the same interface, but use completely different mechanisms just by changing the executor type we're using:", "_____no_output_____" ] ], [ [ "def check_price(exchange, symbol, date):\n base_url = \"http://localhost:5000\"\n resp = requests.get(f\"{base_url}/price/{exchange}/{symbol}/{date}\")\n return resp.json()", "_____no_output_____" ], [ "with ThreadPoolExecutor(max_workers=10) as ex:\n future = ex.submit(check_price, 'bitstamp', 'btc', '2020-04-01')\n print(f\"Price: ${future.result()['close']}\")", "Price: $6421.14\n" ], [ "with ProcessPoolExecutor(max_workers=10, mp_context=mp.get_context('fork')) as ex:\n future = ex.submit(check_price, 'bitstamp', 'btc', '2020-04-01')\n print(f\"Price: ${future.result()['close']}\")", "Price: $6421.14\n" ] ], [ [ "This is the beauty of `cf`: we're using the same logic with two completely different executors; the API is the same.\n\n#### Futures\n\nAs you can see from the the examples above, the `submit` method returns immediately a `Future` object. These objects are an abstraction of a task that is being processed. They have multiple useful methods that we can use (as seen in the following example). 
The most important one, `result(timeout=None)` will block for `timeout` seconds until a result was produced:", "_____no_output_____" ] ], [ [ "with ThreadPoolExecutor(max_workers=10) as ex:\n future = ex.submit(check_price, 'bitstamp', 'btc', '2020-04-01')\n print(future.done())\n print(f\"Price: ${future.result()['close']}\")\n print(future.done())", "False\nPrice: $6421.14\nTrue\n" ] ], [ [ "#### The `map` method\n\nExecutors have a `map` method that is similar to `mp.Pool.map`, it's convenient as there are no futures to work with, but it's limited as only one parameter can be passed:", "_____no_output_____" ] ], [ [ "EXCHANGES = ['bitfinex', 'bitstamp', 'kraken']", "_____no_output_____" ], [ "def check_price_tuple(arg):\n exchange, symbol, date = arg\n base_url = \"http://localhost:5000\"\n resp = requests.get(f\"{base_url}/price/{exchange}/{symbol}/{date}\")\n return resp.json()", "_____no_output_____" ], [ "with ThreadPoolExecutor(max_workers=10) as ex:\n results = ex.map(check_price_tuple, [\n (exchange, 'btc', '2020-04-01')\n for exchange in EXCHANGES\n ])\n print([price['close'] for price in results])", "[6409.8, 6421.14, 6401.9]\n" ], [ "('bitstamp', 'btc', '2020-04-01')", "_____no_output_____" ] ], [ [ "As you can see, we had to define a new special function that works by receiving a tuple instead of the individual elements.\n\n#### `submit` & `as_completed` pattern\n\nTo overcome the limitation of `Executor.map`, we can use a common pattern of creating multiple futures with `Executor.submit` and waiting for them to complete with the module-level function `concurrent.futures.as_completed`:", "_____no_output_____" ] ], [ [ "with ThreadPoolExecutor(max_workers=10) as ex:\n futures = {\n ex.submit(check_price, exchange, 'btc', '2020-04-01'): exchange\n for exchange in EXCHANGES\n }\n for future in cf.as_completed(futures):\n exchange = futures[future]\n print(f\"{exchange.title()}: ${future.result()['close']}\")", "Kraken: $6401.9\nBitfinex: $6409.8\nBitstamp: $6421.14\n" ] ], [ [ "## Producer/Consumer with `concurrent.futures`\n\nI'll show you an example of the producer/consumer pattern using the `cf` module. There are multiple ways to create this pattern, I'll stick to the basics.", "_____no_output_____" ] ], [ [ "BASE_URL = \"http://localhost:5000\"", "_____no_output_____" ], [ "resp = requests.get(f\"{BASE_URL}/exchanges\")", "_____no_output_____" ], [ "EXCHANGES = resp.json()\nEXCHANGES[:3]", "_____no_output_____" ], [ "START_DATE = datetime(2020, 3, 1)", "_____no_output_____" ], [ "DATES = [(START_DATE + timedelta(days=i)).strftime('%Y-%m-%d') for i in range(31)]", "_____no_output_____" ], [ "DATES[:3]", "_____no_output_____" ], [ "resp = requests.get(f\"{BASE_URL}/symbols\")", "_____no_output_____" ], [ "SYMBOLS = resp.json()\nSYMBOLS", "_____no_output_____" ] ], [ [ "Queues:", "_____no_output_____" ] ], [ [ "work_to_do = Queue()\nwork_done = SimpleQueue()", "_____no_output_____" ], [ "for exchange in EXCHANGES:\n for date in DATES:\n for symbol in SYMBOLS:\n task = {\n 'exchange': exchange,\n 'symbol': symbol,\n 'date': date,\n }\n work_to_do.put(task)", "_____no_output_____" ], [ "work_to_do.qsize()", "_____no_output_____" ], [ "def worker(task_queue, results_queue):\n while True:\n try:\n task = task_queue.get(block=False)\n except queue.Empty:\n print('Queue is empty! My work here is done. 
Exiting.')\n return\n exchange, symbol, date = task['exchange'], task['symbol'], task['date']\n price = check_price(exchange, symbol, date)\n results_queue.put((price, exchange, symbol, date))\n task_queue.task_done()", "_____no_output_____" ], [ "with ThreadPoolExecutor(max_workers=32) as ex:\n futures = [\n ex.submit(worker, work_to_do, work_done) for _ in range(32)\n ]\n work_to_do.join()", "Queue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.Queue is empty! My work here is done. Exiting.\n\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.Queue is empty! My work here is done. Exiting.\n\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.Queue is empty! My work here is done. Exiting.\n\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.Queue is empty! My work here is done. Exiting.\n\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\nQueue is empty! My work here is done. Exiting.\n" ], [ "all([f.done() for f in futures])", "_____no_output_____" ], [ "work_done.qsize()", "_____no_output_____" ], [ "results = {}", "_____no_output_____" ], [ "while True:\n try:\n price, exchange, symbol, date = work_done.get(block=None)\n results.setdefault(exchange, {})\n results[exchange].setdefault(date, {})\n results[exchange][date][symbol] = price['close'] if price else None\n except queue.Empty:\n break", "_____no_output_____" ], [ "results['bitfinex']['2020-03-10']['btc']", "_____no_output_____" ], [ "results['bitstamp']['2020-03-10']['btc']", "_____no_output_____" ], [ "results['coinbase-pro']['2020-03-10']['btc']", "_____no_output_____" ] ], [ [ "## Summary\n\nThe `concurrent.futures` module is the most abstract, highest level concurrency module in the Python standard library and **it SHOULD be your default option** when writing concurrent code. Only if you need more advanced capabilities, you should use the `threading` or `multiprocessing` modules directly.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7a6c9ae93a42358a9894d1bc018afc523df38c9
41,720
ipynb
Jupyter Notebook
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
f5814b8acfcf31b02248eac78da1dd7efba64960
[ "MIT" ]
null
null
null
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
f5814b8acfcf31b02248eac78da1dd7efba64960
[ "MIT" ]
null
null
null
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
f5814b8acfcf31b02248eac78da1dd7efba64960
[ "MIT" ]
null
null
null
67.837398
27,936
0.821285
[ [ [ "## Basic Ensemble Learinng : Hard/Soft Voting/Bagging - OOB/bootstraps/bootstrap_features", "_____no_output_____" ] ], [ [ "##-----------------\n## Copyright in private\n## Modify History :\n## 2018 - 9 - 24\n## Purpose:\n## 1. 集成学习分类器 构建 - 集成多个子模型来投票,提供准确率,理论上子模型越多,整体模型的准确率将很高!\n## 2. Hard(少数服从多数) and Soft voting classifier \n## \n## 3. 为提高每个子模型的差异性。希望每个子模型只看一部分的数据样本。 在看样本的形式上可以分为:放回取样(bagging)和不放回取样方法(pasting)\n## 放回取样的方法,整体有30% 的数据是取不到, 参数bootstraps \n## 4. n_estimators=500 个不同的子模型构成集成学习,而且模型之见存在差异,就构成了随机森林\n## \n## Parameters:\n## \n## ", "_____no_output_____" ], [ "from sklearn import datasets\nimport numpy as np\nimport matplotlib.pyplot as plt\n# help(make_moons)", "_____no_output_____" ], [ "## data sets \nX,y = datasets.make_moons(n_samples = 1200, noise = 0.25, random_state = 100)", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ], [ "y.shape", "_____no_output_____" ], [ "#plot the show \nplt.scatter(X[y == 0,0],X [y == 0,1])\nplt.scatter(X[y == 1,0],X [y == 1,1])\nplt.show()", "_____no_output_____" ], [ "# try to split data into test and train data sets \nfrom sklearn.model_selection import train_test_split\nX_train,X_test,y_train,y_test =train_test_split(X,y)", "_____no_output_____" ] ], [ [ "## 1. EnsembleLearning Classifier ", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\n\n# Logistic Regression \nlog_clf = LogisticRegression()\nlog_clf.fit(X_train,y_train)\n#log_clf.score(X_test,y_test) # 0.83\n\n# SVM\nfrom sklearn.svm import SVC\nsvc_clf = SVC()\nsvc_clf.fit(X_train,y_train)\n#svc_clf.score(X_test,y_test) #0.98\n\n# Decision Tree\nfrom sklearn.tree import DecisionTreeClassifier \ndt_clf = DecisionTreeClassifier(random_state = 100) # max_depth = 2,criterion = 'gini'\ndt_clf.fit(X_train,y_train)\n#dt_clf.score(X_test,y_test) # 1.0", "_____no_output_____" ], [ "## test on data sets \nlog_clf.score(X_test,y_test) # 0.88\n", "_____no_output_____" ], [ "svc_clf.score(X_test,y_test) #0.94\n", "_____no_output_____" ], [ "dt_clf.score(X_test,y_test) # 0.92", "_____no_output_____" ] ], [ [ "### 1.1 Ensemble Leaning ", "_____no_output_____" ], [ "### 1.1.1集成学习的效果- 人多力量大,明显比单个分类器的分类结果准确率要高", "_____no_output_____" ] ], [ [ "predict_1 = log_clf.predict(X_test)\npredict_2 = svc_clf.predict(X_test)\npredict_3 = dt_clf.predict(X_test)\n\n# \npredict_y = np.array((predict_1 + predict_2 + predict_3) >=2 ,dtype = 'int')\n", "_____no_output_____" ], [ "predict_y[:10]", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score\naccuracy_score(y_test,predict_y)", "_____no_output_____" ] ], [ [ "## 2. Voting Classifier - Hard Voting - 少数服从多数准则", "_____no_output_____" ] ], [ [ "# Harding voting - 少数服从多数\nfrom sklearn.ensemble import VotingClassifier \n\nvoting_clf_hard = VotingClassifier(estimators = [\n ('log_clf',LogisticRegression()),\n ('svm_clf',SVC()),\n ('dt_clf',DecisionTreeClassifier(random_state = 200))], voting = 'hard') ", "_____no_output_____" ], [ "# test score by hard voting classifier \nvoting_clf_hard.fit(X_train,y_train)\nvoting_clf_hard.score(X_test,y_test)", "c:\\users\\h155809\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\sklearn\\preprocessing\\label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.\n if diff:\n" ] ], [ [ "## 3. 
Voting Classifier - Soft Voting -考虑以分类的概率作为权值 来分类", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import VotingClassifier \n\nvoting_clf_soft = VotingClassifier(estimators = [\n ('log_clf', LogisticRegression()),\n ('svc_clf',SVC(probability=True)),\n ('dt_clf', DecisionTreeClassifier(random_state = 200))],voting = 'soft')", "_____no_output_____" ], [ "voting_clf_soft.fit(X_train,y_train)\nvoting_clf_soft.score(X_test,y_test)", "c:\\users\\h155809\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\sklearn\\preprocessing\\label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.\n if diff:\n" ] ], [ [ "## 4. 放回的取样方法 - Bagging", "_____no_output_____" ] ], [ [ "# 提高子模型的差异可以提高整体模型的准确率,并且可以让模型看不同数量的样本数量。\n# 在取样本时,分为放回取样方法- Bagging and 不放回取样 pasting \nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import BaggingClassifier", "_____no_output_____" ], [ "# 以DecisionTree 为例\n# n_estimators : 集成多少个分类器模型\n# max_sample: 分类器一次看多少数据\n# bootstrap = True: 可放回取样\n\nbagging_clf = BaggingClassifier(\n DecisionTreeClassifier(),\n n_estimators = 100,max_samples = 100, bootstrap = True\n)\n\nbagging_clf.fit(X_train,y_train)\nbagging_clf.score(X_test,y_test)", "_____no_output_____" ] ], [ [ "### 4.1 放回取样方法Bagging - Out of Bag(OOB) 有大约30%的数据取不到", "_____no_output_____" ] ], [ [ "# 有放回的取样本,并通过 oob_score = True 来确定标记没有被取到的样本数量。并将这些样本用于测试样本准确度\nbagging_clf = BaggingClassifier(\n DecisionTreeClassifier(),\n n_estimators = 100,max_samples = 100, bootstrap = True,oob_score = True\n)\n\nbagging_clf.fit(X_train,y_train)\nbagging_clf.score(X_test,y_test)", "_____no_output_____" ], [ "# test model on out of bag data\nbagging_clf.oob_score_", "_____no_output_____" ] ], [ [ "### 4.2 放回取样方法Bagging - Out of Bag(OOB) - 基于样本特征的取样方法 - bootstrap_features", "_____no_output_____" ] ], [ [ "## 有放回的取样本 , 基于样本特征的取样方法,并标记最大取样特征数量max_features\n##", "_____no_output_____" ], [ "from sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import BaggingClassifier\n\n# max_features - 每次取样特征数量\n# bootstrap_features = True 基于特征的取样方法\n\nbagging_clf_features = BaggingClassifier(\n DecisionTreeClassifier(),\n n_estimators = 100,max_samples = 100, bootstrap = True,oob_score = True,max_features = 2, bootstrap_features = True\n)\n\nbagging_clf_features.fit(X_train,y_train)\nbagging_clf_features.score(X_test,y_test)", "_____no_output_____" ], [ "# test model on out of bag data \nbagging_clf_features.oob_score_", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7a6f058386961ddea225bcba19a66941500b484
15,097
ipynb
Jupyter Notebook
project/NN_5.ipynb
SJSlavin/phys202-project
bc81aebefd38b4c31e10d95fe46277a707cb0e6d
[ "MIT" ]
null
null
null
project/NN_5.ipynb
SJSlavin/phys202-project
bc81aebefd38b4c31e10d95fe46277a707cb0e6d
[ "MIT" ]
null
null
null
project/NN_5.ipynb
SJSlavin/phys202-project
bc81aebefd38b4c31e10d95fe46277a707cb0e6d
[ "MIT" ]
null
null
null
35.859857
120
0.487912
[ [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.html.widgets import interact\n\nfrom sklearn.datasets import load_digits\ndigits = load_digits()", "_____no_output_____" ], [ "def sigmoid(x):\n return 1/(1 + np.exp(-x))\n\nsigmoid_v = np.vectorize(sigmoid)\n\ndef sigmoidprime(x):\n return sigmoid(x) * (1 - sigmoid(x))\n\nsigmoidprime_v = np.vectorize(sigmoidprime)", "_____no_output_____" ], [ "size = [64, 20, 10]\n\nweights = []\nfor n in range(1, len(size)):\n weights.append(np.random.rand(size[n-1], size[n]) * 2 - 1)\n\nbiases = []\nfor n in range(1, len(size)):\n biases.append(np.random.rand(size[n]) * 2 - 1)\n\ntrainingdata = digits.data[0:1200]\ntraininganswers = digits.target[0:1200]\nlc = 0.02\n\n#convert the integer answers into a 10-dimension array\ntraininganswervectors = np.zeros((1796,10))\nfor n in range(1796):\n traininganswervectors[n][digits.target[n]] = 1", "_____no_output_____" ], [ "def feedforward(weights, biases, a):\n b = []\n #first element is inputs \"a\"\n b.append(a)\n for n in range(1, len(size)):\n #all other elements depend on the number of neurons\n b.append(np.zeros(size[n]))\n for n2 in range(0, size[n]):\n b[n][n2] = sigmoid_v(np.dot(weights[n-1][0:,n2], b[n-1]) + biases[n-1][n2])\n \n return b", "_____no_output_____" ], [ "feedforward(weights, biases, trainingdata[0])", "_____no_output_____" ], [ "def gradient_descent(weights, biases, inputs, answers, batchsize, lc, epochs):\n for n in range(epochs):\n #pick random locations for input/result data\n locations = np.random.randint(0, len(inputs), batchsize)\n minibatch = []\n #create tuples (inputs, result) based on random locations\n for n2 in range(batchsize):\n minibatch.append((inputs[locations[n2]], answers[locations[n2]]))\n for n3 in range(batchsize):\n weights, biases = train(weights, biases, minibatch, lc)\n \n \n results = []\n for n4 in range(len(trainingdata)):\n results.append(feedforward(weights, biases, inputs[n4])[-1])\n \n accresult = accuracy(inputs, results, answers)\n print(\"Epoch \", n, \" : \", accresult)\n \n return weights, biases", "_____no_output_____" ], [ "def train(weights, biases, minibatch, lc):\n #set the nabla functions to be the functions themselves initially, same size\n nb = [np.zeros(b.shape) for b in biases]\n nw = [np.zeros(w.shape) for w in weights]\n #largely taken from Michael Nielsen's implementation\n for i, r in minibatch:\n dnb, dnw = backprop(weights, biases, i, r)\n nb = [a+b for a, b in zip(nb, dnb)]\n nw = [a+b for a, b in zip(nw, dnw)]\n \n weights = [w-(lc/len(minibatch))*n_w for w, n_w in zip(weights, nw)]\n biases = [b-(lc/len(minibatch))*n_b for b, n_b in zip(biases, nb)]\n return weights, biases", "_____no_output_____" ], [ "def backprop(weights, biases, inputs, answers):\n #set the nabla functions to be the same size as functions\n nb = [np.zeros(b.shape) for b in biases]\n nw = [np.zeros(w.shape) for w in weights]\n a = inputs\n alist = [inputs]\n zlist = []\n #from feedforward\n for n in range(1, len(size)):\n #all other elements depend on the number of neurons\n zlist.append(np.zeros(size[n]))\n alist.append(np.zeros(size[n]))\n for n2 in range(1, size[n]):\n zlist[n-1][n2] = np.dot(weights[n-1][0:,n2], alist[n-1]) + biases[n-1][n2]\n alist[n][n2] = sigmoid_v(alist[n-1][n2])\n \n delta = costderivative(alist[-1], answers) * sigmoidprime_v(zlist[-1])\n nb[-1] = delta\n #different from MN, alist[-2] not same size as delta?\n nw[-1] = np.dot(delta, alist[-1].transpose())\n \n for n in range(2, len(size)):\n 
delta = np.dot(weights[-n+1], delta) * sigmoidprime_v(zlist[-n])\n nb[-n] = delta\n #same here\n nw[-n] = np.dot(delta, alist[-n].transpose())\n \n return nb, nw", "_____no_output_____" ], [ "def costderivative(output, answers):\n return (output - answers)", "_____no_output_____" ], [ "def accuracy(inputs, results, answers):\n correct = 0\n binresults = results\n for n in range(0, len(results)):\n #converts the output into a binary y/n for each digit\n for n2 in range(len(results[n])):\n if results[n][n2] == np.amax(results[n]):\n binresults[n][n2] = 1\n else:\n binresults[n][n2] = 0\n \n if np.array_equal(answers[n], binresults[n]):\n correct += 1\n return correct / len(results)", "_____no_output_____" ], [ "size = [64, 20, 10]\n\nweights = []\nfor n in range(1, len(size)):\n weights.append(np.random.rand(size[n-1], size[n]) * 2 - 1)\n\nbiases = []\nfor n in range(1, len(size)):\n biases.append(np.random.rand(size[n]) * 2 - 1)\n\ntrainingdata = digits.data[0:1000]\ntraininganswers = digits.target[0:1000]\n\ntraininganswervectors = np.zeros((1000,10))\nfor n in range(1000):\n traininganswervectors[n][digits.target[n]] = 1", "_____no_output_____" ], [ "final_weights, final_biases = gradient_descent(weights, biases, trainingdata,\n traininganswervectors, 5, 1, 30)\n\nprint(final_weights)", "Epoch 0 : 0.048\nEpoch 1 : 0.093\nEpoch 2 : 0.095\nEpoch 3 : 0.101\nEpoch 4 : 0.097\nEpoch 5 : 0.095\nEpoch 6 : 0.101\nEpoch 7 : 0.101\nEpoch 8 : 0.105\nEpoch 9 : 0.091\nEpoch 10 : 0.09\nEpoch 11 : 0.091\nEpoch 12 : 0.091\nEpoch 13 : 0.09\nEpoch 14 : 0.092\nEpoch 15 : 0.091\nEpoch 16 : 0.091\nEpoch 17 : 0.091\nEpoch 18 : 0.09\nEpoch 19 : 0.09\nEpoch 20 : 0.089\nEpoch 21 : 0.089\nEpoch 22 : 0.081\nEpoch 23 : 0.14\nEpoch 24 : 0.139\nEpoch 25 : 0.125\nEpoch 26 : 0.117\nEpoch 27 : 0.113\nEpoch 28 : 0.114\nEpoch 29 : 0.114\n[array([[ 0.76450383, 0.59251208, -0.33663917, ..., 0.03105159,\n -0.50664191, 0.55243318],\n [-0.32381734, -0.19473504, 0.47964496, ..., -0.74232456,\n 0.63391058, 0.11945287],\n [ 0.6231575 , 0.86058574, 0.88342131, ..., -0.22242994,\n 0.25655237, -0.23629923],\n ..., \n [-0.36932122, -0.3959529 , 0.81687002, ..., -0.97764035,\n -0.32230678, 0.12894721],\n [ 0.3298412 , 0.6209686 , 0.84756657, ..., -0.12449487,\n 0.39450935, -0.18668442],\n [-0.95583099, 0.08569311, 0.71531 , ..., 0.87901573,\n -0.25818984, -0.67319341]]), array([[ 0.14131349, -1.45527079, -0.54529945, -1.0817927 , -1.63675429,\n -0.82509237, -0.50577114, -1.25099751, -1.57167739, -1.58229838],\n [-0.26268802, -0.46828519, -0.27403453, -0.27378482, -0.11569697,\n -1.14896415, -1.0756586 , -1.00494417, -1.1274975 , -1.18015684],\n [-0.02772253, 0.15362551, -0.73992948, -0.38885413, 0.10669895,\n -1.04529539, -0.51262275, 0.04903211, 0.07113167, -0.95034216],\n [-1.10066271, -0.4192437 , -0.83153843, -0.08715551, 0.0762273 ,\n -1.61585745, -0.07269733, 0.22141072, -1.47321102, -1.06635828],\n [-1.53605171, -0.20236728, -1.59564818, -0.7736486 , -0.85743672,\n -1.65075395, 0.22338501, -0.41452082, -1.19489058, -0.28412976],\n [-0.97130133, -1.26394292, -0.86485605, 0.21719278, 0.04154922,\n -0.56971456, -1.64255665, -1.71024067, -1.65795937, -1.58724435],\n [-1.17607687, -1.66427757, -1.26226142, 0.03540448, -0.09032715,\n -0.57956052, -1.30498373, 0.19528167, -0.61780775, -1.71499853],\n [-1.32944467, -0.58069153, -0.7444898 , 0.13856652, -1.36328519,\n -0.13302136, -1.30499062, -0.1462735 , 0.09600149, -1.51221257],\n [-1.2194706 , -0.23253221, -0.66908131, -0.61349542, -0.69434693,\n -0.5319174 , -0.53994301, -1.40761353, 
-0.77824655, -0.47336865],\n [-0.57874113, -1.45537269, -0.52541559, 0.00194201, -1.72241672,\n -0.15125062, -1.50899977, -1.31169813, -0.38594925, 0.17597085],\n [-1.15019981, 0.15592889, -0.31838663, -0.85503774, -1.15652188,\n -0.19242718, -0.88554113, -0.27003906, -0.59227189, -0.38658356],\n [-0.19594764, -0.27451577, 0.12450115, -1.16536117, 0.22683144,\n -1.45717936, 0.05982142, -0.48704248, 0.16055401, 0.07586088],\n [-1.31938199, -0.15037955, -0.54360006, -0.16815748, -0.53387818,\n -1.06238962, -0.71740935, -1.72122009, -0.43972491, -0.02027598],\n [-1.5115723 , -0.84107486, -1.14661439, -1.11010468, -0.39087975,\n 0.10574395, -1.70313065, -1.37999991, -1.2981514 , -0.4824089 ],\n [ 0.17303988, -0.90082792, -0.36351988, -1.04551961, -0.80590839,\n 0.18561133, 0.1314899 , -0.74618971, -0.60030412, -1.45901202],\n [ 0.25517938, -1.462488 , -1.29089867, -0.52794909, 0.10719908,\n -0.02585236, -0.37550124, -0.59261764, -1.13334063, -0.21033767],\n [-0.43664434, -1.39479159, -1.69268381, -0.94948049, -1.27133524,\n -0.70130297, -1.33301927, -0.81116035, -1.07527586, -1.66143407],\n [-0.45148065, -0.91159968, -1.03189045, -1.10330699, -0.22954374,\n -1.23667512, -1.15779083, -1.13007106, 0.11576962, -1.59387454],\n [-1.2342925 , 0.22825142, -0.52002247, -0.94077492, -1.31173937,\n -0.04128284, -0.60117536, -0.52280868, -0.14590897, -0.16754535],\n [-1.38041727, 0.0245013 , -0.88616725, -1.41358959, -0.71826232,\n -0.9898886 , -1.22985433, -0.52484158, -1.2708501 , -1.64708579]])]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a6f39741c1dcecc33a89f74e724018866bb1f5
12,518
ipynb
Jupyter Notebook
RL1.ipynb
bsivavenu/Google_Colab_Notebooks
30e8f25c5ac1410b7e34c32879c391780ebc7048
[ "Apache-2.0" ]
null
null
null
RL1.ipynb
bsivavenu/Google_Colab_Notebooks
30e8f25c5ac1410b7e34c32879c391780ebc7048
[ "Apache-2.0" ]
null
null
null
RL1.ipynb
bsivavenu/Google_Colab_Notebooks
30e8f25c5ac1410b7e34c32879c391780ebc7048
[ "Apache-2.0" ]
null
null
null
12,518
12,518
0.703307
[ [ [ "import gym", "_____no_output_____" ], [ "env = gym.make('CartPole-v0')", "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py:14: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.\n result = entry_point.load(False)\n" ], [ "env", "_____no_output_____" ], [ "env.reset()", "_____no_output_____" ], [ "env.action_space", "_____no_output_____" ], [ "env.step(0)", "_____no_output_____" ], [ "env.step(0)", "_____no_output_____" ], [ "env.step(0)", "_____no_output_____" ], [ "\nenv.step(0)", "_____no_output_____" ], [ "env.step(0)", "_____no_output_____" ], [ "env.step(0)", "_____no_output_____" ], [ "env.step(0)", "_____no_output_____" ], [ "env.step(0)", "_____no_output_____" ], [ "env.step(0)", "_____no_output_____" ], [ "env.step(0)", "\u001b[33mWARN: You are calling 'step()' even though this environment has already returned done = True. You should always call 'reset()' once you receive 'done = True' -- any further steps are undefined behavior.\u001b[0m\n" ] ], [ [ "here true says is is stopped and we can say drone is hit something and stopped we can say ", "_____no_output_____" ] ], [ [ "env.step(0)", "_____no_output_____" ] ], [ [ "multiarmbandit", "_____no_output_____" ] ], [ [ "env = MultiArmedBandit()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a6ffc8fd00939eab40271129a5637c55f88e1a
667,897
ipynb
Jupyter Notebook
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
e3f9deadf51c97029a0f9a4bb669a5af68abf7c6
[ "MIT" ]
null
null
null
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
e3f9deadf51c97029a0f9a4bb669a5af68abf7c6
[ "MIT" ]
null
null
null
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
e3f9deadf51c97029a0f9a4bb669a5af68abf7c6
[ "MIT" ]
null
null
null
1,027.533846
99,152
0.95405
[ [ [ "from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))", "_____no_output_____" ], [ "# This will reload imports before executing code, allowing you to easily change contents of custom scripts\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "# Stay classification: MBC evaluation\n\n**31.08.2020**", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "import os, sys\nsys.path.append('/home/sandm/Notebooks/stay_classification/src/')", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "from synthetic_data.trajectory_class import get_pickle_trajectory\nfrom synthetic_data.trajectory import get_stay_segs, get_adjusted_stays", "_____no_output_____" ], [ "from stay_classification.metric_box_classifier.metric_box_classifier import stay_classifier_testing", "_____no_output_____" ] ], [ [ "# Batch data evaluation", "_____no_output_____" ] ], [ [ "dsec = 1/3600.0\nt_total = np.arange(0,24,dsec)", "_____no_output_____" ], [ "time_thresh = 1/6\ndist_thresh=0.25", "_____no_output_____" ], [ "nr_stays = 3", "_____no_output_____" ], [ "data_dir = os.path.abspath('../../')+f\"/classifiers_playground/metric_box_classifier/testdata_training_set__canonical_{nr_stays}stays/\"\nos.path.isdir(data_dir)\n\nimport glob\n#data_dir = os.path.abspath('../../')+f\"/testdata/testdata_training_set__general/\"\nos.path.isdir(data_dir)\npkls = glob.glob(data_dir + \"*.pkl\")", "_____no_output_____" ], [ "from stay_classification.metrics_etc import eval_synth_data\nfrom stay_classification.metrics import get_segments_scores, get_segments_errs\nfrom stay_classification.metrics_cluster_tools import get_pred_labels, get_labels_from_clusters", "_____no_output_____" ], [ "from synthetic_data.trajectory import get_stay_indices, get_adjusted_stays\n\nget_err = lambda trues, preds: np.sum(abs(trues-preds))/trues.size", "_____no_output_____" ] ], [ [ "## Load, Classify, Measure", "_____no_output_____" ] ], [ [ "# For the correct nr of stays\nlens3 = []\nprecs3, a_precs3, w_precs3 = [], [], []\nrecs3, a_recs3, w_recs3 = [], [], []\nerrs3, a_errs3, w_errs3 = [], [], []\n\n# For the incorrect nr of stays\nlens = []\nprecs, a_precs, w_precs = [], [], []\nrecs, a_recs, w_recs = [], [], []\nerrs, a_errs, w_errs = [], [], []\n\n\nbad_list = []\n\nprecrec_limit = 0.80\n\nii = 0\n\nlength_criterion_break = False\niqr_trim = False\nverbose = False\n\ntotal = 1000\n\nfor ii in range(0, total):\n \n # Load the data\n trajectory_tag = f\"trajectory{ii}_{nr_stays}stays\" \n path_to_file = pkls[ii]#data_dir + trajectory_tag\n t_arr, r_arr, x_arr, segments = get_pickle_trajectory(path_to_file)\n t_segs, x_segs = get_stay_segs(get_adjusted_stays(segments, t_arr))\n\n # Get the true event indices (needed for the total error)\n true_indices = get_stay_indices(get_adjusted_stays(segments, t_arr), t_arr)\n true_labels = np.zeros(t_arr.shape)\n for pair in true_indices:\n true_labels[pair[0]:pair[1]+1] = 1\n \n # Get the stay clusters\n #clusters = quick_box_method(t_arr, x_arr, dist_thresh, time_thresh, 1, False) \n all_clusters = stay_classifier_testing(t_arr, x_arr, dist_thresh, time_thresh, verbose) \n clusters = all_clusters[-1].copy()\n # Make some measurements\n final_len=len(clusters)\n # total scores\n prec, rec, conmat = eval_synth_data(t_arr, segments, clusters)\n # seg. 
scores\n _, a_prec, w_prec, _, a_rec, w_rec = get_segments_scores(t_arr, segments, clusters) \n \n # Total error\n pred_labels = get_pred_labels(clusters, t_arr.shape)\n err = get_err(true_labels,pred_labels)\n # Segment errors\n _, a_err, w_err = get_segments_errs(t_arr, segments, clusters)\n \n # Get the expected number of stays (in general)\n stays_tag = int((x_segs.size)/3)\n \n len_all_clusts = len(clusters)\n if final_len != stays_tag:\n lens.append(final_len)\n precs.append(prec)\n a_precs.append(a_prec)\n w_precs.append(w_prec)\n recs.append(rec) \n a_recs.append(a_rec) \n w_recs.append(w_rec)\n errs.append(err) \n a_errs.append(a_err) \n w_errs.append(w_err) \n else:\n lens3.append(final_len)\n precs3.append(prec)\n a_precs3.append(a_prec)\n w_precs3.append(w_prec)\n recs3.append(rec) \n a_recs3.append(a_rec) \n w_recs3.append(w_rec) \n errs3.append(err) \n a_errs3.append(a_err)\n w_errs3.append(w_err)\n \n # progress output\n if ii % int(0.1*total) == 0:\n print(f\"{ii:4d} of {total:5d}\")", " 0 of 1000\n 100 of 1000\n 200 of 1000\n 300 of 1000\n 400 of 1000\n 500 of 1000\n 600 of 1000\n 700 of 1000\n 800 of 1000\n 900 of 1000\n" ], [ "correct_frac = (len(lens3)/total)\nincorrect_frac = (len(lens)/total)", "_____no_output_____" ], [ "print(f\"\\n * correct number of stays, {correct_frac:6.3f} \", \n f\"\\n * prec.: {sum(w_precs3)/len(w_precs3):6.3}\",\n f\"\\n * rec.: {sum(w_recs3)/len(w_recs3):6.3}\",\n f\"\\n * incorrect number of stays, {incorrect_frac:6.3f}\",\n f\"\\n * prec.: {sum(w_precs)/len(w_precs):6.3}\",\n f\"\\n * rec.: {sum(w_recs)/len(w_recs):6.3}\")", "\n * correct number of stays, 0.796 \n * prec.: 0.956 \n * rec.: 0.994 \n * incorrect number of stays, 0.204 \n * prec.: 0.856 \n * rec.: 0.95\n" ] ], [ [ "## Visualizations", "_____no_output_____" ] ], [ [ "from stay_classification.metrics_plotting import plot_scores_stats, plot_errs_stats, plot_scores_stats_cominbed", "_____no_output_____" ], [ "os.mkdir('./visualizations/metrics_new/')", "_____no_output_____" ] ], [ [ "### Prec/rec score distributions\n\n#### Total scores", "_____no_output_____" ] ], [ [ "title = f\"Tot. scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}\"\n\nfig, axs = plot_scores_stats(precs3, recs3, precs, recs, title)\n\nfig.savefig(\"./visualizations/metrics_new/\" + f\"metrics__{nr_stays}stays.png\")", "_____no_output_____" ], [ "title = f\"Total scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}\"\n\nfig, axs = plot_scores_stats_cominbed(precs3, recs3, precs, recs, title)\n\nfig.savefig(\"./visualizations/metrics_new/\" + f\"metrics__{nr_stays}stays_seg_scores_combined_tot.png\")", "_____no_output_____" ] ], [ [ "#### Segment-averaged score distributions", "_____no_output_____" ] ], [ [ "title = f\"Avg. scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}\"\n\nfig, axs = plot_scores_stats(a_precs3, a_recs3, a_precs, a_recs, title)\n\nfig.savefig(\"./visualizations/metrics_new/\" + f\"metrics__{nr_stays}stays_seg_scores_avg.png\")", "_____no_output_____" ], [ "title = f\"Avg. Scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}\"\n\nfig, axs = plot_scores_stats_cominbed(a_precs3, a_recs3, a_precs, a_recs, title)\n\nfig.savefig(\"./visualizations/metrics_new/\" + f\"metrics__{nr_stays}stays_seg_scores_combined_avg.png\")", "_____no_output_____" ] ], [ [ "#### Segment, weighted-averaged score distributions", "_____no_output_____" ] ], [ [ "title = f\"W-avg. 
scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}\"\n\nfig, axs = plot_scores_stats(w_precs3, w_recs3, w_precs, w_recs, title)\n\nfig.savefig(\"./visualizations/metrics_new/\" + f\"metrics__{nr_stays}stays_seg_scores_wavg.png\")", "_____no_output_____" ], [ "title = f\"W-avg. Scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}\"\n\nfig, axs = plot_scores_stats_cominbed(w_precs3, w_recs3, w_precs, w_recs, title)\n\nfig.savefig(\"./visualizations/metrics_new/\" + f\"metrics__{nr_stays}stays_seg_scores_combined_wavg.png\")", "_____no_output_____" ] ], [ [ "### Error stats", "_____no_output_____" ] ], [ [ "title = f\"Avg. Error: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}\"\n\nfig, ax = plot_errs_stats(w_errs3, w_errs, title)\n\nfig.savefig(\"./visualizations/metrics_new/\" + f\"metrics__{nr_stays}stays_seg_errs_avg.png\")", "_____no_output_____" ], [ "title = f\"W-avg. Error: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}\"\n\nfig, ax = plot_errs_stats(w_errs3, w_errs, title)\n\nfig.savefig(\"./visualizations/metrics_new/\" + f\"metrics__{nr_stays}stays_seg_errs_wavg.png\")", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e7a702c5ecd22b1a1fade644bebc400b6e5a3849
3,003
ipynb
Jupyter Notebook
Jour 2/chap02-04-saisie-utilisateur.ipynb
bellash13/SmartAcademyPython
44d0f6db0fcdcbbf1449a45b073a2b3182a19714
[ "MIT" ]
null
null
null
Jour 2/chap02-04-saisie-utilisateur.ipynb
bellash13/SmartAcademyPython
44d0f6db0fcdcbbf1449a45b073a2b3182a19714
[ "MIT" ]
null
null
null
Jour 2/chap02-04-saisie-utilisateur.ipynb
bellash13/SmartAcademyPython
44d0f6db0fcdcbbf1449a45b073a2b3182a19714
[ "MIT" ]
null
null
null
34.918605
375
0.6337
[ [ [ "<h3>Capturer un texte saisi par l'utilisateur</h3>\n<p>Nous nous proposons d'écrire un programme qui demande et stocke le nom de l'utilisateur, l'âge de l'utilisateur et stocke une valeur qui renseigne si l'utilisateur est majeur ou non. Par la suite nous afficherons un message de salutation à cet utilisateur, son âge ainsi que sa majorité.</p>\n<p>Voici comment se présente notre programme</p>", "_____no_output_____" ] ], [ [ "nom = input(\"Veuillez saisir votre nom: \")\nage = int(input(\"Veuillez saisir votre age: \"))\nestMajeur = age >= 18\n\nprint(f'Bonjour {nom}')\nprint(f'vous avez {age} ans')\nprint(f'Etes-vous majeur? {estMajeur}')", "Veuillez saisir votre nom: Hippo\nVeuillez saisir votre age: 15\n" ] ], [ [ "<p>Noter l'utilisation de la fonction <code>input()</code> avec le texte à afficher à l'utilisateur pour lui demander son nom. Cette fonction affichera le message placé entre parenthèses et guillements, ensuite stockera la valeur tapée par l'utilisateur dans la variable <code>nom</code> pour une utilisateur ultérieure dans le programme.</p>\n<p>Autre détail à noter, l'utilisation de la fonction <code>int()</code> qui enveloppe la fonction <code>input()</code> sur la deuxième ligne du programme; il s'agit d'une conversion de texte saisi par l'utilisateur en nombre, car l'age est un nombre et non un texte. Nous verrons ceci en détail au point suivant, sur la <em>conversion des types des variables</em>.</p>", "_____no_output_____" ], [ "NB: si l'utilisateur saisit un texte à la place de l'age, Python va vous générer une erreur comme quoi le texte saisi n'est pas un nombre; en programmation, on appelle cela une <strong><em>exception</em></strong>; nous apprendrons plus tard comment gérer les exceptions, afin de rester à l'abri des erreurs dans des cas pareils.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7a704da9e1b211702801279f4cc3200f0b4f2f2
23,898
ipynb
Jupyter Notebook
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
62837503ac33b3fb4e4d2d23bd7b8388bbecc02d
[ "MIT" ]
null
null
null
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
62837503ac33b3fb4e4d2d23bd7b8388bbecc02d
[ "MIT" ]
null
null
null
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
62837503ac33b3fb4e4d2d23bd7b8388bbecc02d
[ "MIT" ]
null
null
null
32.164199
1,105
0.472592
[ [ [ "# Repaso de ciclos, cadenas, tuplas y listas\n", "_____no_output_____" ], [ "## Cobra Mosmas\n\nAl ver los precios y los anuncios del almacén Cobra Mosmas, un cliente le\npide crear un programa de computador que le permita ingresar el precio\nindividual de tres productos y el precio de la promoción en combo de los\ntres productos anunciada por el almacen y determine si es preferible\ncomprarlos por separado o en el combo promoción. (Pensemos por 3\nminutos en definir claramente el problema)", "_____no_output_____" ] ], [ [ "#piense aquí la solución", "_____no_output_____" ] ], [ [ "### Solución", "_____no_output_____" ] ], [ [ "def comprar(p1,p2,p3,pc):\n if pc <= p1+p2+p3:\n return 'Combo'\n else:\n return 'Por separado'\n\na=float(input('Precio primer producto?'))\nb=float(input('Precio segundo producto?'))\nc=float(input('Precio tercer producto?'))\nd=float(input('Precio combo?'))\nprint(\"Comprar\",comprar(a,b,c,d))", "Precio primer producto?1\nPrecio segundo producto?2\nPrecio tercer producto?3\nPrecio combo?4\nComprar Combo\n" ] ], [ [ "## La cerca\n\nUn campesino de la región le pide crear un programa de computador que\nle permita determinar cual de dos opciones (madera o alambre) es la mejor\nopción (menor costo) para encerrar un terreno rectangular de 𝑛 ∗ 𝑚 metros\ncuadrados, sabiendo el costo de un metro lineal de alambre, el costo de un\nmetro de madera y la cantdad de hilos de alambre o hileras de madera. El\ncampesino solo piensa en usar una de las dos opciones, no las piensa\ncombinar. (Pensemos por 3 minutos en definir claramente el problema)", "_____no_output_____" ] ], [ [ "def en_madera(n,m,w,p):\n return (2*n+2*m)*w*p\n\ndef en_alambre(n,m,h,a):\n return (2*n+2*m)*h*a\n\ndef usar(n,m,h,a,w,p):\n if en_madera(n,m,w,p) <= en_alambre(n,m,h,a):\n return 'Madera'\n else:\n return 'Alambre'\n \nn=float(input('Largo terreno?'))\nm=float(input('Ancho terreno?'))\na=float(input('Costo metro alambre?'))\nh=int(input('Hilos de alambre?'))\np=float(input('Costo metro madera?'))\nw=int(input('Hileras de madera?'))\nprint(\"Usar\",usar(n,m,a,h,p,w))", "Largo terreno?2\nAncho terreno?4\nCosto metro alambre?4\nHilos de alambre?3\nCosto metro madera?2\nHileras de madera?2\nUsar Madera\n" ] ], [ [ "## Lista Escolar\n\nUnos padres de familia desesperados por determinar el dinero que debenpedir prestado para pagar los útiles escolares de su hijo, le han pedido crearun programa de computador que a partir de una lista de los precios de cada útil escolar y de la cantidad de cada útil escolar en la lista, determineel precio total de la lista. (Pensemos por 5 minutos en la solución)", "_____no_output_____" ] ], [ [ "def costo(precio, cantidad):\n costo = 0\n for i in range(0,len(precio)):\n costo = costo + precio[i] * cantidad[i]\n return costo\n\nprecio = []\ncantidad = []\nwhile input('Ingresar otro útil?').upper()=='S':\n precio.append(float(input('Precio útil?')))\n cantidad.append(float(input('Cantidad?')))\nprint(\"La lista cuesta\", costo(precio, cantidad))", "Ingresar otro útil?S\nPrecio útil?3999\nCantidad?1\nIngresar otro útil?N\nLa lista cuesta 3999.0\n" ] ], [ [ "## ADN\n\nEn la última edición de la revista científica ”ADN al día” se indica que laspruebas de relación entre individuos a partir de código genético se definede la siguiente manera: Si las dos cadenas se diferencian en menos de𝑝letras, existe una relación de padre-hijo, si se diferencian en menos de𝑓 > 𝑝letras, existe una relación de formar parte de la misma familia. Deotra manera no existe relación. 
El laboratorioTein Cul Pan, le pidedesarrollar un programa que a partir de dos cadenas de ADN del mismo tamaño, determine si existe una relación pader-hijo, de la misma familia o ninguna, siguiendo las reglas definidas por la revista científica ”ADN aldía”. (Pensemos por 5 minutos en la solución)", "_____no_output_____" ] ], [ [ "def diferencia(a,b):\n cuenta = 0\n for i in range(0,len(a)):\n if a[i] != b[i]:\n cuenta = cuenta + 1\n return cuenta\n \n def relacion(a,b,p,f):\n d = diferencia(a,b)\n if d <= p:\n return 'Padre-Hijo'\n elif d <= f:\n return 'Familia'\n else:\n return 'Ninguna'\n \nind1=input('Cadena ADN individuo 1?')\nind2=input('Cadena ADN individuo 2?')\np=int(input('Diferencia máxima para ser Padre-Hijo?'))\nf=int(input('Diferencia máxima para ser Familia?'))\nprint(\"Relación\", relacion(ind1, ind2, f, p)", "_____no_output_____" ] ], [ [ "## Extraer nombres de universidades Colombianas\n\nDada una lista de Universidades Colombianas, obtener el nombre del sitio web. Se asume que un nombre de universidad está entre los caracteres www. y edu.co. Por ejemplo de www.unal.edu.co se obtiene unal.\n\n*Entrada:*\nUn numero n indicando la cantidad de nombres de sitios web a procesar\n\n*Salida:*\nListado de posibles nombres de universidades.\n\n*Ejemplo:*\n\n<table>\n <tr>\n <td>\n Input\n </td>\n <td>\n Output\n </td>\n </tr>\n <tr>\n <td>\n 5<br>\n www.unal.edu.co<br> \n www.udistrital.edu.co<br>\n www.univalle.edu.co<br>\n www.javeriana.edu.co<br>\n www.konradlorenz.edu.co<br>\n </td>\n <td> \n unal<br>\n udistrital<br>\n univalle<br>\n javeriana<br>\n konradlorenz<br>\n </td>\n </tr> \n</table>\n\n", "_____no_output_____" ] ], [ [ "#escriba aquí la solución", "_____no_output_____" ] ], [ [ "## Solucion:\n", "_____no_output_____" ] ], [ [ "def process(uni):\n return uni.split(\".\")[1]\n\ndef main():\n n = int(input())\n for i in range(n):\n uni = input()\n print(process(uni))\nmain()", "www.k.edu.co\n" ] ], [ [ "## Leer información de estudiantes y calcular el promedio de notas", "_____no_output_____" ], [ "Se tienen que procesar algunos comandos para realizar el procesamiento de notas de una Universidad. 
Se tiene una lista de estudiantes \n\n- Comando 1: Agregar estudiante y nota `1&nombre_estudiante&nota`\n- Comando 2: Calcular promedio de los estudiantes en un momento dado `2`\n- Comando 3: Ordenar estudiantes agregados por nombre `3`\n- Comando 4: Consultar la nota de un estudiante `4&nombre_estudiante`\n- Comando 5: Visualizar lista de estudiantes `5`\n- Comando 6: Salir `6`\n\n", "_____no_output_____" ] ], [ [ "#ingrese aquí la solución", "_____no_output_____" ] ], [ [ "Para poder resolver el problema se pueden identificar varias partes a resolver que pueden modelarse como funciones:\n\n- Definir la lista de estudiantes.\n- agregar un estudiante dada la información\n- calcular el promedio de notas de los estudiantes en un momento dado\n- ordenar estudiantes agregados por nombre\n- consultar la nota de un estudiante \n- visualizar lista\n- procesar los comandos\n- mostrar menu\n ", "_____no_output_____" ], [ "## Solución", "_____no_output_____" ] ], [ [ "# definir la lista de estudiantes un estudiante puede ser modelado como una tupla (por ahora)", "_____no_output_____" ], [ "\ndef agregar_estudiante(estudiantes, est): \n estudiantes.append(est)\n\ndef promedio(estudiantes):\n prom = 0\n #print(estudiantes)\n for estudiante in estudiantes:\n prom += float(estudiante[1])\n print(\"El promedio de los estudiantes es: \" + str(prom/len(estudiantes)))\n\ndef ordenar(estudiantes):\n estudiantes.sort()\n\ndef consultar(estudiantes, nombre):\n encontrado = False\n for estudiante in estudiantes:\n if estudiante[0] == nombre:\n encontrado = True\n print(estudiante[1])\n if not encontrado:\n print(\"Estudiante no encontrado\")\n \ndef visualizar(estudiantes):\n print(\"Lista de estudiantes\".center(30, \"#\"))\n if len(estudiantes) == 0:\n print(\"No hay estudiantes registrados.\")\n for e in estudiantes:\n print(\"Nombre: \" + e[0] + \", nota:\" + str(e[1]))\n \ndef procesar_comandos():\n bandera = True\n estudiantes = []\n comando = [0]\n while bandera or comando[0] != \"6\":\n bandera = False\n mostrar_menu()\n comando = input().split(\"&\")\n print(comando[0])\n if comando[0] == \"1\":\n agregar_estudiante(estudiantes, (comando[1], float(comando[2])))\n elif comando[0] == \"2\":\n promedio(estudiantes)\n elif comando[0] == \"3\":\n ordenar(estudiantes)\n elif comando[0] == \"4\":\n consultar(estudiantes, comando[1]) \n elif comando[0] == \"5\":\n visualizar(estudiantes)\n \ndef mostrar_menu():\n print(\"Seleccione una opción:\")\n print(\"Comando 1: Agregar estudiante y nota `1&nombre_estudiante&nota`\")\n print(\"Comando 2: Calcular promedio de los estudiantes en un momento dado.\")\n print(\"Comando 3: Ordenar estudiantes agregados por nombre\")\n print(\"Comando 4: Consultar la nota de un estudiante `4&nombre_estudiante`\")\n print(\"Comando 5: Visualizar\")\n print(\"Comando 6: Salir\")\n\n\nprocesar_comandos()\n\n\"\"\"\n1&Antonia&5.0\n1&Juan&2.4\n1&Pedro&4.3\n\"\"\"", "_____no_output_____" ] ], [ [ "## Entendiendo los sentimientos de Groot\n\nEl lenguaje de Groot es muy complicado para expresar sentimientos. 
Los sentimientos tienen n capas.\n\nSi n = 1, el sentimiento será \"I hate it\", si n = 2 es \"I hate that I love it\", y si n = 3 es \"I hate that I love that I hate it\" y así sucesivamente.\n\n*Entrada:*\nLa cantidad n de capas donde $n \\geq 1$\n\n*Salida:*\nMuestre la frase que Groot está tratando de decir.\n\n\n*Ejemplo 1:*\n\n<table>\n <tr>\n <td>Input</td><td>Output</td>\n </tr>\n <tr>\n <td>1</td><td>I hate it</td>\n </tr>\n</table>\n\n\nEjemplo 2:\n\n<table>\n <tr>\n <td>Input</td><td>Output</td>\n </tr>\n <tr>\n <td>2</td><td>I hate that I love it</td>\n </tr>\n</table>\n\nEjemplo 3:\n\n<table>\n <tr>\n <td>Input</td><td>Output</td>\n </tr>\n <tr>\n <td>3</td><td>I hate that I love that I hate it</td>\n </tr>\n</table>", "_____no_output_____" ] ], [ [ "#piense aquí la solución en caso de no plantear ninguna abra la siguiente celda", "_____no_output_____" ] ], [ [ "### Una posible opción para groot\n\nUna posible opción puede ser construir una tupla ó lista con dos elementos: \n", "_____no_output_____" ] ], [ [ "emocion = [\"I hate\", \"I love\"]\n", "_____no_output_____" ] ], [ [ "Puede ir alternando entre posiciones de la lista el ciclo así:", "_____no_output_____" ] ], [ [ "salida = []\n\nn = int(input())\n\nfor i in range(n):\n salida.append(emocion[i%2])\n\nprint(salida)", "_____no_output_____" ] ], [ [ "Se puede usar join con that...", "_____no_output_____" ] ], [ [ "res = \" that \".join(salida) ", "_____no_output_____" ] ], [ [ "Agregue it:", "_____no_output_____" ] ], [ [ "res += \" it \"\nprint(res)", "_____no_output_____" ] ], [ [ "Es posible generar esta solución con una lista creada por comprensión?", "_____no_output_____" ] ], [ [ "#intentelo aquí", "_____no_output_____" ] ], [ [ "### Solución", "_____no_output_____" ] ], [ [ "emocion = [\"I hate\", \"I love\"]\nsalida = \" that \".join([emocion[i % 2] for i in range(n)])+\" it\"\nprint(salida)", "_____no_output_____" ] ], [ [ "## Simplificador de Fracciones\n\nUtilizando funciones recursivas, elabore un programa que simplifique una fracción escrita de la forma a/b.\n\nEjemplo: \n<table>\n <tr><td>Entrada</td><td>Salida</td></tr>\n <tr><td>6/4</td><td>3/2</td></tr>\n</table>\n", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7a71fe3399b32696e345d74d3f606565df8a5f5
948,333
ipynb
Jupyter Notebook
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
98f4ce26fd63681ca3ed3d714af14c9a52b21874
[ "Apache-2.0" ]
1
2022-03-12T05:18:46.000Z
2022-03-12T05:18:46.000Z
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deployment
98f4ce26fd63681ca3ed3d714af14c9a52b21874
[ "Apache-2.0" ]
null
null
null
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deployment
98f4ce26fd63681ca3ed3d714af14c9a52b21874
[ "Apache-2.0" ]
null
null
null
421.856317
188,933
0.916871
[ [ [ "# King-County-House-Price-Prediction\n", "_____no_output_____" ] ], [ [ "# Importing libraries\nimport numpy as np \nimport pandas as pd \nimport matplotlib.pyplot as plt \nimport seaborn as sns\nimport sklearn\nfrom sklearn.metrics import mean_absolute_error, mean_squared_error,r2_score \nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler \n\n# Ignore warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# we can set numbers for how many rows and columns will be displayed\npd.set_option('display.min_rows', 10)\npd.set_option('display.max_columns', 30)", "_____no_output_____" ] ], [ [ "## 1. Loading Dataset and Explore", "_____no_output_____" ] ], [ [ "df = pd.read_csv('/content/dataset/kc_house_data.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 21613 entries, 0 to 21612\nData columns (total 21 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 id 21613 non-null int64 \n 1 date 21613 non-null object \n 2 price 21613 non-null float64\n 3 bedrooms 21613 non-null int64 \n 4 bathrooms 21613 non-null float64\n 5 sqft_living 21613 non-null int64 \n 6 sqft_lot 21613 non-null int64 \n 7 floors 21613 non-null float64\n 8 waterfront 21613 non-null int64 \n 9 view 21613 non-null int64 \n 10 condition 21613 non-null int64 \n 11 grade 21613 non-null int64 \n 12 sqft_above 21613 non-null int64 \n 13 sqft_basement 21613 non-null int64 \n 14 yr_built 21613 non-null int64 \n 15 yr_renovated 21613 non-null int64 \n 16 zipcode 21613 non-null int64 \n 17 lat 21613 non-null float64\n 18 long 21613 non-null float64\n 19 sqft_living15 21613 non-null int64 \n 20 sqft_lot15 21613 non-null int64 \ndtypes: float64(5), int64(15), object(1)\nmemory usage: 3.5+ MB\n" ], [ "# Delete Unwanted columns\ndf.drop(['id', 'date'], axis=1, inplace=True)", "_____no_output_____" ], [ "# Display missing values information\ndf.isna().sum().sort_values(ascending=False)", "_____no_output_____" ], [ "df.describe().T ", "_____no_output_____" ] ], [ [ "## 2. 
EDA", "_____no_output_____" ] ], [ [ "# classification cat_col and num_col for plotting\ncat_col = []\nnum_col = []\ncol_name = df.columns\nfor idx, value in enumerate(col_name):\n con = df[value].nunique()\n if(con<20):\n cat_col.append(value)\n else:\n num_col.append(value)\n", "_____no_output_____" ], [ "def bar_plot(data, categorical_features):\n \"\"\"Bar plot for categorical fetures of all columns\"\"\"\n\n print(\"Bar Plot for Categorical features\")\n for col in categorical_features:\n counts = data[col].value_counts().sort_index()\n fig = plt.figure(figsize=(9, 6))\n ax = fig.gca()\n counts.plot.bar(ax = ax, color='steelblue')\n ax.set_title(col + ' counts')\n ax.set_xlabel(col) \n ax.set_ylabel(\"Frequency\")\n\n return plt.show()\n\n\ndef histogram_plot(data, numeric_columns):\n \"\"\"Histogram for numerical fetures of all columns\"\"\"\n\n print(\"Histogram for numeric_columns\")\n for col in numeric_columns:\n fig = plt.figure(figsize=(9, 6))\n ax = fig.gca()\n feature = data[col]\n feature.hist(bins=50, ax = ax)\n ax.axvline(feature.mean(), color='magenta', linestyle='dashed', linewidth=2)\n ax.axvline(feature.median(), color='cyan', linestyle='dashed', linewidth=2)\n ax.set_title(col)\n return plt.show()\n\n\ndef scater_plot(data, numeric_columns, target_col):\n \"\"\"Scatter for numerical fetures of columns for target columns\"\"\"\n\n print(\"Scater plot\")\n for col in numeric_columns:\n\n fig = plt.figure(figsize=(9, 6))\n ax = fig.gca()\n feature = df[col]\n label = data[f'{target_col}']\n correlation = feature.corr(label)\n plt.scatter(x=feature, y=label)\n plt.xlabel(col)\n plt.ylabel('Price')\n ax.set_title('Price vs ' + col + '- correlation: ' + str(correlation))\n\n return plt.show()\n", "_____no_output_____" ], [ "# Bar plot for categorical features\nbar_plot(df, cat_col)", "Bar Plot for Categorical features\n" ], [ "# Histogram for numerical columns\nhistogram_plot(df, num_col)", "Histogram for numeric_columns\n" ], [ "# Scatter plot for price vs numerical columns\nscater_plot(df, num_col, 'price')", "Scater plot\n" ], [ "# Correlation plot\ncorr = df.corr().round(2)\nplt.figure(figsize=(12,8))\nsns.heatmap(corr, annot=True, cmap=\"YlGnBu\");", "_____no_output_____" ] ], [ [ "## 3. Split dataset", "_____no_output_____" ] ], [ [ "# create function for Split data\n\ndef split_data(data, target_col):\n \n # Remove rows with missing target, seprate target from predictors\n data_copy = data.copy()\n data_copy.dropna(axis=0, subset=[target_col], inplace=True)\n y = data_copy[target_col]\n data_copy.drop([target_col], axis=1, inplace=True)\n\n # Break off validation set from training data\n from sklearn.model_selection import train_test_split\n X_train, X_valid, y_train, y_valid = train_test_split(data_copy, y, train_size=0.8, test_size=0.2, random_state=4)\n\n return X_train, X_valid, y_train, y_valid", "_____no_output_____" ], [ "# Split data from main dataset to train, test and target\nX_train, X_valid, y_train, y_valid = split_data(df, 'price')", "_____no_output_____" ] ], [ [ "## 4. 
Model train", "_____no_output_____" ], [ "Create column classification function for pipeline", "_____no_output_____" ] ], [ [ "def col_classification(data, num=20):\n # Select categorical columns\n cat_cols = [cname for cname in data.columns if\n data[cname].nunique() < num and\n data[cname].dtype =='object']\n\n # Select numerical columns\n num_cols = [cname for cname in data.columns if\n data[cname].dtype in ['int64', 'float64']]\n\n return cat_cols, num_cols\n", "_____no_output_____" ], [ "# Categorical cols and numerical columns classfication\ncategorical_cols, numerical_cols = col_classification(X_train, 15)", "_____no_output_____" ] ], [ [ "Create model evaluation function", "_____no_output_____" ] ], [ [ "def evaluation_model(X_test, y_test, title=\"Target price prediction\"):\n \"\"\"Evaluation Model for regression problem, We need to use model name is clf\"\"\"\n # Evaluate the model using the test data\n preds = reg.predict(X_test)\n\n mse = mean_squared_error(y_test, preds)\n print(\"MSE:\", mse)\n rmse = np.sqrt(mse)\n print('Mae: ', mean_absolute_error(y_valid, preds))\n print(\"RMSE:\", rmse)\n r2 = r2_score(y_valid, preds)\n print(\"R2:\", r2)\n\n # Plot predicted vs actual\n plt.scatter(y_test, preds)\n plt.xlabel('Actual Labels')\n plt.ylabel('Predicted Labels')\n plt.title(title)\n\n # overlay the regression line\n z = np.polyfit(y_test, preds, 1)\n p = np.poly1d(z)\n plt.plot(y_valid,p(y_valid), color='magenta')\n return plt.show() ", "_____no_output_____" ] ], [ [ "Try some of the linear regression model and evaluation their performance in our dataset", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import Ridge\nfrom sklearn.linear_model import Lasso\nfrom sklearn.linear_model import ElasticNet\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.ensemble import RandomForestRegressor\n\n\nmodels={'Linear Regression': LinearRegression(),\n 'Decision Tree Regressior' : DecisionTreeRegressor(random_state=0),\n 'Random Forrest Regressor' : RandomForestRegressor(n_estimators=10, random_state=0),\n 'Ridge': Ridge(),\n 'Lasso': Lasso(),\n 'ElasticN': ElasticNet(),\n 'Gradient Boosting Regressor': GradientBoostingRegressor()\n }\n\nprint('####################################################################### \\n')\nfor name, model in models.items():\n name_model = model\n reg = name_model.fit(X_train, y_train)\n print(f'{name}:')\n evaluation_model(X_valid, y_valid, 'Housing Price Prediction')\n print('####################################################################### \\n')", "####################################################################### \n\nLinear Regression:\nMSE: 39200213360.80082\nMae: 126368.12273608406\nRMSE: 197990.43754888978\nR2: 0.6969193739897344\n" ] ], [ [ "After the evaluation Gradient Boosting Regressor works well in our dataset so we will use it.", "_____no_output_____" ], [ "**Model train with Gradient Boosting Regressor using sklearn pipeline**", "_____no_output_____" ] ], [ [ "from sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.metrics import mean_absolute_error\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.ensemble import GradientBoostingRegressor\n\n\n# Preprocessing for numerical data\nnumerical_transformer = SimpleImputer(strategy='median')\n\n# Preprocessing 
for categorical data\ncategorical_transformer = Pipeline(steps=[\n ('imputer', SimpleImputer(strategy='most_frequent')),\n ('onehot', OneHotEncoder(handle_unknown='ignore'))\n])\n\n\n# Bundle preprocessing for numerical and categorical data\npreprocessor = ColumnTransformer(\n transformers=[\n ('num', numerical_transformer, numerical_cols),\n ('cat', categorical_transformer, categorical_cols)\n ])\n\n\nmodel = GradientBoostingRegressor(random_state=4, n_estimators=1600)\n\n# Bundle Preprocessing and modeling code in pipeline\nreg = Pipeline(steps=[\n ('preprocessor', preprocessor),\n ('scaler', StandardScaler()),\n ('model', model),\n ])\n\n# Preprocessing of training data, fit model\nreg.fit(X_train, y_train)", "_____no_output_____" ], [ "evaluation_model(X_valid, y_valid, 'Housing Price Prediction')", "MSE: 13811738810.108906\nMae: 66634.5069623637\nRMSE: 117523.35431780743\nR2: 0.8932130698797662\n" ] ], [ [ "## 5. Save Model, Load and Prediction", "_____no_output_____" ] ], [ [ "import pickle\n!mkdir 'model_save'\npickle.dump(reg, open(\"model_save/model.pkl\",\"wb\"))", "mkdir: cannot create directory ‘model_save’: File exists\n" ], [ "# Load the model from the file\nloaded_model = pickle.load(open('model_save/model.pkl', 'rb'))", "_____no_output_____" ], [ "X_new = pd.read_csv('dataset/user_input.csv')\nX_new", "_____no_output_____" ], [ "result = loaded_model.predict(X_new)\nprint('Prediction: {:.0f} price'.format(np.round(result[0])))", "Prediction: 562858 price\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7a72efc415a7c7604ea9218ecae5556c4690ce1
156,652
ipynb
Jupyter Notebook
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
6a7bdec5ed2195005d64ca1f9eaf6613d68fb8ca
[ "MIT" ]
102
2016-06-25T09:30:00.000Z
2022-03-24T21:02:49.000Z
Testing_the_lolviz_Python_module.ipynb
Jimmy-INL/notebooks
ccf5ebc11131f56305c484cfd4556f4bcf63c19b
[ "MIT" ]
34
2016-06-26T12:21:30.000Z
2021-04-06T09:19:49.000Z
Testing_the_lolviz_Python_module.ipynb
Jimmy-INL/notebooks
ccf5ebc11131f56305c484cfd4556f4bcf63c19b
[ "MIT" ]
44
2017-05-13T23:54:56.000Z
2021-07-17T15:34:24.000Z
70.09038
1,980
0.543823
[ [ [ "# Table of Contents\n <p><div class=\"lev1 toc-item\"><a href=\"#Testing-lolviz\" data-toc-modified-id=\"Testing-lolviz-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Testing <a href=\"https://github.com/parrt/lolviz\" target=\"_blank\">lolviz</a></a></div><div class=\"lev2 toc-item\"><a href=\"#Testing-naively\" data-toc-modified-id=\"Testing-naively-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Testing naively</a></div><div class=\"lev2 toc-item\"><a href=\"#Testing-from-within-a-Jupyter-notebook\" data-toc-modified-id=\"Testing-from-within-a-Jupyter-notebook-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Testing from within a Jupyter notebook</a></div><div class=\"lev3 toc-item\"><a href=\"#List\" data-toc-modified-id=\"List-121\"><span class=\"toc-item-num\">1.2.1&nbsp;&nbsp;</span>List</a></div><div class=\"lev3 toc-item\"><a href=\"#List-of-lists\" data-toc-modified-id=\"List-of-lists-122\"><span class=\"toc-item-num\">1.2.2&nbsp;&nbsp;</span>List of lists</a></div><div class=\"lev3 toc-item\"><a href=\"#List-of-lists-of-lists???\" data-toc-modified-id=\"List-of-lists-of-lists???-123\"><span class=\"toc-item-num\">1.2.3&nbsp;&nbsp;</span>List of lists of lists???</a></div><div class=\"lev3 toc-item\"><a href=\"#Tree\" data-toc-modified-id=\"Tree-124\"><span class=\"toc-item-num\">1.2.4&nbsp;&nbsp;</span>Tree</a></div><div class=\"lev3 toc-item\"><a href=\"#Objects\" data-toc-modified-id=\"Objects-125\"><span class=\"toc-item-num\">1.2.5&nbsp;&nbsp;</span>Objects</a></div><div class=\"lev3 toc-item\"><a href=\"#Calls\" data-toc-modified-id=\"Calls-126\"><span class=\"toc-item-num\">1.2.6&nbsp;&nbsp;</span>Calls</a></div><div class=\"lev3 toc-item\"><a href=\"#String\" data-toc-modified-id=\"String-127\"><span class=\"toc-item-num\">1.2.7&nbsp;&nbsp;</span>String</a></div><div class=\"lev2 toc-item\"><a href=\"#Conclusion\" data-toc-modified-id=\"Conclusion-13\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Conclusion</a></div>", "_____no_output_____" ], [ "# Testing [lolviz](https://github.com/parrt/lolviz)\n\nI liked how the [lolviz](https://github.com/parrt/lolviz) module looked like. 
Let's try it!", "_____no_output_____" ] ], [ [ "%load_ext watermark\n%watermark -v -m -p lolviz", "CPython 3.6.5\nIPython 6.4.0\n\nlolviz n\u0007\n\ncompiler : GCC 7.3.0\nsystem : Linux\nrelease : 4.15.0-23-generic\nmachine : x86_64\nprocessor : x86_64\nCPU cores : 4\ninterpreter: 64bit\n" ], [ "from lolviz import *", "_____no_output_____" ] ], [ [ "## Testing naively", "_____no_output_____" ] ], [ [ "data = ['hi', 'mom', {3, 4}, {\"parrt\": \"user\"}]\ng = listviz(data)\nprint(g.source) # if you want to see the graphviz source\ng.view() # render and show graphviz.files.Source object", "\n digraph G {\n nodesep=.05;\n node [penwidth=\"0.5\", width=.1,height=.1];\n node139621679300936 [shape=\"box\", space=\"0.0\", margin=\"0.01\", fontcolor=\"#444443\", fontname=\"Helvetica\", label=<<table BORDER=\"0\" CELLBORDER=\"0\" CELLSPACING=\"0\">\n<tr>\n<td cellspacing=\"0\" cellpadding=\"0\" bgcolor=\"#fefecd\" border=\"1\" sides=\"br\" valign=\"top\"><font color=\"#444443\" point-size=\"9\">0</font></td>\n<td cellspacing=\"0\" cellpadding=\"0\" bgcolor=\"#fefecd\" border=\"1\" sides=\"br\" valign=\"top\"><font color=\"#444443\" point-size=\"9\">1</font></td>\n<td cellspacing=\"0\" cellpadding=\"0\" bgcolor=\"#fefecd\" border=\"1\" sides=\"br\" valign=\"top\"><font color=\"#444443\" point-size=\"9\">2</font></td>\n<td cellspacing=\"0\" cellpadding=\"0\" bgcolor=\"#fefecd\" border=\"1\" sides=\"b\" valign=\"top\"><font color=\"#444443\" point-size=\"9\">3</font></td>\n</tr>\n<tr>\n<td port=\"0\" bgcolor=\"#fefecd\" border=\"1\" sides=\"r\" align=\"center\"><font point-size=\"11\">'hi'</font></td>\n<td port=\"1\" bgcolor=\"#fefecd\" border=\"1\" sides=\"r\" align=\"center\"><font point-size=\"11\">'mom'</font></td>\n<td port=\"2\" bgcolor=\"#fefecd\" border=\"1\" sides=\"r\" align=\"center\"><font point-size=\"11\">{3, 4}</font></td>\n<td port=\"3\" bgcolor=\"#fefecd\" border=\"0\" align=\"center\"><font point-size=\"11\">{'parrt': 'user'}</font></td>\n</tr></table>\n>];\n}\n\n" ] ], [ [ "It opened a window showing me this image:\n\n<img src=\"data/Testing_the_lolviz_Python_module_1.png\" width=55%>", "_____no_output_____" ], [ "## Testing from within a Jupyter notebook\n\nI test here all [the features of lolviz](https://github.com/parrt/lolviz#functionality) :", "_____no_output_____" ], [ "### List", "_____no_output_____" ] ], [ [ "squares = [ i**2 for i in range(10) ]", "_____no_output_____" ], [ "squares", "_____no_output_____" ], [ "listviz(squares)", "_____no_output_____" ] ], [ [ "### List of lists", "_____no_output_____" ] ], [ [ "n, m = 3, 4\nexample_matrix = [[0 if i != j else 1 for i in range(n)] for j in range(m)]", "_____no_output_____" ], [ "example_matrix", "_____no_output_____" ], [ "lolviz(example_matrix)", "_____no_output_____" ] ], [ [ "### List of lists of lists???", "_____no_output_____" ] ], [ [ "n, m, o = 2, 3, 4\nexample_3D_matrix = [[[\n 1 if i < j < k else 0\n for i in range(n)]\n for j in range(m)]\n for k in range(o)]", "_____no_output_____" ], [ "example_3D_matrix", "_____no_output_____" ], [ "lolviz(example_3D_matrix)", "_____no_output_____" ] ], [ [ "It works, even if it is not as pretty.", "_____no_output_____" ], [ "### Tree\nOnly for binary trees, apparently. 
Let's try with a dictionary that looks like a binary tree:", "_____no_output_____" ] ], [ [ "anakin = {\n \"name\": \"Anakin Skywalker\",\n \"son\": {\n \"name\": \"Luke Skywalker\",\n },\n \"daughter\": {\n \"name\": \"Leia Skywalker\",\n },\n}", "_____no_output_____" ], [ "from pprint import pprint\npprint(anakin)", "{'daughter': {'name': 'Leia Skywalker'},\n 'name': 'Anakin Skywalker',\n 'son': {'name': 'Luke Skywalker'}}\n" ], [ "treeviz(anakin, leftfield='son', rightfield='daugther')", "_____no_output_____" ] ], [ [ "It doesn't work out of the box for dictionaries, sadly.\n\nLet's check another example:", "_____no_output_____" ] ], [ [ "class Tree:\n def __init__(self, value, left=None, right=None):\n self.value = value\n self.left = left\n self.right = right\n \nroot = Tree('parrt',\n Tree('mary',\n Tree('jim',\n Tree('srinivasan'),\n Tree('april'))),\n Tree('xue',None,Tree('mike')))\n\ntreeviz(root)", "_____no_output_____" ] ], [ [ "### Objects", "_____no_output_____" ] ], [ [ "objviz(anakin)", "_____no_output_____" ], [ "objviz(anakin.values())", "_____no_output_____" ], [ "objviz(anakin.items())", "_____no_output_____" ] ], [ [ "For complex numbers for instance?", "_____no_output_____" ] ], [ [ "z = 1+4j", "_____no_output_____" ], [ "print(z)", "(1+4j)\n" ], [ "objviz(z)", "_____no_output_____" ] ], [ [ "OK, this fails.", "_____no_output_____" ], [ "### Calls", "_____no_output_____" ] ], [ [ "def factorial(n):\n if n < 0: return 0\n elif n == 0: return 1\n else: return n * factorial(n - 1)", "_____no_output_____" ], [ "for n in range(12):\n print(f\"{n}! = {factorial(n)}\")", "0! = 1\n1! = 1\n2! = 2\n3! = 6\n4! = 24\n5! = 120\n6! = 720\n7! = 5040\n8! = 40320\n9! = 362880\n10! = 3628800\n11! = 39916800\n" ] ], [ [ "And now with some visualization:", "_____no_output_____" ] ], [ [ "from IPython.display import display", "_____no_output_____" ], [ "def factorial2(n):\n display(callsviz(varnames=[\"n\"]))\n if n < 0: return 0\n elif n == 0: return 1\n else: return n * factorial2(n - 1)", "_____no_output_____" ], [ "n = 4\nprint(f\"{n}! = {factorial2(n)}\")", "_____no_output_____" ] ], [ [ "We really see the \"call stack\" as the system keeps track of the nested calls. I like that! 👌", "_____no_output_____" ], [ "### String", "_____no_output_____" ] ], [ [ "import string\nstring.hexdigits", "_____no_output_____" ], [ "strviz(string.hexdigits)", "_____no_output_____" ] ], [ [ "## Conclusion\nThat's it. See [this other example](https://github.com/parrt/lolviz/blob/master/examples.ipynb) for more.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e7a739e73bd79c8b5de552c9d21c051361083bd5
10,202
ipynb
Jupyter Notebook
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
e53b1f3415a81862d465e0443fc09fb35d14d1e0
[ "MIT" ]
42
2020-05-25T09:33:45.000Z
2022-03-29T03:41:19.000Z
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
e53b1f3415a81862d465e0443fc09fb35d14d1e0
[ "MIT" ]
133
2020-05-28T18:29:04.000Z
2022-03-31T22:21:42.000Z
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
e53b1f3415a81862d465e0443fc09fb35d14d1e0
[ "MIT" ]
17
2020-06-30T07:07:50.000Z
2022-03-17T15:45:27.000Z
25.441397
127
0.57822
[ [ [ "# Masks\n\nWith gdsfactory you can easily go from components, to sweeps, to Masks. \n\nLets start with a resistance sweep, where you change the resistance width to measure sheet resistance\n\n## Pack", "_____no_output_____" ] ], [ [ "import gdsfactory as gf\ngf.clear_cache()\n\nsweep = [gf.components.resistance_sheet(width=width) for width in [1, 10, 100]]\nm = gf.pack(sweep)\nm[0]", "_____no_output_____" ], [ "spiral_te = gf.routing.add_fiber_single(gf.functions.rotate(gf.components.spiral_inner_io_fiber_single, 90))\nspiral_te", "_____no_output_____" ], [ "# which is equivalent to\nspiral_te = gf.compose(gf.routing.add_fiber_single, gf.functions.rotate90, gf.components.spiral_inner_io_fiber_single)\nspiral_te(length=10e3)", "_____no_output_____" ], [ "import gdsfactory as gf\n\nspiral_te = gf.compose(gf.routing.add_fiber_single, gf.functions.rotate90, gf.components.spiral_inner_io_fiber_single)\nsweep = [spiral_te(length=length) for length in [10e3, 20e3, 30e3]]\nm = gf.pack(sweep)\nm[0]", "_____no_output_____" ] ], [ [ "You can also add a `prefix` to each text label. For example `S` for the spirals at the `north-center`\n\n`text_rectangular` is DRC clean and is anchored on `nc` (north-center)", "_____no_output_____" ] ], [ [ "text_metal3 = gf.partial(gf.components.text_rectangular_multi_layer, layers=(gf.LAYER.M3,))\n\nm = gf.pack(sweep, text=text_metal3, text_anchors=('nc',), text_prefix='s')\nm[0]", "_____no_output_____" ], [ "text_metal2 = gf.partial(gf.c.text, layer=gf.LAYER.M2)\n\nm = gf.pack(sweep, text=text_metal2, text_anchors=('nc',), text_prefix='s')\nm[0]", "_____no_output_____" ] ], [ [ "## Grid", "_____no_output_____" ] ], [ [ "g = gf.grid(sweep)\ng", "_____no_output_____" ], [ "gh = gf.grid(sweep, shape=(1, len(sweep)))\ngh", "_____no_output_____" ], [ "ghymin = gf.grid(sweep, shape=(1, len(sweep)), align_y='ymin')\nghymin", "_____no_output_____" ] ], [ [ "You can also add text labels to each element of the sweep", "_____no_output_____" ] ], [ [ "ghymin = gf.grid_with_text(sweep, shape=(1, len(sweep)), align_y='ymin', text=text_metal3)\nghymin", "_____no_output_____" ] ], [ [ "## Mask\n\nYou can easily define a mask using `grid` and `pack`", "_____no_output_____" ] ], [ [ "import gdsfactory as gf\n\ntext_metal3 = gf.partial(gf.c.text_rectangular_multi_layer, layers=(gf.LAYER.M3,))\ngrid = gf.partial(gf.grid_with_text, text=text_metal3)\npack = gf.partial(gf.pack, text=text_metal3)\n\ngratings_sweep = [gf.c.grating_coupler_elliptical(taper_angle=taper_angle) for taper_angle in [20, 30, 40]]\ngratings = grid(gratings_sweep, text=None)\ngratings", "_____no_output_____" ], [ "gratings_sweep = [gf.c.grating_coupler_elliptical(taper_angle=taper_angle) for taper_angle in [20, 30, 40]]\ngratings_loss_sweep = [gf.c.grating_coupler_loss_fiber_single(grating_coupler=grating) for grating in gratings_sweep]\ngratings = grid(gratings_loss_sweep, shape=(1, len(gratings_loss_sweep)), spacing = (40,0))\ngratings", "_____no_output_____" ], [ "sweep_resistance = [gf.components.resistance_sheet(width=width) for width in [1, 10, 100]]\nresistance = gf.pack(sweep_resistance)[0]\nresistance", "_____no_output_____" ], [ "spiral_te = gf.compose(gf.routing.add_fiber_single, gf.functions.rotate90, gf.components.spiral_inner_io_fiber_single)\nsweep_spirals = [spiral_te(length=length) for length in [10e3, 20e3, 30e3]]\nspirals = gf.pack(sweep_spirals)[0]\nspirals", "_____no_output_____" ], [ "mask = gf.pack([spirals, resistance, gratings])[0]\nmask", "_____no_output_____" ] ], [ [ "As you can 
see you can define your mask in a single line.\n\nFor more complex mask, you can also create a new cell to build up more complexity", "_____no_output_____" ] ], [ [ "@gf.cell\ndef mask():\n c = gf.Component()\n c << gf.pack([spirals, resistance, gratings])[0]\n c << gf.c.seal_ring(c)\n return c\n\nc = mask(cache=False)\nc", "_____no_output_____" ], [ "c.write_gds_with_metadata(gdsdir='extra')", "_____no_output_____" ], [ "gf.mask.write_labels(gdspath='extra/mask_d41d8cd9.gds', label_layer=(201, 0))", "_____no_output_____" ] ], [ [ "```\n\nCSV labels ------|\n |--> merge_test_metadata dict\n |\nYAML metatada ---\n\n```", "_____no_output_____" ] ], [ [ "test_metadata = gf.mask.merge_test_metadata(gdspath='extra/mask_d41d8cd9.gds')", "_____no_output_____" ], [ "test_metadata.spiral_inner_io_6dc6250a.full.length", "_____no_output_____" ], [ "spiral_names = [s for s in test_metadata.keys() if s.startswith('spiral')]\nspiral_names", "_____no_output_____" ], [ "spiral_lengths = [test_metadata[spiral_name].length for spiral_name in spiral_names]\nspiral_lengths", "_____no_output_____" ], [ "gc_names = [s for s in test_metadata.keys() if s.startswith('grating')]\ngc_names", "_____no_output_____" ], [ "gc_taper_angles = [test_metadata[name].full.taper_angle for name in gc_names]\ngc_taper_angles", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e7a742ac47136198709ffa1ff25332e3c0982f1f
32,297
ipynb
Jupyter Notebook
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
1d0b3c28fe7f6f93e00e332e74873e6d1ec29d0b
[ "MIT" ]
null
null
null
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
1d0b3c28fe7f6f93e00e332e74873e6d1ec29d0b
[ "MIT" ]
null
null
null
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
1d0b3c28fe7f6f93e00e332e74873e6d1ec29d0b
[ "MIT" ]
null
null
null
28.581416
415
0.569279
[ [ [ "# Introduction to Convolution Neural Nets\n========\n\n#### Version 0.1\n\nBy B Nord 2018 Nov 09\n\nThis notebook was developed within the [Google Collaboratory](https://colab.research.google.com/notebooks/welcome.ipynb#recent=true) framework. The original notebook can be run in a web browser, and is available [via Collaboratory](https://colab.research.google.com/drive/1wKzhJ0cOsJbgM9L0uIVUCYW1f2Zdf3PK#scrollTo=qwubzWGWWD6E). It has been recreated below, though we recommend you run the web-based version.", "_____no_output_____" ], [ "# Install packages on the back end", "_____no_output_____" ] ], [ [ "# install software on the backend, which is located at \n# Google's Super Secret Sky Server in an alternate universe.\n# The backend is called a 'hosted runtime' if it is on their server.\n# A local runtime would start a colab notebook on your machine locally. \n# Think of google colab as a Google Docs version of Jupyter Notebooks\n\n# remove display of install details\n%%capture --no-display \n\n# pip install\n!pip install numpy matplotlib scipy pandas scikit-learn astropy seaborn ipython jupyter #standard install for DSFP\n!pip install keras tensorflow # required for deep learning \n!pip install pycm", "_____no_output_____" ], [ "# standard-ish imports\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport time\nimport itertools\n\n# non-standard, but stable package for confusion matrices\nfrom pycm import ConfusionMatrix\n\n\n# neural network / machine learning packages\nfrom sklearn import metrics\nimport keras\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Flatten, Activation\nfrom keras.layers import Conv2D, MaxPooling2D, BatchNormalization\nfrom keras import backend as K", "Using TensorFlow backend.\n" ] ], [ [ "# Convolutional Neural Networks make the future now!\n\n**Learning Objectives**\n\n\n1. Gain familiarity with \n 1. Two standard convolutional neural network (CNN) architectures: \n 1. **Feed-forward CNN**\n 2. **Convolutional Autoencoder (CAE)**\n 2. One standard task performed with CNNs: **Binary Classification**\n 3. One new diagnostic of CNNs: **Feature maps from the first layer**\n2. Experience fundamental considerations, pitfalls, and strategies when training NNs\n 1. Data set preparation (never underestimate the time required for this)\n 2. CNN layer manipulation and architecture design\n 5. Model fitting (the learning process)\n 6. Effects of image quality\n3. Apply diagnostics from previous exercises\n4. Apply new diagnostics: look inside the networks with feature maps of the first layer\n5. Continue connecting NN functionality to data set structure and problem of interest\n\n\nSome of this notebook is very similar to the first one, but we're using a new architecture that has more moving pieces.\n\n\n\n*I'm still taking bets that we can start a paper with deep nets during the Saturday hack.*\n", "_____no_output_____" ], [ "## Activity 1: Classify Handwritten Digits with Convolutional Neural Networks (CNNs)\nIs it a \"zero\" [0] or a \"one\" [1]? 
(ooooh, the suspense; or maybe the suspense has dissipated by now.)\n\n\n\n\n", "_____no_output_____" ], [ "## Prepare the Data", "_____no_output_____" ], [ "### Download the data \n(ooh look it's all stored on Amazon's AWS!)\n(pssst, we're in the cloooud)", "_____no_output_____" ] ], [ [ "# import MNIST data\n(x_train_temp, y_train_temp), (x_test_temp, y_test_temp) = mnist.load_data()", "_____no_output_____" ] ], [ [ "### **Look** at the data\n(always do this so that you **know** what the structure is.)", "_____no_output_____" ] ], [ [ "# Print the shapes\nprint(\"Train Data Shape:\", x_train_temp.shape)\nprint(\"Test Data Shape:\", x_test_temp.shape)\nprint(\"Train Label Shape:\", y_train_temp.shape)\nprint(\"Test Label Shape:\", y_test_temp.shape)", "_____no_output_____" ] ], [ [ "\n**Do the shapes of 'data' and 'label' (for train and test, respectively) match? If they don't now, Keras/TF will kindly yell at you later.**\n\n", "_____no_output_____" ] ], [ [ "# Print an example\nprint(\"Example:\")\nprint(\"y_train[0] is the label for the 0th image, and it is a\", y_train_temp[0])\nprint(\"x_train[0] is the image data, and you kind of see the pattern in the array of numbers\")\nprint(x_train_temp[0])", "_____no_output_____" ] ], [ [ "**Can you see the pattern of the number in the array?** ", "_____no_output_____" ] ], [ [ "# Plot the data! \nf = plt.figure()\nf.add_subplot(1,2, 1)\nplt.imshow(x_train_temp[0])\nf.add_subplot(1,2, 2)\nplt.imshow(x_train_temp[1])\nplt.show(block=True)", "_____no_output_____" ] ], [ [ "### Prepare the data\n\nData often need to be re-shaped and normalized for ingestion into the neural network.", "_____no_output_____" ], [ "#### Normalize the data\n\nThe images are recast as float and normalized to one for the network.\n\n", "_____no_output_____" ] ], [ [ "print(\"Before:\", np.min(x_train_temp), np.max(x_train_temp))\nx_train = x_train_temp.astype('float32')\nx_test = x_test_temp.astype('float32')\nx_train /= 255\nx_test /= 255\ny_train = y_train_temp\ny_test = y_test_temp\nprint(\"After:\", np.min(x_train), np.max(x_train))", "_____no_output_____" ] ], [ [ "#### Reshape the data arrays: set the input shape to be ready for a convolution [NEW]\n\nWe're going to use a Dense Neural Architecture, not as images, so we need to make the input shape appropriate.", "_____no_output_____" ] ], [ [ "# read the dimensions from one example in the training set\nimg_rows, img_cols = x_train[0].shape[0], x_train[0].shape[1]\n\n# Different NN libraries (e.g., TF) use different ordering of dimensions\n# Here we set the \"input shape\" so that later the NN knows what shape to expect\nif K.image_data_format() == 'channels_first':\n x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)\n x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)\n input_shape = (1, img_rows, img_cols)\nelse:\n x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)\n x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) \n input_shape = (img_rows, img_cols, 1)", "_____no_output_____" ] ], [ [ "#### Apply *one-hot encoding* to the data\n\n\n1. Current encoding provides a literal label. For example, the label for \"3\" is *3*.\n2. One-hot encoding places a \"1\" in an array at the appropriate location for that datum. 
For example, the label \"3\" becomes *[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]*\n\nThis increases the efficiency of the matrix algebra during network training and evaluation.\n\n\n", "_____no_output_____" ] ], [ [ "# One-hot encoding\nnum_classes = 10\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)", "_____no_output_____" ] ], [ [ "## Design Neural Network Architecture!", "_____no_output_____" ], [ "### Select model format", "_____no_output_____" ] ], [ [ "model = Sequential()", "_____no_output_____" ] ], [ [ "### Add layers to the model sequentially [NEW]", "_____no_output_____" ] ], [ [ "model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))\nmodel.add(Dropout(0.25))\nmodel.add(Flatten())\nmodel.add(Dense(32, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(num_classes, activation='softmax'))\nmodel.summary()", "_____no_output_____" ] ], [ [ "*Things to think about and notice:*\n\n1. How does the \"output shape\" column change as you go through the network? How does this relate to pictures of CNNs you've seen (or might find on google images, for example)?\n2. What happens when you re-compile the [cell where you add layers sequentially](https://colab.research.google.com/drive/1wKzhJ0cOsJbgM9L0uIVUCYW1f2Zdf3PK#scrollTo=qXiW9aIx9_CM&line=3&uniqifier=1), without first compiling model-definition cell. Why does that happen?", "_____no_output_____" ], [ "### Compile the model\n\nSelect three key options\n1. **optimizer**: the method for optimizing the weights. \"Stochastic Gradient Descent (SGD)\" is the canonical method.\n2. **loss** function: the form of the function to encode the difference between the data's true label and the predict label.\n3. **metric**: the function by which the model is evaluated.", "_____no_output_____" ] ], [ [ "model.compile(optimizer=\"sgd\", loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ] ], [ [ "### Fit (read: Train) the model", "_____no_output_____" ] ], [ [ "# Training parameters\nbatch_size = 32 # number of images per epoch\nnum_epochs = 5 # number of epochs\nvalidation_split = 0.8 # fraction of the training set that is for validation only", "_____no_output_____" ], [ "# Train the model\nhistory = model.fit(x_train, y_train, \n batch_size=batch_size, \n epochs=num_epochs, \n validation_split=validation_split, \n verbose=True)", "_____no_output_____" ] ], [ [ "---\n*Things to think about and notice:*\n\n1. How fast is this training compared to the Dense/Fully Connected Networks? What could be a causing a difference between these two networks?\n2. Why is it taking a long time at the end of each epoch?\n\n", "_____no_output_____" ], [ "## Diagnostics!\n", "_____no_output_____" ], [ "#### Evaluate overall model efficacy\n\nEvaluate model on training and test data and compare. 
This provides summary values that are equivalent to the final value in the accuracy/loss history plots.", "_____no_output_____" ] ], [ [ "loss_train, acc_train = model.evaluate(x_train, y_train, verbose=False)\nloss_test, acc_test = model.evaluate(x_test, y_test, verbose=False)\nprint(f'Train acc/loss: {acc_train:.3}, {loss_train:.3}')\nprint(f'Test acc/loss: {acc_test:.3}, {loss_test:.3}')", "_____no_output_____" ] ], [ [ "#### Predict train and test data", "_____no_output_____" ] ], [ [ "y_pred_train = model.predict(x_train, verbose=True)\ny_pred_test = model.predict(x_test,verbose=True)", "_____no_output_____" ] ], [ [ "#### Plot accuracy and loss as a function of epochs (equivalently training time)\n", "_____no_output_____" ] ], [ [ "# set up figure\nf = plt.figure(figsize=(12,5))\nf.add_subplot(1,2, 1)\n\n# plot accuracy as a function of epoch\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\n# plot loss as a function of epoch\nf.add_subplot(1,2, 2)\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\nplt.show(block=True)", "_____no_output_____" ] ], [ [ "---\n*Things to think about and notice:*\n\n1. How do these curve shapes compare to the initial dense network results?\n\n\n", "_____no_output_____" ], [ "#### Confusion Matrix", "_____no_output_____" ] ], [ [ "# Function: Convert from categorical back to numerical value\ndef convert_to_index(array_categorical):\n array_index = [np.argmax(array_temp) for array_temp in array_categorical]\n return array_index\n\ndef plot_confusion_matrix(cm,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function modified to plots the ConfusionMatrix object.\n Normalization can be applied by setting `normalize=True`.\n \n Code Reference : \n http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html\n \n This script is derived from PyCM repository: https://github.com/sepandhaghighi/pycm\n \n \"\"\"\n\n plt_cm = []\n for i in cm.classes :\n row=[]\n for j in cm.classes:\n row.append(cm.table[i][j])\n plt_cm.append(row)\n plt_cm = np.array(plt_cm)\n if normalize:\n plt_cm = plt_cm.astype('float') / plt_cm.sum(axis=1)[:, np.newaxis] \n plt.imshow(plt_cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(cm.classes))\n plt.xticks(tick_marks, cm.classes, rotation=45)\n plt.yticks(tick_marks, cm.classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = plt_cm.max() / 2.\n for i, j in itertools.product(range(plt_cm.shape[0]), range(plt_cm.shape[1])):\n plt.text(j, i, format(plt_cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if plt_cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('Actual')\n plt.xlabel('Predict')", "_____no_output_____" ], [ "# apply conversion function to data\ny_test_ind = convert_to_index(y_test)\ny_pred_test_ind = convert_to_index(y_pred_test)\n\n# compute confusion matrix\ncm_test = ConfusionMatrix(y_test_ind, y_pred_test_ind)\nnp.set_printoptions(precision=2)\n\n# plot confusion matrix result\nplt.figure()\nplot_confusion_matrix(cm_test,title='cm')", "_____no_output_____" ] ], [ [ "---\n*Things to think about and notice:*\n\n1. 
How does this confusion matrix compare to that from the Dense network?", "_____no_output_____" ], [ "## Problems for the CNNs (I mean ones that Wolf Blitzer can't solve)", "_____no_output_____" ], [ "---\n### Problem 1: There are a lot of moving parts here. A lot of in's and out's\n(bonus points if you know the 2000's movie, from which this is a near-quote.)\n\nSo, let's reduce the data set size at the beginning of the notebook.\n\nFor the rest of the exercises, we'd like to have the flexibility to experiment with larger networks (MOAR PARAMETERS, MOAR), so let's reduce the data set size. \n\n1. Go to the [cell where we download the data](https://colab.research.google.com/drive/1wKzhJ0cOsJbgM9L0uIVUCYW1f2Zdf3PK#scrollTo=qwXuui6_yYBv&line=2&uniqifier=1), and add a cell after it. \n2. Use array indexing and slicing to create a smaller training set. How about 5000? \n3. When we then train the model, we'll want to update that validation fraction so that we get about 3000 in our training set.", "_____no_output_____" ], [ "---\n### Problem 2: Keeeep Learning! \n\n\nWhat happens if you run the cell that does the model-fitting again, right after doing it the first time? What do you notice about the loss and accuracy, as compared to when you did the fitting the first time?\n\nWhy do you think this is happening? \n", "_____no_output_____" ], [ "---\n### Problem 3: What happens if you add a maxpooling layer? \n\nDoes this change the training speed? \nWhy might this be? \nCheck the model summary output to see what effect the pooling layer has.", "_____no_output_____" ], [ "---\n### Problem 4: How deep can you make the network? \n\n1. Make a deep network and see how many parameters you can make. Is it trainable in a reasonable amount of time? Try adding Conv layers, but not pooling layers.\n2. What if you want it to be efficient? Try adding a Max Pooling Layer after every Conv layer. How many layers can you possibly add now? Compile the model until you have an output shape of ( None, 1, 1 , #PARAMS) before the first dense layer.", "_____no_output_____" ], [ "---\n### Problem 5: Comparing performance and efficiency between CNNs and Dense Networks\n\nExperiment with the neural network above, and reduce the number of parameters to near that of the Dense network in the first exercise. \n\nIs there a CNN architecture that has the same number of parameters as the Dense network, but can perform better?\n\nRemember to think deeply, to pool your resources. When you're nearing the end it may not be as dense as it looks, but nearly so. \n", "_____no_output_____" ], [ "---\n### Problem 6: What happens to the training result when you degrade the images?\n\nIn this part, we will degrade the images by adding noise, and then by blurring the images, and we'll look at how the network training responds. \n\n", "_____no_output_____" ], [ "---\n### Problem 7: Let's see if we can look inside the neural networks\n\nUsing the [FAQ from Keras](https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer) or any other online resource, like examples from Github, can we make a plot of the feature maps for any of the layers, so we can see what the neural net sees?\n\n", "_____no_output_____" ], [ "---\n### Problem 8: Let's progress to Regression.\n\nConsider the labels as real values and modify the network to perform regression instead of classification on those values. 
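\nOne minimal sketch of the kind of change involved (just one possible approach, reusing only layers already imported above; the `reg_model` name is purely illustrative) is to keep a convolutional base but end in a single linear output trained with a regression loss, fitting against the raw integer labels (e.g. `y_train_temp`) rather than the one-hot ones:\n\n```python\n# sketch only: a regression head in place of the softmax classifier\nreg_model = Sequential()\nreg_model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))\nreg_model.add(Flatten())\nreg_model.add(Dense(32, activation='relu'))\nreg_model.add(Dense(1))  # single real-valued output, no softmax\nreg_model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['mae'])\n```\n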
You may want to consider the following:\n* normalizing the labels.\n* normalizing the image data.\n* modifying the activations that are used.\n* modifying the loss function that is appropriate for real-valued prediction. (see [keras loss](https://keras.io/losses/) )", "_____no_output_____" ], [ "# Activity 2: Compress Handwritten Digits with a Convolutional Autoencoder (CAE)\n", "_____no_output_____" ], [ "#### Add layers to the model sequentially [NEW]", "_____no_output_____" ] ], [ [ "autoencoder = Sequential()\n\n# Encoder Layers\nautoencoder.add(Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:]))\nautoencoder.add(MaxPooling2D((2, 2), padding='same'))\nautoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))\nautoencoder.add(MaxPooling2D((2, 2), padding='same'))\nautoencoder.add(Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same'))\nautoencoder.add(MaxPooling2D((2, 2), padding='same'))\nautoencoder.add(Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same'))\nautoencoder.add(MaxPooling2D((2, 2), padding='same'))\n\n# Flatten encoding for visualization\nautoencoder.add(Flatten())\nautoencoder.add(Reshape((1, 1, 8)))\n\n# Decoder Layers\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(16, (3, 3), activation='relu'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same'))\n\nautoencoder.summary()", "_____no_output_____" ] ], [ [ "#### Create a separate model that is just the encoder\n\nThis will allow us to encode the images and look at what the encoding results in. 
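\nNote: the `Model` class used in the next cell, and the `Reshape`/`UpSampling2D` layers used in the autoencoder above, are not in the import cell at the top of this notebook, so a fresh run will likely need something like:\n\n```python\n# assumed missing imports for the autoencoder section\nfrom keras.models import Model\nfrom keras.layers import Reshape, UpSampling2D\n```\n\nAlso, `'flatten_8'` below is just the name Keras auto-assigned in the original session; a new session may assign a different name to the flatten layer, so check `autoencoder.summary()` for the actual one.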
", "_____no_output_____" ] ], [ [ "encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer('flatten_8').output)\nencoder.summary()", "_____no_output_____" ] ], [ [ "#### Compile the autencoder", "_____no_output_____" ] ], [ [ "autoencoder.compile(optimizer='adam', loss='binary_crossentropy')\nnum_epochs = 10", "_____no_output_____" ] ], [ [ "#### Plot the input, output, and encoded images", "_____no_output_____" ] ], [ [ "# set number of images to visualize\nnum_images = 10\n\n# select random subsect to visualize\nnp.random.seed(42)\nrandom_test_images = np.random.randint(x_test.shape[0], size=num_images)\n\n# encode images\nencoded_imgs = encoder.predict(x_test)\n\n#decode encode AND decode images\ndecoded_imgs = autoencoder.predict(x_test)\n\n\n# plot figure\nplt.figure(figsize=(18, 4))\n\nnum_rows=4\nnum_pixel_x = 2\nnum_pixel_y = 4\n\nfor i, image_idx in enumerate(random_test_images):\n # plot original image\n ax = plt.subplot(4, num_images, i + 1)\n plt.imshow(x_test[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n \n # plot encoded image\n ax = plt.subplot(num_rows, num_images, num_images + i + 1)\n\n plt.imshow(encoded_imgs[image_idx].reshape(num_pixel_x, num_pixel_y), interpolation=None, resample=None)\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n \n # plot reconstructed image\n ax = plt.subplot(num_rows, num_images, 2*num_images + i + 1)\n plt.imshow(decoded_imgs[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7a74d202c416c712f092e5581c873c5123aa9ab
12,287
ipynb
Jupyter Notebook
Learning/Hartree-Fock/Hartree_Fock.ipynb
GaryZ700/CatLab_CompChem
4a3dee5a3778eb8b69b53b9d0e9ed6321cadedea
[ "MIT" ]
1
2021-11-13T12:21:52.000Z
2021-11-13T12:21:52.000Z
Learning/Hartree-Fock/Hartree_Fock.ipynb
GaryZ700/CatLab_CompChem
4a3dee5a3778eb8b69b53b9d0e9ed6321cadedea
[ "MIT" ]
null
null
null
Learning/Hartree-Fock/Hartree_Fock.ipynb
GaryZ700/CatLab_CompChem
4a3dee5a3778eb8b69b53b9d0e9ed6321cadedea
[ "MIT" ]
null
null
null
29.607229
314
0.534956
[ [ [ "## Main Hartree Code\n<br>\nHartree-Fock Computational Chemistry Method implemented in Python as described in <u>Modern Quantum Chemistry Introduction to Advanced Electronic Structure Theory</u>, by Attila Szabo and Neil S. Ostlund. <br> <br>\nThroughout the rest of the modules in this notebook, the entire text of Modern Quantum Chemistry will simply be refered to as \"Szabo\" for the sake of brevity. \n<br> <br>\nProgram is limited to molecules consisting of hydrogen and helium with only even numbers of electrons. Required user input is the atomic number, 3D space location, and electron number for each atom. A basis set specification is also required. <br>\n<br>\nThe program makes use of Hartree Atomic Units such that the distances and lengths are described in Bohr Radius, energy is in Hartrees and mass is in Hartree Atomic Units, such that the mass of a proton is equal to 1836 atomic units. More information on Hartree Atomic Units can be found on page 41 of Szabo. ", "_____no_output_____" ] ], [ [ "#Python Implementation of the Hartree Fock Method\n#Procedures listed in the code follow as described in Modern Quantum Chemistry: \n#Introduction to Advanced Electronic Structure Theory, By Attila Szabo and Neil S. Ostlund\nimport sys\nsys.path.append(\"..\\\\Comp_Chem_Package\")\nimport numpy as np\nfrom molecule import atom\nfrom vector import vector\nfrom molecule import gaussian\nfrom molecule import molecule\nfrom notebookImporter import importNotebook\n\n#import integrals notebook for the hartree method\nintegrals = importNotebook(\"Hartree_Integrals\")\nscf = importNotebook(\"Hartree_SCF\")\n\n#define SCF convergence critera, and max number of iteration cycles\nSCF_CONVERGENCE = pow(10, -15)\nMAX_ITERATIONS = 500\n\n#Step 1\n#Specify Molecules, Nuclear Coordinates, and Charge of the nucli Number of Electrons,\n\n#generate an h2 atom with a distance of 1.4 AU to compare with Szabo pg. 160\n\n#R is in units of Bohr Radius \nR = 1.4\nsystem = molecule()\nsystem.addAtom(atom(vector(1,1,1), 1, 1))\nsystem.addAtom(atom(vector(1,1,1 + R), 1, 1))\n\n#add a basis set\nsystem.addBasis(\"STO-3G\")\n\nsystem.display()", "Molecule\nBasis Set: STO-3G\nTotal Number of Electrons: 2\nAtoms: \n Atomic Number: 1, Electrons: 1, Coordinate: X: 1 Y: 1 Z: 1\n Atomic Number: 1, Electrons: 1, Coordinate: X: 1 Y: 1 Z: 2.4\n" ], [ "#Step 2\n#Calculate Integrals\n#Overlap, KE, Nuclear Attraaction, and Electron Repulsion\nS = integrals.overlap(system)\nprint(\"Overlap Matrix: \")\nprint(np.matrix(S))\nprint()\n\nT = integrals.kineticEnergy(system)\nprint(\"Electron Kinetic Energy Matrix: \")\nprint(np.matrix(T))\nprint()\n\nV = integrals.nuclearAttraction(system)\nfor index, atom in enumerate(V):\n print(\"Nucli \" + str(index) + \"-Electron Attraction Matrix: \")\n print(np.matrix(atom))\nprint()\n\nelectronRepulsion = integrals.electronElectronRepulsion(system)\nprint(\"Electron Repulsion Tensor: \")\nprint(np.array(electronRepulsion))\nprint()\n\n#Form the electronic hamiltonian\nH = np.matrix(T)\n\n#add in all of the nuclear attractions matricies to the hamiltonian\nfor atom in V:\n H += np.matrix(atom)\n \nprint(\"Electronic Hamiltonian :\")\nprint(H)\nprint()", "Overlap Matrix: \n[[1. 0.65931821]\n [0.65931821 1. 
]]\n\nElectron Kinetic Energy Matrix: \n[[0.76003188 0.23645466]\n [0.23645466 0.76003188]]\n\nNucli 0-Electron Attraction Matrix: \n[[-1.22661373 -0.59741731]\n [-0.59741731 -0.65382716]]\nNucli 1-Electron Attraction Matrix: \n[[-0.65382716 -0.59741731]\n [-0.59741731 -1.22661373]]\n\nElectron Repulsion Tensor: \n[[[[0.77460594 0.44410766]\n [0.44410766 0.56967593]]\n\n [[0.44410766 0.29702854]\n [0.29702854 0.44410766]]]\n\n\n [[[0.44410766 0.29702854]\n [0.29702854 0.44410766]]\n\n [[0.56967593 0.44410766]\n [0.44410766 0.77460594]]]]\n\nElectronic Hamiltonian :\n[[-1.12040901 -0.95837996]\n [-0.95837996 -1.12040901]]\n\n" ], [ "#Prepare for the SCF procedure\n\n#get size of the basis set \nsize = len(S)\n\n#compute the Transformation Matrix\nX = scf.X(S, size)\n\n#get guess Fock matrix, assume 2-electron term is equal to 0\nF = H\n\nprint(F)", "[[-1.12040901 -0.95837996]\n [-0.95837996 -1.12040901]]\n" ], [ "# SCF Procedure \n\n#init list to store the energy from each iteration\n#as well as a boolean to signify whether the loop has converged\nE = []\nconverged = False\n\nwhile( not converged ):\n \n #diagnolze the Fock matrix and convert it to MO basis \n F = X.transpose() * F * X \n print(\"F**\",F)\n \n #diagnolize the Fock Matrix to obtain the MOs and the their respective energies\n MOEnergy, MO = np.linalg.eigh(F)\n \n #Transform the MO basis MOs to an AO basis\n C = X * MO\n print(\"C\", C)\n #compute the electron density, the two electron term, and then use G to compute the new Fock matrix\n P = scf.densityMatrix(C, system.N, size)\n G = scf.G(electronRepulsion, P, size)\n F = H + G\n \n #compute the new expectation energy\n #Expectation Energy is in units of Hartrees\n E.append(scf.expectationEnergy(H, F, P))\n \n #check if at least two SCF iterations have occured\n #if more than two have occured, then check if the difference betweeen this E, \n #and the previous E is less then the covergence value, if yes, end the SCF loop\n #if energy has not converged, check whether the max number of iterations have occured so far\n sizeE = len(E)\n if(len(E) > 2):\n if(abs(E[sizeE-2] - E[sizeE-1]) < SCF_CONVERGENCE):\n converged = True\n elif(sizeE > MAX_ITERATIONS):\n print(\"SCF Failed to Converge\")\n break\n \n #compute total energy of the system including nuclear-nuclear repulsion\n totalE = E[sizeE-1] + scf.nuclearRepulsion(system)\n \n #display information about current SCF iteration to the user\n print(\"SCF Iteration #\" + str(sizeE) + \", Electronic Energy: \" + str(E[sizeE-1]) + \" Hartrees, Total Energy: \" + str(totalE) + \" Hartrees\")\n print(\"F\", F)\n print()\nprint(\"-\"*50)\nprint()\nprint(\"Final SCF Energy: \" + str(E[sizeE-1]))", "F** [[-4.75602306e-01 -1.08869060e-16]\n [-2.35585295e-16 -1.25279706e+00]]\nC [[ 0.54893404 1.21146407]\n [ 0.54893404 -1.21146407]]\nSCF Iteration #1, Electronic Energy: -1.8310000394614832 Hartrees, Total Energy: -1.116714325175769 Hartrees\nF [[-0.36553735 -0.59388537]\n [-0.59388537 -0.36553735]]\n\nF** [[ 6.70267761e-01 -2.12833504e-16]\n [-2.35379740e-16 -5.78202977e-01]]\nC [[-0.54893404 1.21146407]\n [-0.54893404 -1.21146407]]\nSCF Iteration #2, Electronic Energy: -1.8310000394614834 Hartrees, Total Energy: -1.116714325175769 Hartrees\nF [[-0.36553735 -0.59388537]\n [-0.59388537 -0.36553735]]\n\nF** [[ 6.70267761e-01 -1.51889583e-16]\n [-3.69879271e-16 -5.78202977e-01]]\nC [[-0.54893404 1.21146407]\n [-0.54893404 -1.21146407]]\nSCF Iteration #3, Electronic Energy: -1.8310000394614832 Hartrees, Total Energy: -1.116714325175769 
Hartrees\nF [[-0.36553735 -0.59388537]\n [-0.59388537 -0.36553735]]\n\n--------------------------------------------------\n\nFinal SCF Energy: -1.8310000394614832\n" ], [ "X.transpose() * H * X", "_____no_output_____" ], [ "X.transpose() * H", "_____no_output_____" ], [ "X[0,1]", "_____no_output_____" ], [ "np.matmul(X, H)", "_____no_output_____" ], [ "X * H", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a786e963db3e979b1a8ba27d43068d2ddedbd9
35,165
ipynb
Jupyter Notebook
Deep_Learning/Spam_detection.ipynb
Ironspine/zoli
8e149b3458741343ea20dd9c6023dbe61d8abf14
[ "Apache-2.0" ]
6
2020-06-21T09:08:55.000Z
2021-07-28T14:54:30.000Z
Deep_Learning/Spam_detection.ipynb
Ironspine/zoli
8e149b3458741343ea20dd9c6023dbe61d8abf14
[ "Apache-2.0" ]
null
null
null
Deep_Learning/Spam_detection.ipynb
Ironspine/zoli
8e149b3458741343ea20dd9c6023dbe61d8abf14
[ "Apache-2.0" ]
null
null
null
24.369369
166
0.361951
[ [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "dataset = pd.read_csv('spam.csv')", "_____no_output_____" ], [ "result = list(map(lambda x: 'no_spam' if x == 'ham' else 'spam', dataset['type']))\ndataset['Type'] = result\ndataset = dataset.drop('type', axis = 1)", "_____no_output_____" ], [ "dataset", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(dataset['text'], dataset['Type'], test_size=0.25, random_state=0)", "_____no_output_____" ], [ "# Both vectorizer method can be used\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nvectorizer = TfidfVectorizer()\nX_train = vectorizer.fit_transform(X_train)\nX_test = vectorizer.transform(X_test)\n\n'''\nfrom sklearn.feature_extraction.text import CountVectorizer\n\nvectorizer = CountVectorizer()\nX_train = vectorizer.fit_transform(X_train)\nX_test = vectorizer.transform(X_test)\n'''", "_____no_output_____" ], [ "vectorizer.get_feature_names()", "_____no_output_____" ], [ "X_train.toarray()", "_____no_output_____" ], [ "from sklearn.naive_bayes import MultinomialNB\n\nmodel = MultinomialNB()\nmodel.fit(X_train, y_train)\nprediction = model.predict(X_test)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix, classification_report\n\nprint(classification_report(y_test, prediction))\nprint(confusion_matrix(y_test, prediction))", " precision recall f1-score support\n\n no_spam 0.96 1.00 0.98 1221\n spam 1.00 0.71 0.83 169\n\n accuracy 0.96 1390\n macro avg 0.98 0.86 0.91 1390\nweighted avg 0.97 0.96 0.96 1390\n\n[[1221 0]\n [ 49 120]]\n" ], [ "train_data = pd.read_csv('train_fake_news.csv', index_col = 'id')", "_____no_output_____" ], [ "train_data = train_data.rename(columns = {'title': 'Title', 'author': 'Author',\n 'text': 'Text', 'label': 'Label'})\nidx = train_data.index\ntrain_data.index = idx.rename('ID')\n\nlenght = [len(str(train_data.iloc[i]['Text'])) for i in range(len(train_data))]\ntrain_data['Lenght'] = lenght\ntrain_data = train_data[train_data['Lenght'] > 50]", "_____no_output_____" ], [ "train_data = train_data.drop(['Title', 'Author'], axis = 1)\ntrain_data = train_data.dropna(axis = 0)", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\n\nX_train_data, X_test_data, y_train_data, y_test_data = train_test_split(train_data['Text'][:10000], train_data['Label'][:10000], test_size=0.20, random_state=0)", "_____no_output_____" ], [ "vectorizer = TfidfVectorizer()\nX_train_data = vectorizer.fit_transform(X_train_data)\nX_test_data = vectorizer.transform(X_test_data)", "_____no_output_____" ], [ "len(X_test_data.toarray())", "_____no_output_____" ], [ "from sklearn.naive_bayes import MultinomialNB\n\nmodel = MultinomialNB()\nmodel.fit(X_train_data, y_train_data)\nprediction_data = model.predict(X_test_data)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix, classification_report\n\nprint(classification_report(y_test_data, prediction_data))\nprint(confusion_matrix(y_test_data, prediction_data))", " precision recall f1-score support\n\n 0 0.73 0.99 0.84 1003\n 1 0.99 0.63 0.77 997\n\n accuracy 0.81 2000\n macro avg 0.86 0.81 0.80 2000\nweighted avg 0.86 0.81 0.80 2000\n\n[[996 7]\n [373 624]]\n" ], [ "prediction_data", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a78792f3c156f158071d75c24a9410e048611c
6,372
ipynb
Jupyter Notebook
guides/tracker.ipynb
vishalbelsare/labml
97fac318b8e260b80295efe54a6f9fe0e4d2f958
[ "MIT" ]
463
2021-05-28T03:21:14.000Z
2022-03-28T06:28:21.000Z
guides/tracker.ipynb
vishalbelsare/labml
97fac318b8e260b80295efe54a6f9fe0e4d2f958
[ "MIT" ]
15
2021-06-22T10:02:36.000Z
2021-12-20T06:14:12.000Z
guides/tracker.ipynb
vishalbelsare/labml
97fac318b8e260b80295efe54a6f9fe0e4d2f958
[ "MIT" ]
29
2020-06-03T07:13:31.000Z
2021-05-23T18:20:34.000Z
27.114894
212
0.543001
[ [ [ "# Tracker\n\n[![Github](https://img.shields.io/github/stars/lab-ml/labml?style=social)](https://github.com/lab-ml/labml)\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lab-ml/labml/blob/master/guides/tracker.ipynb)\n[![Docs](https://img.shields.io/badge/labml-docs-blue)](https://docs.labml.ai/api/tracker.html)\n\n\nHere you specify indicators and the logger stores them temporarily and write in batches.\nIt can aggregate and write them as means or histograms.", "_____no_output_____" ] ], [ [ "%%capture\n!pip install labml", "_____no_output_____" ], [ "import time\n\nimport numpy as np\n\nfrom labml import tracker\n\n# dummy train function\ndef train():\n return np.random.randint(100)\n\n# Reset global step because we incremented in previous loop\ntracker.set_global_step(0)", "_____no_output_____" ] ], [ [ "This stores all the loss values and writes the logs the mean on every tenth iteration.\nConsole output line is replaced until\n[`labml.tracker.new_line`](https://docs.labml.ai/api/tracker.html#labml.tracker.new_line)\nis called.", "_____no_output_____" ] ], [ [ "for i in range(1, 401):\n tracker.add_global_step()\n loss = train()\n tracker.add(loss=loss)\n if i % 10 == 0:\n tracker.save()\n if i % 100 == 0:\n tracker.new_line()\n time.sleep(0.02)", "_____no_output_____" ] ], [ [ "## Indicator settings", "_____no_output_____" ] ], [ [ "# dummy train function\ndef train2(idx):\n return idx, 10, np.random.randint(100)\n\n# Reset global step because we incremented in previous loop\ntracker.set_global_step(0)", "_____no_output_____" ] ], [ [ "Histogram indicators will log a histogram of data.\nQueue will store data in a `deque` of size `queue_size`, and log histograms.\nBoth of these will log the means too. And if `is_print` is `True` it will print the mean.", "_____no_output_____" ], [ "queue size of `10` and the values are printed to the console", "_____no_output_____" ] ], [ [ "tracker.set_queue('reward', 10, True)", "_____no_output_____" ] ], [ [ "By default values are not printed to console; i.e. `is_print` defaults to `False`.", "_____no_output_____" ] ], [ [ "tracker.set_scalar('policy')", "_____no_output_____" ] ], [ [ "Settings `is_print` to `True` will print the mean value of histogram to console", "_____no_output_____" ] ], [ [ "tracker.set_histogram('value', True)", "_____no_output_____" ], [ "for i in range(1, 400):\n tracker.add_global_step()\n reward, policy, value = train2(i)\n tracker.add(reward=reward, policy=policy, value=value, loss=1.)\n if i % 10 == 0:\n tracker.save()\n if i % 100 == 0:\n tracker.new_line()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a7993795fa3f8c64edb8b98712db607cfa503e
271,693
ipynb
Jupyter Notebook
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
5fdfa5bf8f7f99d71706bed1030b8cf02de48924
[ "MIT" ]
null
null
null
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
5fdfa5bf8f7f99d71706bed1030b8cf02de48924
[ "MIT" ]
null
null
null
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
5fdfa5bf8f7f99d71706bed1030b8cf02de48924
[ "MIT" ]
1
2021-05-27T06:10:16.000Z
2021-05-27T06:10:16.000Z
211.59891
70,596
0.90592
[ [ [ "<a href=\"https://colab.research.google.com/github/DiploDatos/AnalisisyVisualizacion/blob/master/01_Probabilidad.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "\n**Diplomatura en Ciencia de Datos, Aprendizaje Automático y sus Aplicaciones**\n\n**Edición 2021**\n\n---\n\n# Variables Aleatorias y Probabilidad\n\nEn esta notebook, vamos a realizar una primera aproximación al conjunto de datos. \n\n* Variables aleatorias y sus distintos tipos\n* Probabilidad", "_____no_output_____" ] ], [ [ "import io\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy\nimport pandas as pd\nimport seaborn\n\nseaborn.set_context('talk')", "_____no_output_____" ] ], [ [ "## Lectura del dataset\n\nEn la notebook 00 se explican los detalles de la siguiente sección.", "_____no_output_____" ] ], [ [ "url = 'https://cs.famaf.unc.edu.ar/~mteruel/datasets/diplodatos/sysarmy_survey_2020_processed.csv'\ndf = pd.read_csv(url)", "_____no_output_____" ], [ "df[:3]", "_____no_output_____" ] ], [ [ "# Análisis de salarios\n\nLa primera pregunta que se nos ocurre al ver esta encuenta es: **\"¿Y cuánto cobran los programadores en Argentina?\"**.\n\nEste es un punto de partida para el análisis del conjunto de datos. El proceso total constará de varias iteraciones: a medida que se obtengan conclusiones, se descrubrirán otros aspectos relevantes de los datos, lo cual disparará nuevas preguntas.", "_____no_output_____" ], [ "Para conocer más sobre la distribución de los salarios, es necesario elegir una columna de la encuesta para analizar.", "_____no_output_____" ] ], [ [ "salary_col = 'salary_monthly_NETO'", "_____no_output_____" ] ], [ [ "Una buena forma de comenzar una exploración es a través de la visualización. Seaborn nos provee un tipo de gráfico específico para graficar columnas que contienen números, llamado `displot`. (No confundir con `distplot`, que está deprecado). \n\nEl gráfico generado es un **histograma** de frecuencias. En el eje x se grafican los valores que toma la columna, divididos en intervalos o bins. En el eje y se grafica el conteo de ocurrencias de valores en cada intervalo.", "_____no_output_____" ] ], [ [ "seaborn.displot(df[salary_col], aspect=2)\n## para evitar la notación científica en las etiquetas\nplt.ticklabel_format(style='plain', axis='x')", "_____no_output_____" ] ], [ [ "### ¿Qué estamos viendo?\n\nLas visualizaciones simples son prácticas para conocer la forma de los datos rápidamente, porque condensan mucha información. Por ejemplo:\n* El rango de valores tomados por la columna va desde 0 hasta aproximadamente 2M.\n* La mayoría de ls valores se condensa por debajo de los 250K, y pocos superan los 500K.\n* Los valores más frencuentes aparentan estar cerca de los 100K.\n* Hay un pico de ocurrencias en el valor 0.\n y brindan poco detalle.\n\n## Ejercicio: ¿Qué otro tipo de preguntas nos podemos hacer en este punto que no se responden con un histograma?\n", "_____no_output_____" ], [ "### Análisis, ¡fundamentado!\n\nPara continuar el análisis, es necesario aplicar herramientas teóricas que nos brinda la Estadística y la Probabilidad.", "_____no_output_____" ], [ "## Variables aleatorias y sus tipos\n\nEn base a la definición de variable aleatoria que discutimos, se puede hablar de que cada columna de nuestro dataset es un **variable aleatoria**, y que su valor en cada respuesta es una **realización** de dicha variable. 
Pero, ¿qué tipo tienen esas variables?", "_____no_output_____" ], [ "### V.A. numéricas\n\nEl salario, la edad, los años de experiencia, son variables aleatorias cuyo rango es un conjunto numérico. Podemos clasificarlas en **continuas** o **discretas**, aunque esa distinción se vuelve difusa cuando trabajamos con datos computacionalmente. ¿Por qué? \n\n* Datos que en teoría son continuos, se miden de manera discreta. Por ejemplo, los *años* de experiencia, la altura de una persona en *centímetros*.\n* Datos que en teoría son continuos, se discretizan a fines prácticos. Por ejemplo, la edad, el salario en pesos argentinos.", "_____no_output_____" ], [ "Para analizar datos continuos se usan frecuentemente los histogramas, como en el caso anterior de los sueldos.\n\n**¡Tip!** Antes de graficar, controlar el rango (ya que seaborn intentará crear miles de segmentos si el rango es muy grande) y remover los valores nulos.", "_____no_output_____" ] ], [ [ "# Obtenemos el rango de valores observados de la variable\ndf.profile_age.min(), df.profile_age.max()", "_____no_output_____" ], [ "seaborn.displot(df.profile_age[df.profile_age < 100].dropna(),\n stat='count', aspect=4)", "_____no_output_____" ] ], [ [ "Sin embargo, los histogramas pueden ocultar información. ¿Por qué? Porque agrupan rangos de valores en intervalos inferidos automáticamente. Como resultado, la visualización varía de con distintas longitudes de segmentos. Comparemos los siguientes histogramas.", "_____no_output_____" ] ], [ [ "# Un ejemplo más avanzado\nfig, ax = plt.subplots(nrows=2, ncols=3, figsize=(15,10), sharey='row')\nseaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[0,0],\n stat='count')\nseaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[0,1],\n bins=20, stat='count')\nseaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[0,2],\n bins=5, stat='count')\n\nseaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[1,0],\n stat='frequency')\nseaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[1,1],\n bins=20, stat='frequency')\nseaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[1,2],\n bins=5, stat='frequency')\n\nfig.show()", "_____no_output_____" ] ], [ [ "Para variables discretas puede usarse un gráfico de línea, que permite visualizar el conteo de cada uno de los puntos en el rango observado.\n\n**¿Se puede usar un gráfico de líneas para la variable `salary_montly_NETO`? ¿Tiene sentido?** ", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(16,4))\nage_counts = df[df.profile_age < 100].profile_age.value_counts()\nseaborn.lineplot(age_counts.index, age_counts.values, color='steelblue')\nplt.xticks(fontsize=14) # Achicamos la letra para que se vea mejor\nseaborn.despine()", "/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n" ] ], [ [ "### V.A. categóricas\n\nLas variables categóricas toman valores de un conjunto pre-definido, usualmente pero no necesariamente finito. 
Para visualizarlas, puede usarse un gráfico de barras, que representa cada valor observado con una columna, y el conteo de ese valor con la altura de la columna.\n\nLas variables numéricas discretas, ¿son categóricas?", "_____no_output_____" ] ], [ [ "df.profile_gender.unique()", "_____no_output_____" ], [ "fig = plt.figure(figsize=(8,6))\nseaborn.countplot(df.profile_gender, color='steelblue')", "/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n" ] ], [ [ "Las variables categóricas pueden ser *ordinales*, si existe un orden lógico entre sus valores. Esto es independiente de que sean numéricas. En caso de que un orden exista, es adecuado incluirlo en el gráfico.", "_____no_output_____" ] ], [ [ "sorted_studies_levels = ['Primario', 'Secundario', 'Terciario', 'Universitario',\n 'Posgrado', 'Doctorado', 'Posdoctorado']\nfig, axes = plt.subplots(ncols=2, figsize=(15,6))\ng = seaborn.countplot(df.profile_studies_level, color='steelblue', ax=axes[0])\ng = seaborn.countplot(df.profile_studies_level, color='steelblue', ax=axes[1],\n order=sorted_studies_levels)\nfor ax in axes:\n ax.tick_params(labelrotation=30)", "/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n" ] ], [ [ "### Tipos de variables vs tipos de datos\n\nTenemos que distinguir dos conceptos con el mismo nombre y significado similar, pero que no son iguales:\n - **tipo de la variable aleatoria** es el tipo de valores con los que decidimos *intepretar* las realizaciones\n - **tipo de datos** es un concepto de programación que indica en qué formato se representa la información. Cuando asignamos a una variable `age` *del programa de Python* una realización de una variable aleatoria conceptual `profile_age`, esa variable `age` también tiene un *tipo de Python*, por ejemplo `int` o `float`.\n", "_____no_output_____" ] ], [ [ "age = df.profile_age.iloc[0]\ntype(age)", "_____no_output_____" ] ], [ [ "*¡Importante!* Hay que tener en cuenta también los límites de la capacidad computacional al momento de representar entidades matemáticas.\n* Los números reales siempre son \"redondeados\" a una representación racional.\n* Los tipos básicos como `Int` sólo pueden representar números en un rango, por ejemplo `(-2^31, 2^31)`. Exceder el rango puede tener consecuencias inesperadas, como `integer overflow`.\n\n¿Por qué es importante saberlo? 
Porque se pueden producir errores de redondeo u obtener resultados aproximados.", "_____no_output_____" ] ], [ [ "print(type(3), type(3.44), type(1/3)) # 1/3 es un numero irracional\nimport numpy\nprint(numpy.iinfo('int64').min, numpy.iinfo('int64').max)\nnumpy.int64(numpy.iinfo('int64').max) + 1\n# Traten de hacer numpy.int64(numpy.iinfo('int64').max + 1)", "<class 'int'> <class 'float'> <class 'float'>\n-9223372036854775808 9223372036854775807\n" ] ], [ [ "Se puede acceder a los tipos de datos del DataFrame. El tipo `object` se utiliza para representar cualquier variable que no sea numérica, como por ejemplo los `str`.", "_____no_output_____" ] ], [ [ "df.dtypes[:10]", "_____no_output_____" ] ], [ [ "Hay que tener en cuenta que las librerías de gráficos nos permitirán crear las visualizaciones que querramos, mientras los tipos de datos sean los adecuados.\n\nPor ejemplo, podemos hacer un histograma con la variable `profile_open_source_contributions` si la transformamos a tipo `bool` (que se representa internamente como un tipo entero). Sin embargo, esto no tiene ningún sentido.", "_____no_output_____" ] ], [ [ "df.loc[:,'salary_in_usd_bool'] = \\\n df.salary_in_usd.replace({'Mi sueldo está dolarizado': True}).fillna(False)\nprint(df.salary_in_usd.unique(), df.salary_in_usd_bool.unique())", "[nan 'Mi sueldo está dolarizado'] [False True]\n" ], [ "seaborn.histplot(df.salary_in_usd_bool, bins=5)", "<string>:6: RuntimeWarning: Converting input from bool to <class 'numpy.uint8'> for compatibility.\n<string>:6: RuntimeWarning: Converting input from bool to <class 'numpy.uint8'> for compatibility.\n" ] ], [ [ "También podemos graficar la frecuencia de una variable categórica utilizando un gráfico de líneas. **¿Por qué esta visualización no es correcta?**", "_____no_output_____" ] ], [ [ "count_by_province = df.work_province.value_counts()\nfig = plt.figure(figsize=(16, 4))\nseaborn.lineplot(x=count_by_province.index, y=count_by_province.values)\nplt.xticks(rotation=45)\nseaborn.despine()", "_____no_output_____" ] ], [ [ "# Análisis del impacto de los años de experiencia\n\nAhora que ya sabemos aproximadamente la forma de nuestros datos, podemos pasar a realizar otra pregunta (otra iteración del proceso de análisis): \n\n**¿Tener más años de experiencia significa que se cobra más?**\n\nPara responder a esta pregunta, analizamos la probabilidad de que un programador tenga un salario mensual mayor que el promedio, cuando tiene una experiencia mayor que 5 años.\n\n", "_____no_output_____" ] ], [ [ "avg_salary = df[salary_col].mean()\navg_salary", "_____no_output_____" ] ], [ [ "## Medida de probabilidad\n\nEn el teórico vimos que si cada una de nuestros eventos es independiente e idénticamente distribuido, es decir, que $P(\\{\\omega_i\\})=1/k$, entonces la probabilidad de un conjunto $A \\subset \\Omega$ es la proporción de $A$, donde .\n\n\n$$P(\\{\\omega_i\\})=1/k \\implies P(A)=|A|/|\\Omega|=|A|/k$$\n\n\nEn este problema en particular, $\\Omega$ son todas las respuestas del dataset, cada $a_i$ es una variable que representa una respuesta, y el conjunto $A$ son las respuestas (filas) en la que la columna `salary_col` tiene un valor mayor que el promedio\n", "_____no_output_____" ] ], [ [ "p_above_avg = len(df[df[salary_col] >= avg_salary]) / len(df)\np_above_avg", "_____no_output_____" ] ], [ [ "* ¿Por qué podemos usar la teoría de la probabilidad?\n* ¿Por qué calculamos una probabilidad con esta fórmula?\n* ¿Cómo podemos interpretar esta probabilidad?", "_____no_output_____" ], [ "## 
Probabilidad condicional\n\nAhora podemos pasar a hablar de la probabilidad condicional entre los dos eventos. La definimos como\n\n$$P(A|B) = \\frac{P(A \\cap B)}{P(B)}$$\n\nEsto es equivalente a:\n\n$$P(A|B) = \\frac{|A \\cap B|}{|B|}$$\n\n## Ejercicio\n\nReponder: **¿Si uno tiene más de 5 años de experiencia, la probabilidad de cobrar más que el promedio aumenta? ¿Estos eventos, son independientes?**\n", "_____no_output_____" ] ], [ [ "is_above_avg = df[salary_col] > avg_salary\nexperience_greater_5 = df.profile_years_experience > 5\nintersection_count = len(df[is_above_avg & experience_greater_5])", "_____no_output_____" ], [ "p_above_avg_given_experience = 0\np_above_avg_given_experience", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
e7a79a3a5b4617cca1030523218a3758b7e9242e
610
ipynb
Jupyter Notebook
my_experiments/simple_umbrella_sampling.ipynb
BloonCorps/IAP2022
11a481790878defad0d2974b81ae109168306077
[ "MIT" ]
null
null
null
my_experiments/simple_umbrella_sampling.ipynb
BloonCorps/IAP2022
11a481790878defad0d2974b81ae109168306077
[ "MIT" ]
null
null
null
my_experiments/simple_umbrella_sampling.ipynb
BloonCorps/IAP2022
11a481790878defad0d2974b81ae109168306077
[ "MIT" ]
null
null
null
16.944444
77
0.545902
[ [ [ "# Umbrella Sampling", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
e7a7a31d9bd52c9a4919209579bdfe143e9f9ac0
3,148
ipynb
Jupyter Notebook
notebooks/basic_plugins.ipynb
anthonycrane/nipype_tutorial
909c7274e4a15391fd78ffe2cd92d401ea1de63f
[ "BSD-3-Clause" ]
null
null
null
notebooks/basic_plugins.ipynb
anthonycrane/nipype_tutorial
909c7274e4a15391fd78ffe2cd92d401ea1de63f
[ "BSD-3-Clause" ]
null
null
null
notebooks/basic_plugins.ipynb
anthonycrane/nipype_tutorial
909c7274e4a15391fd78ffe2cd92d401ea1de63f
[ "BSD-3-Clause" ]
null
null
null
35.370787
361
0.586086
[ [ [ "# Execution Plugins\n\nAs you learned in the [Workflow](basic_workflow.ipynb) tutorial, a workflow is executed with the ``run`` method. For example:\n\n workflow.run()\n\nWhenever you execute a workflow like this, it will be executed in serial order. This means that no node will be executed in parallel, even if they are completely independent of each other. Now, while this might be preferable under certain circumstances, we usually want to executed workflows in parallel. For this, Nipype provides many different plugins.", "_____no_output_____" ], [ "## Local execution\n\n### ``Linear`` Plugin\n\nIf you want to run your workflow in a linear fashion, just use the following code:\n\n workflow.run(plugin='Linear')", "_____no_output_____" ], [ "### ``MultiProc`` Plugin\n\nThe easiest way to executed a workflow locally in parallel is the ``MultiProc`` plugin:\n\n workflow.run(plugin='MultiProc', plugin_args={'n_procs': 4})\n\nThe additional plugin argument ``n_procs``, specifies how many cores should be used for the parallel execution. In this case, it's 4.\n\nThe `MultiProc` plugin uses the [multiprocessing](http://docs.python.org/library/multiprocessing.html) package in the standard library, and is the only parallel plugin that is guaranteed to work right out of the box.", "_____no_output_____" ], [ "## Cluster execution\n\nThere are many different plugins to run Nipype on a cluster, such as: ``PBS``, ``SGE``, ``LSF``, ``Condor`` and ``IPython``. Implementing them is as easy as ``'MultiProc'``.\n\n workflow.run('PBS', plugin_args={'qsub_args': '-q many'})\n workflow.run('SGE', plugin_args={'qsub_args': '-q many'})\n workflow.run('LSF', plugin_args={'qsub_args': '-q many'})\n workflow.run('Condor')\n workflow.run('IPython')\n \n workflow.run('PBSGraph', plugin_args={'qsub_args': '-q many'})\n workflow.run('SGEGraph', plugin_args={'qsub_args': '-q many'})\n workflow.run('CondorDAGMan')\n\nFor a complete list and explanation of all supported plugins, see: http://nipype.readthedocs.io/en/latest/users/plugins.html", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ] ]
e7a7a54ee166ea27f166fe84f4590d7c7d3747b3
38,066
ipynb
Jupyter Notebook
nbs/02_maml_pl.ipynb
ojss/c3lr
a018c5a793a2c9eedc3f0fefcca0970f0be35ffc
[ "Apache-2.0" ]
3
2022-02-24T07:02:12.000Z
2022-03-20T18:33:58.000Z
nbs/02_maml_pl.ipynb
ojss/c3lr
a018c5a793a2c9eedc3f0fefcca0970f0be35ffc
[ "Apache-2.0" ]
null
null
null
nbs/02_maml_pl.ipynb
ojss/c3lr
a018c5a793a2c9eedc3f0fefcca0970f0be35ffc
[ "Apache-2.0" ]
null
null
null
39.859686
714
0.474702
[ [ [ "#default_exp maml\n#export\nimport logging\nimport warnings \n\nimport higher\nimport kornia as K\nimport wandb\nimport pytorch_lightning as pl\nimport torch\nimport torchvision\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchmetrics\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom copy import deepcopy\nfrom pytorch_lightning import Trainer\nfrom pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint\nfrom pytorch_lightning.loggers import WandbLogger, TensorBoardLogger\nfrom pytorch_lightning.metrics.functional import accuracy\nfrom torchmeta.datasets.helpers import omniglot\nfrom torchmeta.utils.data import BatchMetaDataLoader\nfrom unsupervised_meta_learning.pl_dataloaders import OmniglotDataModule", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "#export\nlogger = logging.getLogger(__name__)", "_____no_output_____" ], [ "#export\nclass ConvolutionalNeuralNetwork(nn.Module):\n def __init__(self, in_channels, out_features, hidden_size=64):\n super(ConvolutionalNeuralNetwork, self).__init__()\n self.in_channels = in_channels\n self.out_features = out_features\n self.hidden_size = hidden_size\n\n self.features = nn.Sequential(\n self.conv3x3(in_channels, hidden_size),\n self.conv3x3(hidden_size, hidden_size),\n self.conv3x3(hidden_size, hidden_size),\n self.conv3x3(hidden_size, hidden_size),\n )\n\n self.classifier = nn.Linear(hidden_size, out_features)\n\n def forward(self, inputs, params=None):\n features = self.features(inputs)\n features = features.view((features.size(0), -1))\n logits = self.classifier(features)\n return logits\n\n def conv3x3(self, in_channels, out_channels, **kwargs):\n return nn.Sequential(\n nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, **kwargs),\n nn.BatchNorm2d(out_channels, momentum=1.0, track_running_stats=False),\n nn.ReLU(),\n nn.MaxPool2d(2),\n )", "_____no_output_____" ], [ "#export\ndef get_accuracy(logits, targets):\n \"\"\"Compute the accuracy (after adaptation) of MAML on the test/query points\n Parameters\n ----------\n logits : `torch.FloatTensor` instance\n Outputs/logits of the model on the query points. This tensor has shape\n `(num_examples, num_classes)`.\n targets : `torch.LongTensor` instance\n A tensor containing the targets of the query points. 
This tensor has \n shape `(num_examples,)`.\n Returns\n -------\n accuracy : `torch.FloatTensor` instance\n Mean accuracy on the query points\n \"\"\"\n _, predictions = torch.max(logits, dim=-1)\n return torch.mean(predictions.eq(targets).float())", "_____no_output_____" ], [ "#export\nclass MAML(pl.LightningModule):\n def __init__(self, model, outer_lr, inner_lr, inner_steps=1):\n super().__init__()\n self.model = model\n self.accuracy = get_accuracy\n self.automatic_optimization = False\n self.inner_steps = inner_steps\n self.outer_lr = outer_lr\n self.inner_lr = inner_lr\n \n def forward(self, x):\n return self.model(x)\n \n def inner_loop(self, fmodel, diffopt, train_input, train_target):\n train_logit = fmodel(train_input)\n inner_loss = F.cross_entropy(train_logit, train_target)\n diffopt.step(inner_loss)\n \n return inner_loss.item()\n \n @torch.enable_grad()\n def meta_learn(self, batch, batch_idx, optimizer_idx=None):\n meta_optimizer, inner_optimizer = self.optimizers()\n meta_optimizer = meta_optimizer.optimizer\n inner_optimizer = inner_optimizer.optimizer\n \n train_inputs, train_targets = batch['train']\n test_inputs, test_targets = batch['test']\n \n batch_size = train_inputs.shape[0]\n outer_loss = torch.tensor(0., device=self.device)\n acc = torch.tensor(0., device=self.device)\n self.model.zero_grad()\n \n for task_idx, (train_input, train_target, test_input, test_target) in enumerate(\n zip(train_inputs, train_targets, test_inputs, test_targets)\n ):\n# inner_optimizer.zero_grad()\n with higher.innerloop_ctx(self.model, inner_optimizer, copy_initial_weights=False) as (fmodel, diffopt):\n# train_logit = fmodel(train_input)\n# inner_loss = F.cross_entropy(train_logit, train_target)\n\n# diffopt.step(inner_loss)\n for step in range(self.inner_steps):\n self.inner_loop(fmodel, diffopt, train_input, train_target)\n \n test_logit = fmodel(test_input)\n outer_loss += F.cross_entropy(test_logit, test_target)\n \n with torch.no_grad():\n preds = test_logit.softmax(dim=-1)\n acc += self.accuracy(test_logit, test_target)\n \n\n# self.print(self.accuracy(test_logit, test_target))\n \n outer_loss.div_(batch_size)\n acc.div_(batch_size)\n self.log_dict({\n 'outer_loss': outer_loss,\n 'accuracy': acc\n }, prog_bar=True)\n \n meta_optimizer.zero_grad()\n# outer_loss.backward()\n self.manual_backward(outer_loss, meta_optimizer)\n meta_optimizer.step()\n return outer_loss, acc\n \n \n def training_step(self, batch, batch_idx, optimizer_idx):\n train_loss, acc = self.meta_learn(batch, batch_idx, optimizer_idx)\n \n self.log_dict({\n 'train_loss': train_loss.item(),\n 'train_accuracy': acc.item()\n }, prog_bar=True)\n \n return train_loss.item()\n \n def validation_step(self, batch, batch_idx):\n val_loss, val_acc = self.meta_learn(batch, batch_idx)\n \n self.log_dict({\n 'val_loss': val_loss.item(),\n 'val_accuracy': val_acc.item()\n })\n return val_loss.item()\n \n def test_step(self, batch, batch_idx):\n test_loss, test_acc = self.meta_learn(batch, batch_idx)\n self.log_dict({\n 'test_loss': test_loss.item(),\n 'test_accuracy': test_acc.item()\n })\n return test_loss.item()\n \n \n def configure_optimizers(self):\n meta_optimizer = torch.optim.Adam(self.parameters(), lr=self.outer_lr)\n inner_optimizer = torch.optim.SGD(self.parameters(), lr=self.inner_lr)\n \n return [meta_optimizer, inner_optimizer]", "_____no_output_____" ], [ "#export\nclass UMTRA(pl.LightningModule):\n def __init__(self, model, augmentation, inner_steps, inner_lr, outer_lr):\n super().__init__()\n self.model = 
model\n self.accuracy = get_accuracy\n self.augmentation = augmentation\n self.inner_steps = inner_steps\n self.inner_lr = inner_lr\n self.outer_lr = outer_lr\n self.automatic_optimization = False\n \n def forward(self, x):\n return self.model(x)\n\n def inner_loop(self, fmodel, diffopt, train_input, train_target):\n train_logit = fmodel(train_input)\n inner_loss = F.cross_entropy(train_logit, train_target)\n diffopt.step(inner_loss)\n \n return inner_loss.item()\n \n @torch.enable_grad()\n def meta_learn(self, batch, batch_idx, inner_copied_optimizer=None, optimizer_idx=None):\n meta_optimizer, inner_optimizer = self.optimizers(use_pl_optimizer=False)\n inner_optimizer = inner_optimizer if inner_copied_optimizer is None else inner_copied_optimizer\n \n train_inputs, train_targets = batch['train']\n test_inputs, test_targets = batch['test']\n \n batch_size = train_inputs.shape[0]\n outer_loss = torch.tensor(0., device=self.device)\n acc = torch.tensor(0., device=self.device)\n self.model.zero_grad()\n \n for task_idx, (train_input, train_target, test_input, test_target) in enumerate(\n zip(train_inputs, train_targets, test_inputs, test_targets)\n ):\n val_input = self.augmentation(train_input).to(self.device)\n val_target = deepcopy(train_target).to(self.device)\n with higher.innerloop_ctx(self.model, inner_optimizer, copy_initial_weights=False) as (fmodel, diffopt):\n for step in range(self.inner_steps):\n self.inner_loop(fmodel, diffopt, train_input, train_target)\n \n val_logits = fmodel(val_input)\n outer_loss += F.cross_entropy(val_logits, val_target)\n\n with torch.no_grad():\n test_logits = fmodel(test_input)\n acc += self.accuracy(test_logits, test_target)\n \n outer_loss.div_(batch_size)\n acc.div_(batch_size)\n \n meta_optimizer.zero_grad()\n# outer_loss.backward()\n\n self.manual_backward(outer_loss, meta_optimizer)\n meta_optimizer.step()\n \n return outer_loss, acc\n \n def training_step(self, batch, batch_idx, optimizer_idx):\n train_loss, acc = self.meta_learn(batch, batch_idx, optimizer_idx=optimizer_idx)\n \n self.log_dict({\n 'train_loss': train_loss.item(),\n 'train_accuracy': acc.item()\n }, prog_bar=True)\n \n return train_loss.item()\n \n def validation_step(self, batch, batch_idx):\n val_loss, val_acc = self.meta_learn(batch, batch_idx)\n self.log_dict({\n 'val_loss': val_loss.item(),\n 'val_accuracy': val_acc.item()\n })\n return val_loss.item()\n \n def test_step(self, batch, batch_idx):\n self.model.train()\n test_loss, test_acc = self.meta_learn(batch, batch_idx)\n \n self.log_dict({\n 'test_loss': test_loss.item(),\n 'test_accuracy': test_acc.item()\n })\n return test_loss.item()\n \n \n def configure_optimizers(self):\n meta_optimizer = torch.optim.Adam(self.parameters(), lr=self.outer_lr)\n inner_optimizer = torch.optim.SGD(self.parameters(), lr=self.inner_lr)\n \n return [meta_optimizer, inner_optimizer]", "_____no_output_____" ], [ "dm = OmniglotDataModule(\n \"data\",\n shots=1,\n ways=5,\n shuffle_ds=True,\n test_shots=15,\n meta_train=True,\n download=True,\n batch_size=16,\n shuffle=True,\n num_workers=8,\n)", "_____no_output_____" ], [ "model = MAML(model=ConvolutionalNeuralNetwork(1, 5, hidden_size=64), outer_lr=3e-3, inner_lr=5e-1, inner_steps=1)", "_____no_output_____" ], [ "logger = WandbLogger(\n project='maml',\n config={\n 'batch_size': 16,\n 'steps': 100,\n 'dataset': \"omniglot\",\n 'inner_steps': 1,\n 'val/test': 'enabled'\n }\n)\ntrainer = Trainer(\n profiler='simple',\n max_epochs=100,\n max_steps=100,\n limit_train_batches=100,\n 
limit_val_batches=0,\n limit_test_batches=2,\n fast_dev_run=False,\n gpus=1,\n log_every_n_steps=1,\n flush_logs_every_n_steps=1,\n num_sanity_val_steps=2,\n logger=logger\n )", "GPU available: True, used: True\nTPU available: False, using: 0 TPU cores\n" ], [ "with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n trainer.fit(model, datamodule=dm)", "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n\u001b[34m\u001b[1mwandb\u001b[0m: wandb version 0.10.32 is available! To upgrade, please run:\n\u001b[34m\u001b[1mwandb\u001b[0m: $ pip install wandb --upgrade\n" ], [ "wandb.finish()", "_____no_output_____" ], [ "aug = nn.Sequential(\n K.augmentation.RandomAffine(degrees=0, translate=(0.4, 0.4), padding_mode='border'),\n K.augmentation.RandomGaussianNoise(mean=0., std=.1, p=.3)\n)\nmodel = UMTRA(model=ConvolutionalNeuralNetwork(1, 5, hidden_size=64), augmentation=aug, inner_steps=1, outer_lr=3e-3, inner_lr=5e-1)", "_____no_output_____" ], [ "logger = WandbLogger(\n project='umtra',\n config={\n 'batch_size': 16,\n 'steps': 100,\n 'dataset': \"omniglot\",\n 'inner_steps': 5\n }\n)\ntrainer = Trainer(\n profiler='simple',\n max_epochs=100,\n max_steps=100,\n limit_train_batches=50,\n limit_val_batches=0.,\n limit_test_batches=2,\n fast_dev_run=False,\n gpus=1,\n log_every_n_steps=1,\n flush_logs_every_n_steps=1,\n num_sanity_val_steps=2,\n logger=logger\n )", "GPU available: True, used: True\nTPU available: False, using: 0 TPU cores\n" ], [ "with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n trainer.fit(model, datamodule=dm)", "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n\u001b[34m\u001b[1mwandb\u001b[0m: wandb version 0.10.32 is available! To upgrade, please run:\n\u001b[34m\u001b[1mwandb\u001b[0m: $ pip install wandb --upgrade\n" ], [ "wandb.finish()", "_____no_output_____" ], [ "from nbdev.export import notebook2script; notebook2script()", "Converted 01_nn_utils.ipynb.\nConverted 01b_data_loaders_pl.ipynb.\nConverted 01c_grad_utils.ipynb.\nConverted 01d_hessian_free.ipynb.\nConverted 02_maml_pl.ipynb.\nConverted 02b_iMAML.ipynb.\nConverted 03_protonet_pl.ipynb.\nConverted 04_cactus.ipynb.\nConverted index.ipynb.\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a7a56383242bc419506ade5305106548638080
8,057
ipynb
Jupyter Notebook
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
dd4ad669931c7f75e026456470cf33ac5b682d0d
[ "Apache-2.0" ]
1
2021-12-19T15:11:09.000Z
2021-12-19T15:11:09.000Z
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
dd4ad669931c7f75e026456470cf33ac5b682d0d
[ "Apache-2.0" ]
null
null
null
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
dd4ad669931c7f75e026456470cf33ac5b682d0d
[ "Apache-2.0" ]
null
null
null
37.129032
545
0.607298
[ [ [ "# Simulating Clifford randomized benchmarking using implicit models\n\nThis tutorial demonstrates shows how to simulate Clifford RB sequences using $n$-qubit \"implicit\" models which build $n$-qubit process matrices from smaller building blocks. This restricts the noise allowed in the $n$-qubit model; in this tutorial we take $n=3$ and use a `LocalNoiseModel`.", "_____no_output_____" ] ], [ [ "import pygsti\nimport numpy as np", "_____no_output_____" ] ], [ [ "## Get some CRB circuits\n\nFirst, we follow the [Clifford RB](../CliffordRB.ipynb) tutorial to generate a set of sequences. If you want to perform Direct RB instead, just replace this cell with the contents of the [Direct RB](../DirectRB.ipynb) tutorial up until the point where it creates `circuitlist`:", "_____no_output_____" ] ], [ [ "#Specify the device to be benchmarked - in this case 2 qubits\nnQubits = 3\nqubit_labels = list(range(nQubits)) \ngate_names = ['Gxpi2', 'Gypi2','Gcphase'] \navailability = {'Gcphase':[(i,i+1) for i in range(nQubits-1)]}\npspec = pygsti.obj.ProcessorSpec(nQubits, gate_names, availability=availability, \n qubit_labels=qubit_labels)\n\n#Specify RB parameters (k = number of repetitions at each length)\nlengths = [0,1,2,4,8,16]\nk = 10\nsubsetQs = qubit_labels\nrandomizeout = False # ==> all circuits have the *same* ideal outcome (the all-zeros bitstring)\n\n#Generate clifford RB circuits\nexp_design = pygsti.protocols.CliffordRBDesign(pspec, lengths, k, qubit_labels=subsetQs, randomizeout=randomizeout)\n\n#Collect all the circuits into one list:\ncircuitlist = exp_design.all_circuits_needing_data", "_____no_output_____" ] ], [ [ "## Create a model to simulate these circuits\nNow we need to create a model that can simulate circuits like this. The RB circuits use pyGSTi's \"multi-qubit\" conventions, which mean:\n1. RB circuits use our \"multi-qubit\" gate naming, so you have gates like `Gxpi2:0` and `Gcphase:0:1`.\n2. RB circuits do gates in parallel (this only matters for >1 qubits), so you have layers like `[Gypi2:0Gypi2:1]`\n\n\"Implicit\" models in pyGSTi (see the [implicit model tutorial](../../objects/ImplicitModel.ipynb)) are designed to efficiently describe multi-qubit processors. There are numerous ways of constructing implicit models, all of which can simulate the type of circuits described above. Here we'll demonstrate the simplest type: a \"local noise model\" (class `LocalNoiseModel`) where the noise on a gate can only act on that gate's target qubits - so, for instance, 1-qubit gates are still given by 1-qubit operators, not $n$-qubit ones.\n\nThe construction of a local noise model follows the same pattern as building the `ProcessorSpec` above (in fact, `pspec.models['target']` *is* essentially the same model we build below except it was built with the default `parmeterization=\"static\"` argument.", "_____no_output_____" ] ], [ [ "myModel = pygsti.obj.LocalNoiseModel.build_from_parameterization(nQubits, gate_names,\n availability=availability, \n qubit_labels=qubit_labels,\n parameterization=\"full\")", "_____no_output_____" ] ], [ [ "Setting `parameterization=\"full\"` is important, as it lets us assign arbitrary numpy arrays to gates as we'll show below. If you need to use other gates that aren't built into pyGSTi, you can use the `nonstd_gate_unitaries`\nargument of `build_from_parameterization` (see the docstring).\n\nThe `build_from_parameterization` function creates a model with ideal (perfect) gates. 
We'll now create a 1-qubit depolarization superoperator, and a corresponding 2-qubit one (just the tensor product of two 1-qubit ones) to add some simple noise. ", "_____no_output_____" ] ], [ [ "depol1Q = np.array([[1, 0, 0, 0],\n [0, 0.99, 0, 0],\n [0, 0, 0.99, 0],\n [0, 0, 0, 0.99]], 'd') # 1-qubit depolarizing operator\ndepol2Q = np.kron(depol1Q,depol1Q)", "_____no_output_____" ] ], [ [ "As detailed in the [implicit model tutorial](../../objects/ImplicitModel.ipynb), the gate operations of a `LocalNoiseModel` are held in its `.operation_blks['gates']` dictionary. We'll alter these by assigning new process matrices to each gate. In this case, it will be just a depolarized version of the original gate.", "_____no_output_____" ] ], [ [ "myModel.operation_blks['gates'][\"Gxpi2\"] = np.dot(depol1Q, myModel.operation_blks['gates'][\"Gxpi2\"])\nmyModel.operation_blks['gates'][\"Gypi2\"] = np.dot(depol1Q, myModel.operation_blks['gates'][\"Gypi2\"]) \nmyModel.operation_blks['gates'][\"Gcphase\"] = np.dot(depol2Q, myModel.operation_blks['gates'][\"Gcphase\"])", "_____no_output_____" ] ], [ [ "Here's what the gates look like now:", "_____no_output_____" ] ], [ [ "print(myModel.operation_blks['gates'][\"Gxpi2\"])\nprint(myModel.operation_blks['gates'][\"Gypi2\"])\nprint(myModel.operation_blks['gates'][\"Gcphase\"])", "_____no_output_____" ] ], [ [ "Now that our `Model` object is set to go, generating simulated data is easy:", "_____no_output_____" ] ], [ [ "ds = pygsti.construction.generate_fake_data(myModel, circuitlist, 100, seed=1234)", "_____no_output_____" ] ], [ [ "## Running RB on the simulated `DataSet`\nTo run an RB analysis, we just package up the experiment design and data set into a `ProtocolData` object and give this to a `RB` protocol's `run` method. This returns a `RandomizedBenchmarkingResults` object that can be used to plot the RB decay curve. (See the [RB analysis tutorial](../RBAnalysis.ipynb) for more details.)", "_____no_output_____" ] ], [ [ "data = pygsti.protocols.ProtocolData(exp_design, ds)\nresults = pygsti.protocols.RB().run(data)", "_____no_output_____" ], [ "%matplotlib inline\nresults.plot()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a7acfa2eebc46c77b7737af3a6396790eb3bc8
16,953
ipynb
Jupyter Notebook
ratsql_colab.ipynb
nghoanglong/rat-sql
5b42637255decd5ffa389a520c191bafe26c0913
[ "MIT" ]
null
null
null
ratsql_colab.ipynb
nghoanglong/rat-sql
5b42637255decd5ffa389a520c191bafe26c0913
[ "MIT" ]
null
null
null
ratsql_colab.ipynb
nghoanglong/rat-sql
5b42637255decd5ffa389a520c191bafe26c0913
[ "MIT" ]
null
null
null
32.477011
428
0.530467
[ [ [ "<a href=\"https://colab.research.google.com/github/nghoanglong/rat-sql/blob/master/ratsql_colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "gpu_info = !nvidia-smi\ngpu_info = '\\n'.join(gpu_info)\nif gpu_info.find('failed') >= 0:\n print('Not connected to a GPU')\nelse:\n print(gpu_info)", "_____no_output_____" ] ], [ [ "# Set up and Install requirements", "_____no_output_____" ] ], [ [ "from google.colab import drive\ndrive.mount('/content/drive')", "_____no_output_____" ], [ "!git clone https://github.com/nghoanglong/rat-sql.git", "_____no_output_____" ], [ "%cd /content/rat-sql", "_____no_output_____" ], [ "!pip install -r requirements.txt", "_____no_output_____" ], [ "import nltk\nnltk.download('stopwords')\nnltk.download('punkt')", "_____no_output_____" ], [ "from transformers import BertModel\nBertModel.from_pretrained('bert-large-uncased-whole-word-masking')", "_____no_output_____" ], [ "!mkdir -p third_party", "_____no_output_____" ], [ "!git clone https://github.com/salesforce/WikiSQL third_party/wikisql", "_____no_output_____" ], [ "%cd /content/rat-sql", "/content/rat-sql\n" ] ], [ [ "# Run Spider", "_____no_output_____" ], [ "## Spider - Glove", "_____no_output_____" ] ], [ [ "!python run.py preprocess /content/rat-sql/experiments/spider-glove-run.jsonnet", "_____no_output_____" ], [ "!python run.py train /content/rat-sql/experiments/spider-glove-run.jsonnet", "_____no_output_____" ], [ "!python run.py eval /content/rat-sql/experiments/spider-glove-run.jsonnet", "_____no_output_____" ] ], [ [ "## Spider - Bert", "_____no_output_____" ] ], [ [ "!python run.py preprocess /content/rat-sql/experiments/spider-bert-run.jsonnet", "\rDownloading https://huggingface.co/stanfordnlp/CoreNLP/resolve/main/stanford-corenlp-latest.zip: 100% 505M/505M [00:05<00:00, 100MB/s] \n2021-10-28 09:14:19 WARNING: For customized installation location, please set the `CORENLP_HOME` environment variable to the location of the installation. 
In Unix, this is done with `export CORENLP_HOME=/content/rat-sql/third_party/stanford-corenlp-full-2018-10-05`.\n2021-10-28 09:14:19 INFO: Writing properties to tmp file: corenlp_server-5998d88a11be49f6.props\nclient: <stanza.server.client.CoreNLPClient object at 0x7fce244cce50>\n2021-10-28 09:14:19 INFO: Starting server with command: java -Xmx4G -cp /content/rat-sql/third_party/stanford-corenlp-full-2018-10-05/* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9001 -timeout 60000 -threads 5 -maxCharLength 100000 -quiet True -serverProperties corenlp_server-5998d88a11be49f6.props -annotators tokenize,ssplit,pos,lemma,ner -preload -outputFormat serialized\ntrain section: 100% 7000/7000 [31:16<00:00, 3.73it/s]\nDB connections: 100% 166/166 [00:01<00:00, 129.38it/s]\nval section: 100% 1034/1034 [11:45<00:00, 1.47it/s]\n" ], [ "!python run.py train /content/rat-sql/experiments/spider-bert-run.jsonnet", "[2021-10-28T09:58:21] Logging to /content/drive/MyDrive/Datasets/ratsql/datasets/spider/logdir/bs=3,lr=7.4e-04,bert_lr=3.0e-06,end_lr=0e0,att=1\nSome weights of the model checkpoint at bert-large-uncased-whole-word-masking were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']\n- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n[2021-10-28T09:58:45] Step 0 stats, train: loss = 160.08171844482422\n[2021-10-28T09:58:57] Step 0 stats, val: loss = 190.71556091308594\n[2021-10-28T09:59:02] Step 0: loss=194.38427734375\n[2021-10-28T10:00:35] Step 10: loss=166.95787048339844\n[2021-10-28T10:01:25] Step 20: loss=230.46824645996094\nTraceback (most recent call last):\n File \"/content/rat-sql/ratsql/commands/train.py\", line 208, in train\n loss = self.model.compute_loss(batch)\n File \"/content/rat-sql/ratsql/models/enc_dec.py\", line 76, in _compute_loss_enc_batched\n enc_states = self.encoder([enc_input for enc_input, dec_output in batch])\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\n result = self.forward(*input, **kwargs)\n File \"/content/rat-sql/ratsql/models/spider/spider_enc.py\", line 1037, in forward\n attention_mask=att_masks_tensor, token_type_ids=tok_type_tensor)[0]\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py\", line 1005, in forward\n return_dict=return_dict,\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py\", line 589, in forward\n output_attentions,\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\n result = self.forward(*input, **kwargs)\n 
File \"/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py\", line 475, in forward\n past_key_value=self_attn_past_key_value,\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py\", line 410, in forward\n attention_output = self.output(self_outputs[0], hidden_states)\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py\", line 361, in forward\n hidden_states = self.dropout(hidden_states)\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/dropout.py\", line 54, in forward\n return F.dropout(input, self.p, self.training, self.inplace)\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py\", line 936, in dropout\n else _VF.dropout(input, p, training))\nKeyboardInterrupt\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"run.py\", line 109, in <module>\n main()\n File \"run.py\", line 77, in main\n train.main(train_config)\n File \"/content/rat-sql/ratsql/commands/train.py\", line 296, in main\n trainer.train(config, modeldir=args.logdir)\n File \"/content/rat-sql/ratsql/commands/train.py\", line 208, in train\n loss = self.model.compute_loss(batch)\nKeyboardInterrupt\n" ], [ "!python run.py eval /content/rat-sql/experiments/spider-bert-run.jsonnet", "_____no_output_____" ] ], [ [ "# Run vitext2sql", "_____no_output_____" ] ], [ [ "!wget -P /content/rat-sql/third_party/phow2v_emb https://public.vinai.io/word2vec_vi_words_300dims.zip", "_____no_output_____" ], [ "cd /content/rat-sql/third_party/phow2v_emb", "/content/rat-sql/third_party/phow2v_emb\n" ], [ "!unzip /content/rat-sql/third_party/phow2v_emb/word2vec_vi_words_300dims.zip", "Archive: /content/rat-sql/third_party/phow2v_emb/word2vec_vi_words_300dims.zip\n inflating: word2vec_vi_words_300dims.txt \n" ], [ "cd /content/rat-sql", "/content/rat-sql\n" ] ], [ [ "## Run Vitext2sql - No PhoBert", "_____no_output_____" ] ], [ [ "!python run.py preprocess /content/rat-sql/experiments/vitext2sql-phow2v-run.jsonnet", "_____no_output_____" ], [ "!python run.py train /content/rat-sql/experiments/vitext2sql-phow2v-run.jsonnet", "_____no_output_____" ], [ "!python run.py eval /content/rat-sql/experiments/vitext2sql-phow2v-run.jsonnet", "_____no_output_____" ] ], [ [ "## Run Vitext2SQL - PhoBert", "_____no_output_____" ] ], [ [ "!python run.py preprocess /content/rat-sql/experiments/vitext2sql-phobert-run.jsonnet", "_____no_output_____" ], [ "!python run.py train /content/rat-sql/experiments/vitext2sql-phobert-run.jsonnet", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a7b5563ceeda43bd582ead786916196d192ce0
34,977
ipynb
Jupyter Notebook
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
39e0b1cd03b2aec5997da5d0424a011b116801ca
[ "MIT" ]
7
2017-08-26T01:58:26.000Z
2021-09-05T22:02:38.000Z
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
39e0b1cd03b2aec5997da5d0424a011b116801ca
[ "MIT" ]
null
null
null
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
39e0b1cd03b2aec5997da5d0424a011b116801ca
[ "MIT" ]
null
null
null
36.358628
102
0.523487
[ [ [ "import os\nimport glob\nimport time\nimport json\nimport numpy as np\nimport theano as th\nimport theano.tensor as tt\nimport theano.tensor.slinalg as sla\nimport bmtools.exact.moments as mom\nimport bmtools.relaxations.gm_relaxations as gmr\nimport bmtools.relaxations.var_mixture as var\nimport bmtools.utils as utils\nimport matplotlib.pyplot as plt\nimport thermomc.continuous_temp as cont_temp\nimport thermomc.discrete_temp as disc_temp\nimport thermomc.control_funcs as ctrl\nimport seaborn as sns\nsns.set_style('whitegrid')\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Load model parameters and set up", "_____no_output_____" ] ], [ [ "base_dir = os.path.dirname(os.getcwd())\nmodel_dir = os.path.join(base_dir, 'data', 'gaussian-bmr')\nexp_dir = os.path.join(base_dir, 'experiments', 'gaussian-bmr')\nif not os.path.exists(exp_dir):\n os.makedirs(exp_dir)", "_____no_output_____" ], [ "seed = 201702\nrng = np.random.RandomState(seed)", "_____no_output_____" ], [ "class PhiFunc(object):\n \n def __init__(self, Q, b, log_zeta):\n self.Q = Q\n self.b = b\n self.log_zeta = log_zeta\n \n def __call__(self, x):\n return (\n 0.5 * (x**2).sum(-1) - \n tt.log(tt.cosh(x.dot(self.Q.T) + self.b)).sum(-1) + self.log_zeta\n )\n\nclass PsiFunc(object):\n\n def __init__(self, L, m):\n self.L = L\n self.m = m\n \n def __call__(self, x):\n z = x - self.m\n return 0.5 * (\n (z.T * sla.solve_upper_triangular(\n self.L.T, sla.solve_lower_triangular(\n self.L, z.T))).sum(0) +\n self.m.shape[0] * tt.log(2 * np.pi) + \n 2 * tt.log(self.L.diagonal()).sum()\n )", "_____no_output_____" ], [ "def sigmoid(x):\n return 1. / (1. + np.exp(-x))\n\ndef sigmoidal_schedule(num_temp, scale):\n inv_temp_sched = sigmoid(\n scale * (2. * np.arange(num_temp + 1) / num_temp - 1.))\n return (\n (inv_temp_sched - inv_temp_sched[0]) / \n (inv_temp_sched[-1] - inv_temp_sched[0])\n )\n\ndef rmse(x, y):\n return ((x - y)**2).mean()**0.5", "_____no_output_____" ], [ "dtype = 'float64'\nrelaxation_list = []\ntrue_log_norm_list = []\ntrue_mean_list = []\ntrue_covar_list = []\nvar_log_norm_list = []\nvar_mean_list = []\nvar_covar_chol_list = []\nphi_funcs = []\npsi_funcs = []\nfor i, file_path in enumerate(\n sorted(glob.glob(os.path.join(model_dir, 'params_and_moms_*.npz')))):\n loaded = np.load(file_path)\n relaxation_list.append(gmr.IsotropicCovarianceGMRelaxation(\n loaded['weights'], loaded['biases'], True)\n )\n true_log_norm_list.append(loaded['log_norm_const_x'])\n true_mean_list.append(loaded['expc_x'])\n true_covar_list.append(loaded['covar_x'])\n var_mean, var_covar_chol, var_log_norm = (\n var.mixture_of_variational_distributions_moments(\n relaxation_list[-1], rng\n ))\n var_log_norm_list.append(var_log_norm)\n var_mean_list.append(var_mean)\n var_covar_chol_list.append(var_covar_chol)\n np.savez(os.path.join(model_dir, 'var_moms_{0}.npz'.format(i)), \n var_mean=var_mean, var_covar_chol=var_covar_chol, var_log_norm=var_log_norm)\n Q = tt.constant(relaxation_list[-1].Q, 'Q' + str(i), 2, dtype)\n b = tt.constant(relaxation_list[-1].b, 'b' + str(i), 1, dtype)\n L = tt.constant(var_covar_chol, 'L' + str(i), 2, dtype)\n m = tt.constant(var_mean, 'm' + str(i), 1, dtype)\n log_zeta = tt.constant(var_log_norm, 'log_zeta' + str(i), 0, dtype)\n phi_funcs.append(PhiFunc(Q, b, log_zeta))\n psi_funcs.append(PsiFunc(L, m))\n print('Var. 
log norm RMSE: {0}'.format(rmse(var_log_norm, true_log_norm_list[-1])))", "_____no_output_____" ] ], [ [ "## Annealed Importance Sampling", "_____no_output_____" ] ], [ [ "num_temps = [1000, 5000, 10000, 20000]\ndt = 0.5\ntemp_scale = 4.\nnum_reps = 10\nnum_step = 10\nnum_runs_per_rep = 100\nmom_resample_coeff = 1.\nnum_runs = num_reps * num_runs_per_rep", "_____no_output_____" ], [ "pos = tt.matrix('pos')\ninv_temps = tt.vector('inv_temps')\nhmc_params = {\n 'dt': dt,\n 'n_step': num_step,\n 'mom_resample_coeff': mom_resample_coeff\n}\nais_sampler = disc_temp.AnnealedImportanceSampler(\n tt.shared_randomstreams.RandomStreams(seed), False\n)\nais_run_funcs = []\nfor phi_func, psi_func in zip(phi_funcs, psi_funcs):\n pos_samples, log_weights, accepts, updates = ais_sampler.run(\n pos, None, inv_temps, phi_func, psi_func, hmc_params\n )\n ais_run = th.function(\n [pos, inv_temps],\n [pos_samples, log_weights, accepts],\n updates=updates\n )\n ais_run_funcs.append(ais_run)", "_____no_output_____" ], [ "for i in range(10):\n ais_exp_dir = os.path.join(exp_dir, 'ais', 'params-' + str(i))\n if not os.path.exists(ais_exp_dir):\n os.makedirs(ais_exp_dir)\n for num_temp in num_temps:\n settings = {\n 'dt': dt,\n 'num_temp': num_temp,\n 'temp_scale': temp_scale,\n 'n_step': num_step,\n 'mom_resample_coeff': mom_resample_coeff\n }\n print('Parameters {0} num temps {1}'.format(i, num_temp))\n print('-' * 100)\n print(settings)\n settings_path = os.path.join(ais_exp_dir, 'settings-{0}.json'.format(num_temp))\n results_path = os.path.join(ais_exp_dir, 'results-{0}.npz'.format(num_temp))\n with open(settings_path, 'w') as f:\n json.dump(settings, f, indent=True)\n inv_temp_sched = sigmoidal_schedule(num_temp, temp_scale)\n num_dim = relaxation_list[i].n_dim_r\n pos_init = rng.normal(size=(num_runs, num_dim)).dot(\n var_covar_chol_list[i].T) + var_mean_list[i]\n start_time = time.time()\n pos_samples, log_weights, accepts = ais_run_funcs[i](\n pos_init, inv_temp_sched\n )\n sampling_time = time.time() - start_time\n print('Sampling time: {0:.2f}s'.format(sampling_time))\n log_norm_rmses = []\n mean_rmses = []\n covar_rmses = []\n for lw, ps in zip(\n log_weights.reshape((num_reps, -1)),\n pos_samples.reshape((num_reps, num_runs_per_rep, -1))):\n log_norm_rmses.append(\n rmse(np.log(np.exp(lw).mean(0)) + var_log_norm_list[i], \n true_log_norm_list[i])\n )\n probs = np.exp(lw)\n probs /= probs.sum()\n mean_est = (probs[:, None] * ps).sum(0)\n mean_rmses.append(\n rmse(true_mean_list[i], mean_est)\n )\n ps_zm = ps - mean_est\n covar_est = (ps_zm * probs[:, None]).T.dot(ps_zm)\n covar_rmses.append(\n rmse(true_covar_list[i], covar_est)\n )\n var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i])\n var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i])\n var_covar_rmse = rmse(true_covar_list[i], \n var_covar_chol_list[i].dot(var_covar_chol_list[i].T))\n print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}'\n .format(\n np.mean(log_norm_rmses) / var_log_norm_rmse, \n np.mean(mean_rmses) / var_mean_rmse, \n np.mean(covar_rmses) / var_covar_rmse\n )\n )\n np.savez(\n results_path, \n sampling_time=sampling_time, \n pos_samples=pos_samples, \n log_weights=log_weights, \n accepts=accepts,\n log_norm_rmses=np.array(log_norm_rmses),\n mean_rmses=np.array(mean_rmses),\n covar_rmses=np.array(covar_rmses),\n var_log_norm_rmse=var_log_norm_rmse,\n var_mean_rmse=var_mean_rmse,\n var_covar_rmse=var_covar_rmse\n )\n print('Saved to ' + results_path)", "_____no_output_____" ] ], [ [ "## Hamiltonian 
Annealed Importance Sampling", "_____no_output_____" ] ], [ [ "num_temps = [1000, 5000, 10000, 20000]\ndt = 0.5\ntemp_scale = 4.\nnum_reps = 10\nnum_step = 1\nnum_runs_per_rep = 500\nnum_runs = num_reps * num_runs_per_rep", "_____no_output_____" ], [ "pos = tt.matrix('pos')\ninv_temps = tt.vector('inv_temps')\nhmc_params = {\n 'dt': dt,\n 'n_step': num_step,\n 'mom_resample_coeff': (1. - 0.5**dt)**0.5\n}\nais_sampler = disc_temp.AnnealedImportanceSampler(\n tt.shared_randomstreams.RandomStreams(seed), True\n)\nais_run_funcs = []\nfor phi_func, psi_func in zip(phi_funcs, psi_funcs):\n pos_samples, log_weights, accepts, updates = ais_sampler.run(\n pos, None, inv_temps, phi_func, psi_func, hmc_params\n )\n ais_run = th.function(\n [pos, inv_temps],\n [pos_samples, log_weights, accepts],\n updates=updates\n )\n ais_run_funcs.append(ais_run)", "_____no_output_____" ], [ "for i in range(10):\n ais_exp_dir = os.path.join(exp_dir, 'h-ais', 'params-' + str(i))\n if not os.path.exists(ais_exp_dir):\n os.makedirs(ais_exp_dir)\n for num_temp in num_temps:\n settings = {\n 'dt': dt,\n 'num_temp': num_temp,\n 'temp_scale': temp_scale,\n 'n_step': num_step,\n 'mom_resample_coeff': (1. - 0.5**dt)**0.5\n }\n print('Parameters {0} num temp {1}'.format(i, num_temp))\n print('-' * 100)\n print(settings)\n settings_path = os.path.join(ais_exp_dir, 'settings-{0}.json'.format(num_temp))\n results_path = os.path.join(ais_exp_dir, 'results-{0}.npz'.format(num_temp))\n with open(settings_path, 'w') as f:\n json.dump(settings, f, indent=True)\n inv_temp_sched = sigmoidal_schedule(num_temp, temp_scale)\n num_dim = relaxation_list[i].n_dim_r\n pos_init = rng.normal(size=(num_runs, num_dim)).dot(\n var_covar_chol_list[i].T) + var_mean_list[i]\n start_time = time.time()\n pos_samples, log_weights, accepts = ais_run_funcs[i](\n pos_init, inv_temp_sched\n )\n sampling_time = time.time() - start_time\n print('Sampling time: {0:.2f}s'.format(sampling_time))\n log_norm_rmses = []\n mean_rmses = []\n covar_rmses = []\n for lw, ps in zip(\n log_weights.reshape((num_reps, -1)),\n pos_samples.reshape((num_reps, num_runs_per_rep, -1))):\n log_norm_rmses.append(\n rmse(np.log(np.exp(lw).mean(0)) + var_log_norm_list[i], \n true_log_norm_list[i])\n )\n probs = np.exp(lw)\n probs /= probs.sum()\n mean_est = (probs[:, None] * ps).sum(0)\n mean_rmses.append(\n rmse(true_mean_list[i], mean_est)\n )\n ps_zm = ps - mean_est\n covar_est = (ps_zm * probs[:, None]).T.dot(ps_zm)\n covar_rmses.append(\n rmse(true_covar_list[i], covar_est)\n )\n var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i])\n var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i])\n var_covar_rmse = rmse(true_covar_list[i], \n var_covar_chol_list[i].dot(var_covar_chol_list[i].T))\n print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}'\n .format(\n np.mean(log_norm_rmses) / var_log_norm_rmse, \n np.mean(mean_rmses) / var_mean_rmse, \n np.mean(covar_rmses) / var_covar_rmse\n )\n )\n np.savez(\n results_path, \n sampling_time=sampling_time, \n pos_samples=pos_samples, \n log_weights=log_weights, \n accepts=accepts,\n log_norm_rmses=np.array(log_norm_rmses),\n mean_rmses=np.array(mean_rmses),\n covar_rmses=np.array(covar_rmses),\n var_log_norm_rmse=var_log_norm_rmse,\n var_mean_rmse=var_mean_rmse,\n var_covar_rmse=var_covar_rmse\n )\n print('Saved to ' + results_path)\n print('-' * 100)", "_____no_output_____" ] ], [ [ "## Incremental RMSE helper", "_____no_output_____" ] ], [ [ "def rmse(x, y):\n return ((x - y)**2).mean()**0.5\n\ndef 
calculate_incremental_rmses(x_samples, probs_1, probs_0,\n true_log_norm, true_mean, true_covar):\n n_sample, n_chain, n_dim = x_samples.shape\n sum_probs_1_x = 0\n sum_probs_1_xx = 0\n sum_probs_1 = 0\n sum_probs_0 = 0\n log_norm_rmses = np.empty(n_sample) * np.nan\n mean_rmses = np.empty(n_sample) * np.nan\n covar_rmses = np.empty(n_sample) * np.nan\n for s in range(n_sample):\n p1 = probs_1[s]\n p0 = probs_0[s]\n x = x_samples[s]\n sum_probs_1_x += p1[:, None] * x\n sum_probs_1_xx += p1[:, None, None] * (x[:, :, None] * x[:, None, :])\n sum_probs_1 += p1\n sum_probs_0 += p0\n log_norm_est = np.log(sum_probs_1.sum(0)) - np.log(sum_probs_0.sum(0))\n mean_est = sum_probs_1_x.sum(0) / sum_probs_1.sum(0)\n covar_est = sum_probs_1_xx.sum(0) / sum_probs_1.sum(0) - np.outer(mean_est, mean_est)\n log_norm_rmses[s] = rmse(log_norm_est, true_log_norm)\n mean_rmses[s] = rmse(mean_est, true_mean)\n covar_rmses[s] = rmse(covar_est, true_covar)\n return log_norm_rmses, mean_rmses, covar_rmses", "_____no_output_____" ] ], [ [ "## Simulated Tempering", "_____no_output_____" ] ], [ [ "num_temp = 1000\ndt = 0.5\nnum_step = 20\ntemp_scale = 4.\nnum_reps = 10\nnum_runs_per_rep = 10\nnum_runs = num_reps * num_runs_per_rep\nmom_resample_coeff = 1.", "_____no_output_____" ], [ "pos = tt.matrix('pos')\nidx = tt.lvector('idx')\ninv_temps = tt.vector('inv_temps')\nnum_sample = tt.lscalar('num_sample')\nhmc_params = {\n 'dt': dt,\n 'n_step': num_step,\n 'mom_resample_coeff': mom_resample_coeff\n}\nst_sampler = disc_temp.SimulatedTemperingSampler(\n tt.shared_randomstreams.RandomStreams(seed), False\n)\nst_chain_funcs = []\nfor phi_func, psi_func in zip(phi_funcs, psi_funcs):\n pos_samples, idx_samples, probs_0, probs_1, accepts, updates = st_sampler.chain(\n pos, None, idx, inv_temps, 0, phi_func, psi_func, num_sample, hmc_params\n )\n st_chain = th.function(\n [pos, idx, inv_temps, num_sample],\n [pos_samples, idx_samples, probs_0, probs_1, accepts],\n updates=updates\n )\n st_chain_funcs.append(st_chain)", "_____no_output_____" ], [ "num_sample = 40000\nfor i in range(10):\n st_exp_dir = os.path.join(exp_dir, 'st', 'params-' + str(i))\n if not os.path.exists(st_exp_dir):\n os.makedirs(st_exp_dir)\n settings = {\n 'dt': dt,\n 'num_temp': num_temp,\n 'num_sample': num_sample,\n 'num_step': num_step,\n 'temp_scale': temp_scale,\n 'mom_resample_coeff': mom_resample_coeff\n }\n print('Parameters {0}'.format(i))\n print('-' * 100)\n print(settings)\n settings_path = os.path.join(st_exp_dir, 'settings.json')\n results_path = os.path.join(st_exp_dir, 'results.npz')\n with open(settings_path, 'w') as f:\n json.dump(settings, f, indent=True)\n inv_temp_sched = sigmoidal_schedule(num_temp, temp_scale)\n num_dim = relaxation_list[i].n_dim_r\n pos_init = rng.normal(size=(num_runs, num_dim)).dot(\n var_covar_chol_list[i].T) + var_mean_list[i]\n idx_init = np.zeros(num_runs, 'int64')\n start_time = time.time()\n pos_samples, idx_samples, probs_0, probs_1, accepts = st_chain_funcs[i](\n pos_init, idx_init, inv_temp_sched, num_sample\n )\n sampling_time = time.time() - start_time\n print('Sampling time: {0:.2f}s'.format(sampling_time))\n log_norm_rmses = np.empty((num_reps, num_sample))\n mean_rmses = np.empty((num_reps, num_sample))\n covar_rmses = np.empty((num_reps, num_sample))\n for r in range(num_reps):\n log_norm_rmses[r], mean_rmses[r], covar_rmses[r] = calculate_incremental_rmses(\n pos_samples[:, r:(r+1)*num_runs_per_rep], \n probs_1[:, r:(r+1)*num_runs_per_rep], \n probs_0[:, r:(r+1)*num_runs_per_rep], \n 
true_log_norm_list[i] - var_log_norm_list[i],\n true_mean_list[i], true_covar_list[i]\n )\n var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i])\n var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i])\n var_covar_rmse = rmse(true_covar_list[i], \n var_covar_chol_list[i].dot(var_covar_chol_list[i].T))\n print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}'\n .format(\n np.mean(log_norm_rmses[:, -1]) / var_log_norm_rmse, \n np.mean(mean_rmses[:, -1]) / var_mean_rmse, \n np.mean(covar_rmses[:, -1]) / var_covar_rmse\n )\n )\n fig, axes = plt.subplots(1, 3, figsize=(9, 3))\n axes[0].semilogy(log_norm_rmses.mean(0) / var_log_norm_rmse)\n axes[0].set_title('Log norm RMSE')\n axes[1].semilogy(mean_rmses.mean(0) / var_mean_rmse)\n axes[1].set_title('Mean RMSE')\n axes[2].semilogy(covar_rmses.mean(0) / var_covar_rmse)\n axes[2].set_title('Covariance RMSE')\n plt.show()\n np.savez(\n results_path, \n sampling_time=sampling_time, \n pos_samples=pos_samples,\n idx_samples=idx_samples,\n probs_1=probs_1,\n probs_0=probs_0,\n accepts=accepts,\n log_norm_rmses=log_norm_rmses,\n mean_rmses=mean_rmses,\n covar_rmses=covar_rmses,\n var_log_norm_rmse=var_log_norm_rmse,\n var_mean_rmse=var_mean_rmse,\n var_covar_rmse=var_covar_rmse\n )\n print('Saved to ' + results_path)\n print('-' * 100)", "_____no_output_____" ] ], [ [ "## Continuous tempering", "_____no_output_____" ], [ "### Gibbs", "_____no_output_____" ] ], [ [ "dt = 0.5\nnum_step = 20\nnum_reps = 10\nnum_runs_per_rep = 10\nnum_runs = num_reps * num_runs_per_rep\nmom_resample_coeff = 1.", "_____no_output_____" ], [ "pos = tt.matrix('pos')\nidx = tt.lvector('idx')\ninv_temp = tt.vector('inv_temp')\nnum_sample = tt.lscalar('n_sample')\nhmc_params = {\n 'dt': dt,\n 'n_step': num_step,\n 'mom_resample_coeff': mom_resample_coeff\n}\ngct_sampler = cont_temp.GibbsContinuousTemperingSampler(\n tt.shared_randomstreams.RandomStreams(seed), False\n)\ngct_chain_funcs = []\nfor phi_func, psi_func in zip(phi_funcs, psi_funcs):\n pos_samples, inv_temp_samples, probs_0, probs_1, accepts, updates = gct_sampler.chain(\n pos, None, inv_temp, phi_func, psi_func, num_sample, hmc_params\n )\n gct_chain = th.function(\n [pos, inv_temp, num_sample],\n [pos_samples, inv_temp_samples, probs_0, probs_1, accepts],\n updates=updates\n )\n gct_chain_funcs.append(gct_chain)", "_____no_output_____" ], [ "num_sample = 60000\nfor i in range(10):\n gct_exp_dir = os.path.join(exp_dir, 'gibbs-ct', 'params-' + str(i))\n if not os.path.exists(gct_exp_dir):\n os.makedirs(gct_exp_dir)\n settings = {\n 'dt': dt,\n 'num_sample': num_sample,\n 'num_step': num_step,\n 'mom_resample_coeff': mom_resample_coeff\n }\n print('Parameters {0}'.format(i))\n print('-' * 100)\n print(settings)\n settings_path = os.path.join(gct_exp_dir, 'settings.json')\n results_path = os.path.join(gct_exp_dir, 'results.npz')\n with open(settings_path, 'w') as f:\n json.dump(settings, f, indent=True)\n num_dim = relaxation_list[i].n_dim_r\n pos_init = rng.normal(size=(num_runs, num_dim)).dot(\n var_covar_chol_list[i].T) + var_mean_list[i]\n inv_temp_init = np.zeros(num_runs)\n start_time = time.time()\n pos_samples, inv_temp_samples, probs_0, probs_1, accepts = gct_chain_funcs[i](\n pos_init, inv_temp_init, num_sample\n )\n sampling_time = time.time() - start_time\n print('Sampling time: {0:.2f}s'.format(sampling_time))\n log_norm_rmses = np.empty((num_reps, num_sample))\n mean_rmses = np.empty((num_reps, num_sample))\n covar_rmses = np.empty((num_reps, num_sample))\n for r in range(num_reps):\n 
log_norm_rmses[r], mean_rmses[r], covar_rmses[r] = calculate_incremental_rmses(\n pos_samples[:, r:(r+1)*num_runs_per_rep], \n probs_1[:, r:(r+1)*num_runs_per_rep], \n probs_0[:, r:(r+1)*num_runs_per_rep], \n true_log_norm_list[i] - var_log_norm_list[i],\n true_mean_list[i], true_covar_list[i]\n )\n var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i])\n var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i])\n var_covar_rmse = rmse(true_covar_list[i], \n var_covar_chol_list[i].dot(var_covar_chol_list[i].T))\n print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}'\n .format(\n np.mean(log_norm_rmses[:, -1]) / var_log_norm_rmse, \n np.mean(mean_rmses[:, -1]) / var_mean_rmse, \n np.mean(covar_rmses[:, -1]) / var_covar_rmse\n )\n )\n fig, axes = plt.subplots(1, 3, figsize=(9, 3))\n axes[0].semilogy(log_norm_rmses.mean(0) / var_log_norm_rmse)\n axes[0].set_title('Log norm RMSE')\n axes[1].semilogy(mean_rmses.mean(0) / var_mean_rmse)\n axes[1].set_title('Mean RMSE')\n axes[2].semilogy(covar_rmses.mean(0) / var_covar_rmse)\n axes[2].set_title('Covariance RMSE')\n plt.show()\n np.savez(\n results_path, \n sampling_time=sampling_time, \n pos_samples=pos_samples,\n inv_temp_samples=inv_temp_samples,\n probs_1=probs_1,\n probs_0=probs_0,\n accepts=accepts,\n log_norm_rmses=log_norm_rmses,\n mean_rmses=mean_rmses,\n covar_rmses=covar_rmses,\n var_log_norm_rmse=var_log_norm_rmse,\n var_mean_rmse=var_mean_rmse,\n var_covar_rmse=var_covar_rmse\n )\n print('Saved to ' + results_path)\n print('-' * 100)", "_____no_output_____" ] ], [ [ "### Joint", "_____no_output_____" ] ], [ [ "dt = 0.5\nnum_step = 20\ntemp_scale = 1.\nnum_reps = 10\nnum_runs_per_rep = 10\nnum_runs = num_reps * num_runs_per_rep\nmom_resample_coeff = 1.", "_____no_output_____" ], [ "pos = tt.matrix('pos')\ntmp_ctrl = tt.vector('tmp_ctrl')\nnum_sample = tt.lscalar('n_sample')\nctrl_func = ctrl.SigmoidalControlFunction(temp_scale)\nhmc_params = {\n 'dt': dt,\n 'n_step': num_step,\n 'mom_resample_coeff': mom_resample_coeff\n}\njct_sampler = cont_temp.JointContinuousTemperingSampler(\n tt.shared_randomstreams.RandomStreams(seed), False\n)\njct_chain_funcs = []\nfor phi_func, psi_func in zip(phi_funcs, psi_funcs):\n (pos_samples, tmp_ctrl_sample, inv_temp_samples, \n probs_0, probs_1, accepts, updates) = jct_sampler.chain(\n pos, tmp_ctrl, None, phi_func, psi_func, ctrl_func, num_sample, hmc_params\n )\n jct_chain = th.function(\n [pos, tmp_ctrl, num_sample],\n [pos_samples, inv_temp_samples, probs_0, probs_1, accepts],\n updates=updates\n )\n jct_chain_funcs.append(jct_chain)", "_____no_output_____" ], [ "num_sample = 50000\nfor i in range(10):\n jct_exp_dir = os.path.join(exp_dir, 'joint-ct', 'params-' + str(i))\n if not os.path.exists(jct_exp_dir):\n os.makedirs(jct_exp_dir)\n settings = {\n 'dt': dt,\n 'num_sample': num_sample,\n 'num_step': num_step,\n 'temp_scale': temp_scale,\n 'mom_resample_coeff': mom_resample_coeff\n }\n print('Parameters {0}'.format(i))\n print('-' * 100)\n print(settings)\n settings_path = os.path.join(jct_exp_dir, 'settings.json')\n results_path = os.path.join(jct_exp_dir, 'results.npz')\n with open(settings_path, 'w') as f:\n json.dump(settings, f, indent=True)\n num_dim = relaxation_list[i].n_dim_r\n pos_init = rng.normal(size=(num_runs, num_dim)).dot(\n var_covar_chol_list[i].T) + var_mean_list[i]\n tmp_ctrl_init = np.zeros(num_runs) - 10.\n start_time = time.time()\n pos_samples, inv_temp_samples, probs_0, probs_1, accepts = jct_chain_funcs[i](\n pos_init, tmp_ctrl_init, num_sample\n )\n 
sampling_time = time.time() - start_time\n print('Sampling time: {0:.2f}s'.format(sampling_time))\n log_norm_rmses = np.empty((num_reps, num_sample))\n mean_rmses = np.empty((num_reps, num_sample))\n covar_rmses = np.empty((num_reps, num_sample))\n for r in range(num_reps):\n log_norm_rmses[r], mean_rmses[r], covar_rmses[r] = calculate_incremental_rmses(\n pos_samples[:, r:(r+1)*num_runs_per_rep], \n probs_1[:, r:(r+1)*num_runs_per_rep], \n probs_0[:, r:(r+1)*num_runs_per_rep], \n true_log_norm_list[i] - var_log_norm_list[i],\n true_mean_list[i], true_covar_list[i]\n )\n var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i])\n var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i])\n var_covar_rmse = rmse(true_covar_list[i], \n var_covar_chol_list[i].dot(var_covar_chol_list[i].T))\n print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}'\n .format(\n np.mean(log_norm_rmses[:, -1]) / var_log_norm_rmse, \n np.mean(mean_rmses[:, -1]) / var_mean_rmse, \n np.mean(covar_rmses[:, -1]) / var_covar_rmse\n )\n )\n fig, axes = plt.subplots(1, 3, figsize=(9, 3))\n axes[0].semilogy(log_norm_rmses.mean(0) / var_log_norm_rmse)\n axes[0].set_title('Log norm RMSE')\n axes[1].semilogy(mean_rmses.mean(0) / var_mean_rmse)\n axes[1].set_title('Mean RMSE')\n axes[2].semilogy(covar_rmses.mean(0) / var_covar_rmse)\n axes[2].set_title('Covariance RMSE')\n plt.show()\n np.savez(\n results_path, \n sampling_time=sampling_time, \n pos_samples=pos_samples,\n inv_temp_samples=inv_temp_samples,\n probs_1=probs_1,\n probs_0=probs_0,\n accepts=accepts,\n log_norm_rmses=log_norm_rmses,\n mean_rmses=mean_rmses,\n covar_rmses=covar_rmses,\n var_log_norm_rmse=var_log_norm_rmse,\n var_mean_rmse=var_mean_rmse,\n var_covar_rmse=var_covar_rmse\n )\n print('Saved to ' + results_path)\n print('-' * 100)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7a7b7683abf6c11659a9ed87888e1624ec02e73
14,503
ipynb
Jupyter Notebook
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
103d86cc589e4a38d24b7f426fa51c68be35c993
[ "MIT" ]
null
null
null
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
103d86cc589e4a38d24b7f426fa51c68be35c993
[ "MIT" ]
null
null
null
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
103d86cc589e4a38d24b7f426fa51c68be35c993
[ "MIT" ]
null
null
null
27.261278
612
0.558988
[ [ [ "# Topological Data Analysis with Python and the Gudhi Library \n\n# Introduction to simplex trees ", "_____no_output_____" ], [ "**Authors** : F. Chazal and B. Michel", "_____no_output_____" ], [ "TDA typically aims at extracting topological signatures from a point cloud in $\\mathbb R^d$ or in a general metric space. By studying the topology of the point clouds, we actually mean studying the topology of unions of balls centered at the point cloud (offsets). However, non-discrete sets such as offsets, and also continuous mathematical shapes like curves, surfaces and more generally manifolds, cannot easily be encoded as finite discrete structures. [Simplicial complexes](https://en.wikipedia.org/wiki/Simplicial_complex) are therefore used in computational geometry to approximate such shapes.\n\nA simplicial complex is a set of [simplices](https://en.wikipedia.org/wiki/Simplex). It can be seen as a higher dimensional generalization of a graph. It is a mathematical object that is both topological and combinatorial, which makes it particularly useful for TDA. Here is an example of a simplicial complex:\n\n![title](Images/Pers14.PNG)\n \nA filtration is an increasing sequence of sub-complexes of a simplicial complex $\\mathcal K$. It can be seen as ordering the simplices included in the complex. Indeed, simplicial complexes often come with a specific order, as for [Vietoris-Rips complexes](https://en.wikipedia.org/wiki/Vietoris%E2%80%93Rips_complex), [Cech complexes](https://en.wikipedia.org/wiki/%C4%8Cech_complex) and [alpha complexes](https://en.wikipedia.org/wiki/Alpha_shape#Alpha_complex). ", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nfrom os import chdir\nimport numpy as np\nimport gudhi as gd\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "In Gudhi, filtered simplicial complexes are encoded through a data structure called a simplex tree. \n![](https://gudhi.inria.fr/python/latest/_images/Simplex_tree_representation.png)\n\nThis notebook illustrates the use of simplex trees to represent simplicial complexes from data points.\n\nSee the [Python Gudhi documentation](https://gudhi.inria.fr/python/latest/simplex_tree_ref.html#) for more details on simplex trees.", "_____no_output_____" ], [ "### My first simplex tree", "_____no_output_____" ], [ "Let's create our first simplicial complex, represented by a simplex tree:", "_____no_output_____" ] ], [ [ "st = gd.SimplexTree()", "_____no_output_____" ] ], [ [ "The `st` object has class `SimplexTree`. For now, `st` is an empty simplex tree.\n\nThe `SimplexTree` class has several useful methods for the practice of TDA. For instance, there are methods to define new types of simplicial complexes from existing ones.\n\nThe `insert()` method can be used to insert simplices in the simplex tree. 
In the simplex tree:\n\n- vertices (0-dimensional simplices) are represented with integers, \n- edges (1-dimensional simplices) are represented with a length-2 list of integers (corresponding to the two vertices involved in the edge),\n- triangles (2-dimensional simplices) by three integers are represented with a length-3 list of integers (corresponding to the three vertices involved in the triangle),\n- etc.\n\nFor example, the following piece of code inserts three edges into the simplex tree:", "_____no_output_____" ] ], [ [ "st.insert([0, 1])\nst.insert([1, 2])\nst.insert([3, 1])", "_____no_output_____" ] ], [ [ "When the simplex is successfully inserted into the simplex tree, the `insert()` method outputs `True` as you can see from the execution of the above code. On the contrary, if the simplex is already in the filtration, the `insert()` method outputs `False`:", "_____no_output_____" ] ], [ [ "st.insert([3, 1])", "_____no_output_____" ] ], [ [ "We obtain the list of all the simplices in the simplex tree with the `get_filtration()` method : ", "_____no_output_____" ] ], [ [ "st_gen = st.get_filtration() ", "_____no_output_____" ] ], [ [ "The output `st_gen` is a generator and we thus we can iterate on its elements. Each element in the list is a tuple that contains a simplex and its **filtration value**.", "_____no_output_____" ] ], [ [ "for splx in st_gen :\n print(splx)", "([0], 0.0)\n([1], 0.0)\n([0, 1], 0.0)\n([2], 0.0)\n([1, 2], 0.0)\n([3], 0.0)\n([1, 3], 0.0)\n" ] ], [ [ "Intuitively, the filtration value of a simplex in a filtered complex acts as a *time stamp* corresponding to \"when\" the simplex appears in the filtration. By default, the `insert()` method assigns a filtration value equal to 0.\n\nNotice that inserting an edge automatically inserts its vertices (if they were not already in the complex) in order to satisfy the **inclusion property** of a filtered complex: any simplex with filtration value $t$ must have all its faces in the filtered complex, with filtration values smaller than or equal to $t$.", "_____no_output_____" ], [ "### Simplex tree description", "_____no_output_____" ], [ "The dimension of a simplical complex is the largest dimension of the simplices in it. It can be retrieved by the simplex tree `dimension()` method:", "_____no_output_____" ] ], [ [ "st.dimension()", "_____no_output_____" ] ], [ [ "It is possible to compute the number of vertices in a simplex tree via the `num_vertices()` method:", "_____no_output_____" ] ], [ [ "st.num_vertices()", "_____no_output_____" ] ], [ [ "The number of simplices in the simplex tree is given by", "_____no_output_____" ] ], [ [ "st.num_simplices()", "_____no_output_____" ] ], [ [ "The [$d$-skeleton](https://en.wikipedia.org/wiki/N-skeleton) -- which is the union of all simplices of dimensions smaller than or equal to $d$ -- can be also computed with the `get_skeleton()` method. This method takes as argument the dimension of the desired skeleton. To retrieve the topological graph from a simplex tree, we can therefore call:", "_____no_output_____" ] ], [ [ "print(st.get_skeleton(1))", "[([0, 1], 0.0), ([0], 0.0), ([1, 2], 0.0), ([1, 3], 0.0), ([1], 0.0), ([2], 0.0), ([3], 0.0)]\n" ] ], [ [ "One can also check whether a simplex is already in the filtration. This is achieved with the `find()` method:", "_____no_output_____" ] ], [ [ "st.find([2, 4])", "_____no_output_____" ] ], [ [ "### Filtration values\n\nWe can insert simplices at a given filtration value. 
For example, the following piece of code will insert three triangles in the simplex tree at three different filtration values:", "_____no_output_____" ] ], [ [ "st.insert([0, 1, 2], filtration = 0.1)\nst.insert([1, 2, 3], filtration = 0.2)\nst.insert([0, 1, 3], filtration = 0.4)\nst_gen = st.get_filtration() \n\nfor splx in st_gen :\n print(splx)", "([0], 0.0)\n([1], 0.0)\n([0, 1], 0.0)\n([2], 0.0)\n([1, 2], 0.0)\n([3], 0.0)\n([1, 3], 0.0)\n([0, 2], 0.1)\n([0, 1, 2], 0.1)\n([2, 3], 0.2)\n([1, 2, 3], 0.2)\n([0, 3], 0.4)\n([0, 1, 3], 0.4)\n" ] ], [ [ "As you can see, when we add a new simplex with a given filtration value, all its faces that were not already in the complex are added with the same filtration value: here the edge `[0, 3]` was not part of the tree before including the triangle `[0, 1, 3]` and is thus inserted with the filtration value of the inserted triangle. On the other hand, the filtration value of the faces of added simplices that were already part of the tree before is left alone. One can modify the filtration value of any simplex included in the tree with the `assign_filtration()` method:", "_____no_output_____" ] ], [ [ "st.assign_filtration([3], filtration = 0.8)\nst_gen = st.get_filtration()\nfor splx in st_gen:\n print(splx) ", "([0], 0.0)\n([1], 0.0)\n([0, 1], 0.0)\n([2], 0.0)\n([1, 2], 0.0)\n([1, 3], 0.0)\n([0, 2], 0.1)\n([0, 1, 2], 0.1)\n([2, 3], 0.2)\n([1, 2, 3], 0.2)\n([0, 3], 0.4)\n([0, 1, 3], 0.4)\n([3], 0.8)\n" ] ], [ [ "Notice that, the vertex `[3]` has been moved to the end of the filtration because it now has the highest filtration value. However, this simplex tree is not a filtered simplicial complex anymore because the filtration value of the vertex `[3]` is higher than the filtration value of the edge `[2 3]`. We can use the `make_filtration_non_decreasing()` method to solve the problem:", "_____no_output_____" ] ], [ [ "st.make_filtration_non_decreasing()\nst_gen = st.get_filtration()\nfor splx in st_gen:\n print(splx) ", "([0], 0.0)\n([1], 0.0)\n([0, 1], 0.0)\n([2], 0.0)\n([1, 2], 0.0)\n([0, 2], 0.1)\n([0, 1, 2], 0.1)\n([3], 0.8)\n([0, 3], 0.8)\n([1, 3], 0.8)\n([0, 1, 3], 0.8)\n([2, 3], 0.8)\n([1, 2, 3], 0.8)\n" ] ], [ [ "Finally, it is worth mentioning the `filtration()` method, which returns the filtration value of a given simplex in the filtration :", "_____no_output_____" ] ], [ [ "st.filtration([2, 3])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7a7c2dfa80dadfe52b95371561864ce0f1b5688
546,313
ipynb
Jupyter Notebook
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
f164311e5f2086a524fb4ca864e75386738e0b4f
[ "MIT" ]
null
null
null
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
f164311e5f2086a524fb4ca864e75386738e0b4f
[ "MIT" ]
null
null
null
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
f164311e5f2086a524fb4ca864e75386738e0b4f
[ "MIT" ]
null
null
null
915.097152
106,916
0.951888
[ [ [ "# Non-Linear Classifiers", "_____no_output_____" ] ], [ [ "# Global variables for testing changes to this notebook quickly\nRANDOM_SEED = 0\nNUM_FOLDS = 5", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nimport time\nimport math\nimport os\nimport pyarrow\nimport gc\n\n# scikit-learn optimization\nfrom sklearnex import patch_sklearn\npatch_sklearn()\n\n# Model evaluation\nfrom sklearn.base import clone\nfrom sklearn.model_selection import StratifiedKFold, train_test_split\nfrom sklearn.metrics import accuracy_score\n\n# Plotting\nimport matplotlib\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\nfrom IPython.display import Image\n\n# Hide warnings\nimport warnings\nwarnings.filterwarnings('ignore')", "Intel(R) Extension for Scikit-learn* enabled (https://github.com/intel/scikit-learn-intelex)\n" ] ], [ [ "# Scoring Function", "_____no_output_____" ] ], [ [ "# Scoring/Training Baseline Function\ndef score_model(sklearn_model, preprocessing = None):\n \n # Store the holdout predictions\n oof_preds = np.zeros((train.shape[0],))\n scores = np.zeros(NUM_FOLDS)\n times = np.zeros(NUM_FOLDS)\n print('')\n \n # Stratified k-fold cross-validation\n skf = StratifiedKFold(n_splits = NUM_FOLDS, shuffle = True, random_state = RANDOM_SEED)\n for fold, (train_idx, valid_idx) in enumerate(skf.split(train, target_bins)):\n \n # Training and Validation Sets\n X_train, y_train = train[features].iloc[train_idx], train['target'].iloc[train_idx]\n X_valid, y_valid = train[features].iloc[valid_idx], train['target'].iloc[valid_idx]\n train_weight, valid_weight = train['sample_weight'].iloc[train_idx], train['sample_weight'].iloc[valid_idx]\n \n # Preprocessing\n start = time.time()\n if preprocessing:\n X_train = preprocessing.fit_transform(X_train)\n X_valid = preprocessing.transform(X_valid)\n \n # Create model\n model = clone(sklearn_model)\n try:\n model.fit(X_train, y_train, sample_weight = train_weight)\n except:\n model.fit(X_train, y_train)\n \n # validation\n valid_preds = model.predict(X_valid)\n scores[fold] = accuracy_score(y_valid, valid_preds, sample_weight = valid_weight)\n oof_preds[valid_idx] = valid_preds\n end = time.time()\n print(f'Fold {fold}: {round(scores[fold], 5)} accuracy in {round(end-start,2)}s.')\n times[fold] = end-start\n \n mask1, mask10 = train.gcd == 1, train.gcd == 10 \n mask1000, mask10000 = train.gcd == 1000, train.gcd == 10000\n print(\"\\nAccuracy (1M Reads):\", round(accuracy_score(oof_preds[mask1], train['target'].loc[mask1], sample_weight = train['sample_weight'].loc[mask1]), 5))\n print(\"Accuracy (100k Reads):\", round(accuracy_score(oof_preds[mask10], train['target'].loc[mask10], sample_weight = train['sample_weight'].loc[mask10]), 5))\n print(\"Accuracy (1k Reads):\", round(accuracy_score(oof_preds[mask1000], train['target'].loc[mask1000], sample_weight = train['sample_weight'].loc[mask1000]), 5))\n print(\"Accuracy (100 Reads):\", round(accuracy_score(oof_preds[mask10000], train['target'].loc[mask10000], sample_weight = train['sample_weight'].loc[mask10000]), 5))\n print(\"Out-of-Fold Accuracy:\", round(accuracy_score(oof_preds, train['target'], sample_weight = train['sample_weight']), 5))\n print(f'Training Time: {round(times.sum(), 2)}s')\n \n return oof_preds", "_____no_output_____" ], [ "from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix, accuracy_score\nimport matplotlib.pyplot as plt\n\n# Plot confusion matrix\ndef plot_confusion_matrix(true_values, pred_values, gcds, plot_title = \"Confusion 
Matrix\"):\n \n gcd = [[1,10],[1000,10000]]\n \n # Confusion matrix\n fig, ax = plt.subplots(2, 2, figsize = (12,9))\n for row in range(2):\n for col in range(2):\n idx = 2*row + col\n cm = confusion_matrix(true_values[gcds == gcd[row][col]], pred_values[gcds == gcd[row][col]])\n np.fill_diagonal(cm, 0)\n disp = ConfusionMatrixDisplay(confusion_matrix = cm)\n disp.plot(ax = ax[row,col])\n plt.show()", "_____no_output_____" ] ], [ [ "# Load Data", "_____no_output_____" ] ], [ [ "%%time\nfrom sklearn.preprocessing import LabelEncoder\n\ntrain = pd.read_feather('../data/train.feather')\nfeatures = [x for x in train.columns if x not in ['row_id','target','sample_weight','gcd']]\n\nencoder = LabelEncoder()\ntrain['target'] = encoder.fit_transform(train['target'])\ntarget_bins = train['target'].astype(str) + train['gcd'].astype(str)\n\nprint(f'Training Samples: {len(train)}')", "Training Samples: 123993\nCPU times: total: 1.12 s\nWall time: 195 ms\n" ] ], [ [ "# Naive Bayes", "_____no_output_____" ] ], [ [ "from sklearn.naive_bayes import MultinomialNB\nfrom sklearn.preprocessing import PowerTransformer, MinMaxScaler\nfrom math import factorial\n\ndef fix_bias(input_df, add = True):\n df = input_df.copy()\n bias = lambda w, x, y, z: factorial(10) / (factorial(w) * factorial(x) * factorial(y) * factorial(z) * 4**10)\n \n for col in features:\n w = int(col[1:col.index('T')])\n x = int(col[col.index('T')+1:col.index('G')])\n y = int(col[col.index('G')+1:col.index('C')])\n z = int(col[col.index('C')+1:])\n if add:\n df[col] = df[col] + bias(w, x, y, z)\n else:\n df[col] = df[col] - bias(w, x, y, z)\n return df", "_____no_output_____" ], [ "# Naive Bayes\noof_preds = score_model(\n MultinomialNB(),\n MinMaxScaler()\n)\n\nplot_confusion_matrix(train['target'], oof_preds, train['gcd'])", "\nFold 0: 0.55738 accuracy in 0.43s.\nFold 1: 0.56818 accuracy in 0.29s.\nFold 2: 0.54745 accuracy in 0.31s.\nFold 3: 0.56104 accuracy in 0.31s.\nFold 4: 0.55401 accuracy in 0.31s.\n\nAccuracy (1M Reads): 0.61076\nAccuracy (100k Reads): 0.61098\nAccuracy (1k Reads): 0.56488\nAccuracy (100 Reads): 0.4438\nOut-of-Fold Accuracy: 0.55762\nTraining Time: 1.65s\n" ] ], [ [ "# KNN Classifier", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.preprocessing import StandardScaler", "_____no_output_____" ], [ "# KNN\noof_preds = score_model(\n KNeighborsClassifier(n_neighbors = 1),\n StandardScaler()\n)\n\nplot_confusion_matrix(train['target'], oof_preds, train['gcd'])", "\nFold 0: 0.91848 accuracy in 3.23s.\nFold 1: 0.91989 accuracy in 3.25s.\nFold 2: 0.91832 accuracy in 3.45s.\nFold 3: 0.91746 accuracy in 3.47s.\nFold 4: 0.91839 accuracy in 3.42s.\n\nAccuracy (1M Reads): 1.0\nAccuracy (100k Reads): 0.99998\nAccuracy (1k Reads): 0.84836\nAccuracy (100 Reads): 0.82578\nOut-of-Fold Accuracy: 0.91851\nTraining Time: 16.82s\n" ] ], [ [ "# Radius Neighbors", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import RadiusNeighborsClassifier\n\n# Radius Neighbors\noof_preds = score_model(\n RadiusNeighborsClassifier(\n n_jobs = -1,\n outlier_label = 'most_frequent',\n ),\n StandardScaler()\n)\n\nplot_confusion_matrix(train['target'], oof_preds, train['gcd'])", "\nFold 0: 0.36938 accuracy in 55.13s.\nFold 1: 0.36505 accuracy in 54.6s.\nFold 2: 0.36727 accuracy in 54.28s.\nFold 3: 0.37093 accuracy in 54.42s.\nFold 4: 0.37039 accuracy in 54.55s.\n\nAccuracy (1M Reads): 0.88689\nAccuracy (100k Reads): 0.38838\nAccuracy (1k Reads): 0.09996\nAccuracy (100 Reads): 0.09964\nOut-of-Fold 
Accuracy: 0.3686\nTraining Time: 272.97s\n" ] ], [ [ "# Nearest Centroid", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import NearestCentroid\n\n# Nearest Centroid\noof_preds = score_model(\n NearestCentroid(),\n StandardScaler()\n)\n\nplot_confusion_matrix(train['target'], oof_preds, train['gcd'])", "\nFold 0: 0.51745 accuracy in 0.52s.\nFold 1: 0.52542 accuracy in 0.51s.\nFold 2: 0.5295 accuracy in 0.55s.\nFold 3: 0.53634 accuracy in 0.56s.\nFold 4: 0.51784 accuracy in 0.53s.\n\nAccuracy (1M Reads): 0.56773\nAccuracy (100k Reads): 0.57384\nAccuracy (1k Reads): 0.55603\nAccuracy (100 Reads): 0.40347\nOut-of-Fold Accuracy: 0.52529\nTraining Time: 2.66s\n" ] ], [ [ "# Support Vector Machines", "_____no_output_____" ] ], [ [ "from sklearn.svm import SVC\n\n# Polynomial SVM\noof_preds = score_model(\n SVC(kernel = \"poly\", degree = 2, coef0 = 1),\n StandardScaler()\n)\n\nplot_confusion_matrix(train['target'], oof_preds, train['gcd'])", "\nFold 0: 0.90259 accuracy in 17.89s.\nFold 1: 0.90317 accuracy in 18.06s.\nFold 2: 0.90762 accuracy in 17.77s.\nFold 3: 0.90415 accuracy in 17.83s.\nFold 4: 0.90168 accuracy in 17.53s.\n\nAccuracy (1M Reads): 0.95937\nAccuracy (100k Reads): 0.9743\nAccuracy (1k Reads): 0.92233\nAccuracy (100 Reads): 0.75928\nOut-of-Fold Accuracy: 0.90384\nTraining Time: 89.09s\n" ], [ "# Polynomial SVM\noof_preds = score_model(\n SVC(kernel = \"rbf\"),\n StandardScaler()\n)\n\nplot_confusion_matrix(train['target'], oof_preds, train['gcd'])", "\nFold 0: 0.93306 accuracy in 28.33s.\nFold 1: 0.92856 accuracy in 27.51s.\nFold 2: 0.92868 accuracy in 29.49s.\nFold 3: 0.93257 accuracy in 29.49s.\nFold 4: 0.92989 accuracy in 28.31s.\n\nAccuracy (1M Reads): 0.96342\nAccuracy (100k Reads): 0.98104\nAccuracy (1k Reads): 0.9288\nAccuracy (100 Reads): 0.84891\nOut-of-Fold Accuracy: 0.93055\nTraining Time: 143.12s\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a7ce2b57edc31264c90ebd4ed5524169347c6f
4,959
ipynb
Jupyter Notebook
atomic_orbitals_wave_function.ipynb
OpenDreamKit/k3d_demo
1281b2944f5ca9b16ee3b7b2249f4d8a5baf0067
[ "MIT" ]
1
2020-09-17T18:25:55.000Z
2020-09-17T18:25:55.000Z
atomic_orbitals_wave_function.ipynb
OpenDreamKit/k3d_demo
1281b2944f5ca9b16ee3b7b2249f4d8a5baf0067
[ "MIT" ]
null
null
null
atomic_orbitals_wave_function.ipynb
OpenDreamKit/k3d_demo
1281b2944f5ca9b16ee3b7b2249f4d8a5baf0067
[ "MIT" ]
4
2020-04-24T13:37:54.000Z
2021-03-03T08:54:13.000Z
27.247253
428
0.483162
[ [ [ "# Hydrogen atom\n\n\\begin{equation}\n\\label{eq1}\n-\\frac{\\hbar^2}{2 \\mu} \\left[ \\frac{1}{r^2} \\frac{\\partial }{\\partial r} \\left( r^2 \\frac{ \\partial \\psi}{\\partial r}\\right) + \\frac{1}{r^2 \\sin \\theta} \\frac{\\partial }{\\partial \\theta} \\left( \\sin \\theta \\frac{\\partial \\psi}{\\partial \\theta}\\right) + \\frac{1}{r^2 \\sin^2 \\theta} \\frac{\\partial^2 \\psi}{\\partial \\phi^2} \\right] - \\frac{e^2}{ 4 \\pi \\epsilon_0 r} \\psi= E \\psi\n\\end{equation}\n\n\n\\begin{equation}\n\\label{eqsol1}\n\\psi_{n\\ell m}(r,\\vartheta,\\varphi) = \\sqrt {{\\left ( \\frac{2}{n a^*_0} \\right )}^3 \\frac{(n-\\ell-1)!}{2n(n+\\ell)!}} e^{- \\rho / 2} \\rho^{\\ell} L_{n-\\ell-1}^{2\\ell+1}(\\rho) Y_{\\ell}^{m}(\\vartheta, \\varphi ) \n\\end{equation}\n", "_____no_output_____" ] ], [ [ "import k3d\nfrom ipywidgets import interact, FloatSlider\n\nimport numpy as np\nimport scipy.special\nimport scipy.misc\n\nr = lambda x,y,z: np.sqrt(x**2+y**2+z**2)\ntheta = lambda x,y,z: np.arccos(z/r(x,y,z))\nphi = lambda x,y,z: np.arctan2(y,x)\n\na0 = 1.\nR = lambda r,n,l: (2*r/(n*a0))**l * np.exp(-r/n/a0) * scipy.special.genlaguerre(n-l-1,2*l+1)(2*r/n/a0)\nWF = lambda r,theta,phi,n,l,m: R(r,n,l) * scipy.special.sph_harm(m,l,phi,theta)\nabsWF = lambda r,theta,phi,n,l,m: abs(WF(r,theta,phi,n,l,m)).astype(np.float32)**2\nN = 50j\na = 30.0\nx,y,z = np.ogrid[-a:a:N,-a:a:N,-a:a:N]\nx = x.astype(np.float32)\ny = y.astype(np.float32)\nz = z.astype(np.float32)", "_____no_output_____" ], [ "orbital = WF(r(x,y,z),theta(x,y,z),phi(x,y,z),4,1,0).real.astype(np.float32) # 4p", "_____no_output_____" ], [ "plt_vol = k3d.volume(orbital)\nplt_label = k3d.text2d(r'n=1\\; l=0\\; m=0',(0.,0.))\nplot = k3d.plot()\nplot += plt_vol\nplot += plt_label\n\nplt_vol.opacity_function = [0. , 0. , 0.21327923, 0.98025 , 0.32439035,\n 0. , 0.5 , 0. , 0.67560965, 0. ,\n 0.74537706, 0.9915 , 1. , 0. ]\n\nplt_vol.color_map = k3d.colormaps.paraview_color_maps.Cool_to_Warm_Extended\nplt_vol.color_range = (-0.5,0.5)\n\n\n\nplot.display()", "_____no_output_____" ] ], [ [ "## animation \n### single wave function is sent at a time", "_____no_output_____" ] ], [ [ "E = 4\nfor l in range(E):\n for m in range(-l,l+1):\n psi2 = WF(r(x,y,z),theta(x,y,z),phi(x,y,z),E,l,m).real.astype(np.float32)\n plt_vol.volume = psi2/np.max(psi2)\n plt_label.text = 'n=%d \\quad l=%d \\quad m=%d'%(E,l,m)\n \n", "_____no_output_____" ] ], [ [ "### using time series \n\n - series of volumetric data are sent to k3d, \n - player interpolates between ", "_____no_output_____" ] ], [ [ "E = 4\npsi_t = {}\nt = 0.0\nfor l in range(E):\n for m in range(-l,l+1):\n \n psi2 = WF(r(x,y,z),theta(x,y,z),phi(x,y,z),E,l,m)\n psi_t[str(t)] = (psi2.real/np.max(np.abs(psi2))).astype(np.float32)\n t += 0.3 \nplt_vol.volume = psi_t \n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7a7ce3510d2448af2050409190d66e3389499d9
62,987
ipynb
Jupyter Notebook
Text-Mining Hands-on.ipynb
tdekelver-bd/ResuMe
ad6cf3bf3f8877c1581dde4b538a30e16a675ded
[ "MIT" ]
null
null
null
Text-Mining Hands-on.ipynb
tdekelver-bd/ResuMe
ad6cf3bf3f8877c1581dde4b538a30e16a675ded
[ "MIT" ]
null
null
null
Text-Mining Hands-on.ipynb
tdekelver-bd/ResuMe
ad6cf3bf3f8877c1581dde4b538a30e16a675ded
[ "MIT" ]
null
null
null
34.194897
1,835
0.550352
[ [ [ "![B&D](http://www.avenir-it.fr/wp-content/uploads/2015/10/BD-Logo-groupe.jpg)\n\n# Demo text-mining: Pharma case\n\nIn this demo, I will demonstrate what are the basic steps that you will have to use in most text-mining cases. This are also some of the steps that have been used in the ResuMe app, available here: [ResuMe](https://resume.businessdecision.be/). The case that we will cover here, is a simplified version of a project that has actually been carried out by B&D, where the goal was to identify if a given paper is treating about Pharmacovigilance or not. Pharmacovigilance is a domain of study in healthcare about drug safety. Consequently, we would like to predict, based on the text of the scientific article if the article treats about Pharmacovigilance or not.\n\nFor this we can use any kind of model, but in any case we will have to transform the words in numbers in some way. We'll see different methods and compare their performance.\n\n## Downloading the dataset from PubMed", "_____no_output_____" ] ], [ [ "#import documents from PubMed\nfrom Bio import Entrez\n\n# Function to search for a certain number articles based on a certain keyword\ndef search(keyword,number=20):\n Entrez.email = '[email protected]'\n handle = Entrez.esearch(db='pubmed', \n sort='relevance', \n retmax=str(number),\n retmode='xml', \n term=keyword)\n results = Entrez.read(handle)\n return results\n\n# Function to retrieve the results of previous search query\ndef fetch_details(id_list):\n ids = ','.join(id_list)\n Entrez.email = '[email protected]'\n handle = Entrez.efetch(db='pubmed',\n retmode='xml',\n id=ids)\n results = Entrez.read(handle)\n return results\n", "_____no_output_____" ] ], [ [ "### Retrieving top 200 articles with Pharmacovigilance keyword", "_____no_output_____" ] ], [ [ "results = search('Pharmacovigilance', 200) #querying PubMed\nid_list = results['IdList']\npapers_pharmacov = fetch_details(id_list) #retrieving the info about the articles in nested lists & dictionary format", "_____no_output_____" ], [ "# checking article title for the first 10 retrieved articles\nfor i, paper in enumerate(papers_pharmacov['PubmedArticle'][:10]):\n print(\"%d) %s\" % (i+1,paper['MedlineCitation']['Article']['ArticleTitle']))", "1) FarmaREL: An Italian pharmacovigilance project to monitor and evaluate adverse drug reactions in haematologic patients.\n2) Feasibility and Educational Value of a Student-Run Pharmacovigilance Programme: A Prospective Cohort Study.\n3) Developing a Crowdsourcing Approach and Tool for Pharmacovigilance Education Material Delivery.\n4) Promoting and Protecting Public Health: How the European Union Pharmacovigilance System Works.\n5) Effect of an educational intervention on knowledge and attitude regarding pharmacovigilance and consumer pharmacovigilance among community pharmacists in Lalitpur district, Nepal.\n6) Pharmacovigilance and Biomedical Informatics: A Model for Future Development.\n7) Pharmacovigilance in Europe: Place of the Pharmacovigilance Risk Assessment Committee (PRAC) in organisation and decisional processes.\n8) Tamoxifen Pharmacovigilance: Implications for Safe Use in the Future.\n9) Pharmacovigilance Skills, Knowledge and Attitudes in our Future Doctors - A Nationwide Study in the Netherlands.\n10) Adverse drug reactions reporting in Calabria (Southern Italy) in the four-year period 2011-2014: impact of a regional pharmacovigilance project in light of the new European Legislation.\n" ] ], [ [ "### Retrieving top 1.000 articles with Pharma keyword\nThis 
will be our base of comparison, we want to separate them from the others", "_____no_output_____" ] ], [ [ "results = search('Pharma', 1000) #querying PubMed\nid_list = results['IdList']\npapers_pharma = fetch_details(id_list)#retrieving the info about the articles in nested lists & dictionary format", "_____no_output_____" ], [ "# checking article title for the first 10 retrieved articles\nfor i, paper in enumerate(papers_pharma['PubmedArticle'][:10]):\n print(\"%d) %s\" % (i+1,paper['MedlineCitation']['Article']['ArticleTitle']))", "1) Recent trends in specialty pharma business model.\n2) The moderating role of absorptive capacity and the differential effects of acquisitions and alliances on Big Pharma firms' innovation performance.\n3) Space-related pharma-motifs for fast search of protein binding motifs and polypharmacological targets.\n4) Pharma Websites and \"Professionals-Only\" Information: The Implications for Patient Trust and Autonomy.\n5) BRIC Health Systems and Big Pharma: A Challenge for Health Policy and Management.\n6) Developing Deep Learning Applications for Life Science and Pharma Industry.\n7) Exzellenz in der Bildung für eine innovative Schweiz: Die Position des Wirtschaftsdachverbandes Chemie Pharma Biotech.\n8) Shaking Up Biotech/Pharma: Can Cues Be Taken from the Tech Industry?\n9) Pharma-Nutritional Properties of Olive Oil Phenols. Transfer of New Findings to Human Nutrition.\n10) Pharma Success in Product Development—Does Biotechnology Change the Paradigm in Product Development and Attrition.\n" ] ], [ [ "### Saving ID's, labels and title + abstracts of the articles\n\nWhen an article was retrieved via the Pharmacovigilance keyword, it will receive the label = 1 and = 0 else. We'll per article put the article title and article abstract together as our text data on the article. ", "_____no_output_____" ] ], [ [ "# Save ids & label 1 = pharmacovigilance , 0 = not pharmacovigilance\n# & Save title + abstract in dico\nids = []\nlabels = []\ndata = []\nfor i, paper in enumerate(papers_pharmacov['PubmedArticle']):\n if 'Abstract' in paper['MedlineCitation']['Article'].keys(): #check that abstract info is available\n ids.append(str(paper['MedlineCitation']['PMID']))\n labels.append(1)\n title = paper['MedlineCitation']['Article']['ArticleTitle'] #Article title\n abstract = paper['MedlineCitation']['Article']['Abstract']['AbstractText'][0] #Abstract\n data.append( title + abstract )\nfor i, paper in enumerate(papers_pharma['PubmedArticle']):\n if 'Abstract' in paper['MedlineCitation']['Article'].keys(): #check that abstract info is available\n ids.append(str(paper['MedlineCitation']['PMID']))\n labels.append(0)\n title = paper['MedlineCitation']['Article']['ArticleTitle'] #Article title\n abstract = paper['MedlineCitation']['Article']['Abstract']['AbstractText'][0] #Abstract\n data.append( title + abstract )\n", "_____no_output_____" ], [ "# Check result for one paper\nids[0] # ID\nlabels[0] # 1 = pharmacovigilance , 0 = not pharmacovigilance\ndata[0] # Title & abstract", "_____no_output_____" ] ], [ [ "### Transform to numeric attributes\nWe will now **transform** the **text into numeric attributes**. For this, we will convert every word to a number, but we first need to **split** the full text into **separate words**. This is done by using a ***Tokenizer***. The tokenizer will split the full text based on a certain pattern you specify. 
Here we'll take a very basic pattern and take any words that contain only upper- or lowercase letters and we will convert everything to lowercase.", "_____no_output_____" ] ], [ [ "from nltk.tokenize.regexp import RegexpTokenizer #import a tokenizer, to split the full text into separate words\n\ndef Tokenize_text_value(value):\n tokenizer1 = RegexpTokenizer(r\"[A-Za-z]+\") # our self defined tokenizera\n value = value.lower() # convert all words to lowercase\n return tokenizer1.tokenize(value) # tokenize each text", "_____no_output_____" ], [ "# example of our tokenizer\nTokenize_text_value(data[0])", "_____no_output_____" ] ], [ [ "Using the ***bag-of-words*** method we can transform any document to a vector. Using this method you have **one column per word and one row per document** and either a binary value 1 if the word is present in a certain document, 0 if not or a count value of the number of times the word appears in the document. \n\nFor instance, the following three sentences:\n1. Intelligent applications creates intelligent business processes\n2. Bots are intelligent applications\n3. I do business intelligence\n\nCan be represented in the following matrix using the counts of each word as values in the matrix\n![matrix](http://www.darrinbishop.com/wp-content/uploads/2017/10/Document-Term-Matrix.png)", "_____no_output_____" ] ], [ [ "# transform non-processed data to nummeric features:\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nbinary_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', binary = True,\n tokenizer=Tokenize_text_value) # initialize the binary vectorizer\ncount_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=False, \n tokenizer=Tokenize_text_value) # initialize the count vectorizer\n\nbinary_matrix = binary_vectorizer.fit_transform(data) # fit & transform\ncount_matrix = count_vectorizer.fit_transform(data) # fit & transform", "_____no_output_____" ], [ "# Check our output matrix shape: rows = documents, columns = words\nbinary_matrix.shape", "_____no_output_____" ] ], [ [ "### Check performance in a basic model\nWe'll apply now a model on our 2 matrices. For this we will use the ***Naive Bayes model***, which (as the name tells) is based on the probabilistic Bayes theorem. It is used a lot in text-mining as it is really **fast** to train and apply and is able to **handle a lot of features**, which is often the case in text-mining, when you have one column per word. We will use the ***kappa*** measure to evaluate model performance. 
Kappa is a metric that is robust to class-imbalances in the data and varies from -1 to +1 with 0 being a random performance and +1 a perfect performance.", "_____no_output_____" ] ], [ [ "# apply cross validation Naive Bayes model\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.metrics import (cohen_kappa_score, make_scorer)\n\nNB = MultinomialNB() # our Naive Bayes Model initialisation\nscorer = make_scorer(cohen_kappa_score) # Our kappa score", "_____no_output_____" ], [ "scores = cross_val_score(NB,binary_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Binary matrix with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Binary matrix with a mean kappa score of 0.210382 and variance of 0.003118\n" ], [ "scores = cross_val_score(NB,count_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Count matrix with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Count matrix with a mean kappa score of 0.064193 and variance of 0.000682\n" ] ], [ [ "### TF-IDF transformation\n\nAn alternative to the binary and count matrix is the **tf-idf transformation**. It stands for ***Term Frequency - Inverse Document Frequency*** and is a measure that will try to find the words that are unique to each document and that characterizes the document compared to the other documents. How this achieved is by taking the term frequency (which is the same as the count that we have defined before) and multiplying it by the inverse document frequency (which is low when the term appears in all other documents and high when it appears in few other documents):\n\n![TF-IDF](https://chrisalbon.com/images/machine_learning_flashcards/TF-IDF_print.png)\n*Copyright © Chris Albon, 2018*", "_____no_output_____" ] ], [ [ "# transform non-processed data to nummeric features:\nfrom sklearn.feature_extraction.text import TfidfVectorizer\ntfidf_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=True, smooth_idf = True ,\n tokenizer=Tokenize_text_value) # initialize the tf-idf vectorizer\n\ntfidf_matrix = tfidf_vectorizer.fit_transform(data) # fit & transform", "_____no_output_____" ], [ "scores = cross_val_score(NB,tfidf_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on TF-IDF matrix with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on TF-IDF matrix with a mean kappa score of 0.320332 and variance of 0.003960\n" ] ], [ [ "### How to improve this score? \nHow come the TF-IDF works the best, followed closely by the binary matrix and with the count matrix far behind? 
Let's have a look at the words that occur the most in the different documents:", "_____no_output_____" ] ], [ [ "import numpy as np\n# Find words with maximum occurence for each document in the count_matrix\nmax_counts_per_doc = np.asarray(np.argmax(count_matrix,axis = 1)).ravel()\n# Count how many times every word is the most occuring word across all documents\nunique, counts = np.unique(max_counts_per_doc,return_counts=True)\n# Keep only the words that are the most frequent word of at least 5 different documents\nfrequent = unique[counts > 5]", "_____no_output_____" ], [ "# Retrieve the vocabulary of our count matrix\nvocab = count_vectorizer.get_feature_names()\n# print out the words in frequent\nfor i in frequent:\n print(vocab[i])", "a\nand\nfor\nin\nof\nthe\nto\nwith\n" ] ], [ [ "As you can see those words are all words without any added value as they are mostly used to link certain words together in sentences, but have no standalone value. This is what we call ***Stop words***. So knowing that, we can find an intuition of why the tf-idf and binary transformations worked better than the count one. In the count one, we have seen that words that appear a lot, but have no value as such, get a high weight/value, whereas in binary every word gets the same weight and in tf-idf, the words that appear a lot in the other documents are automatically given a lower weight thanks to the IDF part. To avoid this problem we usually remove stop words\n\n### Removing Stop words", "_____no_output_____" ] ], [ [ "# Remove the stop words\nbinary_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', binary = True,\n tokenizer=Tokenize_text_value, stop_words = 'english') # initialize the binary vectorizer\ncount_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=False, smooth_idf=False,\n tokenizer=Tokenize_text_value, stop_words = 'english') # initialize the count vectorizer\ntfidf_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=True, smooth_idf=True,\n tokenizer=Tokenize_text_value, stop_words = 'english') # initialize the tf-idf vectorizer\nbinary_matrix = binary_vectorizer.fit_transform(data) # fit & transform\ncount_matrix = count_vectorizer.fit_transform(data) # fit & transform\ntfidf_matrix = tfidf_vectorizer.fit_transform(data) # fit & transform", "_____no_output_____" ], [ "scores = cross_val_score(NB,binary_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Binary matrix by removing stop-words with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Binary matrix by removing stop-words with a mean kappa score of 0.472072 and variance of 0.001247\n" ], [ "scores = cross_val_score(NB,count_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Count matrix by removing stop-words with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Count matrix by removing stop-words with a mean kappa score of 0.753020 and variance of 0.009759\n" ], [ "scores = cross_val_score(NB,tfidf_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on TF-IDF matrix by removing stop-words with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on TF-IDF matrix by removing stop-words with a mean kappa score of 0.682766 and variance of 0.011562\n" ] ], [ [ "We have a big improvement in our performance when we remove the stop words. How can we go a step further? 
Now the following steps are mostly domain dependent. You have to think about your problem and what you would need to solve it. In this case, if we are using only the abstracts and the titles, if we had to do it ourselves, we would have a look at the most common keywords you have in the articles about Pharmacovigilance and when we have a new article to classify, we would look if we find those same keywords back. However, here we are analyzing all words (minus the stopwords) and not only the keywords. So we could try to filter out to keep only words that appear at least a certain number of times across all documents.\n### Keeping only key-words", "_____no_output_____" ] ], [ [ "# keep only words that appear at least in 5% of the documents:\nbinary_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', binary = True,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.05) # initialize the binary vectorizer\ncount_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=False, smooth_idf=False,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.05) # initialize the count vectorizer\ntfidf_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=True, smooth_idf=True,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.05) # initialize the tf-idf vectorizer\nbinary_matrix = binary_vectorizer.fit_transform(data) # fit & transform\ncount_matrix = count_vectorizer.fit_transform(data) # fit & transform\ntfidf_matrix = tfidf_vectorizer.fit_transform(data) # fit & transform", "_____no_output_____" ], [ "scores = cross_val_score(NB,binary_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Binary matrix by keeping only keywords appearing in at least 5%% of the documents with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Binary matrix by keeping only keywords appearing in at least 5% of the documents with a mean kappa score of 0.876158 and variance of 0.003158\n" ], [ "scores = cross_val_score(NB,count_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Count matrix by keeping only keywords appearing in at least 5%% of the documents with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Count matrix by keeping only keywords appearing in at least 5% of the documents with a mean kappa score of 0.951631 and variance of 0.001133\n" ], [ "scores = cross_val_score(NB,tfidf_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on TF-IDF matrix by keeping only keywords appearing in at least 5%% of the documents with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on TF-IDF matrix by keeping only keywords appearing in at least 5% of the documents with a mean kappa score of 0.916734 and variance of 0.001633\n" ] ], [ [ "### Final improvements\nWe've made a big improvement with this one as well. We can even go further and add some extra fine-tunings. 
Let's have a look at the final key-words:", "_____no_output_____" ] ], [ [ "tfidf_vectorizer.get_feature_names()", "_____no_output_____" ] ], [ [ "We can see that some words all refer to the same thing: *report, reported, reporting, reports* all refer to one same thing *report* and should therefore be grouped together => this can be done by ***stemming***\n### Stemming\nStemming is a technique where we try to reduce words to a common base form, this is done by chopping off the last part of the word: s's are removed, -ing is removed, -ed is removed, ...", "_____no_output_____" ] ], [ [ "# Define a stemmer that will preprocess the text before transforming it\nfrom nltk.stem.porter import PorterStemmer \ndef preprocess(value): \n stemmer = PorterStemmer() \n #split in tokens\n return ' '.join([stemmer.stem(i) for i in Tokenize_text_value(value) ])", "_____no_output_____" ], [ "# Have a look at what it gives on the first article\nprint(' '.join([i for i in Tokenize_text_value(data[0]) ])) # original\nprint('\\n')\nprint(preprocess(data[0])) #stemmed", "farmarel an italian pharmacovigilance project to monitor and evaluate adverse drug reactions in haematologic patients adverse drug reactions adrs reduce patients quality of life increase mortality and morbidity and have a negative economic impact on healthcare systems nevertheless the importance of adr reporting is often underestimated the project farmarel has been developed to monitor and evaluate adrs in haematological patients and to increase pharmacovigilance culture among haematology specialists in haematology units based in lombardy italy a dedicated specialist with the task of encouraging adrs reporting and sensitizing healthcare professionals to pharmacovigilance has been assigned the adrs occurring in haematological patients were collected electronically and then analysed with multiple logistic regression between january and december reports were collected the number of adrs was higher in older adults in male and in non hodgkin lymphoma patients most reactions were severe required or prolonged hospitalization but in most cases they were fully resolved at the time of reporting according to schumock and thornton criteria a percentage of adrs as high as was found to be preventable versus according to reporter opinion patients haematological diagnosis not age or gender resulted to be the variable that most influenced adr in particular severity and outcome the employment of personnel specifically dedicated to pharmacovigilance is a successful strategy to improve the number and quality of adr reports farmarel the first programme of active pharmacovigilance in oncohaematologic patients significantly contributed to reach the who gold standard for pharmacovigilance in lombardy italy\n\n\nfarmarel an italian pharmacovigil project to monitor and evalu advers drug reaction in haematolog patient advers drug reaction adr reduc patient qualiti of life increas mortal and morbid and have a neg econom impact on healthcar system nevertheless the import of adr report is often underestim the project farmarel ha been develop to monitor and evalu adr in haematolog patient and to increas pharmacovigil cultur among haematolog specialist in haematolog unit base in lombardi itali a dedic specialist with the task of encourag adr report and sensit healthcar profession to pharmacovigil ha been assign the adr occur in haematolog patient were collect electron and then analys with multipl logist regress between januari and decemb report were collect the number of adr 
wa higher in older adult in male and in non hodgkin lymphoma patient most reaction were sever requir or prolong hospit but in most case they were fulli resolv at the time of report accord to schumock and thornton criteria a percentag of adr as high as wa found to be prevent versu accord to report opinion patient haematolog diagnosi not age or gender result to be the variabl that most influenc adr in particular sever and outcom the employ of personnel specif dedic to pharmacovigil is a success strategi to improv the number and qualiti of adr report farmarel the first programm of activ pharmacovigil in oncohaematolog patient significantli contribut to reach the who gold standard for pharmacovigil in lombardi itali\n" ], [ "# Preprocess the documents by stemming the words\nbinary_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', binary = True,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.05, preprocessor = preprocess) # initialize the binary vectorizer\ncount_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=False, smooth_idf=False,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.05, preprocessor = preprocess) # initialize the count vectorizer\ntfidf_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=True, smooth_idf=True,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.05, preprocessor = preprocess) # initialize the tf-idf vectorizer\nbinary_matrix = binary_vectorizer.fit_transform(data) # fit & transform\ncount_matrix = count_vectorizer.fit_transform(data) # fit & transform\ntfidf_matrix = tfidf_vectorizer.fit_transform(data) # fit & transform", "_____no_output_____" ], [ "scores = cross_val_score(NB,binary_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Binary matrix by stemming and keeping only keywords appearing in at least 5%% of the documents with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Binary matrix by stemming and keeping only keywords appearing in at least 5% of the documents with a mean kappa score of 0.868418 and variance of 0.000270\n" ], [ "scores = cross_val_score(NB,count_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Count matrix by stemming and keeping only keywords appearing in at least 5%% of the documents with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Count matrix by stemming and keeping only keywords appearing in at least 5% of the documents with a mean kappa score of 0.944600 and variance of 0.001127\n" ], [ "scores = cross_val_score(NB,tfidf_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on TF-IDF matrix by stemming and keeping only keywords appearing in at least 5%% of the documents with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on TF-IDF matrix by stemming and keeping only keywords appearing in at least 5% of the documents with a mean kappa score of 0.901758 and variance of 0.001610\n" ], [ "# Check the stemmed final vocabulary\ntfidf_vectorizer.get_feature_names()", "_____no_output_____" ] ], [ [ "We can see that the performance slightly decreases with the stemming. Probably, because now when we are keeping words that appear in only 5% of the documents, we have more words than before, as before words with different endings were counted separately and now they are grouped together. 
So to correct for this we should increase our 5% threshold to take this effect into account.", "_____no_output_____" ] ], [ [ "# Preprocess the documents by stemming the words and keeping only words that appear in at least 10% of the documents:\nbinary_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', binary = True,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.1, preprocessor = preprocess) # initialize the binary vectorizer\ncount_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=False, smooth_idf=False,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.1, preprocessor = preprocess) # initialize the count vectorizer\ntfidf_vectorizer = TfidfVectorizer(input=u'content', analyzer=u'word', use_idf=True, smooth_idf=True,\n tokenizer=Tokenize_text_value, stop_words = 'english'\n , min_df = 0.1, preprocessor = preprocess) # initialize the tf-idf vectorizer\nbinary_matrix = binary_vectorizer.fit_transform(data) # fit & transform\ncount_matrix = count_vectorizer.fit_transform(data) # fit & transform\ntfidf_matrix = tfidf_vectorizer.fit_transform(data) # fit & transform", "_____no_output_____" ], [ "scores = cross_val_score(NB,binary_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Binary matrix by stemming with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Binary matrix by stemming with a mean kappa score of 0.893254 and variance of 0.000962\n" ], [ "scores = cross_val_score(NB,count_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on Count matrix by stemming with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on Count matrix by stemming with a mean kappa score of 0.951618 and variance of 0.001134\n" ], [ "scores = cross_val_score(NB,tfidf_matrix,labels,scoring = scorer,cv = 5 )\nprint('Cross validation on TF-IDF matrix by stemming with a mean kappa score of %f and variance of %f' % (scores.mean(),scores.var()))", "Cross validation on TF-IDF matrix by stemming with a mean kappa score of 0.937502 and variance of 0.000565\n" ], [ "# Check the stemmed final vocabulary\ntfidf_vectorizer.get_feature_names()", "_____no_output_____" ] ], [ [ "Now we have the same or a bit higher performance as before.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7a7d36d08b7dd57ead836ac3ee4cc1ab3575173
6,585
ipynb
Jupyter Notebook
LE2/binary_search.ipynb
flavio-mueller/apr-code
24bb52c3c4f10e631eddf2f2e80d9cd17c3caa3d
[ "MIT" ]
null
null
null
LE2/binary_search.ipynb
flavio-mueller/apr-code
24bb52c3c4f10e631eddf2f2e80d9cd17c3caa3d
[ "MIT" ]
null
null
null
LE2/binary_search.ipynb
flavio-mueller/apr-code
24bb52c3c4f10e631eddf2f2e80d9cd17c3caa3d
[ "MIT" ]
null
null
null
27.78481
127
0.485194
[ [ [ "def bin_search_first_2(seq, val):\n \"\"\"\n A binary search returning the index of the first occurrence of a value val within a increasingly ordered sequence\n seq. If val is not part of seq, the index of the first value exceeding val is returned, if any. If all elements of\n seq are smaller than val, the len of seq is returned.\n @param seq: the increasingly ordered sequence to search in\n @param val: the value to search for\n @return: the lowest index r, such that r == len(seq) or seq[r] >= val\n \"\"\"\n lo, hi = -1, len(seq)\n while lo + 1 != hi:\n m = (lo + hi) // 2\n if seq[m] < val:\n lo = m\n else:\n hi = m\n return hi\n", "_____no_output_____" ], [ "def bin_search_first(list, search_val):\n lower, higher = -1, len(list)\n while lower + 1 != higher:\n center = (lower + higher) // 2\n if list[center] < search_val:\n lower = center\n else:\n higher = center\n return higher", "_____no_output_____" ], [ " print(bin_search_first([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 24))\n print(bin_search_first([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 0))\n print(bin_search_first([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 1))\n print(bin_search_first([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 32))\n print(bin_search_first([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 31))\n", "10\n0\n0\n15\n14\n" ], [ "def bin_search(lst, search_val, lower, higher):\n if lower + 1 == higher:\n return higher\n\n center = (lower + higher) // 2\n if lst[center] < search_val:\n return bin_search(lst, search_val, center, higher)\n else:\n return bin_search(lst, search_val, lower, center)\n\ndef bin_search_start(lst, search_val):\n return bin_search(lst, search_val, -1, len(lst))", "_____no_output_____" ], [ "print(bin_search_start([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 24))\nprint(bin_search_start([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 0))\nprint(bin_search_start([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 1))\nprint(bin_search_start([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 32))\nprint(bin_search_start([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 31))", "10\n0\n0\n15\n14\n" ], [ "import timeit\n", "_____no_output_____" ], [ "%timeit bin_search_start([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 24)\n", "1.12 µs ± 11 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\n" ], [ "%timeit bin_search_first([1, 1, 2, 3, 5, 8, 13, 14, 15, 22, 24, 24, 24, 30, 31], 24)\n", "719 ns ± 22 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\n" ], [ "def inverse(func, y, lo, hi):\n \"\"\"\n Computes the value of y of the inverse of the monotonously increasing function func.\n\n >>> inverse(lambda x: x ** 2, 13, 0, 13) ** 2\n 13.000000000000002\n >>> 2 ** inverse(lambda x: 2 ** x, 13, 0, 13)\n 13.0\n\n @param func: the function to invert, must be increasing monotonously\n @param y: the value of the function to search for\n @param lo: a lower bound of the result, i.e. a value such that f(lo) < y\n @param hi: an upper bound of the result, i.e. 
a value such that f(hi) >= y\n @return: the smallest float x, such that f(x) >= y\n \"\"\"\n m = (lo + hi) / 2\n while m != lo and m != hi:\n if func(m) < y:\n lo = m\n else:\n hi = m\n m = (lo + hi) / 2\n return hi\n\n\ndef _test_inverse():\n print(inverse(lambda x: x ** 2, 13, 0, 13) ** 2)\n print(2 ** inverse(lambda x: 2 ** x, 13, 0, 13))\n", "_____no_output_____" ], [ "_test_inverse()", "13.000000000000002\n13.0\n" ], [ "def inverse(func, y,)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a7dc1121e542165c065d90a99a2c16fa5c9df4
11,478
ipynb
Jupyter Notebook
install_packages.ipynb
JoeyL6/boilerplate-data-science
afce69910c25a91615e472b999ccbb7236ee5378
[ "MIT" ]
1
2021-01-22T04:57:59.000Z
2021-01-22T04:57:59.000Z
install_packages.ipynb
JoeyL6/boilerplate-data-science
afce69910c25a91615e472b999ccbb7236ee5378
[ "MIT" ]
null
null
null
install_packages.ipynb
JoeyL6/boilerplate-data-science
afce69910c25a91615e472b999ccbb7236ee5378
[ "MIT" ]
null
null
null
91.095238
270
0.696463
[ [ [ "%load_ext lab_black", "_____no_output_____" ], [ "! pip install smart_open[s3]\n! pip install xgboost\n! pip install pyathena[pandas]\n! pip install nb_black", "Collecting smart_open[s3]\n Downloading smart_open-4.1.2-py3-none-any.whl (111 kB)\n\u001b[K |████████████████████████████████| 111 kB 2.9 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: boto3; extra == \"s3\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from smart_open[s3]) (1.9.201)\nRequirement already satisfied: botocore<1.13.0,>=1.12.201 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from boto3; extra == \"s3\"->smart_open[s3]) (1.12.204)\nRequirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from boto3; extra == \"s3\"->smart_open[s3]) (0.2.1)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from boto3; extra == \"s3\"->smart_open[s3]) (0.9.4)\nRequirement already satisfied: docutils<0.15,>=0.10 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from botocore<1.13.0,>=1.12.201->boto3; extra == \"s3\"->smart_open[s3]) (0.14)\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= \"2.7\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from botocore<1.13.0,>=1.12.201->boto3; extra == \"s3\"->smart_open[s3]) (2.8.0)\nRequirement already satisfied: urllib3<1.26,>=1.20; python_version >= \"3.4\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from botocore<1.13.0,>=1.12.201->boto3; extra == \"s3\"->smart_open[s3]) (1.25.3)\nRequirement already satisfied: six>=1.5 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= \"2.7\"->botocore<1.13.0,>=1.12.201->boto3; extra == \"s3\"->smart_open[s3]) (1.12.0)\nInstalling collected packages: smart-open\nSuccessfully installed smart-open-4.1.2\n\u001b[33mWARNING: You are using pip version 20.1; however, version 20.3.3 is available.\nYou should consider upgrading via the '/usr/local/bin/python3 -m pip install --upgrade pip' command.\u001b[0m\nCollecting xgboost\n Downloading xgboost-1.3.3-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl (1.2 MB)\n\u001b[K |████████████████████████████████| 1.2 MB 5.0 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: scipy in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from xgboost) (1.3.1)\nRequirement already satisfied: numpy in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from xgboost) (1.17.0)\nInstalling collected packages: xgboost\nSuccessfully installed xgboost-1.3.3\n\u001b[33mWARNING: You are using pip version 20.1; however, version 20.3.3 is available.\nYou should consider upgrading via the '/usr/local/bin/python3 -m pip install --upgrade pip' command.\u001b[0m\nRequirement already satisfied: pyathena[pandas] in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (1.10.5)\nRequirement already satisfied: tenacity>=4.1.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pyathena[pandas]) (6.2.0)\nRequirement already satisfied: future in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from 
pyathena[pandas]) (0.18.2)\nRequirement already satisfied: botocore>=1.5.52 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pyathena[pandas]) (1.12.204)\nRequirement already satisfied: boto3>=1.4.4 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pyathena[pandas]) (1.9.201)\nRequirement already satisfied: pandas>=0.24.0; extra == \"pandas\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pyathena[pandas]) (0.25.0)\nRequirement already satisfied: pyarrow>=0.15.0; extra == \"pandas\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pyathena[pandas]) (0.15.0)\nRequirement already satisfied: six>=1.9.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from tenacity>=4.1.0->pyathena[pandas]) (1.12.0)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from botocore>=1.5.52->pyathena[pandas]) (0.9.4)\nRequirement already satisfied: docutils<0.15,>=0.10 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from botocore>=1.5.52->pyathena[pandas]) (0.14)\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= \"2.7\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from botocore>=1.5.52->pyathena[pandas]) (2.8.0)\nRequirement already satisfied: urllib3<1.26,>=1.20; python_version >= \"3.4\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from botocore>=1.5.52->pyathena[pandas]) (1.25.3)\nRequirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from boto3>=1.4.4->pyathena[pandas]) (0.2.1)\nRequirement already satisfied: pytz>=2017.2 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas>=0.24.0; extra == \"pandas\"->pyathena[pandas]) (2019.2)\nRequirement already satisfied: numpy>=1.13.3 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas>=0.24.0; extra == \"pandas\"->pyathena[pandas]) (1.17.0)\n\u001b[33mWARNING: You are using pip version 20.1; however, version 20.3.3 is available.\nYou should consider upgrading via the '/usr/local/bin/python3 -m pip install --upgrade pip' command.\u001b[0m\nRequirement already satisfied: nb_black in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (1.0.7)\nRequirement already satisfied: ipython in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from nb_black) (7.7.0)\nRequirement already satisfied: black>='19.3'; python_version >= \"3.6\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from nb_black) (19.3b0)\nRequirement already satisfied: traitlets>=4.2 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from ipython->nb_black) (4.3.2)\nRequirement already satisfied: appnope; sys_platform == \"darwin\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from ipython->nb_black) (0.1.0)\nRequirement already satisfied: pygments in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from ipython->nb_black) (2.4.2)\nRequirement already satisfied: pickleshare in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from 
ipython->nb_black) (0.7.5)\nRequirement already satisfied: setuptools>=18.5 in /Users/zwl/Library/Python/3.7/lib/python/site-packages (from ipython->nb_black) (46.4.0)\nRequirement already satisfied: jedi>=0.10 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from ipython->nb_black) (0.14.1)\nRequirement already satisfied: decorator in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from ipython->nb_black) (4.4.0)\nRequirement already satisfied: backcall in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from ipython->nb_black) (0.1.0)\nRequirement already satisfied: pexpect; sys_platform != \"win32\" in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from ipython->nb_black) (4.7.0)\nRequirement already satisfied: prompt-toolkit<2.1.0,>=2.0.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from ipython->nb_black) (2.0.9)\nRequirement already satisfied: click>=6.5 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from black>='19.3'; python_version >= \"3.6\"->nb_black) (7.0)\nRequirement already satisfied: appdirs in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from black>='19.3'; python_version >= \"3.6\"->nb_black) (1.4.3)\nRequirement already satisfied: attrs>=18.1.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from black>='19.3'; python_version >= \"3.6\"->nb_black) (19.1.0)\nRequirement already satisfied: toml>=0.9.4 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from black>='19.3'; python_version >= \"3.6\"->nb_black) (0.10.0)\nRequirement already satisfied: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from traitlets>=4.2->ipython->nb_black) (0.2.0)\nRequirement already satisfied: six in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from traitlets>=4.2->ipython->nb_black) (1.12.0)\nRequirement already satisfied: parso>=0.5.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from jedi>=0.10->ipython->nb_black) (0.5.1)\nRequirement already satisfied: ptyprocess>=0.5 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pexpect; sys_platform != \"win32\"->ipython->nb_black) (0.6.0)\nRequirement already satisfied: wcwidth in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from prompt-toolkit<2.1.0,>=2.0.0->ipython->nb_black) (0.1.7)\n\u001b[33mWARNING: You are using pip version 20.1; however, version 20.3.3 is available.\nYou should consider upgrading via the '/usr/local/bin/python3 -m pip install --upgrade pip' command.\u001b[0m\n" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
e7a7de8d7b49499336038b9e297fbdd20e3b32cf
9,193
ipynb
Jupyter Notebook
21. Python Threading.ipynb
HussamCheema/Python
26a29ee4dbdc73c4c69c2628931a7d01bd3d17a3
[ "MIT" ]
null
null
null
21. Python Threading.ipynb
HussamCheema/Python
26a29ee4dbdc73c4c69c2628931a7d01bd3d17a3
[ "MIT" ]
null
null
null
21. Python Threading.ipynb
HussamCheema/Python
26a29ee4dbdc73c4c69c2628931a7d01bd3d17a3
[ "MIT" ]
null
null
null
27.606607
78
0.528228
[ [ [ "# Threads are useful for IO bound processes. Speed Up the processing.\nimport time\nimport threading\n\nstart = time.perf_counter()\n\n\ndef do_something():\n print(f'Sleeping 1 Second...')\n time.sleep(1)\n print(f'Done Sleeping...')\n\n \nt1 = threading.Thread(target=do_something)\nt2 = threading.Thread(target=do_something)\n\nt1.start()\nt2.start()\n\nt1.join()\nt2.join() # Means that complete the task before moving ahead\n\nfinish = time.perf_counter()\n\nprint(f'Finished in {round(finish-start, 2)} second(s)')", "Sleeping 1 Second...\nSleeping 1 Second...\nDone Sleeping...\nDone Sleeping...\nFinished in 1.01 second(s)\n" ], [ "import time\nimport threading\n\nstart = time.perf_counter()\n\n\ndef do_something(seconds):\n print(f'Sleeping {seconds} Second(s)...')\n time.sleep(seconds)\n print(f'Done Sleeping...')\n\nthreads = []\n \nfor _ in range(10):\n t = threading.Thread(target=do_something, args=[1.5])\n t.start()\n threads.append(t)\n \nfor thread in threads:\n thread.join()\n\nfinish = time.perf_counter()\n\nprint(f'Finished in {round(finish-start, 2)} second(s)')", "Sleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nSleeping 1.5 Second(s)...\nDone Sleeping...\nDone Sleeping...\nDone Sleeping...\nDone Sleeping...\nDone Sleeping...\nDone Sleeping...\nDone Sleeping...\nDone Sleeping...\nDone Sleeping...\nDone Sleeping...\nFinished in 1.53 second(s)\n" ], [ "import time\nfrom concurrent.futures import ThreadPoolExecutor as executor\nfrom concurrent.futures import as_completed\n\nstart = time.perf_counter()\n\n\ndef do_something(seconds):\n print(f'Sleeping {seconds} Second(s)...')\n time.sleep(seconds)\n return f'Done Sleeping...{seconds}'\n\n\nwith executor():\n secs = [5,4,3,2,1]\n results = [executor().submit(do_something, sec) for sec in secs]\n \n for f in as_completed(results):\n print(f.result())\n\n\nfinish = time.perf_counter()\n\nprint(f'Finished in {round(finish-start, 2)} second(s)')", "Sleeping 5 Second(s)...Sleeping 4 Second(s)...\n\nSleeping 3 Second(s)...\nSleeping 2 Second(s)...\nSleeping 1 Second(s)...\nDone Sleeping...1\nDone Sleeping...2\nDone Sleeping...3\nDone Sleeping...4\nDone Sleeping...5\nFinished in 5.01 second(s)\n" ], [ "import time\nfrom concurrent.futures import ThreadPoolExecutor as executor\nfrom concurrent.futures import as_completed\n\nstart = time.perf_counter()\n\ndef do_something(seconds):\n print(f'Sleeping {seconds} Second(s)...')\n time.sleep(seconds)\n return f'Done Sleeping...{seconds}'\n\n\nwith executor():\n secs = [5,4,3,2,1]\n # map returns the results in the order they were started\n results = executor().map(do_something, secs)\n \n for result in results:\n print(result)\n\n\nfinish = time.perf_counter()\n\nprint(f'Finished in {round(finish-start, 2)} second(s)')", "Sleeping 5 Second(s)...\nSleeping 4 Second(s)...\nSleeping 3 Second(s)...\nSleeping 2 Second(s)...\nSleeping 1 Second(s)...\nDone Sleeping...5\nDone Sleeping...4\nDone Sleeping...3\nDone Sleeping...2\nDone Sleeping...1\nFinished in 5.01 second(s)\n" ], [ "# Real Example\nimport requests\nimport time\nimport concurrent.futures\n\nimg_urls = [\n 'https://images.unsplash.com/photo-1516117172878-fd2c41f4a759',\n 'https://images.unsplash.com/photo-1532009324734-20a7a5813719',\n 'https://images.unsplash.com/photo-1524429656589-6633a470097c',\n 
'https://images.unsplash.com/photo-1513938709626-033611b8cc03',\n 'https://images.unsplash.com/photo-1530224264768-7ff8c1789d79',\n]\n\nt1 = time.perf_counter()\n\n\nfor img_url in img_urls:\n img_bytes = requests.get(img_url).content\n img_name = img_url.split('/')[3]\n img_name = f'{img_name}.jpg'\n with open(img_name, 'wb') as img_file:\n img_file.write(img_bytes)\n print(f'{img_name} was downloaded...')\n \nt2 = time.perf_counter()\n\nprint(f'Finished in {t2-t1} seconds')", "photo-1516117172878-fd2c41f4a759.jpg was downloaded...\nphoto-1532009324734-20a7a5813719.jpg was downloaded...\nphoto-1524429656589-6633a470097c.jpg was downloaded...\nphoto-1513938709626-033611b8cc03.jpg was downloaded...\nphoto-1530224264768-7ff8c1789d79.jpg was downloaded...\nFinished in 82.39902849999999 seconds\n" ], [ "# Real Example with threads\nimg_urls = [\n 'https://images.unsplash.com/photo-1516117172878-fd2c41f4a759',\n 'https://images.unsplash.com/photo-1532009324734-20a7a5813719',\n 'https://images.unsplash.com/photo-1524429656589-6633a470097c',\n 'https://images.unsplash.com/photo-1513938709626-033611b8cc03',\n 'https://images.unsplash.com/photo-1530224264768-7ff8c1789d79',\n]\n\nt1 = time.perf_counter()\n\n\ndef download_image(img_url):\n img_bytes = requests.get(img_url).content\n img_name = img_url.split('/')[3]\n img_name = f'{img_name}.jpg'\n with open(img_name, 'wb') as img_file:\n img_file.write(img_bytes)\n print(f'{img_name} was downloaded...')\n \nwith concurrent.futures.ThreadPoolExecutor() as executor:\n executor.map(download_image, img_urls)\n \nt2 = time.perf_counter()\n\nprint(f'Finished in {t2-t1} seconds')", "photo-1516117172878-fd2c41f4a759.jpg was downloaded...\nphoto-1532009324734-20a7a5813719.jpg was downloaded...\nphoto-1524429656589-6633a470097c.jpg was downloaded...\nphoto-1513938709626-033611b8cc03.jpg was downloaded...\nphoto-1530224264768-7ff8c1789d79.jpg was downloaded...\nFinished in 37.34158899999966 seconds\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
e7a7e4de12ea62851ee65b007d4b3c96b55e9fa8
5,521
ipynb
Jupyter Notebook
arrays_strings/priority_queue_(unsolved)/priority_queue_challenge.ipynb
zzong2006/interactive-coding-challenges
a023c680b239d7f3a91792b681b49800a9530f35
[ "Apache-2.0" ]
1
2021-05-15T23:27:46.000Z
2021-05-15T23:27:46.000Z
arrays_strings/priority_queue_(unsolved)/priority_queue_challenge.ipynb
zzong2006/interactive-coding-challenges
a023c680b239d7f3a91792b681b49800a9530f35
[ "Apache-2.0" ]
null
null
null
arrays_strings/priority_queue_(unsolved)/priority_queue_challenge.ipynb
zzong2006/interactive-coding-challenges
a023c680b239d7f3a91792b681b49800a9530f35
[ "Apache-2.0" ]
null
null
null
25.67907
185
0.510596
[ [ [ "This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).", "_____no_output_____" ], [ "# Challenge Notebook", "_____no_output_____" ], [ "## Problem: Implement a priority queue backed by an array.\n\n* [Constraints](#Constraints)\n* [Test Cases](#Test-Cases)\n* [Algorithm](#Algorithm)\n* [Code](#Code)\n* [Unit Test](#Unit-Test)\n* [Solution Notebook](#Solution-Notebook)", "_____no_output_____" ], [ "## Constraints\n\n* Do we expect the methods to be insert, extract_min, and decrease_key?\n * Yes\n* Can we assume there aren't any duplicate keys?\n * Yes\n* Do we need to validate inputs?\n * No\n* Can we assume this fits memory?\n * Yes", "_____no_output_____" ], [ "## Test Cases\n\n### insert\n\n* `insert` general case -> inserted node\n\n### extract_min\n\n* `extract_min` from an empty list -> None\n* `extract_min` general case -> min node\n\n### decrease_key\n\n* `decrease_key` an invalid key -> None\n* `decrease_key` general case -> updated node", "_____no_output_____" ], [ "## Algorithm\n\nRefer to the [Solution Notebook](priority_queue_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.", "_____no_output_____" ], [ "## Code", "_____no_output_____" ] ], [ [ "class PriorityQueueNode(object):\n\n def __init__(self, obj, key):\n self.obj = obj\n self.key = key\n\n def __repr__(self):\n return str(self.obj) + ': ' + str(self.key)\n\n\nclass PriorityQueue(object):\n\n def __init__(self):\n self.array = []\n\n def __len__(self):\n return len(self.array)\n\n def insert(self, node):\n for k in range(len(self.array)):\n if self.array[k].key < node.key:\n \n # TODO: Implement me\n pass\n\n def extract_min(self):\n if not self.array:\n return None\n else:\n \n # TODO: Implement me\n pass\n\n def decrease_key(self, obj, new_key):\n # TODO: Implement me\n pass", "_____no_output_____" ] ], [ [ "## Unit Test", "_____no_output_____" ], [ "**The following unit test is expected to fail until you solve the challenge.**", "_____no_output_____" ] ], [ [ "# %load test_priority_queue.py\nimport unittest\n\n\nclass TestPriorityQueue(unittest.TestCase):\n\n def test_priority_queue(self):\n priority_queue = PriorityQueue()\n self.assertEqual(priority_queue.extract_min(), None)\n priority_queue.insert(PriorityQueueNode('a', 20))\n priority_queue.insert(PriorityQueueNode('b', 5))\n priority_queue.insert(PriorityQueueNode('c', 15))\n priority_queue.insert(PriorityQueueNode('d', 22))\n priority_queue.insert(PriorityQueueNode('e', 40))\n priority_queue.insert(PriorityQueueNode('f', 3))\n priority_queue.decrease_key('f', 2)\n priority_queue.decrease_key('a', 19)\n mins = []\n while priority_queue.array:\n mins.append(priority_queue.extract_min().key)\n self.assertEqual(mins, [2, 5, 15, 19, 22, 40])\n print('Success: test_min_heap')\n\n\ndef main():\n test = TestPriorityQueue()\n test.test_priority_queue()\n\n\nif __name__ == '__main__':\n main()", "_____no_output_____" ] ], [ [ "## Solution Notebook\n\nReview the [Solution Notebook](priority_queue_solution.ipynb) for a discussion on algorithms and code solutions.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
e7a82506e508dbba4604752d7b081305a79c8de5
44,501
ipynb
Jupyter Notebook
scripts/supervised/pIC50_test.ipynb
choderalab/malt
e0d6a9c60200cb5bd18424821608210eb1b7b6d5
[ "MIT" ]
null
null
null
scripts/supervised/pIC50_test.ipynb
choderalab/malt
e0d6a9c60200cb5bd18424821608210eb1b7b6d5
[ "MIT" ]
2
2022-02-17T17:07:06.000Z
2022-03-28T06:05:43.000Z
scripts/supervised/pIC50_test.ipynb
choderalab/malt
e0d6a9c60200cb5bd18424821608210eb1b7b6d5
[ "MIT" ]
null
null
null
122.930939
14,692
0.804117
[ [ [ "# pIC50 Test", "_____no_output_____" ] ], [ [ "import numpy as np\nimport torch\nimport seaborn as sns\nimport malt\nimport pandas as pd\nimport dgllife", "Using backend: pytorch\n" ], [ "from dgllife.utils import smiles_to_bigraph, CanonicalAtomFeaturizer, CanonicalBondFeaturizer\ndf = pd.read_csv('../../../data/moonshot_pIC50.csv', index_col=0)\n\ndgllife_dataset = dgllife.data.csv_dataset.MoleculeCSVDataset(\n df=df,\n smiles_to_graph=smiles_to_bigraph,\n node_featurizer=CanonicalAtomFeaturizer(),\n edge_featurizer=CanonicalBondFeaturizer(),\n smiles_column='SMILES',\n task_names=[\n # 'MW', 'cLogP', 'r_inhibition_at_20_uM',\n # 'r_inhibition_at_50_uM', 'r_avg_IC50', 'f_inhibition_at_20_uM',\n # 'f_inhibition_at_50_uM', 'f_avg_IC50', 'relative_solubility_at_20_uM',\n # 'relative_solubility_at_100_uM', 'trypsin_IC50',\n 'f_avg_pIC50'\n ],\n init_mask=False,\n cache_file_path='../../../data/moonshot_pIC50.bin'\n)", "Processing dgl graphs from scratch...\nProcessing molecule 1000/2260\nProcessing molecule 2000/2260\n" ], [ "sns.displot(dgllife_dataset.labels.numpy()[dgllife_dataset.labels.numpy() > 4.005])", "_____no_output_____" ], [ "from malt.data.collections import _dataset_from_dgllife\n\ndata = _dataset_from_dgllife(dgllife_dataset)\n\n# mask data if it's at the limit of detection\ndata_masked = data[list(np.flatnonzero(np.array(data.y) > 4.005))]\ndata.shuffle(seed=2666)\nds_tr, ds_vl, ds_te = data_masked[:1500].split([8, 1, 1])", "_____no_output_____" ] ], [ [ "Make model", "_____no_output_____" ] ], [ [ "model_choice = 'nn' # 'nn'\nif model_choice == \"gp\":\n model = malt.models.supervised_model.GaussianProcessSupervisedModel(\n representation=malt.models.representation.DGLRepresentation(\n out_features=128,\n ),\n regressor=malt.models.regressor.ExactGaussianProcessRegressor(\n in_features=128, out_features=2,\n ),\n likelihood=malt.models.likelihood.HeteroschedasticGaussianLikelihood(),\n )\n\n\nelif model_choice == \"nn\":\n model = malt.models.supervised_model.SimpleSupervisedModel(\n representation=malt.models.representation.DGLRepresentation(\n out_features=128,\n ),\n regressor=malt.models.regressor.NeuralNetworkRegressor(\n in_features=128, out_features=1,\n ),\n likelihood=malt.models.likelihood.HomoschedasticGaussianLikelihood(),\n )", "_____no_output_____" ] ], [ [ "Train and evaluate.", "_____no_output_____" ] ], [ [ "trainer = malt.trainer.get_default_trainer(\n without_player=True,\n batch_size=32,\n n_epochs=3000,\n learning_rate=1e-3\n)\nmodel = trainer(model, ds_tr)\n\nr2 = malt.metrics.supervised_metrics.R2()(model, ds_te)\nprint(r2)\n\nrmse = malt.metrics.supervised_metrics.RMSE()(model, ds_te)\nprint(rmse)", "_____no_output_____" ], [ "ds_te_loader = ds_te.view(batch_size=len(ds_te))\ng, y = next(iter(ds_te_loader))\ny_hat = model.condition(g).mean\ng = sns.jointplot(x = ds_te.y, y = y_hat.detach().numpy())\ng.set_axis_labels('y', '\\hat{y}')", "_____no_output_____" ], [ "import torch\nimport dgl\nimport malt\nimport argparse\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--data\", type=str, default=\"esol\")\nparser.add_argument(\"--model\", type=str, default=\"nn\")\n\nargs = parser.parse_args([])\n\ndata = getattr(malt.data.collections, args.data)()\ndata.shuffle(seed=2666)\nds_tr, ds_vl, ds_te = data.split([8, 1, 1])\n\nif args.model == \"gp\":\n model = malt.models.supervised_model.GaussianProcessSupervisedModel(\n representation=malt.models.representation.DGLRepresentation(\n out_features=128,\n ),\n 
regressor=malt.models.regressor.ExactGaussianProcessRegressor(\n in_features=128, out_features=2,\n ),\n likelihood=malt.models.likelihood.HeteroschedasticGaussianLikelihood(),\n )\n\n\nelif args.model == \"nn\":\n model = malt.models.supervised_model.SimpleSupervisedModel(\n representation=malt.models.representation.DGLRepresentation(\n out_features=128,\n ),\n regressor=malt.models.regressor.NeuralNetworkRegressor(\n in_features=128, out_features=1,\n ),\n likelihood=malt.models.likelihood.HomoschedasticGaussianLikelihood(),\n )\n\n\ntrainer = malt.trainer.get_default_trainer(without_player=True, batch_size=len(ds_tr), n_epochs=3000, learning_rate=1e-3)\nmodel = trainer(model, ds_tr)\n\nr2 = malt.metrics.supervised_metrics.R2()(model, ds_te)\nprint(r2)\n\nrmse = malt.metrics.supervised_metrics.RMSE()(model, ds_te)\nprint(rmse)", "Processing dgl graphs from scratch...\nProcessing molecule 1000/1128\ntensor(0.9217, grad_fn=<RsubBackward1>)\ntensor(2.8730, grad_fn=<SqrtBackward0>)\n" ], [ "ds_te_loader = ds_te.view(batch_size=len(ds_te))\ng, y = next(iter(ds_te_loader))\ny_hat = model.condition(g).mean\ng = sns.jointplot(x = ds_te.y, y = y_hat.detach().numpy())\ng.set_axis_labels('y', '\\hat{y}')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7a826975e92ae23efb12ee99ff0669e72815ebb
95,409
ipynb
Jupyter Notebook
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
adilfaiz001/deep-learning-v2-pytorch
1add1bdccae6ba7031e3ec5be5a72ce63f809d45
[ "MIT" ]
null
null
null
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
adilfaiz001/deep-learning-v2-pytorch
1add1bdccae6ba7031e3ec5be5a72ce63f809d45
[ "MIT" ]
null
null
null
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
adilfaiz001/deep-learning-v2-pytorch
1add1bdccae6ba7031e3ec5be5a72ce63f809d45
[ "MIT" ]
null
null
null
150.014151
45,008
0.853966
[ [ [ "# Inference and Validation\n\nNow that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch. \n\nAs usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:\n\n```python\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\n```\n\nThe test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.", "_____no_output_____" ] ], [ [ "import torch\nfrom torchvision import datasets, transforms\n\n# Define a transform to normalize the data\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n# Download and load the training data\ntrainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n\n# Download and load the test data\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)", "_____no_output_____" ] ], [ [ "Here I'll create a model like normal, using the same one from my solution for part 4.", "_____no_output_____" ] ], [ [ "from torch import nn, optim\nimport torch.nn.functional as F\n\nclass Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n \n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n \n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = F.relu(self.fc3(x))\n x = F.log_softmax(self.fc4(x), dim=1)\n \n return x", "_____no_output_____" ] ], [ [ "The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. 
First I'll do a forward pass with one batch from the test set.", "_____no_output_____" ] ], [ [ "model = Classifier()\n\nimages, labels = next(iter(testloader))\n# Get the class probabilities\nps = torch.exp(model(images))\n# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples\nprint(ps.shape)", "torch.Size([64, 10])\n" ] ], [ [ "With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.", "_____no_output_____" ] ], [ [ "top_p, top_class = ps.topk(1, dim=1)\n# Look at the most likely classes for the first 10 examples\nprint(top_class[:10,:])", "tensor([[0],\n [3],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0]])\n" ] ], [ [ "Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.\n\nIf we do\n\n```python\nequals = top_class == labels\n```\n\n`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.", "_____no_output_____" ] ], [ [ "equals = top_class == labels.view(*top_class.shape)", "_____no_output_____" ] ], [ [ "Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error\n\n```\nRuntimeError: mean is not implemented for type torch.ByteTensor\n```\n\nThis happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implement for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.", "_____no_output_____" ] ], [ [ "accuracy = torch.mean(equals.type(torch.FloatTensor))\nprint(f'Accuracy: {accuracy.item()*100}%')", "Accuracy: 20.3125%\n" ] ], [ [ "The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:\n\n```python\n# turn off gradients\nwith torch.no_grad():\n # validation pass here\n for images, labels in testloader:\n ...\n```\n\n>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. 
You should be able to get an accuracy above 80%.", "_____no_output_____" ] ], [ [ "model = Classifier()\ncriterion = nn.NLLLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.003)\n\nepochs = 30\nsteps = 0\n\ntrain_losses, test_losses = [], []\nfor e in range(epochs):\n running_loss = 0\n for images, labels in trainloader:\n \n optimizer.zero_grad()\n \n log_ps = model(images)\n loss = criterion(log_ps, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n \n else:\n ## TODO: Implement the validation pass and print out the validation accuracy\n test_loss = 0\n accuracy = 0\n with torch.no_grad():\n for images, labels in testloader:\n log_ps = model(images)\n test_loss += criterion(log_ps, labels)\n \n ps = torch.exp(log_ps)\n top_p, top_class = ps.topk(1, dim=1)\n equals = top_class == labels.view(*top_class.shape)\n accuracy += torch.mean(equals.type(torch.FloatTensor))\n \n train_losses.append(running_loss/len(trainloader))\n test_losses.append(test_loss/len(testloader))\n \n \n print(\"Epoch: {}/{}.. \".format(e+1, epochs),\n \"Training Loss: {:.3f}.. \".format(running_loss/len(trainloader)),\n \"Test Loss: {:.3f}.. \".format(test_loss/len(testloader)),\n \"Test Accuracy: {:.3f}\".format(accuracy/len(testloader)))\n ", "Epoch: 1/30.. Training Loss: 0.511.. Test Loss: 0.455.. Test Accuracy: 0.839\nEpoch: 2/30.. Training Loss: 0.392.. Test Loss: 0.411.. Test Accuracy: 0.846\nEpoch: 3/30.. Training Loss: 0.353.. Test Loss: 0.403.. Test Accuracy: 0.853\nEpoch: 4/30.. Training Loss: 0.333.. Test Loss: 0.376.. Test Accuracy: 0.867\nEpoch: 5/30.. Training Loss: 0.318.. Test Loss: 0.392.. Test Accuracy: 0.857\nEpoch: 6/30.. Training Loss: 0.303.. Test Loss: 0.388.. Test Accuracy: 0.864\nEpoch: 7/30.. Training Loss: 0.295.. Test Loss: 0.363.. Test Accuracy: 0.871\nEpoch: 8/30.. Training Loss: 0.282.. Test Loss: 0.352.. Test Accuracy: 0.877\nEpoch: 9/30.. Training Loss: 0.279.. Test Loss: 0.396.. Test Accuracy: 0.863\nEpoch: 10/30.. Training Loss: 0.269.. Test Loss: 0.363.. Test Accuracy: 0.874\nEpoch: 11/30.. Training Loss: 0.260.. Test Loss: 0.385.. Test Accuracy: 0.869\nEpoch: 12/30.. Training Loss: 0.249.. Test Loss: 0.382.. Test Accuracy: 0.879\nEpoch: 13/30.. Training Loss: 0.247.. Test Loss: 0.376.. Test Accuracy: 0.876\nEpoch: 14/30.. Training Loss: 0.244.. Test Loss: 0.381.. Test Accuracy: 0.878\nEpoch: 15/30.. Training Loss: 0.235.. Test Loss: 0.369.. Test Accuracy: 0.880\nEpoch: 16/30.. Training Loss: 0.228.. Test Loss: 0.389.. Test Accuracy: 0.877\nEpoch: 17/30.. Training Loss: 0.232.. Test Loss: 0.380.. Test Accuracy: 0.877\nEpoch: 18/30.. Training Loss: 0.220.. Test Loss: 0.379.. Test Accuracy: 0.880\nEpoch: 19/30.. Training Loss: 0.217.. Test Loss: 0.383.. Test Accuracy: 0.883\nEpoch: 20/30.. Training Loss: 0.214.. Test Loss: 0.371.. Test Accuracy: 0.880\nEpoch: 21/30.. Training Loss: 0.211.. Test Loss: 0.423.. Test Accuracy: 0.873\nEpoch: 22/30.. Training Loss: 0.207.. Test Loss: 0.401.. Test Accuracy: 0.885\nEpoch: 23/30.. Training Loss: 0.203.. Test Loss: 0.414.. Test Accuracy: 0.876\nEpoch: 24/30.. Training Loss: 0.203.. Test Loss: 0.401.. Test Accuracy: 0.880\nEpoch: 25/30.. Training Loss: 0.200.. Test Loss: 0.442.. Test Accuracy: 0.874\nEpoch: 26/30.. Training Loss: 0.196.. Test Loss: 0.393.. Test Accuracy: 0.886\nEpoch: 27/30.. Training Loss: 0.190.. Test Loss: 0.421.. Test Accuracy: 0.883\nEpoch: 28/30.. Training Loss: 0.187.. Test Loss: 0.408.. Test Accuracy: 0.878\nEpoch: 29/30.. Training Loss: 0.185.. Test Loss: 0.423.. 
Test Accuracy: 0.880\nEpoch: 30/30.. Training Loss: 0.180.. Test Loss: 0.466.. Test Accuracy: 0.874\n" ] ], [ [ "## Overfitting\n\nIf we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.\n\n<img src='assets/overfitting.png' width=450px>\n\nThe network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.\n\nThe most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing it's ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.\n\n```python\nclass Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n \n # Dropout module with 0.2 drop probability\n self.dropout = nn.Dropout(p=0.2)\n \n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n \n # Now with dropout\n x = self.dropout(F.relu(self.fc1(x)))\n x = self.dropout(F.relu(self.fc2(x)))\n x = self.dropout(F.relu(self.fc3(x)))\n \n # output so no dropout here\n x = F.log_softmax(self.fc4(x), dim=1)\n \n return x\n```\n\nDuring training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.\n\n```python\n# turn off gradients\nwith torch.no_grad():\n \n # set model to evaluation mode\n model.eval()\n \n # validation pass here\n for images, labels in testloader:\n ...\n\n# set model back to train mode\nmodel.train()\n```", "_____no_output_____" ], [ "> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. 
See if you can get a lower validation loss or higher accuracy.", "_____no_output_____" ] ], [ [ "## TODO: Define your model with dropout added\nclass Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n \n self.dropout = nn.Dropout(p=0.2)\n \n def forward(self,x):\n x = x.view(x.shape[0], -1)\n\n # Now with dropout\n x = self.dropout(F.relu(self.fc1(x)))\n x = self.dropout(F.relu(self.fc2(x)))\n x = self.dropout(F.relu(self.fc3(x)))\n\n # output so no dropout here\n x = F.log_softmax(self.fc4(x), dim=1)\n\n return x\n ", "_____no_output_____" ], [ "model = Classifier()\ncriterion = nn.NLLLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.003)\n\nepochs = 30\nsteps = 0\n\ntrain_losses, test_losses = [], []\nfor e in range(epochs):\n running_loss = 0\n for images, labels in trainloader:\n \n optimizer.zero_grad()\n \n log_ps = model(images)\n loss = criterion(log_ps, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n \n else:\n test_loss = 0\n accuracy = 0\n \n # Turn off gradients for validation, saves memory and computations\n with torch.no_grad():\n model.eval()\n for images, labels in testloader:\n log_ps = model(images)\n test_loss += criterion(log_ps, labels)\n \n ps = torch.exp(log_ps)\n top_p, top_class = ps.topk(1, dim=1)\n equals = top_class == labels.view(*top_class.shape)\n accuracy += torch.mean(equals.type(torch.FloatTensor))\n \n model.train()\n \n train_losses.append(running_loss/len(trainloader))\n test_losses.append(test_loss/len(testloader))\n\n print(\"Epoch: {}/{}.. \".format(e+1, epochs),\n \"Training Loss: {:.3f}.. \".format(running_loss/len(trainloader)),\n \"Test Loss: {:.3f}.. \".format(test_loss/len(testloader)),\n \"Test Accuracy: {:.3f}\".format(accuracy/len(testloader)))", "Epoch: 1/30.. Training Loss: 0.611.. Test Loss: 0.471.. Test Accuracy: 0.830\nEpoch: 2/30.. Training Loss: 0.482.. Test Loss: 0.439.. Test Accuracy: 0.841\nEpoch: 3/30.. Training Loss: 0.449.. Test Loss: 0.406.. Test Accuracy: 0.852\nEpoch: 4/30.. Training Loss: 0.433.. Test Loss: 0.438.. Test Accuracy: 0.841\nEpoch: 5/30.. Training Loss: 0.421.. Test Loss: 0.409.. Test Accuracy: 0.852\nEpoch: 6/30.. Training Loss: 0.411.. Test Loss: 0.416.. Test Accuracy: 0.845\nEpoch: 7/30.. Training Loss: 0.406.. Test Loss: 0.401.. Test Accuracy: 0.856\nEpoch: 8/30.. Training Loss: 0.399.. Test Loss: 0.406.. Test Accuracy: 0.850\nEpoch: 9/30.. Training Loss: 0.399.. Test Loss: 0.412.. Test Accuracy: 0.855\nEpoch: 10/30.. Training Loss: 0.389.. Test Loss: 0.394.. Test Accuracy: 0.864\nEpoch: 11/30.. Training Loss: 0.385.. Test Loss: 0.378.. Test Accuracy: 0.869\nEpoch: 12/30.. Training Loss: 0.382.. Test Loss: 0.385.. Test Accuracy: 0.866\nEpoch: 13/30.. Training Loss: 0.380.. Test Loss: 0.384.. Test Accuracy: 0.867\nEpoch: 14/30.. Training Loss: 0.374.. Test Loss: 0.377.. Test Accuracy: 0.867\nEpoch: 15/30.. Training Loss: 0.373.. Test Loss: 0.382.. Test Accuracy: 0.869\nEpoch: 16/30.. Training Loss: 0.374.. Test Loss: 0.372.. Test Accuracy: 0.872\nEpoch: 17/30.. Training Loss: 0.372.. Test Loss: 0.384.. Test Accuracy: 0.870\nEpoch: 18/30.. Training Loss: 0.370.. Test Loss: 0.389.. Test Accuracy: 0.869\nEpoch: 19/30.. Training Loss: 0.358.. Test Loss: 0.380.. Test Accuracy: 0.871\nEpoch: 20/30.. Training Loss: 0.357.. Test Loss: 0.384.. Test Accuracy: 0.871\nEpoch: 21/30.. Training Loss: 0.357.. Test Loss: 0.388.. 
Test Accuracy: 0.865\nEpoch: 22/30.. Training Loss: 0.353.. Test Loss: 0.388.. Test Accuracy: 0.875\nEpoch: 23/30.. Training Loss: 0.362.. Test Loss: 0.373.. Test Accuracy: 0.874\nEpoch: 24/30.. Training Loss: 0.346.. Test Loss: 0.383.. Test Accuracy: 0.873\nEpoch: 25/30.. Training Loss: 0.347.. Test Loss: 0.382.. Test Accuracy: 0.871\nEpoch: 26/30.. Training Loss: 0.343.. Test Loss: 0.390.. Test Accuracy: 0.867\nEpoch: 27/30.. Training Loss: 0.344.. Test Loss: 0.386.. Test Accuracy: 0.868\nEpoch: 28/30.. Training Loss: 0.342.. Test Loss: 0.392.. Test Accuracy: 0.867\nEpoch: 29/30.. Training Loss: 0.346.. Test Loss: 0.402.. Test Accuracy: 0.867\nEpoch: 30/30.. Training Loss: 0.341.. Test Loss: 0.373.. Test Accuracy: 0.874\n" ], [ "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "plt.plot(train_losses, label='Training loss')\nplt.plot(test_losses, label='Validation loss')\nplt.legend(frameon=False)", "_____no_output_____" ] ], [ [ "## Inference\n\nNow that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.", "_____no_output_____" ] ], [ [ "# Import helper module (should be in the repo)\nimport helper\n\n# Test out your network!\n\nmodel.eval()\n\ndataiter = iter(testloader)\nimages, labels = dataiter.next()\nimg = images[0]\n# Convert 2D image to 1D vector\nimg = img.view(1, 784)\n\n# Calculate the class probabilities (softmax) for img\nwith torch.no_grad():\n output = model.forward(img)\n\nps = torch.exp(output)\n\n# Plot the image and probabilities\nhelper.view_classify(img.view(1, 28, 28), ps, version='Fashion')", "_____no_output_____" ] ], [ [ "## Next Up!\n\nIn the next part, I'll show you how to save your trained models. In general, you won't want to train a model everytime you need it. Instead, you'll train once, save it, then load the model when you want to train more or use if for inference.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7a829b3b4350325d3f4e94635b976203589ee51
42,718
ipynb
Jupyter Notebook
notebooks/old/DeprecatedTitles_t0_gpt.ipynb
IlyaGusev/NewsCausation
b1a1b3af637c6328a1d3327b30b5f4ca0a8f3752
[ "Apache-2.0" ]
7
2021-08-25T15:40:55.000Z
2022-02-28T06:40:50.000Z
notebooks/old/DeprecatedTitles_t0_gpt.ipynb
IlyaGusev/NewsCausation
b1a1b3af637c6328a1d3327b30b5f4ca0a8f3752
[ "Apache-2.0" ]
null
null
null
notebooks/old/DeprecatedTitles_t0_gpt.ipynb
IlyaGusev/NewsCausation
b1a1b3af637c6328a1d3327b30b5f4ca0a8f3752
[ "Apache-2.0" ]
null
null
null
56.134034
17,360
0.760733
[ [ [ "# Requirements", "_____no_output_____" ] ], [ [ "# !pip install --upgrade transformers bertviz checklist", "_____no_output_____" ] ], [ [ "# Data loading", "_____no_output_____" ] ], [ [ "# !rm -rf ru_news_cause_v1.tsv*\n# !wget https://www.dropbox.com/s/kcxnhjzfut4guut/ru_news_cause_v1.tsv.tar.gz\n# !tar -xzvf ru_news_cause_v1.tsv.tar.gz", "_____no_output_____" ], [ "# !cat ru_news_cause_v1.tsv | wc -l\n# !head ru_news_cause_v1.tsv", "_____no_output_____" ] ], [ [ "# GPTCause", "_____no_output_____" ] ], [ [ "from transformers import GPT2LMHeadModel, GPT2TokenizerFast\ndevice = 'cuda'\nmodel_id = 'sberbank-ai/rugpt3small_based_on_gpt2' \n \nmodel = GPT2LMHeadModel.from_pretrained(model_id).to(device)\ntokenizer = GPT2TokenizerFast.from_pretrained(model_id)", "2021-08-01 15:30:34.664821: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n" ], [ "import torch\nmax_length = model.config.n_positions\n\ndef gpt_assess(s1, s2):\n encodings = tokenizer(f'{s1} {s2}', return_tensors='pt')\n with torch.no_grad():\n outputs = model(encodings.input_ids.to(device), labels=encodings.input_ids.to(device))\n log_likelihood = outputs[0] * encodings.input_ids.size(1)\n return log_likelihood.detach().cpu().numpy()\n\ndef gpt_assess_pair(s1,s2):\n ppl1 = gpt_assess(s1,s2)\n ppl2 = gpt_assess(s2,s1)\n if ppl1<ppl2:\n return 0, ppl1/ppl2\n else:\n return 1, ppl1/ppl2", "_____no_output_____" ], [ "print(gpt_assess('Привет!', 'Как дела?'))\nprint(gpt_assess('Как дела?', 'Привет!'))\nprint(gpt_assess_pair('Как дела?', 'Привет!'))\nprint(gpt_assess_pair('Привет!', 'Как дела?'))\n", "11.083624\n23.637806\n(1, 2.1326785)\n(0, 0.46889395)\n" ] ], [ [ "## Scoring", "_____no_output_____" ] ], [ [ "import csv\n\nlabels = []\ntexts = []\npreds = []\nconfs = []\n\nwith open(\"ru_news_cause_v1.tsv\", \"r\", encoding='utf-8') as r:\n reader = csv.reader(r, delimiter=\"\\t\")\n header = next(reader)\n for row in reader:\n r = dict(zip(header, row))\n if float(r[\"confidence\"]) < 0.69:\n continue\n result = r[\"result\"]\n mapping = {\n \"left_right_cause\": 0,\n \"left_right_cancel\": 0,\n \"right_left_cause\": 1,\n \"right_left_cancel\": 1\n }\n if result not in mapping:\n continue\n r[\"label\"] = mapping[result]\n \n labels.append(r['label'])\n texts.append( (r[\"left_title\"], r[\"right_title\"] ) )\n p, c = gpt_assess_pair( r[\"left_title\"], r[\"right_title\"] )\n preds.append( p )\n confs.append( c )\n", "_____no_output_____" ], [ "from collections import Counter\n\nprint('labels', Counter(labels))\nprint('preds', Counter(preds))", "labels Counter({1: 763, 0: 706})\npreds Counter({1: 742, 0: 727})\n" ], [ "import matplotlib.pyplot as plt\n\nplt.hist(confs)\nplt.show()", "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\nTo disable this warning, you can either:\n\t- Avoid using `tokenizers` before the fork if possible\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. 
Disabling parallelism to avoid deadlocks...\nTo disable this warning, you can either:\n\t- Avoid using `tokenizers` before the fork if possible\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ], [ "from sklearn.metrics import classification_report, balanced_accuracy_score, confusion_matrix\ny_true = labels\ny_pred = preds\n\nprint(classification_report(y_true, y_pred))\nprint('balanced_accuracy_score', balanced_accuracy_score(y_true, y_pred))\nprint('\\nconfusion_matrix\\n',confusion_matrix(y_true, y_pred))\n\n", " precision recall f1-score support\n\n 0 0.58 0.60 0.59 706\n 1 0.62 0.60 0.61 763\n\n accuracy 0.60 1469\n macro avg 0.60 0.60 0.60 1469\nweighted avg 0.60 0.60 0.60 1469\n\nbalanced_accuracy_score 0.5976343938308228\n\nconfusion_matrix\n [[421 285]\n [306 457]]\n" ], [ "import numpy as np\n\nconfidence_th = .1\n\nmask = np.array(list(map(lambda x:abs(x-1.), confs)))>confidence_th\n\ny_true = np.array(labels)[mask]\ny_pred = np.array(preds)[mask]\n\nprint(classification_report(y_true, y_pred))\nprint('balanced_accuracy_score', balanced_accuracy_score(y_true, y_pred))\nprint('\\nconfusion_matrix\\n',confusion_matrix(y_true, y_pred))\n\n", " precision recall f1-score support\n\n 0 0.73 0.67 0.70 139\n 1 0.72 0.78 0.75 153\n\n accuracy 0.73 292\n macro avg 0.73 0.72 0.72 292\nweighted avg 0.73 0.73 0.73 292\n\nbalanced_accuracy_score 0.7234212629896083\n\nconfusion_matrix\n [[ 93 46]\n [ 34 119]]\n" ], [ "from sklearn.metrics import f1_score\n\nxs = []\nys = []\nfor th_idx in range(250):\n th = th_idx/1000.\n mask = np.array(list(map(lambda x:abs(x-1.), confs)))>th\n\n y_true = np.array(labels)[mask]\n y_pred = np.array(preds)[mask]\n xs.append( th )\n ys.append( f1_score(y_true, y_pred) )\n", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nplt.plot(xs, ys)\nplt.suptitle('f1 by conf_th')\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7a83d9abb86b12132bb03d409122ae88cf9c1b2
2,421
ipynb
Jupyter Notebook
Strategy/math_quadratic_exercise.ipynb
NachoCP/python-design-patterns
76a7834ff3f8d5c759935f3f5bd3c7c2a57ea112
[ "MIT" ]
null
null
null
Strategy/math_quadratic_exercise.ipynb
NachoCP/python-design-patterns
76a7834ff3f8d5c759935f3f5bd3c7c2a57ea112
[ "MIT" ]
null
null
null
Strategy/math_quadratic_exercise.ipynb
NachoCP/python-design-patterns
76a7834ff3f8d5c759935f3f5bd3c7c2a57ea112
[ "MIT" ]
null
null
null
24.454545
82
0.497728
[ [ [ "from abc import ABC\nimport cmath\n\nclass DiscriminantStrategy(ABC):\n def calculate_discriminant(self, a, b, c):\n pass\n\n\nclass OrdinaryDiscriminantStrategy(DiscriminantStrategy):\n def calculate_discriminant(self, a, b, c):\n first_item = b*b\n second_item = 4*a*c\n return first_item - second_item\n\n\nclass RealDiscriminantStrategy(DiscriminantStrategy):\n def calculate_discriminant(self, a, b, c):\n first_item = b*b\n second_item = 4*a*c\n if second_item > first_item:\n return float('nan')\n else:\n return first_item - second_item\n\n\nclass QuadraticEquationSolver:\n def __init__(self, strategy):\n self.strategy = strategy\n\n def solve(self, a, b, c):\n \"\"\" Returns a pair of complex (!) values \"\"\"\n discriminant = complex(strategy.calculate_discriminant(a,b,c), 0)\n root_dis = cmath.sqrt(discriminant)\n return (\n (-b + root_dis) / (2*a),\n (-b - root_dis) / (2*a)\n )", "_____no_output_____" ], [ "strategy = RealDiscriminantStrategy()\nsolver = QuadraticEquationSolver(strategy)\nresults = solver.solve(1, 4, 5)", "_____no_output_____" ], [ "print(results)", "((nan+nanj), (nan+nanj))\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
e7a87655f8a4f79b7a95dcc9191ebf5f45d919ae
102,591
ipynb
Jupyter Notebook
CS224n/assignment1/exploring_word_vectors.ipynb
iofu728/Task
dd1f050d57ea831320ab6e4c2b827714cc13dfe3
[ "MIT" ]
2
2019-05-23T10:46:51.000Z
2020-03-21T13:42:22.000Z
CS224n/assignment1/exploring_word_vectors.ipynb
iofu728/Task
dd1f050d57ea831320ab6e4c2b827714cc13dfe3
[ "MIT" ]
1
2019-05-23T11:10:50.000Z
2019-05-23T14:37:42.000Z
CS224n/assignment1/exploring_word_vectors.ipynb
iofu728/Task
dd1f050d57ea831320ab6e4c2b827714cc13dfe3
[ "MIT" ]
3
2019-12-30T00:59:21.000Z
2020-04-13T05:51:26.000Z
74.395214
13,076
0.690304
[ [ [ "# CS224N Assignment 1: Exploring Word Vectors (25 Points)\n### <font color='blue'> Due 4:30pm, Tue Jan 14 </font>\n\nWelcome to CS224n! \n\nBefore you start, make sure you read the README.txt in the same directory as this notebook. You will find many provided codes in the notebook. We highly encourage you to read and understand the provided codes as part of the learning :-)", "_____no_output_____" ] ], [ [ "# All Import Statements Defined Here\n# Note: Do not add to this list.\n# ----------------\n\nimport sys\nassert sys.version_info[0]==3\nassert sys.version_info[1] >= 5\n\nfrom gensim.models import KeyedVectors\nfrom gensim.test.utils import datapath\nimport pprint\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = [10, 5]\nimport nltk\nnltk.download('reuters')\nfrom nltk.corpus import reuters\nimport numpy as np\nimport random\nimport scipy as sp\nfrom sklearn.decomposition import TruncatedSVD\nfrom sklearn.decomposition import PCA\n\nSTART_TOKEN = '<START>'\nEND_TOKEN = '<END>'\n\nnp.random.seed(0)\nrandom.seed(0)\n# ----------------", "[nltk_data] Downloading package reuters to\n[nltk_data] /usr/local/share/nltk_data...\n[nltk_data] Package reuters is already up-to-date!\n" ] ], [ [ "## Word Vectors\n\nWord Vectors are often used as a fundamental component for downstream NLP tasks, e.g. question answering, text generation, translation, etc., so it is important to build some intuitions as to their strengths and weaknesses. Here, you will explore two types of word vectors: those derived from *co-occurrence matrices*, and those derived via *GloVe*. \n\n**Assignment Notes:** Please make sure to save the notebook as you go along. Submission Instructions are located at the bottom of the notebook.\n\n**Note on Terminology:** The terms \"word vectors\" and \"word embeddings\" are often used interchangeably. The term \"embedding\" refers to the fact that we are encoding aspects of a word's meaning in a lower dimensional space. As [Wikipedia](https://en.wikipedia.org/wiki/Word_embedding) states, \"*conceptually it involves a mathematical embedding from a space with one dimension per word to a continuous vector space with a much lower dimension*\".", "_____no_output_____" ], [ "## Part 1: Count-Based Word Vectors (10 points)\n\nMost word vector models start from the following idea:\n\n*You shall know a word by the company it keeps ([Firth, J. R. 1957:11](https://en.wikipedia.org/wiki/John_Rupert_Firth))*\n\nMany word vector implementations are driven by the idea that similar words, i.e., (near) synonyms, will be used in similar contexts. As a result, similar words will often be spoken or written along with a shared subset of words, i.e., contexts. By examining these contexts, we can try to develop embeddings for our words. With this intuition in mind, many \"old school\" approaches to constructing word vectors relied on word counts. Here we elaborate upon one of those strategies, *co-occurrence matrices* (for more information, see [here](http://web.stanford.edu/class/cs124/lec/vectorsemantics.video.pdf) or [here](https://medium.com/data-science-group-iitr/word-embedding-2d05d270b285)).", "_____no_output_____" ], [ "### Co-Occurrence\n\nA co-occurrence matrix counts how often things co-occur in some environment. Given some word $w_i$ occurring in the document, we consider the *context window* surrounding $w_i$. Supposing our fixed window size is $n$, then this is the $n$ preceding and $n$ subsequent words in that document, i.e. 
words $w_{i-n} \\dots w_{i-1}$ and $w_{i+1} \\dots w_{i+n}$. We build a *co-occurrence matrix* $M$, which is a symmetric word-by-word matrix in which $M_{ij}$ is the number of times $w_j$ appears inside $w_i$'s window among all documents.\n\n**Example: Co-Occurrence with Fixed Window of n=1**:\n\nDocument 1: \"all that glitters is not gold\"\n\nDocument 2: \"all is well that ends well\"\n\n\n| * | `<START>` | all | that | glitters | is | not | gold | well | ends | `<END>` |\n|----------|-------|-----|------|----------|------|------|-------|------|------|-----|\n| `<START>` | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| all | 2 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |\n| that | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 |\n| glitters | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |\n| is | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |\n| not | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |\n| gold | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |\n| well | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 |\n| ends | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |\n| `<END>` | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |\n\n**Note:** In NLP, we often add `<START>` and `<END>` tokens to represent the beginning and end of sentences, paragraphs or documents. In thise case we imagine `<START>` and `<END>` tokens encapsulating each document, e.g., \"`<START>` All that glitters is not gold `<END>`\", and include these tokens in our co-occurrence counts.\n\nThe rows (or columns) of this matrix provide one type of word vectors (those based on word-word co-occurrence), but the vectors will be large in general (linear in the number of distinct words in a corpus). Thus, our next step is to run *dimensionality reduction*. In particular, we will run *SVD (Singular Value Decomposition)*, which is a kind of generalized *PCA (Principal Components Analysis)* to select the top $k$ principal components. Here's a visualization of dimensionality reduction with SVD. In this picture our co-occurrence matrix is $A$ with $n$ rows corresponding to $n$ words. We obtain a full matrix decomposition, with the singular values ordered in the diagonal $S$ matrix, and our new, shorter length-$k$ word vectors in $U_k$.\n\n![Picture of an SVD](./imgs/svd.png \"SVD\")\n\nThis reduced-dimensionality co-occurrence representation preserves semantic relationships between words, e.g. *doctor* and *hospital* will be closer than *doctor* and *dog*. \n\n**Notes:** If you can barely remember what an eigenvalue is, here's [a slow, friendly introduction to SVD](https://davetang.org/file/Singular_Value_Decomposition_Tutorial.pdf). If you want to learn more thoroughly about PCA or SVD, feel free to check out lectures [7](https://web.stanford.edu/class/cs168/l/l7.pdf), [8](http://theory.stanford.edu/~tim/s15/l/l8.pdf), and [9](https://web.stanford.edu/class/cs168/l/l9.pdf) of CS168. These course notes provide a great high-level treatment of these general purpose algorithms. Though, for the purpose of this class, you only need to know how to extract the k-dimensional embeddings by utilizing pre-programmed implementations of these algorithms from the numpy, scipy, or sklearn python packages. In practice, it is challenging to apply full SVD to large corpora because of the memory needed to perform PCA or SVD. 
However, if you only want the top $k$ vector components for relatively small $k$ — known as [Truncated SVD](https://en.wikipedia.org/wiki/Singular_value_decomposition#Truncated_SVD) — then there are reasonably scalable techniques to compute those iteratively.", "_____no_output_____" ], [ "### Plotting Co-Occurrence Word Embeddings\n\nHere, we will be using the Reuters (business and financial news) corpus. If you haven't run the import cell at the top of this page, please run it now (click it and press SHIFT-RETURN). The corpus consists of 10,788 news documents totaling 1.3 million words. These documents span 90 categories and are split into train and test. For more details, please see https://www.nltk.org/book/ch02.html. We provide a `read_corpus` function below that pulls out only articles from the \"crude\" (i.e. news articles about oil, gas, etc.) category. The function also adds `<START>` and `<END>` tokens to each of the documents, and lowercases words. You do **not** have to perform any other kind of pre-processing.", "_____no_output_____" ] ], [ [ "def read_corpus(category=\"crude\"):\n \"\"\" Read files from the specified Reuter's category.\n Params:\n category (string): category name\n Return:\n list of lists, with words from each of the processed files\n \"\"\"\n files = reuters.fileids(category)\n return [[START_TOKEN] + [w.lower() for w in list(reuters.words(f))] + [END_TOKEN] for f in files]\n", "_____no_output_____" ] ], [ [ "Let's have a look what these documents are like….", "_____no_output_____" ] ], [ [ "reuters_corpus = read_corpus()\npprint.pprint(reuters_corpus[:3], compact=True, width=100)", "[['<START>', 'japan', 'to', 'revise', 'long', '-', 'term', 'energy', 'demand', 'downwards', 'the',\n 'ministry', 'of', 'international', 'trade', 'and', 'industry', '(', 'miti', ')', 'will', 'revise',\n 'its', 'long', '-', 'term', 'energy', 'supply', '/', 'demand', 'outlook', 'by', 'august', 'to',\n 'meet', 'a', 'forecast', 'downtrend', 'in', 'japanese', 'energy', 'demand', ',', 'ministry',\n 'officials', 'said', '.', 'miti', 'is', 'expected', 'to', 'lower', 'the', 'projection', 'for',\n 'primary', 'energy', 'supplies', 'in', 'the', 'year', '2000', 'to', '550', 'mln', 'kilolitres',\n '(', 'kl', ')', 'from', '600', 'mln', ',', 'they', 'said', '.', 'the', 'decision', 'follows',\n 'the', 'emergence', 'of', 'structural', 'changes', 'in', 'japanese', 'industry', 'following',\n 'the', 'rise', 'in', 'the', 'value', 'of', 'the', 'yen', 'and', 'a', 'decline', 'in', 'domestic',\n 'electric', 'power', 'demand', '.', 'miti', 'is', 'planning', 'to', 'work', 'out', 'a', 'revised',\n 'energy', 'supply', '/', 'demand', 'outlook', 'through', 'deliberations', 'of', 'committee',\n 'meetings', 'of', 'the', 'agency', 'of', 'natural', 'resources', 'and', 'energy', ',', 'the',\n 'officials', 'said', '.', 'they', 'said', 'miti', 'will', 'also', 'review', 'the', 'breakdown',\n 'of', 'energy', 'supply', 'sources', ',', 'including', 'oil', ',', 'nuclear', ',', 'coal', 'and',\n 'natural', 'gas', '.', 'nuclear', 'energy', 'provided', 'the', 'bulk', 'of', 'japan', \"'\", 's',\n 'electric', 'power', 'in', 'the', 'fiscal', 'year', 'ended', 'march', '31', ',', 'supplying',\n 'an', 'estimated', '27', 'pct', 'on', 'a', 'kilowatt', '/', 'hour', 'basis', ',', 'followed',\n 'by', 'oil', '(', '23', 'pct', ')', 'and', 'liquefied', 'natural', 'gas', '(', '21', 'pct', '),',\n 'they', 'noted', '.', '<END>'],\n ['<START>', 'energy', '/', 'u', '.', 's', '.', 'petrochemical', 'industry', 'cheap', 'oil',\n 'feedstocks', ',', 
'the', 'weakened', 'u', '.', 's', '.', 'dollar', 'and', 'a', 'plant',\n 'utilization', 'rate', 'approaching', '90', 'pct', 'will', 'propel', 'the', 'streamlined', 'u',\n '.', 's', '.', 'petrochemical', 'industry', 'to', 'record', 'profits', 'this', 'year', ',',\n 'with', 'growth', 'expected', 'through', 'at', 'least', '1990', ',', 'major', 'company',\n 'executives', 'predicted', '.', 'this', 'bullish', 'outlook', 'for', 'chemical', 'manufacturing',\n 'and', 'an', 'industrywide', 'move', 'to', 'shed', 'unrelated', 'businesses', 'has', 'prompted',\n 'gaf', 'corp', '&', 'lt', ';', 'gaf', '>,', 'privately', '-', 'held', 'cain', 'chemical', 'inc',\n ',', 'and', 'other', 'firms', 'to', 'aggressively', 'seek', 'acquisitions', 'of', 'petrochemical',\n 'plants', '.', 'oil', 'companies', 'such', 'as', 'ashland', 'oil', 'inc', '&', 'lt', ';', 'ash',\n '>,', 'the', 'kentucky', '-', 'based', 'oil', 'refiner', 'and', 'marketer', ',', 'are', 'also',\n 'shopping', 'for', 'money', '-', 'making', 'petrochemical', 'businesses', 'to', 'buy', '.', '\"',\n 'i', 'see', 'us', 'poised', 'at', 'the', 'threshold', 'of', 'a', 'golden', 'period', ',\"', 'said',\n 'paul', 'oreffice', ',', 'chairman', 'of', 'giant', 'dow', 'chemical', 'co', '&', 'lt', ';',\n 'dow', '>,', 'adding', ',', '\"', 'there', \"'\", 's', 'no', 'major', 'plant', 'capacity', 'being',\n 'added', 'around', 'the', 'world', 'now', '.', 'the', 'whole', 'game', 'is', 'bringing', 'out',\n 'new', 'products', 'and', 'improving', 'the', 'old', 'ones', '.\"', 'analysts', 'say', 'the',\n 'chemical', 'industry', \"'\", 's', 'biggest', 'customers', ',', 'automobile', 'manufacturers',\n 'and', 'home', 'builders', 'that', 'use', 'a', 'lot', 'of', 'paints', 'and', 'plastics', ',',\n 'are', 'expected', 'to', 'buy', 'quantities', 'this', 'year', '.', 'u', '.', 's', '.',\n 'petrochemical', 'plants', 'are', 'currently', 'operating', 'at', 'about', '90', 'pct',\n 'capacity', ',', 'reflecting', 'tighter', 'supply', 'that', 'could', 'hike', 'product', 'prices',\n 'by', '30', 'to', '40', 'pct', 'this', 'year', ',', 'said', 'john', 'dosher', ',', 'managing',\n 'director', 'of', 'pace', 'consultants', 'inc', 'of', 'houston', '.', 'demand', 'for', 'some',\n 'products', 'such', 'as', 'styrene', 'could', 'push', 'profit', 'margins', 'up', 'by', 'as',\n 'much', 'as', '300', 'pct', ',', 'he', 'said', '.', 'oreffice', ',', 'speaking', 'at', 'a',\n 'meeting', 'of', 'chemical', 'engineers', 'in', 'houston', ',', 'said', 'dow', 'would', 'easily',\n 'top', 'the', '741', 'mln', 'dlrs', 'it', 'earned', 'last', 'year', 'and', 'predicted', 'it',\n 'would', 'have', 'the', 'best', 'year', 'in', 'its', 'history', '.', 'in', '1985', ',', 'when',\n 'oil', 'prices', 'were', 'still', 'above', '25', 'dlrs', 'a', 'barrel', 'and', 'chemical',\n 'exports', 'were', 'adversely', 'affected', 'by', 'the', 'strong', 'u', '.', 's', '.', 'dollar',\n ',', 'dow', 'had', 'profits', 'of', '58', 'mln', 'dlrs', '.', '\"', 'i', 'believe', 'the',\n 'entire', 'chemical', 'industry', 'is', 'headed', 'for', 'a', 'record', 'year', 'or', 'close',\n 'to', 'it', ',\"', 'oreffice', 'said', '.', 'gaf', 'chairman', 'samuel', 'heyman', 'estimated',\n 'that', 'the', 'u', '.', 's', '.', 'chemical', 'industry', 'would', 'report', 'a', '20', 'pct',\n 'gain', 'in', 'profits', 'during', '1987', '.', 'last', 'year', ',', 'the', 'domestic',\n 'industry', 'earned', 'a', 'total', 'of', '13', 'billion', 'dlrs', ',', 'a', '54', 'pct', 'leap',\n 'from', '1985', '.', 'the', 'turn', 'in', 'the', 'fortunes', 'of', 'the', 'once', '-', 
'sickly',\n 'chemical', 'industry', 'has', 'been', 'brought', 'about', 'by', 'a', 'combination', 'of', 'luck',\n 'and', 'planning', ',', 'said', 'pace', \"'\", 's', 'john', 'dosher', '.', 'dosher', 'said', 'last',\n 'year', \"'\", 's', 'fall', 'in', 'oil', 'prices', 'made', 'feedstocks', 'dramatically', 'cheaper',\n 'and', 'at', 'the', 'same', 'time', 'the', 'american', 'dollar', 'was', 'weakening', 'against',\n 'foreign', 'currencies', '.', 'that', 'helped', 'boost', 'u', '.', 's', '.', 'chemical',\n 'exports', '.', 'also', 'helping', 'to', 'bring', 'supply', 'and', 'demand', 'into', 'balance',\n 'has', 'been', 'the', 'gradual', 'market', 'absorption', 'of', 'the', 'extra', 'chemical',\n 'manufacturing', 'capacity', 'created', 'by', 'middle', 'eastern', 'oil', 'producers', 'in',\n 'the', 'early', '1980s', '.', 'finally', ',', 'virtually', 'all', 'major', 'u', '.', 's', '.',\n 'chemical', 'manufacturers', 'have', 'embarked', 'on', 'an', 'extensive', 'corporate',\n 'restructuring', 'program', 'to', 'mothball', 'inefficient', 'plants', ',', 'trim', 'the',\n 'payroll', 'and', 'eliminate', 'unrelated', 'businesses', '.', 'the', 'restructuring', 'touched',\n 'off', 'a', 'flurry', 'of', 'friendly', 'and', 'hostile', 'takeover', 'attempts', '.', 'gaf', ',',\n 'which', 'made', 'an', 'unsuccessful', 'attempt', 'in', '1985', 'to', 'acquire', 'union',\n 'carbide', 'corp', '&', 'lt', ';', 'uk', '>,', 'recently', 'offered', 'three', 'billion', 'dlrs',\n 'for', 'borg', 'warner', 'corp', '&', 'lt', ';', 'bor', '>,', 'a', 'chicago', 'manufacturer',\n 'of', 'plastics', 'and', 'chemicals', '.', 'another', 'industry', 'powerhouse', ',', 'w', '.',\n 'r', '.', 'grace', '&', 'lt', ';', 'gra', '>', 'has', 'divested', 'its', 'retailing', ',',\n 'restaurant', 'and', 'fertilizer', 'businesses', 'to', 'raise', 'cash', 'for', 'chemical',\n 'acquisitions', '.', 'but', 'some', 'experts', 'worry', 'that', 'the', 'chemical', 'industry',\n 'may', 'be', 'headed', 'for', 'trouble', 'if', 'companies', 'continue', 'turning', 'their',\n 'back', 'on', 'the', 'manufacturing', 'of', 'staple', 'petrochemical', 'commodities', ',', 'such',\n 'as', 'ethylene', ',', 'in', 'favor', 'of', 'more', 'profitable', 'specialty', 'chemicals',\n 'that', 'are', 'custom', '-', 'designed', 'for', 'a', 'small', 'group', 'of', 'buyers', '.', '\"',\n 'companies', 'like', 'dupont', '&', 'lt', ';', 'dd', '>', 'and', 'monsanto', 'co', '&', 'lt', ';',\n 'mtc', '>', 'spent', 'the', 'past', 'two', 'or', 'three', 'years', 'trying', 'to', 'get', 'out',\n 'of', 'the', 'commodity', 'chemical', 'business', 'in', 'reaction', 'to', 'how', 'badly', 'the',\n 'market', 'had', 'deteriorated', ',\"', 'dosher', 'said', '.', '\"', 'but', 'i', 'think', 'they',\n 'will', 'eventually', 'kill', 'the', 'margins', 'on', 'the', 'profitable', 'chemicals', 'in',\n 'the', 'niche', 'market', '.\"', 'some', 'top', 'chemical', 'executives', 'share', 'the',\n 'concern', '.', '\"', 'the', 'challenge', 'for', 'our', 'industry', 'is', 'to', 'keep', 'from',\n 'getting', 'carried', 'away', 'and', 'repeating', 'past', 'mistakes', ',\"', 'gaf', \"'\", 's',\n 'heyman', 'cautioned', '.', '\"', 'the', 'shift', 'from', 'commodity', 'chemicals', 'may', 'be',\n 'ill', '-', 'advised', '.', 'specialty', 'businesses', 'do', 'not', 'stay', 'special', 'long',\n '.\"', 'houston', '-', 'based', 'cain', 'chemical', ',', 'created', 'this', 'month', 'by', 'the',\n 'sterling', 'investment', 'banking', 'group', ',', 'believes', 'it', 'can', 'generate', '700',\n 'mln', 'dlrs', 'in', 'annual', 'sales', 'by', 
'bucking', 'the', 'industry', 'trend', '.',\n 'chairman', 'gordon', 'cain', ',', 'who', 'previously', 'led', 'a', 'leveraged', 'buyout', 'of',\n 'dupont', \"'\", 's', 'conoco', 'inc', \"'\", 's', 'chemical', 'business', ',', 'has', 'spent', '1',\n '.', '1', 'billion', 'dlrs', 'since', 'january', 'to', 'buy', 'seven', 'petrochemical', 'plants',\n 'along', 'the', 'texas', 'gulf', 'coast', '.', 'the', 'plants', 'produce', 'only', 'basic',\n 'commodity', 'petrochemicals', 'that', 'are', 'the', 'building', 'blocks', 'of', 'specialty',\n 'products', '.', '\"', 'this', 'kind', 'of', 'commodity', 'chemical', 'business', 'will', 'never',\n 'be', 'a', 'glamorous', ',', 'high', '-', 'margin', 'business', ',\"', 'cain', 'said', ',',\n 'adding', 'that', 'demand', 'is', 'expected', 'to', 'grow', 'by', 'about', 'three', 'pct',\n 'annually', '.', 'garo', 'armen', ',', 'an', 'analyst', 'with', 'dean', 'witter', 'reynolds', ',',\n 'said', 'chemical', 'makers', 'have', 'also', 'benefitted', 'by', 'increasing', 'demand', 'for',\n 'plastics', 'as', 'prices', 'become', 'more', 'competitive', 'with', 'aluminum', ',', 'wood',\n 'and', 'steel', 'products', '.', 'armen', 'estimated', 'the', 'upturn', 'in', 'the', 'chemical',\n 'business', 'could', 'last', 'as', 'long', 'as', 'four', 'or', 'five', 'years', ',', 'provided',\n 'the', 'u', '.', 's', '.', 'economy', 'continues', 'its', 'modest', 'rate', 'of', 'growth', '.',\n '<END>'],\n ['<START>', 'turkey', 'calls', 'for', 'dialogue', 'to', 'solve', 'dispute', 'turkey', 'said',\n 'today', 'its', 'disputes', 'with', 'greece', ',', 'including', 'rights', 'on', 'the',\n 'continental', 'shelf', 'in', 'the', 'aegean', 'sea', ',', 'should', 'be', 'solved', 'through',\n 'negotiations', '.', 'a', 'foreign', 'ministry', 'statement', 'said', 'the', 'latest', 'crisis',\n 'between', 'the', 'two', 'nato', 'members', 'stemmed', 'from', 'the', 'continental', 'shelf',\n 'dispute', 'and', 'an', 'agreement', 'on', 'this', 'issue', 'would', 'effect', 'the', 'security',\n ',', 'economy', 'and', 'other', 'rights', 'of', 'both', 'countries', '.', '\"', 'as', 'the',\n 'issue', 'is', 'basicly', 'political', ',', 'a', 'solution', 'can', 'only', 'be', 'found', 'by',\n 'bilateral', 'negotiations', ',\"', 'the', 'statement', 'said', '.', 'greece', 'has', 'repeatedly',\n 'said', 'the', 'issue', 'was', 'legal', 'and', 'could', 'be', 'solved', 'at', 'the',\n 'international', 'court', 'of', 'justice', '.', 'the', 'two', 'countries', 'approached', 'armed',\n 'confrontation', 'last', 'month', 'after', 'greece', 'announced', 'it', 'planned', 'oil',\n 'exploration', 'work', 'in', 'the', 'aegean', 'and', 'turkey', 'said', 'it', 'would', 'also',\n 'search', 'for', 'oil', '.', 'a', 'face', '-', 'off', 'was', 'averted', 'when', 'turkey',\n 'confined', 'its', 'research', 'to', 'territorrial', 'waters', '.', '\"', 'the', 'latest',\n 'crises', 'created', 'an', 'historic', 'opportunity', 'to', 'solve', 'the', 'disputes', 'between',\n 'the', 'two', 'countries', ',\"', 'the', 'foreign', 'ministry', 'statement', 'said', '.', 'turkey',\n \"'\", 's', 'ambassador', 'in', 'athens', ',', 'nazmi', 'akiman', ',', 'was', 'due', 'to', 'meet',\n 'prime', 'minister', 'andreas', 'papandreou', 'today', 'for', 'the', 'greek', 'reply', 'to', 'a',\n 'message', 'sent', 'last', 'week', 'by', 'turkish', 'prime', 'minister', 'turgut', 'ozal', '.',\n 'the', 'contents', 'of', 'the', 'message', 'were', 'not', 'disclosed', '.', '<END>']]\n" ] ], [ [ "### Question 1.1: Implement `distinct_words` [code] (2 points)\n\nWrite a method to work 
out the distinct words (word types) that occur in the corpus. You can do this with `for` loops, but it's more efficient to do it with Python list comprehensions. In particular, [this](https://coderwall.com/p/rcmaea/flatten-a-list-of-lists-in-one-line-in-python) may be useful to flatten a list of lists. If you're not familiar with Python list comprehensions in general, here's [more information](https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html).\n\nYou may find it useful to use [Python sets](https://www.w3schools.com/python/python_sets.asp) to remove duplicate words.", "_____no_output_____" ] ], [ [ "def distinct_words(corpus):\n \"\"\" Determine a list of distinct words for the corpus.\n Params:\n corpus (list of list of strings): corpus of documents\n Return:\n corpus_words (list of strings): list of distinct words across the corpus, sorted (using python 'sorted' function)\n num_corpus_words (integer): number of distinct words across the corpus\n \"\"\"\n \n # ------------------\n # Write your implementation here.\n corpus_words = list(sorted(set([token for sentences in corpus for token in sentences])))\n num_corpus_words = len(corpus_words)\n # ------------------\n\n return corpus_words, num_corpus_words", "_____no_output_____" ], [ "# ---------------------\n# Run this sanity check\n# Note that this not an exhaustive check for correctness.\n# ---------------------\n\n# Define toy corpus\ntest_corpus = [\"{} All that glitters isn't gold {}\".format(START_TOKEN, END_TOKEN).split(\" \"), \"{} All's well that ends well {}\".format(START_TOKEN, END_TOKEN).split(\" \")]\ntest_corpus_words, num_corpus_words = distinct_words(test_corpus)\n\n# Correct answers\nans_test_corpus_words = sorted([START_TOKEN, \"All\", \"ends\", \"that\", \"gold\", \"All's\", \"glitters\", \"isn't\", \"well\", END_TOKEN])\nans_num_corpus_words = len(ans_test_corpus_words)\n\n# Test correct number of words\nassert(num_corpus_words == ans_num_corpus_words), \"Incorrect number of distinct words. Correct: {}. Yours: {}\".format(ans_num_corpus_words, num_corpus_words)\n\n# Test correct words\nassert (test_corpus_words == ans_test_corpus_words), \"Incorrect corpus_words.\\nCorrect: {}\\nYours: {}\".format(str(ans_test_corpus_words), str(test_corpus_words))\n\n# Print Success\nprint (\"-\" * 80)\nprint(\"Passed All Tests!\")\nprint (\"-\" * 80)", "--------------------------------------------------------------------------------\nPassed All Tests!\n--------------------------------------------------------------------------------\n" ] ], [ [ "### Question 1.2: Implement `compute_co_occurrence_matrix` [code] (3 points)\n\nWrite a method that constructs a co-occurrence matrix for a certain window-size $n$ (with a default of 4), considering words $n$ before and $n$ after the word in the center of the window. Here, we start to use `numpy (np)` to represent vectors, matrices, and tensors. If you're not familiar with NumPy, there's a NumPy tutorial in the second half of this cs231n [Python NumPy tutorial](http://cs231n.github.io/python-numpy-tutorial/).\n", "_____no_output_____" ] ], [ [ "def compute_co_occurrence_matrix(corpus, window_size=4):\n \"\"\" Compute co-occurrence matrix for the given corpus and window_size (default of 4).\n \n Note: Each word in a document should be at the center of a window. 
Words near edges will have a smaller\n number of co-occurring words.\n \n For example, if we take the document \"<START> All that glitters is not gold <END>\" with window size of 4,\n \"All\" will co-occur with \"<START>\", \"that\", \"glitters\", \"is\", and \"not\".\n \n Params:\n corpus (list of list of strings): corpus of documents\n window_size (int): size of context window\n Return:\n M (a symmetric numpy matrix of shape (number of unique words in the corpus , number of unique words in the corpus)): \n Co-occurence matrix of word counts. \n The ordering of the words in the rows/columns should be the same as the ordering of the words given by the distinct_words function.\n word2Ind (dict): dictionary that maps word to index (i.e. row/column number) for matrix M.\n \"\"\"\n words, num_words = distinct_words(corpus)\n\n # ------------------\n # Write your implementation here.\n M = np.zeros((num_words, num_words), dtype=np.int)\n co_time = {ii: [] for ii in range(num_words)}\n word2Ind = dict(zip(words, range(num_words)))\n for sent in corpus:\n for idx, center in enumerate(sent):\n center_id = word2Ind[center]\n context = sent[max(0, idx - window_size):idx + window_size + 1]\n context_id = [word2Ind[ii] for ii in context if ii != center]\n co_time[center_id].extend(context_id)\n for center, co_list in co_time.items():\n unique, counts = np.unique(co_list, return_counts=True)\n co_map = dict(zip(unique, counts))\n for context, time in co_map.items():\n M[center][context] = time\n\n # ------------------\n\n return M, word2Ind", "_____no_output_____" ], [ "# ---------------------\n# Run this sanity check\n# Note that this is not an exhaustive check for correctness.\n# ---------------------\n\n# Define toy corpus and get student's co-occurrence matrix\ntest_corpus = [\"{} All that glitters isn't gold {}\".format(START_TOKEN, END_TOKEN).split(\" \"), \"{} All's well that ends well {}\".format(START_TOKEN, END_TOKEN).split(\" \")]\nM_test, word2Ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1)\n\n# Correct M and word2Ind\nM_test_ans = np.array( \n [[0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,],\n [0., 0., 1., 1., 0., 0., 0., 0., 0., 0.,],\n [0., 1., 0., 0., 0., 0., 0., 0., 1., 0.,],\n [0., 1., 0., 0., 0., 0., 0., 0., 0., 1.,],\n [0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,],\n [0., 0., 0., 0., 0., 0., 0., 1., 1., 0.,],\n [1., 0., 0., 0., 0., 0., 0., 1., 0., 0.,],\n [0., 0., 0., 0., 0., 1., 1., 0., 0., 0.,],\n [0., 0., 1., 0., 1., 1., 0., 0., 0., 1.,],\n [1., 0., 0., 1., 1., 0., 0., 0., 1., 0.,]]\n)\nans_test_corpus_words = sorted([START_TOKEN, \"All\", \"ends\", \"that\", \"gold\", \"All's\", \"glitters\", \"isn't\", \"well\", END_TOKEN])\nword2Ind_ans = dict(zip(ans_test_corpus_words, range(len(ans_test_corpus_words))))\n\n# Test correct word2Ind\nassert (word2Ind_ans == word2Ind_test), \"Your word2Ind is incorrect:\\nCorrect: {}\\nYours: {}\".format(word2Ind_ans, word2Ind_test)\n\n# Test correct M shape\nassert (M_test.shape == M_test_ans.shape), \"M matrix has incorrect shape.\\nCorrect: {}\\nYours: {}\".format(M_test.shape, M_test_ans.shape)\n\n# Test correct M values\nfor w1 in word2Ind_ans.keys():\n idx1 = word2Ind_ans[w1]\n for w2 in word2Ind_ans.keys():\n idx2 = word2Ind_ans[w2]\n student = M_test[idx1, idx2]\n correct = M_test_ans[idx1, idx2]\n if student != correct:\n print(\"Correct M:\")\n print(M_test_ans)\n print(\"Your M: \")\n print(M_test)\n raise AssertionError(\"Incorrect count at index ({}, {})=({}, {}) in matrix M. 
Yours has {} but should have {}.\".format(idx1, idx2, w1, w2, student, correct))\n\n# Print Success\nprint (\"-\" * 80)\nprint(\"Passed All Tests!\")\nprint (\"-\" * 80)", "--------------------------------------------------------------------------------\nPassed All Tests!\n--------------------------------------------------------------------------------\n" ] ], [ [ "### Question 1.3: Implement `reduce_to_k_dim` [code] (1 point)\n\nConstruct a method that performs dimensionality reduction on the matrix to produce k-dimensional embeddings. Use SVD to take the top k components and produce a new matrix of k-dimensional embeddings. \n\n**Note:** All of numpy, scipy, and scikit-learn (`sklearn`) provide *some* implementation of SVD, but only scipy and sklearn provide an implementation of Truncated SVD, and only sklearn provides an efficient randomized algorithm for calculating large-scale Truncated SVD. So please use [sklearn.decomposition.TruncatedSVD](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html).", "_____no_output_____" ] ], [ [ "def reduce_to_k_dim(M, k=2):\n \"\"\" Reduce a co-occurence count matrix of dimensionality (num_corpus_words, num_corpus_words)\n to a matrix of dimensionality (num_corpus_words, k) using the following SVD function from Scikit-Learn:\n - http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html\n \n Params:\n M (numpy matrix of shape (number of unique words in the corpus , number of unique words in the corpus)): co-occurence matrix of word counts\n k (int): embedding size of each word after dimension reduction\n Return:\n M_reduced (numpy matrix of shape (number of corpus words, k)): matrix of k-dimensioal word embeddings.\n In terms of the SVD from math class, this actually returns U * S\n \"\"\" \n n_iters = 10 # Use this parameter in your call to `TruncatedSVD`\n print(\"Running Truncated SVD over %i words...\" % (M.shape[0]))\n \n # ------------------\n # Write your implementation here.\n svd = TruncatedSVD(n_components=k, n_iter=n_iters)\n svd.fit(M.T)\n M_reduced = svd.components_.T\n # ------------------\n\n print(\"Done.\")\n return M_reduced", "_____no_output_____" ], [ "# ---------------------\n# Run this sanity check\n# Note that this is not an exhaustive check for correctness \n# In fact we only check that your M_reduced has the right dimensions.\n# ---------------------\n\n# Define toy corpus and run student code\ntest_corpus = [\"{} All that glitters isn't gold {}\".format(START_TOKEN, END_TOKEN).split(\" \"), \"{} All's well that ends well {}\".format(START_TOKEN, END_TOKEN).split(\" \")]\nM_test, word2Ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1)\nM_test_reduced = reduce_to_k_dim(M_test, k=2)\n\n# Test proper dimensions\nassert (M_test_reduced.shape[0] == 10), \"M_reduced has {} rows; should have {}\".format(M_test_reduced.shape[0], 10)\nassert (M_test_reduced.shape[1] == 2), \"M_reduced has {} columns; should have {}\".format(M_test_reduced.shape[1], 2)\n\n# Print Success\nprint (\"-\" * 80)\nprint(\"Passed All Tests!\")\nprint (\"-\" * 80)", "Running Truncated SVD over 10 words...\nDone.\n--------------------------------------------------------------------------------\nPassed All Tests!\n--------------------------------------------------------------------------------\n" ] ], [ [ "### Question 1.4: Implement `plot_embeddings` [code] (1 point)\n\nHere you will write a function to plot a set of 2D vectors in 2D space. 
For graphs, we will use Matplotlib (`plt`).\n\nFor this example, you may find it useful to adapt [this code](https://www.pythonmembers.club/2018/05/08/matplotlib-scatter-plot-annotate-set-text-at-label-each-point/). In the future, a good way to make a plot is to look at [the Matplotlib gallery](https://matplotlib.org/gallery/index.html), find a plot that looks somewhat like what you want, and adapt the code they give.", "_____no_output_____" ] ], [ [ "def plot_embeddings(M_reduced, word2Ind, words):\n \"\"\" Plot in a scatterplot the embeddings of the words specified in the list \"words\".\n NOTE: do not plot all the words listed in M_reduced / word2Ind.\n Include a label next to each point.\n \n Params:\n M_reduced (numpy matrix of shape (number of unique words in the corpus , 2)): matrix of 2-dimensioal word embeddings\n word2Ind (dict): dictionary that maps word to indices for matrix M\n words (list of strings): words whose embeddings we want to visualize\n \"\"\"\n\n # ------------------\n # Write your implementation here.\n for w in words:\n x = M_reduced[word2Ind[w]][0]\n y = M_reduced[word2Ind[w]][1]\n plt.scatter(x, y, marker='x', color='red')\n plt.text(x, y, w)\n plt.show()\n # ------------------", "_____no_output_____" ], [ "# ---------------------\n# Run this sanity check\n# Note that this is not an exhaustive check for correctness.\n# The plot produced should look like the \"test solution plot\" depicted below. \n# ---------------------\n\nprint (\"-\" * 80)\nprint (\"Outputted Plot:\")\n\nM_reduced_plot_test = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1], [0, 0]])\nword2Ind_plot_test = {'test1': 0, 'test2': 1, 'test3': 2, 'test4': 3, 'test5': 4}\nwords = ['test1', 'test2', 'test3', 'test4', 'test5']\nplot_embeddings(M_reduced_plot_test, word2Ind_plot_test, words)\n\nprint (\"-\" * 80)", "--------------------------------------------------------------------------------\nOutputted Plot:\n" ] ], [ [ "<font color=red>**Test Plot Solution**</font>\n<br>\n<img src=\"./imgs/test_plot.png\" width=40% style=\"float: left;\"> </img>\n", "_____no_output_____" ], [ "### Question 1.5: Co-Occurrence Plot Analysis [written] (3 points)\n\nNow we will put together all the parts you have written! We will compute the co-occurrence matrix with fixed window of 4 (the default window size), over the Reuters \"crude\" (oil) corpus. Then we will use TruncatedSVD to compute 2-dimensional embeddings of each word. TruncatedSVD returns U\\*S, so we need to normalize the returned vectors, so that all the vectors will appear around the unit circle (therefore closeness is directional closeness). **Note**: The line of code below that does the normalizing uses the NumPy concept of *broadcasting*. If you don't know about broadcasting, check out\n[Computation on Arrays: Broadcasting by Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html).\n\nRun the below cell to produce the plot. It'll probably take a few seconds to run. What clusters together in 2-dimensional embedding space? What doesn't cluster together that you might think should have? 
**Note:** \"bpd\" stands for \"barrels per day\" and is a commonly used abbreviation in crude oil topic articles.", "_____no_output_____" ] ], [ [ "# -----------------------------\n# Run This Cell to Produce Your Plot\n# ------------------------------\nreuters_corpus = read_corpus()\nM_co_occurrence, word2Ind_co_occurrence = compute_co_occurrence_matrix(reuters_corpus)\nM_reduced_co_occurrence = reduce_to_k_dim(M_co_occurrence, k=2)\n\n# Rescale (normalize) the rows to make them each of unit-length\nM_lengths = np.linalg.norm(M_reduced_co_occurrence, axis=1)\nM_normalized = M_reduced_co_occurrence / M_lengths[:, np.newaxis] # broadcasting\n\nwords = ['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']\n\nplot_embeddings(M_normalized, word2Ind_co_occurrence, words)", "Running Truncated SVD over 8185 words...\nDone.\n" ] ], [ [ "The 'ecuador', 'energy', 'kuwait', 'oil', 'output', 'venezuela' cluster together which is intuitive.\nAnd the 'bpd', 'barrels' doesn't cluster together which should have.\n", "_____no_output_____" ], [ "## Part 2: Prediction-Based Word Vectors (15 points)\n\nAs discussed in class, more recently prediction-based word vectors have demonstrated better performance, such as word2vec and GloVe (which also utilizes the benefit of counts). Here, we shall explore the embeddings produced by GloVe. Please revisit the class notes and lecture slides for more details on the word2vec and GloVe algorithms. If you're feeling adventurous, challenge yourself and try reading [GloVe's original paper](https://nlp.stanford.edu/pubs/glove.pdf).\n\nThen run the following cells to load the GloVe vectors into memory. **Note**: If this is your first time to run these cells, i.e. download the embedding model, it will take about 15 minutes to run. If you've run these cells before, rerunning them will load the model without redownloading it, which will take about 1 to 2 minutes.", "_____no_output_____" ] ], [ [ "def load_embedding_model():\n \"\"\" Load GloVe Vectors\n Return:\n wv_from_bin: All 400000 embeddings, each lengh 200\n \"\"\"\n import gensim.downloader as api\n wv_from_bin = api.load(\"glove-wiki-gigaword-200\")\n print(\"Loaded vocab size %i\" % len(wv_from_bin.vocab.keys()))\n return wv_from_bin", "_____no_output_____" ], [ "# -----------------------------------\n# Run Cell to Load Word Vectors\n# Note: This will take several minutes\n# -----------------------------------\nwv_from_bin = load_embedding_model()", "Loaded vocab size 400000\n" ] ], [ [ "#### Note: If you are receiving reset by peer error, rerun the cell to restart the download. ", "_____no_output_____" ], [ "### Reducing dimensionality of Word Embeddings\nLet's directly compare the GloVe embeddings to those of the co-occurrence matrix. In order to avoid running out of memory, we will work with a sample of 10000 GloVe vectors instead.\nRun the following cells to:\n\n1. Put 10000 Glove vectors into a matrix M\n2. 
Run reduce_to_k_dim (your Truncated SVD function) to reduce the vectors from 200-dimensional to 2-dimensional.", "_____no_output_____" ] ], [ [ "def get_matrix_of_vectors(wv_from_bin, required_words=['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']):\n \"\"\" Put the GloVe vectors into a matrix M.\n Param:\n wv_from_bin: KeyedVectors object; the 400000 GloVe vectors loaded from file\n Return:\n M: numpy matrix shape (num words, 200) containing the vectors\n word2Ind: dictionary mapping each word to its row number in M\n \"\"\"\n import random\n words = list(wv_from_bin.vocab.keys())\n print(\"Shuffling words ...\")\n random.seed(224)\n random.shuffle(words)\n words = words[:10000]\n print(\"Putting %i words into word2Ind and matrix M...\" % len(words))\n word2Ind = {}\n M = []\n curInd = 0\n for w in words:\n try:\n M.append(wv_from_bin.word_vec(w))\n word2Ind[w] = curInd\n curInd += 1\n except KeyError:\n continue\n for w in required_words:\n if w in words:\n continue\n try:\n M.append(wv_from_bin.word_vec(w))\n word2Ind[w] = curInd\n curInd += 1\n except KeyError:\n continue\n M = np.stack(M)\n print(\"Done.\")\n return M, word2Ind", "_____no_output_____" ], [ "# -----------------------------------------------------------------\n# Run Cell to Reduce 200-Dimensional Word Embeddings to k Dimensions\n# Note: This should be quick to run\n# -----------------------------------------------------------------\nM, word2Ind = get_matrix_of_vectors(wv_from_bin)\nM_reduced = reduce_to_k_dim(M, k=2)\n\n# Rescale (normalize) the rows to make them each of unit-length\nM_lengths = np.linalg.norm(M_reduced, axis=1)\nM_reduced_normalized = M_reduced / M_lengths[:, np.newaxis] # broadcasting", "Shuffling words ...\nPutting 10000 words into word2Ind and matrix M...\nDone.\nRunning Truncated SVD over 10010 words...\nDone.\n" ] ], [ [ "**Note: If you are receiving out of memory issues on your local machine, try closing other applications to free more memory on your device. You may want to try restarting your machine so that you can free up extra memory. Then immediately run the jupyter notebook and see if you can load the word vectors properly. If you still have problems with loading the embeddings onto your local machine after this, please follow the Piazza instructions, as how to run remotely on Stanford Farmshare machines.**", "_____no_output_____" ], [ "### Question 2.1: GloVe Plot Analysis [written] (4 points)\n\nRun the cell below to plot the 2D GloVe embeddings for `['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']`.\n\nWhat clusters together in 2-dimensional embedding space? What doesn't cluster together that you might think should have? How is the plot different from the one generated earlier from the co-occurrence matrix? What is a possible reason for causing the difference?", "_____no_output_____" ] ], [ [ "words = ['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']\nplot_embeddings(M_reduced_normalized, word2Ind, words)", "_____no_output_____" ] ], [ [ "venezuela with ecuador and industry with energy cluster together. 
petroleum and oil should cluster together but do not.\nThe plot captures more semantic information than the co-occurrence matrix method.\nMaybe the word counts in the data set are too small.", "_____no_output_____" ], [ "### Cosine Similarity\nNow that we have word vectors, we need a way to quantify the similarity between individual words, according to these vectors. One such metric is cosine-similarity. We will be using this to find words that are \"close\" and \"far\" from one another.\n\nWe can think of n-dimensional vectors as points in n-dimensional space. If we take this perspective, [L1](http://mathworld.wolfram.com/L1-Norm.html) and [L2](http://mathworld.wolfram.com/L2-Norm.html) Distances help quantify the amount of space \"we must travel\" to get between these two points. Another approach is to examine the angle between two vectors. From trigonometry we know that:\n\n<img src=\"./imgs/inner_product.png\" width=20% style=\"float: center;\"></img>\n\nInstead of computing the actual angle, we can leave the similarity in terms of $similarity = cos(\\Theta)$. Formally the [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity) $s$ between two vectors $p$ and $q$ is defined as:\n\n$$s = \\frac{p \\cdot q}{||p|| ||q||}, \\textrm{ where } s \\in [-1, 1] $$ ", "_____no_output_____" ], [ "### Question 2.2: Words with Multiple Meanings (2 points) [code + written] \nPolysemes and homonyms are words that have more than one meaning (see this [wiki page](https://en.wikipedia.org/wiki/Polysemy) to learn more about the difference between polysemes and homonyms ). Find a word with at least 2 different meanings such that the top-10 most similar words (according to cosine similarity) contain related words from *both* meanings. For example, \"leaves\" has both \"vanishes\" and \"stalks\" in the top 10, and \"scoop\" has both \"handed_waffle_cone\" and \"lowdown\". You will probably need to try several polysemous or homonymic words before you find one. Please state the word you discover and the multiple meanings that occur in the top 10. Why do you think many of the polysemous or homonymic words you tried didn't work (i.e. the top-10 most similar words only contain **one** of the meanings of the words)?\n\n**Note**: You should use the `wv_from_bin.most_similar(word)` function to get the top 10 similar words. This function ranks all other words in the vocabulary with respect to their cosine similarity to the given word. For further assistance please check the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.most_similar)__.", "_____no_output_____" ] ], [ [ "# ------------------\n# Write your implementation here.\nwv_from_bin.most_similar(\"run\")\n# ------------------", "_____no_output_____" ] ], [ [ "The word \"run\" has two meanings reflected in its top-10 neighbors: running and starting.", "_____no_output_____" ], [ "### Question 2.3: Synonyms & Antonyms (2 points) [code + written] \n\nWhen considering Cosine Similarity, it's often more convenient to think of Cosine Distance, which is simply 1 - Cosine Similarity.\n\nFind three words (w1,w2,w3) where w1 and w2 are synonyms and w1 and w3 are antonyms, but Cosine Distance(w1,w3) < Cosine Distance(w1,w2). For example, w1=\"happy\" is closer to w3=\"sad\" than to w2=\"cheerful\". 
\n\nOnce you have found your example, please give a possible explanation for why this counter-intuitive result may have happened.\n\nYou should use the the `wv_from_bin.distance(w1, w2)` function here in order to compute the cosine distance between two words. Please see the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.distance)__ for further assistance.", "_____no_output_____" ] ], [ [ "# ------------------\n# Write your implementation here.\nw1 = \"design\"\nw2 = \"proposal\"\nw3 = \"borrow\"\nw12_dist = wv_from_bin.distance(w1, w2)\nw13_dist = wv_from_bin.distance(w1, w3)\nprint(\"Synonyms {}, {} have cosine distance: {:.2f}\".format(w1, w2, w12_dist))\nprint(\"Antonyms {}, {} have cosine distance: {:.2f}\".format(w1, w3, w13_dist))\n\n# ------------------", "Synonyms design, proposal have cosine distance: 0.69\nAntonyms design, borrow have cosine distance: 0.90\n" ] ], [ [ "Comapring proposal and design, the cosine distrance is 0.69. And the cosine distance between design and borrow is 0.9.", "_____no_output_____" ], [ "### Solving Analogies with Word Vectors\nWord vectors have been shown to *sometimes* exhibit the ability to solve analogies. \n\nAs an example, for the analogy \"man : king :: woman : x\" (read: man is to king as woman is to x), what is x?\n\nIn the cell below, we show you how to use word vectors to find x. The `most_similar` function finds words that are most similar to the words in the `positive` list and most dissimilar from the words in the `negative` list. The answer to the analogy will be the word ranked most similar (largest numerical value).\n\n**Note:** Further Documentation on the `most_similar` function can be found within the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.most_similar)__.", "_____no_output_____" ] ], [ [ "# Run this cell to answer the analogy -- man : king :: woman : x\npprint.pprint(wv_from_bin.most_similar(positive=['woman', 'king'], negative=['man']))", "[('queen', 0.6978678703308105),\n ('princess', 0.6081745028495789),\n ('monarch', 0.5889754891395569),\n ('throne', 0.5775108933448792),\n ('prince', 0.5750998854637146),\n ('elizabeth', 0.546359658241272),\n ('daughter', 0.5399125814437866),\n ('kingdom', 0.5318052768707275),\n ('mother', 0.5168544054031372),\n ('crown', 0.5164472460746765)]\n" ] ], [ [ "### Question 2.4: Finding Analogies [code + written] (2 Points)\nFind an example of analogy that holds according to these vectors (i.e. the intended word is ranked top). In your solution please state the full analogy in the form x:y :: a:b. 
If you believe the analogy is complicated, explain why the analogy holds in one or two sentences.\n\n**Note**: You may have to try many analogies to find one that works!", "_____no_output_____" ] ], [ [ "# ------------------\n# Write your implementation here.\npprint.pprint(wv_from_bin.most_similar(positive=['woman', 'waitress'], negative=['man']))\n# ------------------", "[('barmaid', 0.6116799116134644),\n ('bartender', 0.5877381563186646),\n ('receptionist', 0.5782569646835327),\n ('waiter', 0.5508327484130859),\n ('waitresses', 0.5503603219985962),\n ('hostess', 0.5346562266349792),\n ('housekeeper', 0.5310243368148804),\n ('homemaker', 0.5298492908477783),\n ('prostitute', 0.5254124402999878),\n ('housewife', 0.5207685232162476)]\n" ] ], [ [ "woman:man::waitress:waiter, through the probability of waiter isn't the maximum.", "_____no_output_____" ], [ "### Question 2.5: Incorrect Analogy [code + written] (1 point)\nFind an example of analogy that does *not* hold according to these vectors. In your solution, state the intended analogy in the form x:y :: a:b, and state the (incorrect) value of b according to the word vectors.", "_____no_output_____" ] ], [ [ "# ------------------\n# Write your implementation here.\npprint.pprint(wv_from_bin.most_similar(positive=['high', 'jump'], negative=['low']))\n# ------------------", "[('jumping', 0.6205310225486755),\n ('jumps', 0.5840020775794983),\n ('leap', 0.5402169823646545),\n ('jumper', 0.4817255735397339),\n ('climb', 0.4797284007072449),\n ('bungee', 0.464731365442276),\n ('championships', 0.4643418788909912),\n ('jumped', 0.46396756172180176),\n ('triple', 0.4550389349460602),\n ('throw', 0.4516879916191101)]\n" ] ], [ [ "the high:low shoudle == jump:fall, but the fall not in the top 10 probability list.", "_____no_output_____" ], [ "### Question 2.6: Guided Analysis of Bias in Word Vectors [written] (1 point)\n\nIt's important to be cognizant of the biases (gender, race, sexual orientation etc.) implicit in our word embeddings. Bias can be dangerous because it can reinforce stereotypes through applications that employ these models.\n\nRun the cell below, to examine (a) which terms are most similar to \"woman\" and \"worker\" and most dissimilar to \"man\", and (b) which terms are most similar to \"man\" and \"worker\" and most dissimilar to \"woman\". 
Point out the difference between the list of female-associated words and the list of male-associated words, and explain how it is reflecting gender bias.", "_____no_output_____" ] ], [ [ "# Run this cell\n# Here `positive` indicates the list of words to be similar to and `negative` indicates the list of words to be\n# most dissimilar from.\npprint.pprint(wv_from_bin.most_similar(positive=['woman', 'worker'], negative=['man']))\nprint()\npprint.pprint(wv_from_bin.most_similar(positive=['man', 'worker'], negative=['woman']))", "[('employee', 0.6375863552093506),\n ('workers', 0.6068919897079468),\n ('nurse', 0.5837947726249695),\n ('pregnant', 0.5363885164260864),\n ('mother', 0.5321309566497803),\n ('employer', 0.5127025842666626),\n ('teacher', 0.5099576711654663),\n ('child', 0.5096741914749146),\n ('homemaker', 0.5019454956054688),\n ('nurses', 0.4970572590827942)]\n\n[('workers', 0.6113258004188538),\n ('employee', 0.5983108282089233),\n ('working', 0.5615328550338745),\n ('laborer', 0.5442320108413696),\n ('unemployed', 0.5368517637252808),\n ('job', 0.5278826951980591),\n ('work', 0.5223963260650635),\n ('mechanic', 0.5088937282562256),\n ('worked', 0.505452036857605),\n ('factory', 0.4940453767776489)]\n" ] ], [ [ "The word most similar to \"woman\" and \"worker\" and most dissimilar to \"man\" is nurses.\nMost nurses are woman.\n\nThe word most similar to \"man\" and \"worker\" and most dissimilar to \"woman\" is factory.\nThe factory have many worker man.\n", "_____no_output_____" ], [ "### Question 2.7: Independent Analysis of Bias in Word Vectors [code + written] (1 point)\n\nUse the `most_similar` function to find another case where some bias is exhibited by the vectors. Please briefly explain the example of bias that you discover.", "_____no_output_____" ] ], [ [ "# ------------------\n# Write your implementation here.\npprint.pprint(wv_from_bin.most_similar(positive=['elephant', 'skyscraper'], negative=['ant']))\nprint()\npprint.pprint(wv_from_bin.most_similar(positive=['motorcycle', 'car'], negative=['bicycle']))\n# ------------------", "[('tower', 0.49941301345825195),\n ('skyscrapers', 0.48599374294281006),\n ('tallest', 0.46377506852149963),\n ('statue', 0.4558914303779602),\n ('towers', 0.44428494572639465),\n ('40-story', 0.4247894287109375),\n ('monument', 0.4171640872955322),\n ('high-rise', 0.41149571537971497),\n ('bust', 0.408037006855011),\n ('gleaming', 0.40352344512939453)]\n\n[('cars', 0.6663373112678528),\n ('driver', 0.6263732314109802),\n ('vehicle', 0.6231670379638672),\n ('mercedes', 0.6017158627510071),\n ('truck', 0.581316351890564),\n ('driving', 0.5702999234199524),\n ('motorbike', 0.5668439865112305),\n ('bmw', 0.5605546236038208),\n ('vehicles', 0.5472753047943115),\n ('motorcycles', 0.5434989929199219)]\n" ] ], [ [ "elephant is very high and ant is very small, skyscraper is very high but the most similairty also high.\n\nThe motocycle and bicycle is two rounds.", "_____no_output_____" ], [ "### Question 2.8: Thinking About Bias [written] (2 points)\n\nWhat might be the causes of these biases in the word vectors? You should give least 2 explainations how bias get into the word vectors. How might you be able to investigate/test these causes?", "_____no_output_____" ], [ "From the training processing, in my opinion, the bias is base on the co-occurrence.\nDifference word have difference co-occurrence.", "_____no_output_____" ], [ "# <font color=\"blue\"> Submission Instructions</font>\n\n1. Click the Save button at the top of the Jupyter Notebook.\n2. 
Select Cell -> All Output -> Clear. This will clear all the outputs from all cells (but will keep the content of all cells). \n3. Select Cell -> Run All. This will run all the cells in order, and will take several minutes.\n4. Once you've rerun everything, select File -> Download as -> PDF via LaTeX (If you have trouble using \"PDF via LaTeX\", you can also save the webpage as pdf. <font color='blue'> Make sure all your solutions especially the coding parts are displayed in the pdf</font>, it's okay if the provided codes get cut off because lines are not wrapped in code cells).\n5. Look at the PDF file and make sure all your solutions are there, displayed correctly. The PDF is the only thing your graders will see!\n6. Submit your PDF on Gradescope.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
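The word-vector notebook above defines cosine similarity as s = p·q / (||p|| ||q||) and solves analogies with `most_similar(positive=..., negative=...)`. A minimal NumPy sketch of both ideas follows; the 3-dimensional toy vectors are invented for illustration (they are not GloVe embeddings), and the ranking helper is a simplified stand-in for what GenSim does, not its actual API.

```python
import numpy as np

# Toy 3-d "word vectors", invented purely for illustration.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.7, 0.9, 0.0]),
    "woman": np.array([0.7, 0.1, 0.9]),
    "oil":   np.array([0.1, 0.2, 0.1]),
}

def cosine_similarity(p, q):
    # s = (p . q) / (||p|| ||q||), the formula quoted in the notebook
    return p @ q / (np.linalg.norm(p) * np.linalg.norm(q))

def most_similar_toy(positive, negative, topn=3):
    # Build the query vector (sum of positives minus sum of negatives) and
    # rank every other word by cosine similarity to it.
    query = sum(vectors[w] for w in positive) - sum(vectors[w] for w in negative)
    scores = {
        w: cosine_similarity(query, v)
        for w, v in vectors.items()
        if w not in positive and w not in negative
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:topn]

# man : king :: woman : x  ->  "queen" ranks first with these toy vectors
print(most_similar_toy(positive=["woman", "king"], negative=["man"]))
```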
e7a87e83b1cbe2fdb0629637ce50ca597bcd941e
932
ipynb
Jupyter Notebook
python/Vectors/ConservativeVectorFiled.ipynb
karng87/nasm_game
a97fdb09459efffc561d2122058c348c93f1dc87
[ "MIT" ]
null
null
null
python/Vectors/ConservativeVectorFiled.ipynb
karng87/nasm_game
a97fdb09459efffc561d2122058c348c93f1dc87
[ "MIT" ]
null
null
null
python/Vectors/ConservativeVectorFiled.ipynb
karng87/nasm_game
a97fdb09459efffc561d2122058c348c93f1dc87
[ "MIT" ]
null
null
null
20.26087
74
0.530043
[ [ [ "# conservative vector field\n> # $ \\vec{F} = \n\\nabla \\phi \\iff \\vec{F} \\text{ is a conservative vector field} \\\\\n\\phi \\text{ is the Potential Energy}\n$", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
e7a8990a1e5705e9cb2d38ec3291d3f5f295905d
3,492
ipynb
Jupyter Notebook
GeeksForGeeks/.ipynb_checkpoints/Untitled-checkpoint.ipynb
Dhaneshgupta1027/Python
12193d689cc49d3198ea6fee3f7f7d37b8e59175
[ "MIT" ]
37
2019-04-03T07:19:57.000Z
2022-01-09T06:18:41.000Z
GeeksForGeeks/.ipynb_checkpoints/Untitled-checkpoint.ipynb
Dhaneshgupta1027/Python
12193d689cc49d3198ea6fee3f7f7d37b8e59175
[ "MIT" ]
16
2020-08-11T08:09:42.000Z
2021-10-30T17:40:48.000Z
GeeksForGeeks/.ipynb_checkpoints/Untitled-checkpoint.ipynb
Dhaneshgupta1027/Python
12193d689cc49d3198ea6fee3f7f7d37b8e59175
[ "MIT" ]
130
2019-10-02T14:40:20.000Z
2022-01-26T17:38:26.000Z
17.287129
46
0.392039
[ [ [ "for _ in range(int(input())):\n n=int(input())\n a=list(map(int, input().split()))\n q=[a[-1]]\n for i in range(n-2,-1,-1):\n if a[i]>=q[-1]:\n q.append(a[i])\n print(*(q[::-1]) )", "1\n6\n16 17 4 3 5 2\n17 5 2\n" ], [ "q", "_____no_output_____" ], [ "print(*(q[::-1]))", "17 5 2\n" ], [ "print(*q)", "2 5 17\n" ], [ "q.reverse()", "_____no_output_____" ], [ "q[-1]", "_____no_output_____" ], [ "for _ in range(int(input())):\n n=int(input())\n a=list(map(int,input().split()))\n# c=a.copy()\n# for i in range(len(a)-1):\n# for j in range(i+1,len(a)):\n# if a[i]<a[j]:\n# c.remove(a[i])\n# break\n c=[a[-1]]\n for i in range(len(a)-2,-1,-1):\n if a[i]>= c[-1]:\n c.append(a[i])\n# for i in c:\n# print(i,end=' ')\n print(*c)\n print()", "1\n6\n16 17 4 3 5 2\n2 5 17\n\n" ], [ "7//2", "_____no_output_____" ], [ "if(7//2==0)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
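The scratch cells in the notebook above implement the classic "leaders in an array" scan — keep every element that is greater than or equal to everything to its right — by walking the list from the end while tracking a running maximum. A cleaned-up restatement of the same logic is sketched below; the function name is illustrative and does not appear in the notebook.

```python
def leaders(a):
    """Elements of `a` that are >= every element to their right, in original order."""
    out = [a[-1]]                 # the last element is always a leader
    for x in reversed(a[:-1]):    # scan right to left
        if x >= out[-1]:          # out[-1] is the max of the suffix processed so far
            out.append(x)
    return out[::-1]              # restore left-to-right order

print(leaders([16, 17, 4, 3, 5, 2]))   # [17, 5, 2], matching the notebook's output
```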
e7a89b1104622274b0caf1e57447c92cc7704da8
380,962
ipynb
Jupyter Notebook
matlab/plot_model_data/plot_model_results.ipynb
mwaugh0328/redistributing_gains_from_trade
96457414a05d04a30a0aa7f77ba94c5f16daef7a
[ "MIT" ]
2
2020-01-24T17:48:49.000Z
2020-08-25T09:28:05.000Z
matlab/plot_model_data/plot_model_results.ipynb
mwaugh0328/redist_gains_jie
96457414a05d04a30a0aa7f77ba94c5f16daef7a
[ "MIT" ]
null
null
null
matlab/plot_model_data/plot_model_results.ipynb
mwaugh0328/redist_gains_jie
96457414a05d04a30a0aa7f77ba94c5f16daef7a
[ "MIT" ]
7
2018-10-09T23:07:40.000Z
2021-09-16T09:32:50.000Z
263.8241
67,648
0.894953
[ [ [ "### Plotting File for [Redistributing the Gains From Trade Through Progressive Taxation](http://www.waugheconomics.com/uploads/2/2/5/6/22563786/lw_tax.pdf)\n\nThis notebook imports the output from the MATLAB code and then plots it. Description is below.", "_____no_output_____" ] ], [ [ "from IPython.display import display, Image # Displays things nicely\nimport pandas as pd\nimport weightedcalcs as wc\nimport numpy as np\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\n\nimport matplotlib.pyplot as plt\nfrom scipy.io import loadmat # this is the SciPy module that loads mat-files\n\n#fig_path = \"C:\\\\Users\\\\mwaugh.NYC-STERN\\\\Documents\\\\GitHub\\\\tradeexposure\\\\figures\"", "C:\\Program Files\\Anaconda3\\lib\\site-packages\\statsmodels\\compat\\pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.\n from pandas.core import datetools\n" ] ], [ [ "---\n\n## Read in output from model\n\nThis is the structure of the mat file and the naming conventions. Here it is assumed that the .mat files from Matlab are within the working directory. Then read it in, note the use of ``scipy`` package to get a .mat file into python.", "_____no_output_____" ] ], [ [ "#[params.trade_cost, trade, ls, move, output_per_hour, welfare, double(exit_flag)];\n\ncolumn_names = [\"tau_p\", \"tau\", \"trade_volume\", \"ls\", \"migration\", \"output\", \"OPterm2\", \"welfare\", \"exitflag\", \"welfare_smth\", \n \"trade_share\"]\n\nvalues = [\"0.05\",\"0.1\", \"0.2\", \"0.3\", \"0.4\"]", "_____no_output_____" ], [ "all_df = pd.DataFrame([])\n\nfor val in values:\n \n file_name = \"results\" + val + \".mat\" \n \n mat = loadmat(file_name) \n \n df = pd.DataFrame(mat[\"results\"])\n \n df[\"9\"] = val\n \n all_df = all_df.append(df)\n\nall_df.columns = column_names\n\nall_df.head(10)", "_____no_output_____" ] ], [ [ "Now define some functions that we will use...", "_____no_output_____" ] ], [ [ "def cons_eqiv(df):\n \n maxwel = float(df[\"welfare_smth\"][df[\"tau_p\"] == 0.18])\n \n df[\"cons_eqiv\"] = 100*(np.exp((1-0.95)*(df[\"welfare_smth\"] - maxwel))-1)\n # These are consumptione equivialents. 
\n \n return df", "_____no_output_____" ] ], [ [ "Group on the trade share values....", "_____no_output_____" ] ], [ [ "grp = all_df.groupby(\"trade_share\")\n\ngrp = grp.apply(cons_eqiv)\n\ngrp = grp.groupby(\"trade_share\")", "_____no_output_____" ] ], [ [ "---\n\n## Optimal policy in the model\n\nThen the next two code cells will replicate Figure 3 and Figure 6 in the paper...", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize = (10,7))\n\nval = \"0.1\"\n\nax.plot(grp.get_group(val).tau_p, grp.get_group(val).cons_eqiv,\n linewidth = 4, label = \"Imports/GDP = \" + val, \n color = \"blue\",alpha = 0.70)\n\n\nindex_max = grp.get_group(val).cons_eqiv.idxmax()\n\ntau_max = grp.get_group(val).tau_p.iloc[index_max]\n\nax.plot(tau_max, \n grp.get_group(val).cons_eqiv.iloc[index_max], 'ro',\n markersize=10, linewidth = 50,\n color = \"red\",alpha = 0.50)\n\n\nax.set_ylabel(\"Welfare (CE Units), Percent from Baseline\", fontsize = 14)\nax.set_xlabel(\"Tax Progressivity\", fontsize = 14)\n\nax.spines[\"right\"].set_visible(False)\nax.spines[\"top\"].set_visible(False)\n\nax.axvline(x = 0.18, \n color='k', \n linestyle='--',\n lw = 2, alpha = 0.5) \n\n#ax.legend(fontsize = 14, frameon=False)\n\n\nax.annotate(\n \"Optimal Progressivity \\ntau_p* = \" + str(tau_max), \n xy=(tau_max, 0.15), # This is where we point at...\n xycoords=\"data\", # Not exactly sure about this\n xytext=(0.4, 1.5), # This is about where the text is\n horizontalalignment=\"left\", # How the text is alined\n arrowprops={\n \"arrowstyle\": \"-|>\", # This is stuff about the arrow\n \"connectionstyle\": \"angle3,angleA=5,angleB=85\",\n \"color\": \"black\"\n },\n fontsize=14,\n)\n\n\nax.annotate(\n \"US Data (HSV estimate) \\ntau_p = 0.18\", \n xy=(0.18, 0.05), # This is where we point at...\n xycoords=\"data\", # Not exactly sure about this\n xytext=(-0.10, 1.5), # This is about where the text is\n horizontalalignment=\"left\", # How the text is alined\n arrowprops={\n \"arrowstyle\": \"-|>\", # This is stuff about the arrow\n \"connectionstyle\": \"angle3,angleA=0,angleB=150\",\n \"color\": \"black\"\n },\n fontsize=14,\n)\n\nax.set_xlim(-0.25,0.6)\nax.set_ylim(-3,1.5)\n\n#plt.savefig(fig_path + \"\\\\social_welfare_baseline.pdf\", bbox_inches = \"tight\", dip = 3600)\n\nplt.show()", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize = (10,7))\n\nax.set_prop_cycle('color',plt.cm.seismic(np.linspace(0,1.15,5)))\n\nwelfare_opt = []\ntax_opt = []\n\nflat_loss = []\n\noxford_data = pd.DataFrame() \n\n\nfor val in values[1:]:\n \n ax.plot(grp.get_group(val).tau_p, grp.get_group(val).cons_eqiv,\n linewidth = 4, label = \"Imports/GDP = \" + val, alpha = 0.70)\n \n index_max = grp.get_group(val).cons_eqiv.idxmax()\n \n ax.plot(grp.get_group(val).tau_p.iloc[index_max], \n grp.get_group(val).cons_eqiv.iloc[index_max], marker =\"o\",\n markersize=10, linewidth = 50,\n color = \"red\",alpha = 0.50)\n \n welfare_opt.append(grp.get_group(val).cons_eqiv.iloc[index_max])\n tax_opt.append(grp.get_group(val).tau_p.iloc[index_max])\n \n oxford_data = pd.concat([oxford_data, grp.get_group(val).cons_eqiv],axis=1)\n \n flat_idx = grp.get_group(val).tau_p == 0\n flat_loss.append(float(grp.get_group(val).cons_eqiv[flat_idx]))\n\nax.set_ylabel(\"Welfare (CE Units), Percent from Baseline\", fontsize = 14)\nax.set_xlabel(\"Tax Progressivity\", fontsize = 14)\n\nax.spines[\"right\"].set_visible(False)\nax.spines[\"top\"].set_visible(False)\n\nax.axvline(x = 0.18, \n color='k', \n linestyle='--',\n lw = 2, alpha = 0.5) 
\n\nax.legend(fontsize = 14, frameon=False)\n\nax.set_xlim(-0.25,0.6)\nax.set_ylim(-7.5,1.5)\n\n#plt.savefig(fig_path + \"\\\\social_welfare_prog_diff_tau_fine.pdf\", bbox_inches = \"tight\", dip = 3600)\n\nplt.show()\n\n###########################################################################################################\n\n#oxford_data = pd.concat([oxford_data, grp.get_group(val).tau_p],axis=1)\n\n#oxford_names = values\n\n#oxford_names.append(\"tax_progressivity\") \n\n#oxford_data.columns = oxford_names[1:]\n\n#values.remove(\"tax_progressivity\")\n\n#oxford_data.to_excel('oxford_fig2.xlsx')", "_____no_output_____" ], [ "print(welfare_opt)\nprint(tax_opt)\nprint(flat_loss)\n\n#grp.get_group(val).head()", "[0.10435983980994212, 0.34328431738519516, 0.7242843517205388, 1.3810768398272222]\n[0.27, 0.32, 0.37, 0.44999999999999996]\n[-0.8279921208671714, -1.3903253040046915, -1.9424125246026658, -2.628430117595859]\n" ] ], [ [ "This finds the optimal tau...", "_____no_output_____" ] ], [ [ "opt_tau = []\n\ntau = []\n\ntrade = []\n\nfor val in values:\n \n index_max = grp.get_group(val).cons_eqiv.idxmax()\n \n tau_star = grp.get_group(val).tau_p.iloc[index_max]\n \n opt_tau.append(tau_star)\n \n \n tau.append(grp.get_group(val).tau.iloc[index_max])\n \n trade.append(float(val))\n \nhold = {\"opt_tau\": opt_tau, \"trade\": trade, \"tau\": tau}\n\nopt_df = pd.DataFrame(hold)\n\nopt_df.head(10)", "_____no_output_____" ] ], [ [ "This then generates the output cost figure, Figure 8 in the paper.", "_____no_output_____" ] ], [ [ "def smooth_reg(df, series):\n \n specification = series + \"~ tau_p + np.square(tau_p) + np.power(tau_p,3)+ np.power(tau_p,4)\"\n \n results = smf.ols(specification , # This is the model in variable names we want to estimate\n data=df[df[\"exitflag\"]==0]).fit() \n \n pred = results.predict(exog = df[\"tau_p\"])\n \n #print(results.summary())\n \n return pred\n", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize = (10,7))\n\nseries = \"output\"\n\nax.set_prop_cycle('color',plt.cm.seismic(np.linspace(0,1.15,5)))\n\nfor val in values[1:]:\n\n#baseline = float(grp.get_group(val)[grp.get_group(val).tau_p == 0.18][series])\n ypred = smooth_reg(grp.get_group(val), series)\n\n baseline = float(ypred[grp.get_group(val).tau_p == 0.18])\n\n real = grp.get_group(val)[series]\n\n index_max = grp.get_group(val).cons_eqiv.idxmax()\n\n ax.plot(grp.get_group(val).tau_p, 100*(ypred/baseline - 1),\n linewidth = 4, label = \"Imports/GDP = \" + val, alpha = 0.70)\n \n index_max = grp.get_group(val).cons_eqiv.idxmax()\n \n ax.plot(grp.get_group(val).tau_p.iloc[index_max], \n 100*(ypred.iloc[index_max]/baseline - 1), marker =\"o\",\n markersize=10, linewidth = 50,\n color = \"red\",alpha = 0.50)\n \n####################################################################################\n\nax.axvline(x = 0.18, \n color='k', \n linestyle='--',\n lw = 2, alpha = 0.5) \n\nax.spines[\"right\"].set_visible(False)\nax.spines[\"top\"].set_visible(False)\n\nax.set_ylabel(\"GDP, Percentage Points from Baseline\", fontsize = 14)\n\nax.legend(fontsize = 14, frameon=False)\n\nax.set_xlim(-0.25,0.6)\n\n#plt.savefig(fig_path + \"\\\\output_cost.pdf\", bbox_inches = \"tight\", dip = 3600)\n\nplt.show()", "_____no_output_____" ] ], [ [ "This then generates the allocative efficiency (covariance term) figure, Figure 4", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize = (10,7))\n\nseries = \"output\"\n\nval = \"0.1\"\n\n#baseline = float(grp.get_group(val)[grp.get_group(val).tau_p == 
0.18][series])\n\nypred = smooth_reg(grp.get_group(val), series)\n\nbaseline = float(ypred[grp.get_group(val).tau_p == 0.18])\n\nreal = grp.get_group(val)[series]\n\nindex_max = grp.get_group(val).cons_eqiv.idxmax()\n\nax.plot(grp.get_group(val).tau_p, 100*(ypred /baseline-1),\n linewidth = 4, label = \"GDP\", \n color = 'blue', alpha = 0.75)\n\noptimal_prog = grp.get_group(val).tau_p.iloc[index_max]\n\nax.plot(optimal_prog, \n 100*(ypred /baseline-1).iloc[index_max], 'ro',\n markersize=10, linewidth = 50,\n color = \"red\",alpha = 0.50)\n\n\n####################################################################################\nseries = \"OPterm2\"\n\nypred = smooth_reg(grp.get_group(val), series)\n\nypred_output = smooth_reg(grp.get_group(val), \"output\")\n\nbaseline_output = float(ypred_output[grp.get_group(val).tau_p == 0.18])\n\nbaseline = float(ypred[grp.get_group(val).tau_p == 0.18])\n\nreal = grp.get_group(val)[series]\n\nindex_max = grp.get_group(val).cons_eqiv.idxmax()\n\nax.plot(grp.get_group(val).tau_p, 100*((ypred)/ypred_output - baseline/baseline_output),\n linewidth = 4, label = \"Covariance Term (Allocative Efficiency)\", \n alpha = 0.75, color = \"red\", linestyle='--')\n\n####################################################################################\n\nax.axvline(x = 0.18, \n color='k', \n linestyle='--',\n lw = 2, alpha = 0.5) \n\nax.annotate(\n \"Optimal Progressivity \\ntau_p* =\" + str(optimal_prog), \n xy=(0.27, -0.35), # This is where we point at...\n xycoords=\"data\", # Not exactly sure about this\n xytext=(0.4, 0.25), # This is about where the text is\n horizontalalignment=\"left\", # How the text is alined\n arrowprops={\n \"arrowstyle\": \"-|>\", # This is stuff about the arrow\n \"connectionstyle\": \"angle3,angleA=5,angleB=85\",\n \"color\": \"black\"\n },\n fontsize=14,\n)\n\n\nax.set_ylabel(\"Percentage Points from Baseline\", fontsize = 14)\nax.set_xlabel(\"Tax Progressivity\", fontsize = 14)\n\nax.spines[\"right\"].set_visible(False)\nax.spines[\"top\"].set_visible(False)\n\nax.set_xlim(-0.25,0.6)\n\nax.legend(loc = \"lower left\", fontsize = 14, frameon=False)\n\n#plt.savefig(fig_path + \"\\\\output_baseline.pdf\", bbox_inches = \"tight\", dip = 3600)\n\nplt.show()\n", "_____no_output_____" ] ], [ [ "Then the migration figure, Figure 5", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize = (10,7))\n\nseries = \"migration\"\n\nval = \"0.1\"\n\nbaseline = float(grp.get_group(val)[grp.get_group(val).tau_p == 0.18][series])\n\nypred = smooth_reg(grp.get_group(val), series)\n\nax.plot(grp.get_group(val).tau_p, 100*(ypred /baseline-1),\n linewidth = 4, label = \"Migration\", \n color = 'blue', alpha = 0.70)\n\nseries = \"ls\"\n\nval = \"0.1\"\n\nbaseline = float(grp.get_group(val)[grp.get_group(val).tau_p == 0.18][series])\n\nypred = smooth_reg(grp.get_group(val), series)\n\nreal = grp.get_group(val)[series]\n\nax.plot(grp.get_group(val).tau_p, 100*(ypred /baseline-1),\n linewidth = 4, label = \"Labor Supply\", \n color = 'red', alpha = 0.70, linestyle = \"--\")\n\n###########################################################################################\n\nax.axvline(x = 0.18, \n color='k', \n linestyle='--',\n lw = 2, alpha = 0.5) \n\nax.set_ylabel(\"Percentage Points from Baseline\", fontsize = 14)\nax.set_xlabel(\"Tax Progressivity\", fontsize = 14)\n\nax.spines[\"right\"].set_visible(False)\nax.spines[\"top\"].set_visible(False)\n\nax.legend(fontsize = 14, frameon=False)\n\nax.set_xlim(-0.25,0.6)\n\n#plt.savefig(fig_path + 
\"\\\\migration_baseline.pdf\", bbox_inches = \"tight\", dip = 3600)\n\n\nplt.show()", "_____no_output_____" ] ], [ [ "---\n\n## Marginal tax rates in the model", "_____no_output_____" ] ], [ [ "values_TAX = [\"0.05\",\"0.1b\", \"0.1\", \"0.2\", \"0.3\", \"0.4\"]\n\nmat = loadmat(\"opt_marg_rates\") \n \nmarginal_rates = pd.DataFrame(mat[\"marg_rates\"])\n\nmarginal_rates.columns = values_TAX\n\nmat = loadmat(\"opt_incom_prct\") \n \nincome_pct = pd.DataFrame(mat[\"incom_prct\"])\n\nincome_pct.columns = values_TAX", "_____no_output_____" ], [ "def smooth_marg_rates(income_pct, marginal_rates, op_level):\n \n df = pd.DataFrame([income_pct.T.loc[op_level], marginal_rates.T.loc[op_level]])\n \n df = df.T\n \n df.columns = [\"inc_prct\", \"marg_rates\"]\n \n specification = '''marg_rates ~ np.log(inc_prct) + np.square(np.log(inc_prct)) + \n np.power(np.log(inc_prct),3)+ np.power(np.log(inc_prct),4)'''\n \n results = smf.ols(specification , # This is the model in variable names we want to estimate\n data=df).fit() \n \n pred = results.predict(exog = df[\"inc_prct\"])\n \n #print(results.summary())\n \n return pred\n", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize = (10,7))\n\nax.set_prop_cycle('color',plt.cm.seismic(np.linspace(0,1.15,5)))\n\n\nfor val in values[1:]:\n \n pred = smooth_marg_rates(income_pct, marginal_rates, val)\n \n #ax.plot(100*(income_pct[val]), 100*pred,linewidth = 4, \n # alpha = 0.70, label = \"Imports/GDP = \" + val)\n \n if val == \"0.1b\":\n print(\" \")\n ax.plot(100*income_pct[val], 100*pred,linewidth = 4, \n alpha = 0.70, color = \"black\", linestyle = '--', label = \"Baseline\")\n \n \n \n \n else: \n ax.plot(100*income_pct[val], 100*pred,linewidth = 4, \n alpha = 0.70, label = \"Imports/GDP = \" + val)\n \n idx = (np.abs(income_pct[val]-0.90)).idxmin()\n print(100*pred[idx])\n \n##############################################################################\n \nax.set_ylabel(\"Marginal Tax Rates, Percent\", fontsize = 14)\nax.set_xlabel(\"Pre-Tax Labor Income Percentile\", fontsize = 14)\n\nax.set_ylim(-10,70)\nax.set_xlim(-2,91)\n\ntest = list(range(0,100,10))\n#test.append(90) \n\nax.set_xticks(test)\n\nax.spines[\"right\"].set_visible(False)\nax.spines[\"top\"].set_visible(False)\n\nax.legend(loc = \"lower right\", fontsize = 14, frameon=False)\n\n#plt.savefig(fig_path + \"\\\\marginal_rates.pdf\", bbox_inches = \"tight\", dip = 3600)\n\n\nplt.show()", "48.08660450829356\n52.83763415842769\n56.82176121850468\n63.580426059183345\n" ], [ "opt_tax = []\n\nprctile_income = 0.10\n\nfor val in values_TAX[1:]:\n \n pred = smooth_marg_rates(income_pct, marginal_rates, val)\n \n #ax.plot(100*(income_pct[val]), 100*pred,linewidth = 4, \n # alpha = 0.70, label = \"Imports/GDP = \" + val)\n \n if val == \"0.1b\":\n print(\" \")\n \n else: \n \n idx = (np.abs(income_pct[val]-prctile_income)).idxmin()\n \n opt_tax.append(100*pred[idx])\n \n#######################################################################\n\nelasticity = (opt_tax[-1] - opt_tax[1])/(40-10)\n\nfig, ax = plt.subplots(figsize = (10,7))\n\nax.plot(values[1:], opt_tax, linewidth = 5, alpha = 0.70, color = \"red\", linestyle = '--')\n\n\n#test.append(90)\nax.spines[\"right\"].set_visible(False)\nax.spines[\"top\"].set_visible(False)\n\nax.set_ylabel(\"Marginal Tax Rates for 90th Percentile\", fontsize = 14)\nax.set_xlabel(\"Imports/ GDP\", fontsize = 14)\n\n#ax.set_ylim(35,70)\n\ntest = list(range(0,100,10))\n#test.append(90) \n\nax.set_xticklabels(test)\n\nplt.show()", " \n" ], [ "def 
gains_trade(df, tax_policy):\n \n new_df = df[df.tau_p == tax_policy]\n \n basewel = float(new_df[\"welfare\"][new_df[\"trade_share\"] == \"0.1\"])\n \n new_df[\"cons_eqiv\"] = 100*(np.exp((1-0.95)*(new_df[\"welfare_smth\"] - basewel))-1)\n \n return new_df ", "_____no_output_____" ], [ "gains_trade(all_df, 0.18)", "C:\\Program Files\\Anaconda3\\lib\\site-packages\\ipykernel\\__main__.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n" ], [ "#100*(-27.633091 / -(-26.18) + 1)\n\n100*(np.exp((1-0.95)*(-26.528930- -27.633091))-1)", "_____no_output_____" ], [ "100*(-27.633091 / -(-26.528930) + 1)\n\n100*(np.exp((1-0.95)*(-26.248914- -27.466803))-1)", "_____no_output_____" ], [ "100*(np.exp((1-0.95)*(-26.248914- -27.633091))-1)", "_____no_output_____" ], [ "7.16/5.67", "_____no_output_____" ] ] ]
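The welfare comparisons in the cells above all apply the same consumption-equivalent transformation, 100*(exp((1 - 0.95)*(W - W_baseline)) - 1), sometimes inline and sometimes inside gains_trade. A minimal sketch that consolidates that expression and reproduces the inline numbers; the helper name cons_equiv_gain and the reading of 0.95 as the discount factor are assumptions, not part of the notebook:

```python
import numpy as np

def cons_equiv_gain(welfare, welfare_baseline, beta=0.95):
    """Consumption-equivalent welfare gain in percent.

    Mirrors the expression 100*(np.exp((1-0.95)*(W - W_base)) - 1) used in
    gains_trade and in the inline checks of the cells above."""
    return 100.0 * (np.exp((1.0 - beta) * (np.asarray(welfare) - welfare_baseline)) - 1.0)

# Reproduces the inline calculations from the last few cells:
print(cons_equiv_gain(-26.528930, -27.633091))   # ~5.68, the "5.67" in the 7.16/5.67 ratio
print(cons_equiv_gain(-26.248914, -27.633091))   # ~7.17, the "7.16"
```

Building new_df from df[df.tau_p == tax_policy].copy() before adding the cons_eqiv column would also avoid the SettingWithCopyWarning shown in the gains_trade output.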
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a8a60cc37972e23c15ecc816e9d478a2963e32
196,517
ipynb
Jupyter Notebook
.ipynb_checkpoints/2.2&2.3 Data-representation-fo-neura-networks_cn &The-gears-of-neural-networks-tensor-operations -checkpoint.ipynb
ViolinLee/deep-learning-with-python-notebooks
a41cdbb09c9f8481bc4c22f2f23b58fed30c65e1
[ "MIT" ]
1
2019-01-04T14:19:40.000Z
2019-01-04T14:19:40.000Z
.ipynb_checkpoints/2.2&2.3 Data-representation-fo-neura-networks_cn &The-gears-of-neural-networks-tensor-operations -checkpoint.ipynb
ViolinLee/deep-learning-with-python-notebooks
a41cdbb09c9f8481bc4c22f2f23b58fed30c65e1
[ "MIT" ]
null
null
null
.ipynb_checkpoints/2.2&2.3 Data-representation-fo-neura-networks_cn &The-gears-of-neural-networks-tensor-operations -checkpoint.ipynb
ViolinLee/deep-learning-with-python-notebooks
a41cdbb09c9f8481bc4c22f2f23b58fed30c65e1
[ "MIT" ]
null
null
null
167.248511
37,892
0.896996
[ [ [ "在上面的例子中,数据存储为多维Numpy数组,也称为张量(tensor)。当前流行的机器学习系统都以张量作为基本数据结构。所以Google的TensorFlow也拿张量命名。那张量是什么呢?\n张量是数据的容器(container)。这里的数据一般是数值型数据,所以是数字的容器。大家所熟悉的矩阵是二维(2D)张量。张量是广义的矩阵,它的某一维也称为轴(axis)。", "_____no_output_____" ], [ "### 标量(Scalar,0D 张量)\n只包含一个数字的张量称为标量(或者数量张量,零维张量,0D张量)。在Numpy中,一个float32或者float64位的数值称为数量张量。Numpy张量可用其ndim属性显示轴的序数,数量张量有0个轴(ndim == 0)。张量的轴的序数也称为阶(rank)。下面是Numpy标量:", "_____no_output_____" ] ], [ [ "import numpy as np\nx = np.array(12)\nx", "_____no_output_____" ], [ "x.ndim", "_____no_output_____" ] ], [ [ "### 向量(1D张量)\n数字的数组也称为向量,或者一维张量(1D张量)。一维张量只有一个轴。", "_____no_output_____" ] ], [ [ "x = np.array([12, 3, 6, 14])\nx", "_____no_output_____" ], [ "x.ndim", "_____no_output_____" ] ], [ [ "该向量有5项,也称为5维的向量。但是不要混淆5D向量和5D张量!一个5D向量只有一个轴,以及沿该轴有5个维数(元素);然而一个5D张量有5个轴,并且沿每个轴可以有任意个的维数。维度既能表示沿某个轴的项的数量(比如,上面的5D向量),又能表示一个张量中轴的数量(比如,上面的5D张量),时常容易混淆。对于后者,用更准确地技术术语来讲,应该称为5阶张量(张量的阶即是轴的数量),但人们更常用的表示方式是5D张量。", "_____no_output_____" ], [ "### 矩阵(2D张量)\n向量的数组称为矩阵,或者二维张量(2D张量)。矩阵有两个轴,也常称为行和列。你可以将数字排成的矩形网格看成矩阵,下面是一个Numpy矩阵:", "_____no_output_____" ] ], [ [ "x = np.array([[5, 78, 2, 34, 0],\n [6, 79, 3, 35, 1],\n [7, 80, 4, 36, 2]])\nx.ndim", "_____no_output_____" ] ], [ [ "沿着第一个轴的项称为行,沿着第二个轴的项称为列。上面的例子中,[5, 78, 2, 34, 0]是矩阵 x 第一行,[5, 6, 7]是第一列。\n\n矩阵的数组称为三维张量(3D张量),你可以将其看成是数字排列成的立方体,下面是一个Numpy三维张量(注意该三维张量内部的三个二维张量的shape一致,均为(3,5)。维度是一个自然数,形状则是一个元组):", "_____no_output_____" ] ], [ [ "x = np.array([[[5, 78, 2, 34, 0],\n [6, 79, 3, 35, 1],\n [7, 80, 4, 36, 2]],\n [[5, 78, 2, 34, 0],\n [6, 79, 3, 35, 1],\n [7, 80, 4, 36, 2]],\n [[5, 78, 2, 34, 0],\n [6, 79, 3, 35, 1],\n [7, 80, 4, 36, 2]]])\nx.ndim", "_____no_output_____" ] ], [ [ "题外话!若张量某维度的元素未对其,则这些元素成为list。", "_____no_output_____" ] ], [ [ "x = np.array([[[5, 78, 2, 34, 0],\n [6, 79, 3, 35, 1],\n [7, 80, 4, 36, 2]],\n [[5, 78, 2, 34, 0],\n [6, 79, 3, 35, 1],\n [7, 80, 4, 36, 2]],\n [[5, 78, 2, 34, 0],\n [7, 80, 4, 36, 2]]])\nx", "_____no_output_____" ], [ "x.ndim", "_____no_output_____" ] ], [ [ "同理,将三维张量放进数组可以创建四维张量,其它更高维的张量亦是如此。深度学习中常用的张量是 0D 到 4D。如果处理视频数据,你会用到5D。", "_____no_output_____" ], [ "### 关键属性\n张量具有如下三个关键属性:\n1. 轴的数量(阶数,rank):一个三维张量有3个轴,矩阵有2个轴。Python Numpy中的张量维度为ndim。\n2. 形状(shape):它是一个整数元组,描述张量沿每个轴有多少维。例如,前面的例子中,矩阵的形状为(3,5),三维张量的形状为(3,3,5)。向量的形状只有一个元素,比如(5,),标量则是空形状,()。\n3. 数据类型:张量中包含的数据类型有float32,unit8,float64等等,调用Python的dtype属性获取。字符型张量是极少见的。注意,Numpy中不存在字符串张量,其它大部分库也不存在。因为张量存在于预先申请的、连续的内存分段;而字符是变长的。", "_____no_output_____" ], [ "下面来几个具体的例子,回看MNIST数据集。首先加载MNIST数据集:", "_____no_output_____" ] ], [ [ "from keras.datasets import mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()", "e:\\program_files\\miniconda3\\envs\\dl\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. 
In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n" ] ], [ [ "接着,用ndim属性显示张量train_images的轴数量:", "_____no_output_____" ] ], [ [ "print(train_images.ndim)", "3\n" ] ], [ [ "打印形状:", "_____no_output_____" ] ], [ [ "print(train_images.shape)", "(60000, 28, 28)\n" ] ], [ [ "使用dtype属性打印数据类型:", "_____no_output_____" ] ], [ [ "print(train_images.dtype)", "uint8\n" ] ], [ [ "所以train_images是一个8-bit 整数的三维张量。更确切地说,它是一个包含60,000个矩阵的数组,其中每个矩阵是28 x 8 的整数。每个矩阵是一个灰度图,其值为0到255。\n下面使用Python Matplotlib库显示三维张量中的第四幅数字图,见图2.2: ", "_____no_output_____" ] ], [ [ "#Listing 2.6 Dispalying the fourth digit\ndigit = train_images[4]\nimport matplotlib.pyplot as plt\nplt.imshow(digit, cmap=plt.cm.binary)\nplt.show()", "_____no_output_____" ] ], [ [ "注:这里使用 matplotlib.cm, cm 表示 colormap。 \nbinary map:https://en.wikipedia.org/wiki/Binary_image", "_____no_output_____" ], [ "digit 是从这个三维张量取出的一个矩阵(二维数组/张量):", "_____no_output_____" ] ], [ [ "print(digit.ndim,\",\",digit.shape)", "2 , (28, 28)\n" ] ], [ [ "### Numpy中的张量操作\n上面的例子中,使用了train_images[i]沿第一个轴选择指定的数字图。选择张量的指定元素称为张量分片(tensor slicing),下面看Numpy数组中的张量切片操作: \n选择#10到#100(不包括#100)的数字图,对应的张量形状为(90,28,28):", "_____no_output_____" ] ], [ [ "my_slice = train_images[10:100]\nprint(my_slice.shape)", "(90, 28, 28)\n" ] ], [ [ "其等效的表示方法有,沿每个轴为张量分片指定起始索引和终止索引。注意,“:”等效于选择整个轴的数据:", "_____no_output_____" ] ], [ [ "my_slice = train_images[10:100, 0:28, 0:28]\nmy_slice.shape", "_____no_output_____" ] ], [ [ "一般,你可以沿着张量每个轴任意选择两个索引之间的元素。例如,选择所有图片的右下角的14 x 14的像素:", "_____no_output_____" ] ], [ [ "my_slice = train_images[:, 14:, 14:]", "_____no_output_____" ] ], [ [ "你也可以用负索引。就像Python list中的负索引一样,它表示相对于当前轴末端的位置。剪切图片中间14 x 14像素,使用如下的方法:", "_____no_output_____" ] ], [ [ "my_slice = train_images[:, 7:-7, 7:-7]", "_____no_output_____" ] ], [ [ "### 数据批(data batch)的概念\n总的来说,你在深度学习中即将接触的所有数据张量的第一轴(axis 0,since indexing starts at 0)就叫“样本轴”(samples axis,也叫“样本维”)。简单地说,MINIST例子中的“样本(samples)”就是那些数字图片。 \n另外,深度学习模型不会一次处理整个数据集,而是将数据集分解为若干个小批次。具体来讲,下面就是MNIST数据集中一个大小为128的批次。", "_____no_output_____" ] ], [ [ "# Listing 2.23 Slicing a tensor into batches\nbatch = train_images[:128]\n\n# and here's the next batch\nbatch = train_images[128:256]\n\n# and the n-th batch:\n#batch = train_images[128 * n: 128 * (n + 1)]\nbatch.shape", "_____no_output_____" ] ], [ [ "考虑这样一个批张量,第一轴(axis 0)就称为“批次轴”(batch axis)或“批次维数”(batch dimension)。这是一个术语,当你使用Keras或其他深度学习库时你会经常接触到。", "_____no_output_____" ], [ "### 现实中data tensors的例子\n让我们让数据张量更具体,还有一些类似于你稍后会遇到的例子。 \n你将操作的数据几乎总是属于下列类别之一:\n1. 向量数据:2D 张量,shape 为(samples,features)\n2. 时间序列数据或序列数据:3D 张量,shape 为(samples,timesteps,features)\n3. 图像: 4D 张量,shape 为(samples,width,height,channels)或者(samples,channels,width,height)\n4. 
视频:5D 张量,shape 为(samples,frames,width,height,channels)或者(samples,frames,channels,width,height)", "_____no_output_____" ], [ "### 向量数据\n", "_____no_output_____" ], [ "### 时间序列数据或序列数据\n![image.png](attachment:image.png)", "_____no_output_____" ], [ "### 图像\n\n![image.png](attachment:image.png)", "_____no_output_____" ], [ "### 视频", "_____no_output_____" ], [ "### The gears of neural networks: tensor operations", "_____no_output_____" ], [ "就像计算机程序可以最终被降阶为一系列对二值输入的二值操作一样,深度神经网络中的所有变换可以被降阶至一系列对数值数据张量的“张量操作”。例如,可以对张量实施add,multiply等操作。 \n在我们最开始的例子中,我们通过逐个地堆叠 Dense 层搭建了神经网络。一个神经网络层看起来是这样的:", "_____no_output_____" ] ], [ [ "#Listing 2.24 A Keras layer\n#keras.layers.Dense(512, activation='relu')", "_____no_output_____" ] ], [ [ " 这个神经网络层,可以用一个函数(function)来解释,这个函数输入一个2D张量,并返回另一个2D张量,这个张量是对输入张量的新描述。具体地,该方程为:\n ![image.png](attachment:image.png)\n 让我们来分解它。这个方程里面有三个张量操作:输入张量和W张量的点乘(dot),得到的2D张量和向量b的加(+)操作,最后是一个relu操作。rulu(x) 就是简单的 max(x,0)。 \n 虽然这些操作完全是线性代数计算,但你会发现这里没有任何数学符号。因为我们发现当没有相应数学背景的编程人员使用Python语句而不是数学方程式时,他们更能够掌握。所以这里我们一直使用Numpy代码。", "_____no_output_____" ], [ "### Element-wise operations\n“relu”操作和加法操作都是element-wise操作(独立地对张量的每一个元素实施计算)。这意味着这些操作对大量并行运算是非常适合的(这类操作也叫“vectorized” implementations,向量化运算。)。如果你要写一个element-wise 操作的朴素Python实现,你将使用到for循环:", "_____no_output_____" ] ], [ [ "#Listing 2.25 A naive implemetation of an element-wise \"relu\" operation\ndef naive_relu(x):\n # x is a 2D Numpy tensor\n assert len(x.shape) == 2\n \n x = x.copy() # Avoid overwrinting the input tensor\n for i in range(x.shape[0]:\n for j in range(x.shape[1]):\n x[i, j] = max(x[i, j], 0)\n return x", "_____no_output_____" ] ], [ [ "同样的,对于加法操作有:", "_____no_output_____" ] ], [ [ "#Listing 2.26 A naive implementation of element-wise addition\ndef naive_add(x, y):\n # x and y are 2D Numpy tensors\n assert len(x.shape) == 2\n assert x.shape == y.shape\n \n x = x.copy() # Avoid overwriting the input tensor\n for i in range(x.shape[0]):\n for j in range(x.shape[1]):\n x[i, j] += y[i, h]\n return x", "_____no_output_____" ] ], [ [ "使用相同的方法,我们可以实现element-wise multiplication,subtraction等运算。 \nIn practice, when dealing with Numpy arrays, these operations are available as well-optimized built-in Numpy functions, which themselves delegate the heavy lifting to a BLAS implementation (Basic Linear Algebra Subprograms) if you have one installed, which you should. BLAS are low-level, highly-parallel, efficient tensor manipulation routines typically implemented in Fortran or C. \n所以在Numpy中你可以这样实施element-wise,速度非常快。", "_____no_output_____" ] ], [ [ "# Listing 2.27 Naive element-wise operation in Numpy\nimport numpy as np\n# Element-wise addtion\n#z = x + y\n\n# Element-wise relu\n#z = np.maximum(z, 0.)", "_____no_output_____" ] ], [ [ "### Broadcasting", "_____no_output_____" ], [ "In our naive implementation of above, we only support the addition of 2D naive_add tensors with identical shapes. But in the layer introduced earlier, we were adding a Dense 2D tensor with a vector. What happens with addition when the shape of the two tensors being added differ? \nWhen possible and if there is no ambiguity, the smaller tensor will be \"broadcasted\" to match the shape of the larger tensor. Broadcasting consists in two steps: \n1. 维度小的张量添加一个轴,以和维度大的张量的维度(ndim)适配(该操作为 broadcast axes)\n2. 
维度小的张量沿着新轴拷贝,以和维度大的张量的形状(shape)适配。 \n让我们来看一个具体的例子:考虑shape为(32,10)的张量x,和shape为(10,)的张量y。 \n首先,首先我们给y张量添加一个第一轴,这时y的shape变成(1,10)。接着沿着新轴重复y 32次,得到shape为(32,10)的张量Y。即:", "_____no_output_____" ] ], [ [ "# Y[i,:] = y for i in range(0, 32)", "_____no_output_____" ] ], [ [ "In terms of implementation, no new 2D tensor would actually be created since that would be terribly inefficient, so the repetition operation would be entirely virtual, i.e. it would be happening at the algorithmic level rather than at the memory level. But thinking of the vector being repeated 10 times alongside a new axis is a helpful mental model. Here’s what a naive implementation would look like:", "_____no_output_____" ] ], [ [ "def naive_add_matrix_and_vector(x, y):\n # x is a 2D Numpy tensor\n # y is a Numpy vector\n assert len(x.shape) == 2\n assert len(y.shape) == 1\n assert x.shape[10] == y.shape[0]\n \n x = x.copy() # Avoid overwriting the input tensor\n for i in range(x.shape[0]):\n for j in range(x.shape[1]):\n x[i, j] += y[j]\n return x", "_____no_output_____" ] ], [ [ "With broadcasting, you can generally apply two-tensor element-wise operations if one tensor has shape and the other has shape (a, b, … n, n + 1, … m) and the other has shpae (n, n + 1, ... m). The broadcasting would then automatically happen for axes a to n -1. \nYou can thus do:\n(注意下面这里扩展的轴的维度为2,而不是1!)", "_____no_output_____" ] ], [ [ "### Listing 2.29 Applying the element-wise operation to two tensors of maximum different shapes via broadcastin\nimport numpy as np\n\n# x is a random tensor with shape (64, 3, 32, 10) \nx = np.random.random((64, 3, 32, 10)) \n# y is a random tensor with shape (32, 10) \ny = np.random.random((32, 10))\n# The output z has shape (64, 3, 32, 10) like x \nz = np.maximum(x, y)", "_____no_output_____" ] ], [ [ "### Tensor dot\ndot操作, 也叫张量乘法(tensor product),不要将其和element-wise混淆。它是张量运算中最常见和最重要的。 与element-wise相反,它将输入张量中的元素组合在一起(组合有权重)。 \n\nElement-wise product is done with the * operator in Numpy, Keras, Theano and TensorFlow. uses a different syntax in TensorFlow, but in both Numpy and Keras it dot is done using the standard operator:", "_____no_output_____" ] ], [ [ "# Listing 2.30 Numpy operations between two tensors\nimport numpy as np\n#z = np.dot(x, y)", "_____no_output_____" ] ], [ [ "In mathematical notation, you would note the operation with a dot . : \nz = x . y \nMathematically, what does the dot operation do? Let’s start with the dot product of two vectors x and y. It is computed as such:\n", "_____no_output_____" ] ], [ [ "# Listing 2.31 A naive implementation of dot\ndef naive_vector_dot(x, y): \n # x and y are Numpy vectors \n assert len(x.shape) == 1 \n assert len(y.shape) == 1 \n assert x.shape[0] == y.shape[0]\n \n z = 0. \n for i in range(x.shape[0]): \n z += x[i] * y[i] \n return z", "_____no_output_____" ] ], [ [ "You will have noticed that the dot product between two vectors is a scalar, and that only vectors with the same number of elements are compatible for dot product. \nYou can also take the dot product between a matrix x and a vector y, which returns a vector where coefficients are the dot products between y and the rows of x. You would implement it as such", "_____no_output_____" ] ], [ [ "# Listing 2.32 A naive implementation of matrix-vector dot\nimport numpy as np\ndef naive_matrix_vector_dot(x, y):\n # x is a Numpy matrix \n # y is a Numpy vector \n assert len(x.shape) == 2 \n assert len(y.shape) == 1 \n # The 1st dimension of x must be \n # the same as the 0th dimension of y! 
\n assert x.shape[1] == y.shape[0]\n \n # This operation returns a vector of 0s \n # with the same shape as y \n z = np.zeros(x.shape[0]) \n for i in range(x.shape[0]): \n for j in range(x.shape[1]): \n z[i] += x[i, j] * y[j] \n return z ", "_____no_output_____" ] ], [ [ "You could also be reusing the code we wrote previously, which highlights the relationship between matrix-vector product and vector product:", "_____no_output_____" ] ], [ [ "# Listing 2.33 Alternative naive implementation of matrix-vector dot\ndef naive_matrix_vector_dot(x, y): \n z = np.zeros(x.shape[0]) \n for i in range(x.shape[0]): \n z[i] = naive_vector_dot(x[i, :], y) \n return z", "_____no_output_____" ] ], [ [ "Note that as soon as one of the two tensors has a higher than 1, is no longer ndim dot symmetric, which is to say that is not the same as . dot(x, y) dot(y, x) Of course, dot product generalizes to tensors with arbitrary number of axes. The most common applications may be the dot product between two matrices. You can take the dot product of two matrices x and y (dot(x, y)) if and only if x.shape[1] == y.shape[0]. The result is a matrix with shape (x.shape[0], y.shape[1]) , where coefficients are the vector products between the rows of x and the columns of y. Here’s the naive implementation:", "_____no_output_____" ] ], [ [ "# Listing 2.34 A naive implementation of matrix-matrix dot\ndef naive_matrix_dot(x, y): \n # x and y are Numpy matrices \n assert len(x.shape) == 2 \n assert len(y.shape) == 2 \n # The 1st dimension of x must be \n # the same as the 0th dimension of y! \n assert x.shape[1] == y.shape[0]\n # This operation returns a matrix of 0s \n # with a specific shape \n z = np.zeros((x.shape[0], y.shape[1])) \n # We iterate over the rows of x \n for i in range(x.shape[0]): \n # And over the columns of y \n for j in range(y.shape[1]): \n row_x = x[i, :] \n column_y = y[:, j] \n z[i, j] = naive_vector_dot(row_x, column_y) \n return z", "_____no_output_____" ] ], [ [ "To understand dot product shape compatibility, it helps to visualize the input and output tensors by aligning them in the following way:\n![image.png](attachment:image.png)", "_____no_output_____" ], [ "x, y and z are pictured as rectangles (literal boxes of coefficients). Because the rows and x and the columns of y must have the same size, it follows that the width of x must match the height of y. If you go on to develop new machine learning algorithms, you will likely be drawing such diagrams a lot. \nMore generally, you can take the dot product between higher-dimensional tensors, following the same rules for shape compatibility as outlined above for the 2D case:下面这个操作在线性代数的矩阵乘法中还没有接触过,这已经超过矩阵的维度(二维)。 \n(a, b, c, d) . (d,) (a, b, c) \n(a, b, c, d) . (d, e) (a, b, c, e)", "_____no_output_____" ], [ "### Tensor reshaping\n第三类非常需要理解的张量运算是“tensor reshaping”. \n虽然我们的第一个神经网络的例子的第一层中并没用用到该操作,但是在另一个Dense中,我们在将数据喂入网络前的数字数据预处理(pre-proccessed)上使用了reshape。", "_____no_output_____" ] ], [ [ "# Listing 2.35 MNIST image tensor reshaping\n#train_images = train_images.reshape((60000, 28 * 28)", "_____no_output_____" ] ], [ [ "Reshaping a tensor means re-arranging its rows and columns so as to match a target shape. Naturally the reshaped tensor will have the same total number of coefficients as the initial tensor. 
Reshaping is best understood via simple examples:\n对一个张量的reshape意味着重新调整张量的行和列,以适配目标shape。所以reshape后地张量和原始的张量自然而然地拥有着相同总数的参数。reshpe可以通过下面的例子很好地理解:", "_____no_output_____" ] ], [ [ "# Listing 2.36 Tensor reshaping example\nx = np.array([[0., 1.], \n [2., 3.], \n [4., 5.]]) \nprint(x.shape)", "(3, 2)\n" ], [ "x = x.reshape((6, 1))\nx", "_____no_output_____" ], [ "x = x.reshape((2, 3))\nx", "_____no_output_____" ] ], [ [ "A special case of reshaping that is commonly encountered is the transposition. \"Transposing\" a matrix means exchanging its rows and its columns, so that x[i, :] becomes :", "_____no_output_____" ] ], [ [ "# Listing 2.37 Matrix transposition\nx = np.zeros((300, 20)) \n# Creates an all-zeros matrix of shape (300, 20) \nx = np.transpose(x) \nprint(x.shape)", "(20, 300)\n" ] ], [ [ "### Geometric interpretation of tensor operations\nBecause the contents of the tensors being manipulated by tensor operations can be interpreted as being coordinates of points in some geometric space, all tensor operations have a geometric interpretation \nFor instance, let’s consider addition. We will start from the following vector: \nA = [0.5, 1.0] \nIt is a point in a 2D space: \n![image.png](attachment:image.png) ", "_____no_output_____" ], [ "It is common to picture a vector as an arrow linking the origin to the point: \n![image.png](attachment:image.png) ", "_____no_output_____" ], [ "Let’s consider a new point, , which we will add to the previous one. B = [1, 0.25] This is done geometrically by simply chaining together the vector arrows, with the resulting location being the vector representing the sum of the previous two vectors: \n![image.png](attachment:image.png)\nIn general, elementary geometric operations such as affine transformations, rotations, scaling, etc. can be expressed as tensor operations. For instance, a rotation of a 2D vector by an angle theta can be achieved via dot product with a 2x2 matrix where R = [u, v] u and and both vectors of the plane: \nu = [cos(theta), sin(theta)] and v = [-sin(theta), cos(theta)].", "_____no_output_____" ], [ "### A geometric interpretation of deep learning\nYou just learned that neural networks consist entirely in chains of tensors operations, and that all these tensor operations are really just geometric transformations of the input data. It follows that you can interpret a neural network as a very complex geometric transformation in a high-dimensional space, implemented via a long series of simple steps. \nIn 3D, the following mental image may prove useful: imagine two sheets of colored paper, a red one and a blue one. Superpose them. Now crumple them together into a small paper ball. That crumpled paper ball is your input data, and each sheet of paper is a class of data in a classification problem. What a neural network (or any other machine learning model) is meant to do, is to figure out a transformation (计算出一种transformation) of the paper ball that would uncrumple it, so as to make the two classes cleanly separable again. With deep learning, this would be implemented as a series of simple transformations of the 3D space, such as those you could apply on the paper ball with your fingers, one movement at a time. \nUncrumpling paper balls is what all machine learning is about: finding neat representations for complex, highly folded data manifolds. 
At this point, you should already have a pretty good intuition as to why deep learning excels at it: it takes the approach of incrementally decomposing a very complicated geometric transformation into a long chain of elementary ones, which is pretty much the strategy a human would follow to uncrumple a paper ball. Each layer in a deep network applies a transformation that disentangles the data a little bit—and a deep stack of layers makes tractable an extremely complicated disentanglement process.", "_____no_output_____" ] ] ]
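The naive listings quoted in the cells above are meant to reproduce NumPy's element-wise relu, element-wise add, and vector broadcasting; as printed they contain small slips (a missing closing parenthesis in Listing 2.25's range(x.shape[0]), an undefined index h in Listing 2.26, and x.shape[10] where the broadcast helper's assert needs x.shape[1]). A runnable version under those corrections, checked against the built-in vectorized forms:

```python
import numpy as np

def naive_relu(x):
    # x is a 2D NumPy array; note the closing parenthesis on range(x.shape[0])
    assert len(x.shape) == 2
    x = x.copy()                      # avoid overwriting the input tensor
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x[i, j] = max(x[i, j], 0)
    return x

def naive_add(x, y):
    # x and y are 2D NumPy arrays of identical shape; the inner index is j, not h
    assert len(x.shape) == 2 and x.shape == y.shape
    x = x.copy()
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x[i, j] += y[i, j]
    return x

def naive_add_matrix_and_vector(x, y):
    # broadcast a vector over the rows of a matrix; the assert uses x.shape[1]
    assert len(x.shape) == 2 and len(y.shape) == 1
    assert x.shape[1] == y.shape[0]
    x = x.copy()
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x[i, j] += y[j]
    return x

# Sanity check against NumPy's vectorized equivalents
a = np.random.random((32, 10))
b = np.random.random((32, 10))
v = np.random.random(10)
assert np.allclose(naive_relu(a - 0.5), np.maximum(a - 0.5, 0.))
assert np.allclose(naive_add(a, b), a + b)
assert np.allclose(naive_add_matrix_and_vector(a, v), a + v)
```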
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
e7a8a977aa3f11d13c54de9f9239ed4364f9a877
87,201
ipynb
Jupyter Notebook
MOOCS/Deeplearing_Specialization/Notebooks/Neural machine translation with attention-v4.ipynb
itismesam/Courses-1
7669c4460be02b8bbaea2ae79182af2667e9e6b2
[ "MIT" ]
40
2020-09-30T13:45:50.000Z
2022-03-10T10:22:19.000Z
MOOCS/Deeplearing_Specialization/Notebooks/Neural machine translation with attention-v4.ipynb
itismesam/Courses-1
7669c4460be02b8bbaea2ae79182af2667e9e6b2
[ "MIT" ]
null
null
null
MOOCS/Deeplearing_Specialization/Notebooks/Neural machine translation with attention-v4.ipynb
itismesam/Courses-1
7669c4460be02b8bbaea2ae79182af2667e9e6b2
[ "MIT" ]
24
2020-10-06T07:05:38.000Z
2022-03-10T10:23:29.000Z
74.594525
18,718
0.524409
[ [ [ "# Neural Machine Translation\n\nWelcome to your first programming assignment for this week! \n\nYou will build a Neural Machine Translation (NMT) model to translate human readable dates (\"25th of June, 2009\") into machine readable dates (\"2009-06-25\"). You will do this using an attention model, one of the most sophisticated sequence to sequence models. \n\nThis notebook was produced together with NVIDIA's Deep Learning Institute. \n\nLet's load all the packages you will need for this assignment.", "_____no_output_____" ] ], [ [ "from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply\nfrom keras.layers import RepeatVector, Dense, Activation, Lambda\nfrom keras.optimizers import Adam\nfrom keras.utils import to_categorical\nfrom keras.models import load_model, Model\nimport keras.backend as K\nimport numpy as np\n\nfrom faker import Faker\nimport random\nfrom tqdm import tqdm\nfrom babel.dates import format_date\nfrom nmt_utils import *\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Using TensorFlow backend.\n" ] ], [ [ "## 1 - Translating human readable dates into machine readable dates\n\nThe model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler \"date translation\" task. \n\nThe network will input a date written in a variety of possible formats (*e.g. \"the 29th of August 1958\", \"03/30/1968\", \"24 JUNE 1987\"*) and translate them into standardized, machine readable dates (*e.g. \"1958-08-29\", \"1968-03-30\", \"1987-06-24\"*). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD. \n\n\n\n<!-- \nTake a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !--> ", "_____no_output_____" ], [ "### 1.1 - Dataset\n\nWe will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples. ", "_____no_output_____" ] ], [ [ "m = 10000\ndataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)", "100%|██████████| 10000/10000 [00:01<00:00, 6665.42it/s]\n" ], [ "dataset[:10]", "_____no_output_____" ] ], [ [ "You've loaded:\n- `dataset`: a list of tuples of (human readable date, machine readable date)\n- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index \n- `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with `human_vocab`. \n- `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters. \n\nLet's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since \"YYYY-MM-DD\" is 10 characters long). 
", "_____no_output_____" ] ], [ [ "Tx = 30\nTy = 10\nX, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)\n\nprint(\"X.shape:\", X.shape)\nprint(\"Y.shape:\", Y.shape)\nprint(\"Xoh.shape:\", Xoh.shape)\nprint(\"Yoh.shape:\", Yoh.shape)", "X.shape: (10000, 30)\nY.shape: (10000, 10)\nXoh.shape: (10000, 30, 37)\nYoh.shape: (10000, 10, 11)\n" ] ], [ [ "You now have:\n- `X`: a processed version of the human readable dates in the training set, where each character is replaced by an index mapped to the character via `human_vocab`. Each date is further padded to $T_x$ values with a special character (< pad >). `X.shape = (m, Tx)`\n- `Y`: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in `machine_vocab`. You should have `Y.shape = (m, Ty)`. \n- `Xoh`: one-hot version of `X`, the \"1\" entry's index is mapped to the character thanks to `human_vocab`. `Xoh.shape = (m, Tx, len(human_vocab))`\n- `Yoh`: one-hot version of `Y`, the \"1\" entry's index is mapped to the character thanks to `machine_vocab`. `Yoh.shape = (m, Tx, len(machine_vocab))`. Here, `len(machine_vocab) = 11` since there are 11 characters ('-' as well as 0-9). \n", "_____no_output_____" ], [ "Lets also look at some examples of preprocessed training examples. Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed. ", "_____no_output_____" ] ], [ [ "index = 0\nprint(\"Source date:\", dataset[index][0])\nprint(\"Target date:\", dataset[index][1])\nprint()\nprint(\"Source after preprocessing (indices):\", X[index])\nprint(\"Target after preprocessing (indices):\", Y[index])\nprint()\nprint(\"Source after preprocessing (one-hot):\", Xoh[index])\nprint(\"Target after preprocessing (one-hot):\", Yoh[index])", "Source date: 15 october 1986\nTarget date: 1986-10-15\n\nSource after preprocessing (indices): [ 4 8 0 26 15 30 26 14 17 28 0 4 12 11 9 36 36 36 36 36 36 36 36 36 36\n 36 36 36 36 36]\nTarget after preprocessing (indices): [ 2 10 9 7 0 2 1 0 2 6]\n\nSource after preprocessing (one-hot): [[ 0. 0. 0. ..., 0. 0. 0.]\n [ 0. 0. 0. ..., 0. 0. 0.]\n [ 1. 0. 0. ..., 0. 0. 0.]\n ..., \n [ 0. 0. 0. ..., 0. 0. 1.]\n [ 0. 0. 0. ..., 0. 0. 1.]\n [ 0. 0. 0. ..., 0. 0. 1.]]\nTarget after preprocessing (one-hot): [[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]\n [ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]]\n" ] ], [ [ "## 2 - Neural machine translation with attention\n\nIf you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down. \n\nThe attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step. \n\n\n### 2.1 - Attention mechanism\n\nIn this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. 
The diagram on the right shows what one \"Attention\" step does to calculate the attention variables $\\alpha^{\\langle t, t' \\rangle}$, which are used to compute the context variable $context^{\\langle t \\rangle}$ for each timestep in the output ($t=1, \\ldots, T_y$). \n\n<table>\n<td> \n<img src=\"images/attn_model.png\" style=\"width:500;height:500px;\"> <br>\n</td> \n<td> \n<img src=\"images/attn_mechanism.png\" style=\"width:500;height:500px;\"> <br>\n</td> \n</table>\n<caption><center> **Figure 1**: Neural machine translation with attention</center></caption>\n", "_____no_output_____" ], [ "\nHere are some properties of the model that you may notice: \n\n- There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes *before* the attention mechanism, we will call it *pre-attention* Bi-LSTM. The LSTM at the top of the diagram comes *after* the attention mechanism, so we will call it the *post-attention* LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps. \n\n- The post-attention LSTM passes $s^{\\langle t \\rangle}, c^{\\langle t \\rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-activation sequence model, so the state captured by the RNN output activations $s^{\\langle t\\rangle}$. But since we are using an LSTM here, the LSTM has both the output activation $s^{\\langle t\\rangle}$ and the hidden cell state $c^{\\langle t\\rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-activation LSTM at time $t$ does will not take the specific generated $y^{\\langle t-1 \\rangle}$ as input; it only takes $s^{\\langle t\\rangle}$ and $c^{\\langle t\\rangle}$ as input. We have designed the model this way, because (unlike language generation where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date. \n\n- We use $a^{\\langle t \\rangle} = [\\overrightarrow{a}^{\\langle t \\rangle}; \\overleftarrow{a}^{\\langle t \\rangle}]$ to represent the concatenation of the activations of both the forward-direction and backward-directions of the pre-attention Bi-LSTM. \n\n- The diagram on the right uses a `RepeatVector` node to copy $s^{\\langle t-1 \\rangle}$'s value $T_x$ times, and then `Concatenation` to concatenate $s^{\\langle t-1 \\rangle}$ and $a^{\\langle t \\rangle}$ to compute $e^{\\langle t, t'}$, which is then passed through a softmax to compute $\\alpha^{\\langle t, t' \\rangle}$. We'll explain how to use `RepeatVector` and `Concatenation` in Keras below. \n\nLets implement this model. You will start by implementing two functions: `one_step_attention()` and `model()`.\n\n**1) `one_step_attention()`**: At step $t$, given all the hidden states of the Bi-LSTM ($[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$) and the previous hidden state of the second LSTM ($s^{<t-1>}$), `one_step_attention()` will compute the attention weights ($[\\alpha^{<t,1>},\\alpha^{<t,2>}, ..., \\alpha^{<t,T_x>}]$) and output the context vector (see Figure 1 (right) for details):\n$$context^{<t>} = \\sum_{t' = 0}^{T_x} \\alpha^{<t,t'>}a^{<t'>}\\tag{1}$$ \n\nNote that we are denoting the attention in this notebook $context^{\\langle t \\rangle}$. 
In the lecture videos, the context was denoted $c^{\\langle t \\rangle}$, but here we are calling it $context^{\\langle t \\rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\\langle t \\rangle}$. \n \n**2) `model()`**: Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$. Then, it calls `one_step_attention()` $T_y$ times (`for` loop). At each iteration of this loop, it gives the computed context vector $c^{<t>}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\\hat{y}^{<t>}$. \n\n\n\n**Exercise**: Implement `one_step_attention()`. The function `model()` will call the layers in `one_step_attention()` $T_y$ using a for-loop, and it is important that all $T_y$ copies have the same weights. I.e., it should not re-initiaiize the weights every time. In other words, all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:\n1. Define the layer objects (as global variables for examples).\n2. Call these objects when propagating the input.\n\nWe have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: [RepeatVector()](https://keras.io/layers/core/#repeatvector), [Concatenate()](https://keras.io/layers/merge/#concatenate), [Dense()](https://keras.io/layers/core/#dense), [Activation()](https://keras.io/layers/core/#activation), [Dot()](https://keras.io/layers/merge/#dot).", "_____no_output_____" ] ], [ [ "# Defined shared layers as global variables\nrepeator = RepeatVector(Tx)\nconcatenator = Concatenate(axis=-1)\ndensor1 = Dense(10, activation = \"tanh\")\ndensor2 = Dense(1, activation = \"relu\")\nactivator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook\ndotor = Dot(axes = 1)", "_____no_output_____" ] ], [ [ "Now you can use these layers to implement `one_step_attention()`. In order to propagate a Keras tensor object X through one of these layers, use `layer(X)` (or `layer([X,Y])` if it requires multiple inputs.), e.g. `densor(X)` will propagate X through the `Dense(1)` layer defined above.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: one_step_attention\n\ndef one_step_attention(a, s_prev):\n \"\"\"\n Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights\n \"alphas\" and the hidden states \"a\" of the Bi-LSTM.\n \n Arguments:\n a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)\n s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)\n \n Returns:\n context -- context vector, input of the next (post-attetion) LSTM cell\n \"\"\"\n \n ### START CODE HERE ###\n # Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states \"a\" (≈ 1 line)\n s_prev = repeator(s_prev)\n # Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)\n concat = concatenator([a, s_prev])\n # Use densor1 to propagate concat through a small fully-connected neural network to compute the \"intermediate energies\" variable e. 
(≈1 lines)\n e = densor1(concat)\n # Use densor2 to propagate e through a small fully-connected neural network to compute the \"energies\" variable energies. (≈1 lines)\n energies = densor2(e)\n # Use \"activator\" on \"energies\" to compute the attention weights \"alphas\" (≈ 1 line)\n alphas = activator(energies)\n # Use dotor together with \"alphas\" and \"a\" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)\n context = dotor([alphas, a])\n ### END CODE HERE ###\n \n return context", "_____no_output_____" ] ], [ [ "You will be able to check the expected output of `one_step_attention()` after you've coded the `model()` function.", "_____no_output_____" ], [ "**Exercise**: Implement `model()` as explained in figure 2 and the text above. Again, we have defined global layers that will share weights to be used in `model()`.", "_____no_output_____" ] ], [ [ "n_a = 32\nn_s = 64\npost_activation_LSTM_cell = LSTM(n_s, return_state = True)\noutput_layer = Dense(len(machine_vocab), activation=softmax)", "_____no_output_____" ] ], [ [ "Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps: \n\n1. Propagate the input into a [Bidirectional](https://keras.io/layers/wrappers/#bidirectional) [LSTM](https://keras.io/layers/recurrent/#lstm)\n2. Iterate for $t = 0, \\dots, T_y-1$: \n 1. Call `one_step_attention()` on $[\\alpha^{<t,1>},\\alpha^{<t,2>}, ..., \\alpha^{<t,T_x>}]$ and $s^{<t-1>}$ to get the context vector $context^{<t>}$.\n 2. Give $context^{<t>}$ to the post-attention LSTM cell. Remember pass in the previous hidden-state $s^{\\langle t-1\\rangle}$ and cell-states $c^{\\langle t-1\\rangle}$ of this LSTM using `initial_state= [previous hidden state, previous cell state]`. Get back the new hidden state $s^{<t>}$ and the new cell state $c^{<t>}$.\n 3. Apply a softmax layer to $s^{<t>}$, get the output. \n 4. Save the output by adding it to the list of outputs.\n\n3. Create your Keras model instance, it should have three inputs (\"inputs\", $s^{<0>}$ and $c^{<0>}$) and output the list of \"outputs\".", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: model\n\ndef model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):\n \"\"\"\n Arguments:\n Tx -- length of the input sequence\n Ty -- length of the output sequence\n n_a -- hidden state size of the Bi-LSTM\n n_s -- hidden state size of the post-attention LSTM\n human_vocab_size -- size of the python dictionary \"human_vocab\"\n machine_vocab_size -- size of the python dictionary \"machine_vocab\"\n\n Returns:\n model -- Keras model instance\n \"\"\"\n \n # Define the inputs of your model with a shape (Tx,)\n # Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)\n X = Input(shape=(Tx, human_vocab_size))\n s0 = Input(shape=(n_s,), name='s0')\n c0 = Input(shape=(n_s,), name='c0')\n s = s0\n c = c0\n \n # Initialize empty list of outputs\n outputs = []\n \n ### START CODE HERE ###\n \n # Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. 
(≈ 1 line)\n a = Bidirectional(LSTM(n_a, return_sequences=True),input_shape=(m, Tx, n_a*2))(X) \n \n # Step 2: Iterate for Ty steps\n for t in range(Ty):\n \n # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)\n context = one_step_attention(a, s)\n \n # Step 2.B: Apply the post-attention LSTM cell to the \"context\" vector.\n # Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)\n s, _, c = post_activation_LSTM_cell(context,initial_state = [s, c] ) \n \n # Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)\n out = output_layer(s)\n \n # Step 2.D: Append \"out\" to the \"outputs\" list (≈ 1 line)\n outputs.append(out)\n \n # Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)\n model = Model(inputs=[X,s0,c0],outputs=outputs)\n \n ### END CODE HERE ###\n \n return model", "_____no_output_____" ] ], [ [ "Run the following cell to create your model.", "_____no_output_____" ] ], [ [ "model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))", "_____no_output_____" ] ], [ [ "Let's get a summary of the model to check if it matches the expected output.", "_____no_output_____" ] ], [ [ "model.summary()", "____________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n====================================================================================================\ninput_1 (InputLayer) (None, 30, 37) 0 \n____________________________________________________________________________________________________\ns0 (InputLayer) (None, 64) 0 \n____________________________________________________________________________________________________\nbidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0] \n____________________________________________________________________________________________________\nrepeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] \n lstm_1[0][0] \n lstm_1[1][0] \n lstm_1[2][0] \n lstm_1[3][0] \n lstm_1[4][0] \n lstm_1[5][0] \n lstm_1[6][0] \n lstm_1[7][0] \n lstm_1[8][0] \n____________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_1[0][0] \n repeat_vector_1[0][0] \n bidirectional_1[0][0] \n repeat_vector_1[1][0] \n bidirectional_1[0][0] \n repeat_vector_1[2][0] \n bidirectional_1[0][0] \n repeat_vector_1[3][0] \n bidirectional_1[0][0] \n repeat_vector_1[4][0] \n bidirectional_1[0][0] \n repeat_vector_1[5][0] \n bidirectional_1[0][0] \n repeat_vector_1[6][0] \n bidirectional_1[0][0] \n repeat_vector_1[7][0] \n bidirectional_1[0][0] \n repeat_vector_1[8][0] \n bidirectional_1[0][0] \n repeat_vector_1[9][0] \n____________________________________________________________________________________________________\ndense_1 (Dense) (None, 30, 10) 1290 concatenate_1[0][0] \n concatenate_1[1][0] \n concatenate_1[2][0] \n concatenate_1[3][0] \n concatenate_1[4][0] \n concatenate_1[5][0] \n concatenate_1[6][0] \n concatenate_1[7][0] \n concatenate_1[8][0] \n concatenate_1[9][0] \n____________________________________________________________________________________________________\ndense_2 (Dense) (None, 30, 1) 11 dense_1[0][0] \n dense_1[1][0] \n dense_1[2][0] \n dense_1[3][0] \n dense_1[4][0] \n dense_1[5][0] \n dense_1[6][0] \n dense_1[7][0] \n dense_1[8][0] \n dense_1[9][0] 
\n____________________________________________________________________________________________________\nattention_weights (Activation) (None, 30, 1) 0 dense_2[0][0] \n dense_2[1][0] \n dense_2[2][0] \n dense_2[3][0] \n dense_2[4][0] \n dense_2[5][0] \n dense_2[6][0] \n dense_2[7][0] \n dense_2[8][0] \n dense_2[9][0] \n____________________________________________________________________________________________________\ndot_1 (Dot) (None, 1, 64) 0 attention_weights[0][0] \n bidirectional_1[0][0] \n attention_weights[1][0] \n bidirectional_1[0][0] \n attention_weights[2][0] \n bidirectional_1[0][0] \n attention_weights[3][0] \n bidirectional_1[0][0] \n attention_weights[4][0] \n bidirectional_1[0][0] \n attention_weights[5][0] \n bidirectional_1[0][0] \n attention_weights[6][0] \n bidirectional_1[0][0] \n attention_weights[7][0] \n bidirectional_1[0][0] \n attention_weights[8][0] \n bidirectional_1[0][0] \n attention_weights[9][0] \n bidirectional_1[0][0] \n____________________________________________________________________________________________________\nc0 (InputLayer) (None, 64) 0 \n____________________________________________________________________________________________________\nlstm_1 (LSTM) [(None, 64), (None, 6 33024 dot_1[0][0] \n s0[0][0] \n c0[0][0] \n dot_1[1][0] \n lstm_1[0][0] \n lstm_1[0][2] \n dot_1[2][0] \n lstm_1[1][0] \n lstm_1[1][2] \n dot_1[3][0] \n lstm_1[2][0] \n lstm_1[2][2] \n dot_1[4][0] \n lstm_1[3][0] \n lstm_1[3][2] \n dot_1[5][0] \n lstm_1[4][0] \n lstm_1[4][2] \n dot_1[6][0] \n lstm_1[5][0] \n lstm_1[5][2] \n dot_1[7][0] \n lstm_1[6][0] \n lstm_1[6][2] \n dot_1[8][0] \n lstm_1[7][0] \n lstm_1[7][2] \n dot_1[9][0] \n lstm_1[8][0] \n lstm_1[8][2] \n____________________________________________________________________________________________________\ndense_3 (Dense) (None, 11) 715 lstm_1[0][0] \n lstm_1[1][0] \n lstm_1[2][0] \n lstm_1[3][0] \n lstm_1[4][0] \n lstm_1[5][0] \n lstm_1[6][0] \n lstm_1[7][0] \n lstm_1[8][0] \n lstm_1[9][0] \n====================================================================================================\nTotal params: 52,960\nTrainable params: 52,960\nNon-trainable params: 0\n____________________________________________________________________________________________________\n" ] ], [ [ "**Expected Output**:\n\nHere is the summary you should see\n<table>\n <tr>\n <td>\n **Total params:**\n </td>\n <td>\n 52,960\n </td>\n </tr>\n <tr>\n <td>\n **Trainable params:**\n </td>\n <td>\n 52,960\n </td>\n </tr>\n <tr>\n <td>\n **Non-trainable params:**\n </td>\n <td>\n 0\n </td>\n </tr>\n <tr>\n <td>\n **bidirectional_1's output shape **\n </td>\n <td>\n (None, 30, 64) \n </td>\n </tr>\n <tr>\n <td>\n **repeat_vector_1's output shape **\n </td>\n <td>\n (None, 30, 64) \n </td>\n </tr>\n <tr>\n <td>\n **concatenate_1's output shape **\n </td>\n <td>\n (None, 30, 128) \n </td>\n </tr>\n <tr>\n <td>\n **attention_weights's output shape **\n </td>\n <td>\n (None, 30, 1) \n </td>\n </tr>\n <tr>\n <td>\n **dot_1's output shape **\n </td>\n <td>\n (None, 1, 64)\n </td>\n </tr>\n <tr>\n <td>\n **dense_3's output shape **\n </td>\n <td>\n (None, 11) \n </td>\n </tr>\n</table>\n", "_____no_output_____" ], [ "As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics your are want to use. 
Compile your model using `categorical_crossentropy` loss, a custom [Adam](https://keras.io/optimizers/#adam) [optimizer](https://keras.io/optimizers/#usage-of-optimizers) (`learning rate = 0.005`, $\\beta_1 = 0.9$, $\\beta_2 = 0.999$, `decay = 0.01`) and `['accuracy']` metrics:", "_____no_output_____" ] ], [ [ "### START CODE HERE ### (≈2 lines)\nopt = Adam(lr = 0.005, beta_1 = 0.9, beta_2 = 0.999, decay = 0.01) \nmodel.compile(loss='categorical_crossentropy', optimizer=opt,metrics=['accuracy'])\n### END CODE HERE ###", "_____no_output_____" ] ], [ [ "The last step is to define all your inputs and outputs to fit the model:\n- You already have X of shape $(m = 10000, T_x = 30)$ containing the training examples.\n- You need to create `s0` and `c0` to initialize your `post_activation_LSTM_cell` with 0s.\n- Given the `model()` you coded, you need the \"outputs\" to be a list of 11 elements of shape (m, T_y). So that: `outputs[i][0], ..., outputs[i][Ty]` represent the true labels (characters) corresponding to the $i^{th}$ training example (`X[i]`). More generally, `outputs[i][j]` is the true label of the $j^{th}$ character in the $i^{th}$ training example.", "_____no_output_____" ] ], [ [ "s0 = np.zeros((m, n_s))\nc0 = np.zeros((m, n_s))\noutputs = list(Yoh.swapaxes(0,1))", "_____no_output_____" ] ], [ [ "Let's now fit the model and run it for one epoch.", "_____no_output_____" ] ], [ [ "model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)", "Epoch 1/1\n10000/10000 [==============================] - 35s - loss: 16.1592 - dense_3_loss_1: 1.1816 - dense_3_loss_2: 0.9146 - dense_3_loss_3: 1.6444 - dense_3_loss_4: 2.6827 - dense_3_loss_5: 0.7530 - dense_3_loss_6: 1.2778 - dense_3_loss_7: 2.5924 - dense_3_loss_8: 0.8461 - dense_3_loss_9: 1.6718 - dense_3_loss_10: 2.5947 - dense_3_acc_1: 0.5434 - dense_3_acc_2: 0.7314 - dense_3_acc_3: 0.3430 - dense_3_acc_4: 0.0705 - dense_3_acc_5: 0.9299 - dense_3_acc_6: 0.3774 - dense_3_acc_7: 0.0745 - dense_3_acc_8: 0.9297 - dense_3_acc_9: 0.2204 - dense_3_acc_10: 0.0945 \n" ] ], [ [ "While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples: \n\n<img src=\"images/table.png\" style=\"width:700;height:200px;\"> <br>\n<caption><center>Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. </center></caption>\n\n\nWe have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.) 
", "_____no_output_____" ] ], [ [ "model.load_weights('models/model.h5')", "_____no_output_____" ] ], [ [ "You can now see the results on new examples.", "_____no_output_____" ] ], [ [ "EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']\nfor example in EXAMPLES:\n \n source = string_to_int(example, Tx, human_vocab)\n source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)\n prediction = model.predict([source, s0, c0])\n prediction = np.argmax(prediction, axis = -1)\n output = [inv_machine_vocab[int(i)] for i in prediction]\n \n print(\"source:\", example)\n print(\"output:\", ''.join(output))", "source: 3 May 1979\noutput: 1979-05-03\nsource: 5 April 09\noutput: 2009-05-05\nsource: 21th of August 2016\noutput: 2016-08-21\nsource: Tue 10 Jul 2007\noutput: 2007-07-10\nsource: Saturday May 9 2018\noutput: 2018-05-09\nsource: March 3 2001\noutput: 2001-03-03\nsource: March 3rd 2001\noutput: 2001-03-03\nsource: 1 March 2001\noutput: 2001-03-01\n" ] ], [ [ "You can also change these examples to test with your own examples. The next part will give you a better sense on what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. ", "_____no_output_____" ], [ "## 3 - Visualizing Attention (Optional / Ungraded)\n\nSince the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.\n\nConsider the task of translating \"Saturday 9 May 2018\" to \"2018-05-09\". If we visualize the computed $\\alpha^{\\langle t, t' \\rangle}$ we get this: \n\n<img src=\"images/date_attention.png\" style=\"width:600;height:300px;\"> <br>\n<caption><center> **Figure 8**: Full Attention Map</center></caption>\n\nNotice how the output ignores the \"Saturday\" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to to make the translation. The year mostly requires it to pay attention to the input's \"18\" in order to generate \"2018.\" \n\n", "_____no_output_____" ], [ "### 3.1 - Getting the activations from the network\n\nLets now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\\alpha^{\\langle t, t' \\rangle}$. 
\n\nTo figure out where the attention values are located, let's start by printing a summary of the model .", "_____no_output_____" ] ], [ [ "model.summary()", "____________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n====================================================================================================\ninput_1 (InputLayer) (None, 30, 37) 0 \n____________________________________________________________________________________________________\ns0 (InputLayer) (None, 64) 0 \n____________________________________________________________________________________________________\nbidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0] \n____________________________________________________________________________________________________\nrepeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] \n lstm_1[0][0] \n lstm_1[1][0] \n lstm_1[2][0] \n lstm_1[3][0] \n lstm_1[4][0] \n lstm_1[5][0] \n lstm_1[6][0] \n lstm_1[7][0] \n lstm_1[8][0] \n____________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_1[0][0] \n repeat_vector_1[0][0] \n bidirectional_1[0][0] \n repeat_vector_1[1][0] \n bidirectional_1[0][0] \n repeat_vector_1[2][0] \n bidirectional_1[0][0] \n repeat_vector_1[3][0] \n bidirectional_1[0][0] \n repeat_vector_1[4][0] \n bidirectional_1[0][0] \n repeat_vector_1[5][0] \n bidirectional_1[0][0] \n repeat_vector_1[6][0] \n bidirectional_1[0][0] \n repeat_vector_1[7][0] \n bidirectional_1[0][0] \n repeat_vector_1[8][0] \n bidirectional_1[0][0] \n repeat_vector_1[9][0] \n____________________________________________________________________________________________________\ndense_1 (Dense) (None, 30, 10) 1290 concatenate_1[0][0] \n concatenate_1[1][0] \n concatenate_1[2][0] \n concatenate_1[3][0] \n concatenate_1[4][0] \n concatenate_1[5][0] \n concatenate_1[6][0] \n concatenate_1[7][0] \n concatenate_1[8][0] \n concatenate_1[9][0] \n____________________________________________________________________________________________________\ndense_2 (Dense) (None, 30, 1) 11 dense_1[0][0] \n dense_1[1][0] \n dense_1[2][0] \n dense_1[3][0] \n dense_1[4][0] \n dense_1[5][0] \n dense_1[6][0] \n dense_1[7][0] \n dense_1[8][0] \n dense_1[9][0] \n____________________________________________________________________________________________________\nattention_weights (Activation) (None, 30, 1) 0 dense_2[0][0] \n dense_2[1][0] \n dense_2[2][0] \n dense_2[3][0] \n dense_2[4][0] \n dense_2[5][0] \n dense_2[6][0] \n dense_2[7][0] \n dense_2[8][0] \n dense_2[9][0] \n____________________________________________________________________________________________________\ndot_1 (Dot) (None, 1, 64) 0 attention_weights[0][0] \n bidirectional_1[0][0] \n attention_weights[1][0] \n bidirectional_1[0][0] \n attention_weights[2][0] \n bidirectional_1[0][0] \n attention_weights[3][0] \n bidirectional_1[0][0] \n attention_weights[4][0] \n bidirectional_1[0][0] \n attention_weights[5][0] \n bidirectional_1[0][0] \n attention_weights[6][0] \n bidirectional_1[0][0] \n attention_weights[7][0] \n bidirectional_1[0][0] \n attention_weights[8][0] \n bidirectional_1[0][0] \n attention_weights[9][0] \n bidirectional_1[0][0] \n____________________________________________________________________________________________________\nc0 (InputLayer) (None, 64) 0 
\n____________________________________________________________________________________________________\nlstm_1 (LSTM) [(None, 64), (None, 6 33024 dot_1[0][0] \n s0[0][0] \n c0[0][0] \n dot_1[1][0] \n lstm_1[0][0] \n lstm_1[0][2] \n dot_1[2][0] \n lstm_1[1][0] \n lstm_1[1][2] \n dot_1[3][0] \n lstm_1[2][0] \n lstm_1[2][2] \n dot_1[4][0] \n lstm_1[3][0] \n lstm_1[3][2] \n dot_1[5][0] \n lstm_1[4][0] \n lstm_1[4][2] \n dot_1[6][0] \n lstm_1[5][0] \n lstm_1[5][2] \n dot_1[7][0] \n lstm_1[6][0] \n lstm_1[6][2] \n dot_1[8][0] \n lstm_1[7][0] \n lstm_1[7][2] \n dot_1[9][0] \n lstm_1[8][0] \n lstm_1[8][2] \n____________________________________________________________________________________________________\ndense_3 (Dense) (None, 11) 715 lstm_1[0][0] \n lstm_1[1][0] \n lstm_1[2][0] \n lstm_1[3][0] \n lstm_1[4][0] \n lstm_1[5][0] \n lstm_1[6][0] \n lstm_1[7][0] \n lstm_1[8][0] \n lstm_1[9][0] \n====================================================================================================\nTotal params: 52,960\nTrainable params: 52,960\nNon-trainable params: 0\n____________________________________________________________________________________________________\n" ] ], [ [ "Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_2` computes the context vector for every time step $t = 0, \\ldots, T_y-1$. Lets get the activations from this layer.\n\nThe function `attention_map()` pulls out the attention values from your model and plots them.", "_____no_output_____" ] ], [ [ "attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, \"Tuesday 09 Oct 1993\", num = 7, n_s = 64)", "_____no_output_____" ] ], [ [ "On the generated plot you can observe the values of the attention weights for each character of the predicted output. Examine this plot and check that where the network is paying attention makes sense to you.\n\nIn the date translation application, you will observe that most of the time attention helps predict the year, and hasn't much impact on predicting the day/month.", "_____no_output_____" ], [ "### Congratulations!\n\n\nYou have come to the end of this assignment \n\n<font color='blue'> **Here's what you should remember from this notebook**:\n\n- Machine translation models can be used to map from one sequence to another. They are useful not just for translating human languages (like French->English) but also for tasks like date format translation. \n- An attention mechanism allows a network to focus on the most relevant parts of the input when producing a specific part of the output. \n- A network using an attention mechanism can translate from inputs of length $T_x$ to outputs of length $T_y$, where $T_x$ and $T_y$ can be different. \n- You can visualize attention weights $\\alpha^{\\langle t,t' \\rangle}$ to see what the network is paying attention to while generating each output.", "_____no_output_____" ], [ "Congratulations on finishing this assignment! You are now able to implement an attention model and use it to learn complex mappings from one sequence to another. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e7a8ae7d11bbe784398ada02898c99292a29d0f6
121,653
ipynb
Jupyter Notebook
Regression/Linear Models/LassoLars_PowerTransformer.ipynb
mohityogesh44/ds-seed
e124f0078faf97568951e19e4302451ad0c7cf6c
[ "Apache-2.0" ]
null
null
null
Regression/Linear Models/LassoLars_PowerTransformer.ipynb
mohityogesh44/ds-seed
e124f0078faf97568951e19e4302451ad0c7cf6c
[ "Apache-2.0" ]
null
null
null
Regression/Linear Models/LassoLars_PowerTransformer.ipynb
mohityogesh44/ds-seed
e124f0078faf97568951e19e4302451ad0c7cf6c
[ "Apache-2.0" ]
null
null
null
193.1
78,692
0.893237
[ [ [ "# LassoLars Regression with PowerTransformer \n", "_____no_output_____" ], [ "This Code template is for the regression analysis using a simple LassoLars Regression with Feature Transformation technique PowerTransformer in a pipeline. It is a lasso model implemented using the LARS algorithm.", "_____no_output_____" ], [ "### Required Packages", "_____no_output_____" ] ], [ [ "import warnings\nimport numpy as np \nimport pandas as pd \nimport seaborn as se \nimport matplotlib.pyplot as plt \nfrom sklearn.model_selection import train_test_split \nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import PowerTransformer\nfrom sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error \nfrom sklearn.linear_model import LassoLars\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "### Initialization\n\nFilepath of CSV file", "_____no_output_____" ] ], [ [ "#filepath\nfile_path= \"\"", "_____no_output_____" ] ], [ [ "List of features which are required for model training .", "_____no_output_____" ] ], [ [ "#x_values\nfeatures=[]", "_____no_output_____" ] ], [ [ "Target feature for prediction.", "_____no_output_____" ] ], [ [ "#y_value\ntarget=''", "_____no_output_____" ] ], [ [ "### Data Fetching\n\nPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.\n\nWe will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.", "_____no_output_____" ] ], [ [ "df=pd.read_csv(file_path)\ndf.head()", "_____no_output_____" ] ], [ [ "### Feature Selections\n\nIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.\n\nWe will assign all the required input features to X and target/outcome to Y.", "_____no_output_____" ] ], [ [ "X=df[features]\nY=df[target]", "_____no_output_____" ] ], [ [ "### Data Preprocessing\n\nSince the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes.\n", "_____no_output_____" ] ], [ [ "def NullClearner(df):\n if(isinstance(df, pd.Series) and (df.dtype in [\"float64\",\"int64\"])):\n df.fillna(df.mean(),inplace=True)\n return df\n elif(isinstance(df, pd.Series)):\n df.fillna(df.mode()[0],inplace=True)\n return df\n else:return df\ndef EncodeX(df):\n return pd.get_dummies(df)", "_____no_output_____" ] ], [ [ "Calling preprocessing functions on the feature and target set.\n", "_____no_output_____" ] ], [ [ "x=X.columns.to_list()\nfor i in x:\n X[i]=NullClearner(X[i])\nX=EncodeX(X)\nY=NullClearner(Y)\nX.head()", "_____no_output_____" ] ], [ [ "#### Correlation Map\n\nIn order to check the correlation between the features, we will plot a correlation matrix. 
It is effective in summarizing a large amount of data where the goal is to see patterns.", "_____no_output_____" ] ], [ [ "f,ax = plt.subplots(figsize=(18, 18))\nmatrix = np.triu(X.corr())\nse.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)\nplt.show()", "_____no_output_____" ] ], [ [ "### Data Splitting\n\nThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.", "_____no_output_____" ] ], [ [ "x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)", "_____no_output_____" ] ], [ [ "### Feature Transformation\n\nPower transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.\n\n[More on PowerTransformer module and parameters](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)\n\n### Model\n\nLassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.\n\n### Tuning parameters\n\n> **fit_intercept** -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations\n\n> **alpha** -> Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.\n\n> **eps** -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.\n\n> **max_iter** -> Maximum number of iterations to perform.\n\n> **positive** -> Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.\n\n> **precompute** -> Whether to use a precomputed Gram matrix to speed up calculations. 
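As a quick illustration of these parameters (placeholder values only, not tuned settings), they can be passed directly to the `LassoLars` constructor inside the same pipeline; `custom_model` is just an illustrative name:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer
from sklearn.linear_model import LassoLars

# Illustrative hyperparameters only; the template's pipeline in the next cell keeps the defaults.
custom_model = make_pipeline(
    PowerTransformer(),
    LassoLars(alpha=0.1,                 # strength of the L1 penalty
              fit_intercept=True,        # learn an intercept term
              max_iter=500,              # cap on LARS iterations
              eps=np.finfo(float).eps,   # Cholesky regularization precision
              positive=False,            # allow negative coefficients
              random_state=123)
)
# custom_model.fit(x_train, y_train) would train it exactly like the default pipeline.
```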
", "_____no_output_____" ] ], [ [ "model = make_pipeline(PowerTransformer(),LassoLars(random_state=123))\nmodel.fit(x_train,y_train)", "_____no_output_____" ] ], [ [ "#### Model Accuracy\n\nWe will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.\n\nscore: The score function returns the coefficient of determination R2 of the prediction.\n", "_____no_output_____" ] ], [ [ "print(\"Accuracy score {:.2f} %\\n\".format(model.score(x_test,y_test)*100))", "Accuracy score 72.55 %\n\n" ] ], [ [ "> **r2_score**: The **r2_score** function computes the percentage variablility explained by our model, either the fraction or the count of correct predictions. \n\n> **mae**: The **mean abosolute error** function calculates the amount of total error(absolute average distance between the real data and the predicted data) by our model. \n\n> **mse**: The **mean squared error** function squares the error(penalizes the model for large errors) by our model. ", "_____no_output_____" ] ], [ [ "y_pred=model.predict(x_test)\nprint(\"R2 Score: {:.2f} %\".format(r2_score(y_test,y_pred)*100))\nprint(\"Mean Absolute Error {:.2f}\".format(mean_absolute_error(y_test,y_pred)))\nprint(\"Mean Squared Error {:.2f}\".format(mean_squared_error(y_test,y_pred)))", "R2 Score: 72.55 %\nMean Absolute Error 303.15\nMean Squared Error 126073.78\n" ] ], [ [ "#### Prediction Plot\n\nFirst, we make use of a plot to plot the actual observations, with x_train on the x-axis and y_train on the y-axis.\nFor the regression line, we will use x_train on the x-axis and then the predictions of the x_train observations on the y-axis.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(14,10))\nplt.plot(range(20),y_test[0:20], color = \"green\")\nplt.plot(range(20),model.predict(x_test[0:20]), color = \"red\")\nplt.legend([\"Actual\",\"prediction\"]) \nplt.title(\"Predicted vs True Value\")\nplt.xlabel(\"Record number\")\nplt.ylabel(target)\nplt.show()", "_____no_output_____" ] ], [ [ "#### Creator: Nikhil Shrotri , Github: [Profile](https://github.com/nikhilshrotri)\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7a8b1b5c45d45fb4ac7049f4954beed84c54368
25,059
ipynb
Jupyter Notebook
assets/EMSE6586/PyMongo_Complete.ipynb
ngau9567/ngau9567.github.io
cf59f69ca3a7fc278fb4abf7e9bfa896c68a3418
[ "CC-BY-3.0" ]
null
null
null
assets/EMSE6586/PyMongo_Complete.ipynb
ngau9567/ngau9567.github.io
cf59f69ca3a7fc278fb4abf7e9bfa896c68a3418
[ "CC-BY-3.0" ]
null
null
null
assets/EMSE6586/PyMongo_Complete.ipynb
ngau9567/ngau9567.github.io
cf59f69ca3a7fc278fb4abf7e9bfa896c68a3418
[ "CC-BY-3.0" ]
null
null
null
27.327154
329
0.53829
[ [ [ "# Pymongo - mongo in python\nTo use python with mongo we need to use the pymongo package\n - install using `pip install pymongo`, or via the anaconda application", "_____no_output_____" ], [ "## Connecting\nTo connect to our Database we need to instantiate a client connection. To do this wee need:\n - hostname or ip-address\n - port\n - username\n - password\n \nIn addition we may sometimes need to provide an *authSource*. This simply tells Mongo where the information on our user exists.", "_____no_output_____" ] ], [ [ "from pymongo import MongoClient\n\nclient = MongoClient(host='18.219.151.47', #host is the hostname for the database\n port=27017, #port is the port number that mongo is running on\n username='student', #username for the db\n password='emse6992pass', #password for the db\n authSource='emse6992') #Since our user only exists for the emse6992 db, we need to specify this\n", "_____no_output_____" ] ], [ [ "***NOTE: NEVER hard encode your password!!!***", "_____no_output_____" ], [ "Verify the connection is working:", "_____no_output_____" ] ], [ [ "client.server_info()", "_____no_output_____" ] ], [ [ "### Accessing Databases and Collections\nEven if we have authenticated oursevles, we still need to tell Mongo what database and collections we are interested. Once connected those attributes are name addressable:\n - `conn['database_name']` or `conn.database_name`\n - `database['coll_name']` or `database.coll_name`", "_____no_output_____" ], [ "**Connecting to the Database:**", "_____no_output_____" ] ], [ [ "db = client.emse6992\n# db = client['emse6992'] - Alternative method", "_____no_output_____" ] ], [ [ "Proof we're connected:", "_____no_output_____" ] ], [ [ "db.list_collection_names()", "_____no_output_____" ] ], [ [ "**Connecting to the Collections:**", "_____no_output_____" ] ], [ [ "favs_coll = db.twitter_favorites\n# favs_coll = db['twitter_favorites']", "_____no_output_____" ] ], [ [ "Proof this works:", "_____no_output_____" ] ], [ [ "doc = favs_coll.find_one({})\ndoc", "_____no_output_____" ], [ "doc['favorited_by_screen_name']", "_____no_output_____" ] ], [ [ "## Querying\nOnce connected, we are ready to start querying the database.\n\nThe great thing about Python is it's integration with both JSON and Mongo, meaning that the Python Mongo API is almost exactly the same as Monog's own query API.", "_____no_output_____" ], [ "### find_one()\nThis method works exactly the same as the Mongo equivelant. In addition the interior logic is a direct 1-to-1 with Mongo's", "_____no_output_____" ] ], [ [ "doc = favs_coll.find_one({\"favorited_by_screen_name\": \"elonmusk\"})\ndoc", "_____no_output_____" ] ], [ [ "## In Class Excercise:\nUsing the **twitter_favorites** collection, find a **singular status** with a **tesla hashtag**", "_____no_output_____" ] ], [ [ "#Room for in-class work\ndoc = favs_coll.find_one({\"hashtags.text\": \"tesla\"},\n {'hashtags': 1, 'user.screen_name': 1, 'user.description': 1})\nprint(doc)", "_____no_output_____" ] ], [ [ "# find()\nLikewise pymongo's **find()** works exactly like mongo's console find() command. One thing to note `find({})` returns a cursor (iterable), not an actual document.\n\n**In Class Questions:**\n 1. What is the advantage to using a generator/iterable in this instance?\n 2. 
What is the benefit of being able to query for one document `find_one()` vs a list of documents `find()`?", "_____no_output_____" ] ], [ [ "docs = favs_coll.find({})\nprint(docs) # notice this is cursor, no actual data", "_____no_output_____" ], [ "print(docs[600]) # By indexing we can extract results from the query", "_____no_output_____" ] ], [ [ "### Iterating Through Our Cursor\nWe can prove the query executed correctly by iterating through all of the documents", "_____no_output_____" ] ], [ [ "# Our query\ndocs = favs_coll.find({\"favorited_by_screen_name\": \"elonmusk\"})\n# Variable to store the state of the test\nworked = True\n\n# Iterate through each of the docs looking for an invalid state\nfor doc in docs:\n if doc['favorited_by_screen_name'] != 'elonmusk':\n worked = False\n break\n\n# If worked is still True, then our query worked (or at least passed this evaluation)\nif worked:\n print(\"Worked!!\")\nelse:\n print(\"Failed!\")", "_____no_output_____" ] ], [ [ "Instead of iterating through the documents, we can also extract all of the documents at once by calling `list(docs)`. This approach though comes with some drawbacks.\n - The code will have to wait for all of the records to be pulled (unless threaded)\n - You'll need to ensure that you have the memory to store all of the results\n - Any connection errors will reset the process\n - etc.", "_____no_output_____" ] ], [ [ "docs = favs_coll.find({\"favorited_by_screen_name\": \"elonmusk\"})\ndoc_lst = list(docs)\nprint(len(doc_lst))", "_____no_output_____" ], [ "docs.count()", "_____no_output_____" ] ], [ [ "## In Class Excercise:\n\nUsing the **twitter_statuses** collection, calculate the **total number of favorites** that **elonmusk** has received", "_____no_output_____" ] ], [ [ "stats_coll = db.twitter_statuses", "_____no_output_____" ], [ "#Room for in-class work\ndocs = stats_coll.find({'user.screen_name': 'elonmusk'})\n\ntot = sum([doc.get('favorite_count', 0) for doc in docs])\n\nprint(tot)", "_____no_output_____" ] ], [ [ "Would we get the same result if we ran this processes against the **twitter_favorites** collection?", "_____no_output_____" ], [ "## Exception to the Rule\nWhile pymongo's pattern system effectively parallels the mongo shell, there is one key exception:\n - The use of the **$** \n \nIn mongo shell the following is valid:\n - **`db.coll_name.find({\"attr\": {$exists: true}})`**\n \nHowever, in pymongo this would be phrased as:\n - **`db.coll_name.find({\"attr\": {\"$exists\": True}})`**\n \nSince **$** isn't a valid value in python, these functions need to be wrapped as strings.", "_____no_output_____" ], [ "## In Class Excercise:\nUsing a mixture of mongo queries and python, determine if the person who has the most favorited tweet (***favorites collection***) in 2021 is a friend of Elon Musks (screen_name - 'elonmusk').\n\nNote: Sorting with pymongo is slightly different - `.sort([(\"field1\", 1), (\"field2\", -1)])`", "_____no_output_____" ] ], [ [ "# Space for work\nfrom datetime import datetime\ndate = datetime(2021, 1, 1)\ndocs = favs_coll.find({\"created_at\": {\"$gte\": date}}).sort([('favorite_count', -1)])\nuser = docs[0].get('user').get('screen_name')\n\nfriends_coll = db.twitter_friends\ndoc = friends_coll.find_one({\n \"$and\": [\n {\"screen_name\": user},\n {\"friend_of_screen_name\": 'elonmusk'}\n ]\n})", "_____no_output_____" ], [ "if doc:\n print(\"friends\")\nelse:\n print(\"not friends\")", "not friends\n" ] ], [ [ "# insert_one() and insert_many()\nThese methods enable us to 
insert one or more documents into the collection\n\n**Do not run the following sections!**\n\n**Question**:\nWill the following cell cause an error?", "_____no_output_____" ] ], [ [ "test_coll = db.test_collection\ndoc = test_coll.find_one({\"test\": \"passed!\"})\nprint(doc)", "None\n" ] ], [ [ "We can insert any valid object by simply calling:\n - **`coll_name.insert_one(doc)`**\n \n*Note: If we do not provide a `_id` field in the document mongo will automatically create one. This means that there is nothing stopping us from inserting duplicate records*", "_____no_output_____" ] ], [ [ "doc = {\"test\": \"passed!\"}\nresult = test_coll.insert_one(doc)", "_____no_output_____" ], [ "result.inserted_id", "_____no_output_____" ] ], [ [ "We can verify on the python side by querying for the record", "_____no_output_____" ] ], [ [ "doc = test_coll.find_one({\"test\": \"passed!\"})\nprint(doc)", "_____no_output_____" ] ], [ [ "We can also insert many documents at once:\n - **`coll_name.insert_many(docs)`**\n - where docs is a list of valid BSON documents\n \n ", "_____no_output_____" ] ], [ [ "#Don't run this - just for demonstration\n\ndocs = [{'test': 'passed-' + str(x)} for x in range(5)]\n\ntest_coll.insert_many(docs)", "_____no_output_____" ] ], [ [ "Verification:", "_____no_output_____" ] ], [ [ "# Since it's a sample collection it only has our inserted docs\ndocs = test_coll.find({})\n\ndocs_lst = list(docs)\n\nfor doc in docs_lst:\n # This will simply help the formatting on the output\n print(doc)\n ", "_____no_output_____" ] ], [ [ "# update_one() and update_many()\nAs discussed in the slides, these methods are used to modify an existing record.\n\nWhile they are a bit more complexed than the other methods, I did want to provide a little example.\n\n**`coll_name.update_one(find_pattern, update_pattern)`**\n 1. We find the documnet(s) that match the find_pattern\n - The find_pattern follows the same structure as the mongo shell and pymongo find methods\n 2. 
We dictate the update pattern for the identified document(s)", "_____no_output_____" ] ], [ [ "# Here we will be adding an attribute that indicates the document has been updated\ntest_coll.update_one({\"test\": \"passed!\"}, {\"$set\": {\"updated\": True}})\n\ndoc = test_coll.find_one({\"test\": \"updated\"})\nprint(doc)", "_____no_output_____" ] ], [ [ "Works the same way for **`coll_name.update_many(find_pattern, update_pattern)`**\n", "_____no_output_____" ] ], [ [ "test_coll.update_many({\"test\": {\"$exists\": True}}, {\"$set\": {\"updated\": True}})", "_____no_output_____" ], [ "docs = test_coll.find({})\nfor doc in docs:\n # This will simply help the formatting on the output\n print(doc)", "_____no_output_____" ] ], [ [ "# delete_one() and delete_many()\nDeleting records works almost the same was as updating, except we only provide a **find_pattern** to the method.\n\n**`coll_name.delete_one(find_pattern)`**", "_____no_output_____" ] ], [ [ "result = test_coll.delete_one({\"test\": \"updated\"})", "_____no_output_____" ] ], [ [ "Now we shouldn't be able to find that document:", "_____no_output_____" ] ], [ [ "doc = test_coll.find_one({\"test\": \"updated\"})\nprint(doc)", "_____no_output_____" ] ], [ [ "We can also inspect the **DeleteResult** from the command:", "_____no_output_____" ] ], [ [ "print(result.raw_result)\n\nprint(result.deleted_count)\n\nprint(result.acknowledged)", "_____no_output_____" ] ], [ [ "Small example using **`coll_name.delete_many()`**", "_____no_output_____" ] ], [ [ "def num_field(field):\n docs = test_coll.find({field: {\"$exists\": True}})\n count = sum(1 for x in docs)\n return(count) \n\n\nprint(num_field('test'))\ntest_coll.delete_many({'test': {\"$exists\": True}})\nprint(num_field('test'))\n ", "_____no_output_____" ] ], [ [ "## In Class Excercise:\n 1. Insert a JSON document into the test_collection with the following structure:\n ```JSON\n {\n \"name\": `your_name`,\n \"favorite_movie\": `movie_name`,\n \"favorite_bands\": [\n `band_name_1`,\n `band_name_2`,\n `etc.`\n ]\n }\n```\n 2. Review the response object and execute a query in python to prove your document has sucessfully been inserted\n 3. Using python, delete your object and verify the results by reviewing the response object and querying the collection.", "_____no_output_____" ] ], [ [ "# Space for work\nresp = test_coll.insert_one(\n {\n \"name\": \"Joel\",\n \"favorite_movie\": 'Big Fish',\n \"favorite_bands\": [\n 'Jon Bellion',\n 'Blink-182'\n ]\n }\n)", "_____no_output_____" ], [ "if resp.acknowledged:\n print(\"Inserted\")", "Inserted\n" ], [ "_id = resp.inserted_id\ntest_coll.find_one({\"_id\": _id})", "_____no_output_____" ], [ "resp = test_coll.delete_one({\"_id\": _id})", "_____no_output_____" ], [ "if resp.acknowledged:\n print(f'{resp.deleted_count} documents removed')", "1 documents removed\n" ] ] ]
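One version-related note on the earlier `docs.count()` call: in more recent PyMongo releases, counting on a cursor is deprecated in favour of counting on the collection itself. A small sketch, assuming the same `favs_coll` collection used above:

```python
# count_documents() accepts the same filter document as find()
n_favs = favs_coll.count_documents({"favorited_by_screen_name": "elonmusk"})
print(n_favs)

# estimated_document_count() uses collection metadata and takes no filter
print(favs_coll.estimated_document_count())
```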
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7a8b3fc90bbce32a167c941e4838b43b07a023d
45,167
ipynb
Jupyter Notebook
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
34f94a04436dc3fa0ded8c353e0f3260f1b3305e
[ "MIT-0" ]
2
2020-08-24T15:07:23.000Z
2020-10-12T16:11:03.000Z
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
34f94a04436dc3fa0ded8c353e0f3260f1b3305e
[ "MIT-0" ]
null
null
null
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
34f94a04436dc3fa0ded8c353e0f3260f1b3305e
[ "MIT-0" ]
null
null
null
47.59431
1,288
0.625922
[ [ [ "## Introduction", "_____no_output_____" ], [ "In this notebook you will learn about the **AR-CNN** - a novel self-correcting, autoregressive model that uses a convolutional neural network in its architecture. By the end of this notebook, you will have trained and ran inference on your very own custom model. This notebook dives into details on the model and assumes a moderate level of understanding of machine learning concepts; as a result, we encourage you to read the introductory [learning capsules](https://console.aws.amazon.com/deepcomposer/home?region=us-east-1#learningCapsules) before going through this notebook.\n\nTraditionally, there have been two primary approaches to generating music with deep neural network-based generative models. One treats music generation as an image generation problem,\nwhile the other treats music as a time series generation problem analogous to autoregressive language modeling. The AR-CNN model uses elements from both approaches to generate music. We view each piece of music as a piano roll (an image representation of music), but generate each note (i.e. pixel) autoregressively.\n\nGenerating images autoregressively has been an area of interest to researchers. \n* Orderless NADE showcased an approach to generating images assuming ordering-invariance in the next pixel to be added. \n* PixelCNN demonstrated with a fixed row by row ordering that an autoregressive approach can generate convincing results for CIFAR-10.\n\nIn the music domain, CocoNET - the algorithm behind Google’s Bach Doodle - adopts an approach similar to orderless NADE, but using Gibbs Sampling to obtain inference results. \n\nOne common theme with autoregressive approaches, however, is that they are very prone to accumulation of error. Our approach is novel in that the model is trained to detect mistakes - including those it made itself - and fix them. We do this by viewing music generation as a series of **edit events** which can be either the addition of a new note or removal of an existing note. An **edit sequence** is a series of **edit events** and every edit sequence can directly correspond to a piece of music. By training our model to view the problem as edit events rather than as an entire image or just the addition of notes, we found that our model is able to offset accumulation of error and generate higher quality music.\n\nNow that you understand the basic theory behind our approach, let’s dive into the practical code. In the next section we discuss and show examples using the piano roll format.", "_____no_output_____" ], [ "## Dependencies\nFirst, let's install and import all of the python packages we will use throughout the tutorial.\n", "_____no_output_____" ] ], [ [ "# The MIT-Zero License\n\n# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so.\n\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\n\n# Create the environment and install required packages\n!pip install -r requirements.txt", "_____no_output_____" ], [ "# Imports\nimport os\nimport glob\nimport json\nimport numpy as np\nimport keras\nfrom enum import Enum\nfrom keras.models import Model\nfrom keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate, BatchNormalization, Dropout\nfrom keras.optimizers import Adam, RMSprop\nfrom keras import backend as K\nfrom random import randrange\nimport random\nimport math\nimport pypianoroll\nfrom utils.midi_utils import play_midi, plot_pianoroll, get_music_metrics, process_pianoroll, process_midi\nfrom constants import Constants\nfrom augmentation import AddAndRemoveAPercentageOfNotes\nfrom data_generator import PianoRollGenerator\nfrom utils.generate_training_plots import GenerateTrainingPlots\nfrom inference import Inference", "_____no_output_____" ] ], [ [ "## Dataset Summary\nIn this tutorial, we use the [`JSB-Chorales-dataset`](http://www-etud.iro.umontreal.ca/~boulanni/icml2012), comprising 229 chorale snippets. A chorale is a hymn that is usually sung with a single voice playing a simple melody and three lower voices providing harmony. In this dataset, these voices are represented by four piano tracks.\n\nIn case, you want to train the ArCnn model on your own dataset, please replace the current **data_dir** path with your directory to midi files.\nLet's listen to a song from this dataset.", "_____no_output_____" ] ], [ [ "# Get The List Of Midi Files\ndata_dir = 'data/*.mid'\nmidi_files = glob.glob(data_dir)\nrandom_midi = randrange(len(midi_files))\nplay_midi(midi_files[random_midi])", "_____no_output_____" ] ], [ [ "## Data Format - Piano Roll", "_____no_output_____" ], [ "For the purpose of this tutorial, we represent music from the JSB-Chorales dataset in the piano roll format.\n\n\nA **piano roll** is a discrete, image-like representation of music which can be viewed as a two-dimensional grid with **\"Time\"** on the horizontal axis and **\"Pitch\"** on the vertical axis. In our use case, the presence of a pixel in any particular cell in this grid indicates if a note was played or not at that time and pitch.\nLet us look at a few piano rolls in our dataset. In this example, a single piano roll track has 128 discrete time steps and 128 pitches.", "_____no_output_____" ], [ "<img src=\"images/pianoroll.png\" alt=\"Dataset summary\" width=\"800\">\n\nArCnn model when comes across midi files with multiple tracks, all the tracks are merged to form a single track and this can be visualized below.\n\n<img src=\"images/merged_pianoroll.png\" alt=\"Merged Piano roll\" width=\"300\">\n\nYou might notice this representation looks similar to an image. While the sequence of notes is often the natural way that people view music, many modern machine learning models instead treat music as images and leverage existing techniques within the computer vision domain. You will see such techniques used in our architecture later in this tutorial.", "_____no_output_____" ], [ "**Why 128 time steps?**\n\nFor the purpose of this tutorial, we sample eight non-empty bars (https://en.wikipedia.org/wiki/Bar_(music)) from each song in the JSB-Chorales dataset. 
A **bar** (or **measure**) is a unit of composition and contains four beats for songs in our particular dataset (our songs are all in 4/4 time) :\n\nWe’ve found that using a resolution of four time steps per beat captures enough of the musical detail in this dataset.\n\nThis yields...\n\n$$ \\frac{4\\;timesteps}{1\\;beat} * \\frac{4\\;beats}{1\\;bar} * \\frac{8\\;bars}{1} = 128\\;timesteps $$\n", "_____no_output_____" ], [ "## Create The Dataset", "_____no_output_____" ] ], [ [ "# Generate Midi File Samples\ndef generate_samples(midi_files, bars, beats_per_bar, beat_resolution, bars_shifted_per_sample):\n \"\"\"\n dataset_files: All files in the dataset\n return: piano roll samples sized to X bars\n \"\"\"\n timesteps_per_nbars = bars * beats_per_bar * beat_resolution\n time_steps_shifted_per_sample = bars_shifted_per_sample * beats_per_bar * beat_resolution\n samples = []\n for midi_file in midi_files:\n pianoroll = process_midi(midi_file, beat_resolution) # Parse the midi file and get the piano roll\n samples.extend(process_pianoroll(pianoroll, time_steps_shifted_per_sample, timesteps_per_nbars))\n return samples", "_____no_output_____" ], [ "# Convert Input Midi Files To Tensors\ndataset_samples = generate_samples(midi_files, Constants.bars, Constants.beats_per_bar, \n Constants.beat_resolution, \n Constants.bars_shifted_per_sample)\n# Shuffle The Dataset\nrandom.shuffle(dataset_samples)", "_____no_output_____" ], [ "# Visualize A Random Piano roll\nrandom_pianoroll = dataset_samples[randrange(len(dataset_samples))]\nplot_pianoroll(pianoroll = random_pianoroll,\n beat_resolution = 4)", "_____no_output_____" ] ], [ [ "## Training Augmentation\n\nThe augmented **input piano roll** is created by adding and removing notes from the original piano roll. By keeping the original piano roll as the target, the model learns what edit events (i.e. notes to add and remove) are needed to recreate from the augmented piano roll. The augmented piano roll can represent a user input melody which has some mistakes / off-tune notes that need to be corrected. In this way, the model learns how to fix/improve the input.\nDuring training, the data generator creates (input, target) pairs by applying augmentations on the piano rolls present in the dataset. In each epoch, different notes are added and removed from original piano rolls to form the augmented piano rolls (as these notes are added/removed in random pixels). This means that we will have a new set of augmented piano rolls for each epoch, and this effectively creates an unlimited input training dataset. \n\nThere can be multiple augmented piano rolls for a single original piano roll, and this can be configured using the parameter - **“samples_per_ground_truth_data_item”** in **constants.py**. Details of adding and removing notes during augmentation are explained below.\n", "_____no_output_____" ], [ "## Removing Notes From The Original Piano Roll\n\nNotes are randomly removed from the original piano roll to form the augmented piano roll. The model learns that it needs to add these notes in the augmented piano roll to recreate the original piano roll. This teaches the model how to fill in missing notes. The percentage of original notes to remove is determined by sampling from a uniform distribution between a lower and upper bound. The default lower bound of notes to remove is 0% as this helps the model learn that it doesn’t need to add notes to the input when the input is already “perfect”. 
The default upper bound is 100% as this helps the model create music when nothing is given as input (the unconditioned music generation case). ", "_____no_output_____" ], [ "![SegmentLocal](images/removenotes.gif \"segment\")", "_____no_output_____" ], [ "## Adding Notes To The Original Piano Roll \n\nNotes are randomly added to the original piano roll to form the augmented piano roll. The model learns that it needs to remove these notes in the augmented piano roll to recreate the original. This teaches the model how to remove unnecessary or off-tune notes. The percentage of extra notes to add is determined by sampling from a uniform distribution between a lower and upper bound (similar to the removing notes case). The default lower bound of notes to add is 0% of the current empty notes. This teaches the model to remove no notes when the input is already “perfect”. The default upper bound of notes to add is 1.5% of the current empty pixels (that do not have a note). This upper percentage may seem small, but since the percentage is out of the total empty pixels (which are usually far greater than the number of notes), the upper bound ends up being sufficiently large. ", "_____no_output_____" ], [ "![SegmentLocal](images/addnotes.gif \"segment\")", "_____no_output_____" ], [ "For both percentage of notes to add and remove, sampling is done from a uniform distribution to ensure that the model sees different potential states equally often. During training, this equal representation helps the model learn how to fill in or remove different numbers of notes, and how to recreate the original from any stage of the input. This is useful during the iterative inference process which we describe in more detail in the Inference section.\n\nBoth adding and removing notes is performed together on each piano roll. The sampling lower bound and sampling upper bound parameters for these can be changed via the parameters - **“sampling_lower_bound_remove”**, **“sampling_upper_bound_remove”**, **“sampling_lower_bound_add”**, and **“sampling_upper_bound_add”**.\n", "_____no_output_____" ] ], [ [ "sampling_lower_bound_remove = 0\nsampling_upper_bound_remove = 100\nsampling_lower_bound_add = 1\nsampling_upper_bound_add = 1.5", "_____no_output_____" ] ], [ [ "## Loss Function\n\nRather than using a traditional loss function such as binary crossentropy, we calculate a custom loss function for our model. In our augmentation we both add extraneous notes and remove existing notes from the piano roll. Our end goal is to have the model pick the next **edit event**(i.e. the next note to add or remove) so that we can take the input piano roll and bring it closer to the original piano roll, also known as the **target piano roll**. Notice that the model could pick any one of the extraneous or missing notes to bring the input piano roll closer to the target piano roll. These extraneous or missing notes is the **symmetric difference** between the input and target piano rolls. We can calculate the symmetric difference as the **exclusive-or** between the input and target piano rolls. Assuming that choosing any of the notes in the symmetric difference is equally likely, the model’s goal is to minimize the difference between its output and a uniform distribution for the probabilities of each of those notes. This difference in distributions can be calculated as the **Kullback–Leibler divergence**. 
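In symbols (our notation, not from the original text): if $S$ is the symmetric difference between the input and target piano rolls, $u_S$ the uniform distribution that puts probability $1/|S|$ on each pixel in $S$, and $\hat{p}$ the softmax over the model's output logits across all pixels, the objective minimized is

$$
\mathcal{L} = D_{\mathrm{KL}}\big(u_S \,\|\, \hat{p}\big) = \sum_{(t,\,n) \in S} \frac{1}{|S|}\,\log\frac{1/|S|}{\hat{p}_{t,n}}.
$$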
Thus our loss function is the difference between the model’s output and the uniform distribution of all pixels/note probabilities in the symmetric difference.", "_____no_output_____" ] ], [ [ "# Customized Loss function\nclass Loss():\n @staticmethod\n def built_in_softmax_kl_loss(target, output):\n '''\n Custom Loss Function\n :param target: ground truth values\n :param output: predicted values\n :return kullback_leibler_divergence loss\n '''\n target = K.flatten(target)\n output = K.flatten(output)\n target = target / K.sum(target)\n output = K.softmax(output)\n return keras.losses.kullback_leibler_divergence(target, output)", "_____no_output_____" ] ], [ [ "## Model Architecture", "_____no_output_____" ], [ "Our Model architecture is adapted from the U-Net architecture (a popular CNN that is used extensively in the computer vision domain), consisting of an **“encoder”** that maps the single track music data (represented as piano roll images) to a relatively lower dimensional “latent space“ and a **”decoder“** that maps the latent space back to multi-track music data.\n\nHere are the inputs provided to the generator:\n\n**Single-track piano roll input**: A single melody track of size (128, 128, 1) => (TimeStep, NumPitches, NumTracks) is provided as the input to the model. \n\nNotice from the figure below that the encoding layers of the model on the left side and decoder layer on on the right side are connected to create a U-shape, thereby giving the name U-Net to this architecture.", "_____no_output_____" ], [ "<img src=\"images/unet.png\" alt=\"Model architecture\" width=\"800\">", "_____no_output_____" ] ], [ [ "# Build The Model\nclass ArCnnModel():\n def __init__(self,\n input_dim,\n num_filters,\n growth_factor,\n num_layers,\n dropout_rate_encoder,\n dropout_rate_decoder,\n batch_norm_encoder,\n batch_norm_decoder,\n learning_rate,\n optimizer_enum,\n pre_trained=None):\n\n # Piano roll Input Dimensions\n self.input_dim = input_dim\n # Number of filters in the convolution\n self.num_filters = num_filters\n # Growth rate of number of filters at each convolution\n self.growth_factor = growth_factor\n # Number of Encoder and Decoder layers\n self.num_layers = num_layers\n # A list of dropout values at each encoder layer\n self.dropout_rate_encoder = dropout_rate_encoder\n # A list of dropout values at each decoder layer\n self.dropout_rate_decoder = dropout_rate_decoder\n # A list of flags to ensure if batch_nromalization at each encoder\n self.batch_norm_encoder = batch_norm_encoder\n # A list of flags to ensure if batch_nromalization at each decoder\n self.batch_norm_decoder = batch_norm_decoder\n # Path to pretrained Model\n self.pre_trained = pre_trained\n # Learning rate for the model\n self.learning_rate = learning_rate\n # Optimizer to use while training the model\n self.optimizer_enum = optimizer_enum\n if self.num_layers < 1:\n raise ValueError(\n \"Number of layers should be greater than or equal to 1\")\n\n # Number of times Conv2D to be performed\n CONV_PER_LAYER = 2\n\n def down_sampling(self,\n layer_input,\n num_filters,\n batch_normalization=False,\n dropout_rate=0):\n '''\n :param: layer_input: Input Layer to the downsampling block\n :param: num_filters: Number of filters\n :param: batch_normalization: Flag to check if batch normalization to be performed\n :param: dropout_rate: To regularize overfitting\n '''\n encoder = layer_input\n for _ in range(self.CONV_PER_LAYER):\n encoder = Conv2D(num_filters, (3, 3),\n activation='relu',\n padding='same')(encoder)\n 
pooling_layer = MaxPooling2D(pool_size=(2, 2))(encoder)\n if dropout_rate:\n pooling_layer = Dropout(dropout_rate)(pooling_layer)\n if batch_normalization:\n pooling_layer = BatchNormalization()(pooling_layer)\n return encoder, pooling_layer\n\n def up_sampling(self,\n layer_input,\n skip_input,\n num_filters,\n batch_normalization=False,\n dropout_rate=0):\n '''\n :param: layer_input: Input Layer to the downsampling block\n :param: num_filters: Number of filters\n :param: batch_normalization: Flag to check if batch normalization to be performed\n :param: dropout_rate: To regularize overfitting\n '''\n decoder = concatenate(\n [UpSampling2D(size=(2, 2))(layer_input), skip_input])\n if batch_normalization:\n decoder = BatchNormalization()(decoder)\n for _ in range(self.CONV_PER_LAYER):\n decoder = Conv2D(num_filters, (3, 3),\n activation='relu',\n padding='same')(decoder)\n\n if dropout_rate:\n decoder = Dropout(dropout_rate)(decoder)\n return decoder\n\n def get_optimizer(self, optimizer_enum, learning_rate):\n '''\n Use either Adam or RMSprop.\n '''\n if OptimizerType.ADAM == optimizer_enum:\n optimizer = Adam(lr=learning_rate)\n elif OptimizerType.RMSPROP == optimizer_enum:\n optimizer = RMSprop(lr=learning_rate)\n else:\n raise Exception(\"Only Adam and RMSProp optimizers are supported\")\n return optimizer\n\n def build_model(self):\n # Create a list of encoder sampling layers\n down_sampling_layers = []\n up_sampling_layers = []\n inputs = Input(self.input_dim)\n layer_input = inputs\n num_filters = self.num_filters\n # encoder samplimg layers\n for layer in range(self.num_layers):\n encoder, pooling_layer = self.down_sampling(\n layer_input=layer_input,\n num_filters=num_filters,\n batch_normalization=self.batch_norm_encoder[layer],\n dropout_rate=self.dropout_rate_encoder[layer])\n\n down_sampling_layers.append(encoder)\n layer_input = pooling_layer # Get the previous pooling_layer_input\n num_filters *= self.growth_factor\n\n # bottle_neck layer\n bottle_neck = Conv2D(num_filters, (3, 3),\n activation='relu',\n padding='same')(pooling_layer)\n bottle_neck = Conv2D(num_filters, (3, 3),\n activation='relu',\n padding='same')(bottle_neck)\n num_filters //= self.growth_factor\n\n # upsampling layers\n decoder = bottle_neck\n for index, layer in enumerate(reversed(down_sampling_layers)):\n decoder = self.up_sampling(\n layer_input=decoder,\n skip_input=layer,\n num_filters=num_filters,\n batch_normalization=self.batch_norm_decoder[index],\n dropout_rate=self.dropout_rate_decoder[index])\n up_sampling_layers.append(decoder)\n num_filters //= self.growth_factor\n\n output = Conv2D(1, 1, activation='linear')(up_sampling_layers[-1])\n model = Model(inputs=inputs, outputs=output)\n optimizer = self.get_optimizer(self.optimizer_enum, self.learning_rate)\n model.compile(optimizer=optimizer, loss=Loss.built_in_softmax_kl_loss)\n if self.pre_trained:\n model.load_weights(self.pre_trained)\n model.summary()\n return model\n\n\nclass OptimizerType(Enum):\n ADAM = \"Adam\"\n RMSPROP = \"RMSprop\"", "_____no_output_____" ] ], [ [ "## Training\nWe split the dataset into training and validation sets. The default training-validation split is 0.9, but this can be changed with the parameter **“training_validation_split”** in **constants.py**.\n\nDuring training, the data generator creates (input, target) pairs by applying augmentations on the piano rolls present in the dataset. Details of the augmentation are described in the previous section. 
In each epoch, different notes will be added and removed from original piano rolls to form the augmented piano rolls (as these notes are added/removed in random spots each time). This means that we will have a new set of augmented piano rolls for each epoch, and this effectively creates an unlimited input training dataset. \n", "_____no_output_____" ] ], [ [ "dataset_size = len(dataset_samples)\ndataset_split = math.floor(dataset_size * Constants.training_validation_split)\nprint(0, dataset_split, dataset_split + 1, dataset_size)\n\ntraining_samples = dataset_samples[0:dataset_split]\nprint(\"training samples length: {}\".format(len(training_samples)))\nvalidation_samples = dataset_samples[dataset_split + 1:dataset_size]\nprint(\"validation samples length: {}\".format(len(validation_samples)))", "_____no_output_____" ] ], [ [ "All the ArCnn model related hyperparameters can be changed from below. For instance, to decrease the model size, change the default value of num_layers from 5, and update the dropout_rate_encoder, dropout_rate_deoder, batch_norm_encoder and batch_norm_decoder lists accordingly.", "_____no_output_____" ] ], [ [ "# Piano Roll Input Dimensions\ninput_dim = (Constants.bars * Constants.beats_per_bar * Constants.beat_resolution, \n Constants.number_of_pitches, \n Constants.number_of_channels)\n# Number of Filters In The Convolution\nnum_filters = 32\n# Growth Rate Of Number Of Filters At Each Convolution\ngrowth_factor = 2\n# Number Of Encoder And Decoder Layers\nnum_layers = 5\n# A List Of Dropout Values At Each Encoder Layer\ndropout_rate_encoder = [0, 0.5, 0.5, 0.5, 0.5]\n# A List Of Dropout Values At Each Decoder Layer\ndropout_rate_decoder = [0.5, 0.5, 0.5, 0.5, 0]\n# A List Of Flags To Ensure If batch_normalization Should be performed At Each Encoder\nbatch_norm_encoder = [True, True, True, True, False]\n# A List Of Flags To Ensure If batch_normalization Should be performed At Each Decoder\nbatch_norm_decoder = [True, True, True, True, False]\n# Path to Pretrained Model If You Want To Initialize Weights Of The Network With The Pretrained Model\npre_trained = False\n# Learning Rate Of The Model\nlearning_rate = 0.001\n# Optimizer To Use While Training The Model\noptimizer_enum = OptimizerType.ADAM\n# Batch Size\nbatch_size = 32\n# Number Of Epochs\nepochs = 500", "_____no_output_____" ], [ "# The Number of Batch Iterations Before A Training Epoch Is Considered Finsihed\nsteps_per_epoch = int(\n len(training_samples) * Constants.samples_per_ground_truth_data_item / int(batch_size))\n\nprint(\"The Total Number Of Steps Per Epoch Are: \"+ str(steps_per_epoch))\n\n# Total Number Of Time Steps\nn_timesteps = Constants.bars * Constants.beat_resolution * Constants.beats_per_bar", "_____no_output_____" ] ], [ [ "## Build The Data Generators\n\nNow let's build the training and validation data generators to create data on the fly during training.", "_____no_output_____" ] ], [ [ "## Training Data Generator\ntraining_data_generator = PianoRollGenerator(sample_list=training_samples,\n sampling_lower_bound_remove = sampling_lower_bound_remove,\n sampling_upper_bound_remove = sampling_upper_bound_remove,\n sampling_lower_bound_add = sampling_lower_bound_add,\n sampling_upper_bound_add = sampling_upper_bound_add,\n batch_size = batch_size,\n bars = Constants.bars,\n samples_per_data_item = Constants.samples_per_ground_truth_data_item,\n beat_resolution = Constants.beat_resolution,\n beats_per_bar = Constants.beats_per_bar,\n number_of_pitches = Constants.number_of_pitches,\n 
number_of_channels = Constants.number_of_channels)", "_____no_output_____" ], [ "# Vaalidation Data Generator\nvalidation_data_generator = PianoRollGenerator(sample_list = validation_samples,\n sampling_lower_bound_remove = sampling_lower_bound_remove,\n sampling_upper_bound_remove = sampling_upper_bound_remove,\n sampling_lower_bound_add = sampling_lower_bound_add,\n sampling_upper_bound_add = sampling_upper_bound_add,\n batch_size = batch_size, \n bars = Constants.bars,\n samples_per_data_item = Constants.samples_per_ground_truth_data_item,\n beat_resolution = Constants.beat_resolution,\n beats_per_bar = Constants.beats_per_bar, \n number_of_pitches = Constants.number_of_pitches,\n number_of_channels = Constants.number_of_channels)", "_____no_output_____" ] ], [ [ "## Create Callbacks for the model. \n1. Create **Training Vs Validation** loss plots during training.\n2. Save model checkpoints based on the **Best Validation Loss**.", "_____no_output_____" ] ], [ [ "# Callback For Loss Plots \nplot_losses = GenerateTrainingPlots()\n## Checkpoint Path\ncheckpoint_filepath = 'checkpoints/-best-model-epoch:{epoch:04d}.hdf5'\n\n# Callback For Saving Model Checkpoints \nmodel_checkpoint_callback = keras.callbacks.ModelCheckpoint(\n filepath=checkpoint_filepath,\n save_weights_only=False,\n monitor='val_loss',\n mode='min',\n save_best_only=True)\n\n# Create A List Of Callbacks\ncallbacks_list = [plot_losses, model_checkpoint_callback]", "_____no_output_____" ], [ "# Create A Model Instance\nMusicModel = ArCnnModel(input_dim = input_dim,\n num_filters = num_filters,\n growth_factor = growth_factor,\n num_layers = num_layers,\n dropout_rate_encoder = dropout_rate_encoder,\n dropout_rate_decoder = dropout_rate_decoder,\n batch_norm_encoder = batch_norm_encoder,\n batch_norm_decoder = batch_norm_decoder,\n pre_trained = pre_trained,\n learning_rate = learning_rate,\n optimizer_enum = optimizer_enum)", "_____no_output_____" ], [ "model = MusicModel.build_model()", "_____no_output_____" ], [ "# Start Training\nhistory = model.fit_generator(training_data_generator,\n validation_data = validation_data_generator,\n steps_per_epoch = steps_per_epoch,\n epochs = epochs,\n callbacks = callbacks_list)", "_____no_output_____" ] ], [ [ "## Inference ", "_____no_output_____" ], [ "## Generating Bach Like Enhanced Melody For Custom Input\n\nCongratulations! You have trained your very own AutoRegressive model to generate music. Let us see how our music model performs on a custom input.\n\nBefore loading the model, we need to load inference related parameters. After that, we load our pretrained model and generate a new melody based on **\"Twinkle Twinkle Little Star\"**.", "_____no_output_____" ], [ "Inference is done by sampling from the model’s predicted probability distribution across the entire piano roll. It is an iterative process, and a note is added or removed to the input in every iteration via sampling. After adding or removing a note to the input in an iteration, this new input is fed back into the model. The model is trained to both remove and add notes, and it can improve the input melody, and can correct mistakes that it may have made in earlier iterations as well.\n\nYou can change certain inference parameters to observe the differences in the generated music as described below. \n* **Sampling iterations** - his specifies the number of iterations during inference. 
Larger number of sampling iterations can ensure that the model has had enough time to improve the input melody and also correct any mistakes it may have made along the way. Beyond a certain number of sampling iterations, it can be observed that the model keeps adding notes and then removing those notes in a subsequent iterations or vice-versa. This implies that convergence has been reached.\n* **Maximum Notes to Remove** - This specifies the maximum percentage of notes of the original input melody that can be removed during inference. If you choose this to be 0%, then none of your original melody will be removed during inference. \n* **Maximum Notes to Add** - This specifies the maximum number of new notes to add to the original input melody during inference.With the “Maximum Notes to Remove” and “Maximum Notes to Add”, you can choose the degree to which you would like to preserve your original input melody. However, by restricting the model’s ability to add or remove notes, you may risk losing some music quality. \n* **Creativity**- The output probability distribution generated by the model is obtained via softmax, and you can change the temperature for softmax to get different levels of “creativity”. By using lower temperatures, the output probability distribution would have more distinct peaks, and the model would be more confident in its predictions. By using higher temperatures, the output probability distribution would be flatter, and the model would have a higher chance of choosing less likely notes to add/remove. By increasing the temperature, you can give the model the ability to take more risks, and increase its “creativity”.\n", "_____no_output_____" ], [ "Let us first load our last saved or pretrained checkpoint and inference related parameters. To modify the inference related parameters, please navigate to **inference_parameters.json** and change the values in the json file.", "_____no_output_____" ] ], [ [ "# Load The Inference Related Parameters\nwith open('inference_parameters.json') as json_file:\n inference_params = json.load(json_file)", "_____no_output_____" ], [ "# Create An Inference Object\ninference_obj = Inference()\n# Load The Checkpoint\ninference_obj.load_model('checkpoints/-best-model-epoch:0001.hdf5')\n", "_____no_output_____" ] ], [ [ "Please navigate to **sample_inputs** directory to find different input melodies we have already created for you to help generating novel compositions.\n\nTo download the novel compositions, you have created using the model we just trained, please navigate to **outputs** directory and download the midi file.", "_____no_output_____" ] ], [ [ "# Generate The Composition\ninference_obj.generate_composition('sample_inputs/twinkle_twinkle.midi', inference_params)", "_____no_output_____" ] ], [ [ "## Now, Let's Play The Generated Output And Listen To It", "_____no_output_____" ] ], [ [ "play_midi(\"outputs/output_0.mid\")", "_____no_output_____" ] ], [ [ "## Evaluate Results\n\nNow that we have finished generating our enhanced melody, let's find out how we did. We will analyze our output using below three metrics and compare them with the sample input:\n\n- **Empty Bar Rate:** The ratio of empty bars to total number of bars.\n- **Pitch Histogram Distance:** A metric that captures the distribution and position of pitches.\n- **In Scale Ratio:** Ratio of the number of notes that are in C major key, which is a common key found in music, to the total number of notes. 
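To make the last metric concrete, here is a rough sketch of how an in-scale ratio could be computed from a piano roll array; this is our own illustration and is independent of the `get_music_metrics` utility used below:

```python
import numpy as np

C_MAJOR_PITCH_CLASSES = {0, 2, 4, 5, 7, 9, 11}   # C, D, E, F, G, A, B

def in_scale_ratio(pianoroll):
    """pianoroll: (timesteps, 128) array; nonzero entries are played notes."""
    _, pitches = np.nonzero(pianoroll)
    if pitches.size == 0:
        return 0.0
    in_scale = np.isin(pitches % 12, list(C_MAJOR_PITCH_CLASSES)).sum()
    return in_scale / pitches.size
```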
\n\nAfter computing the metrics, let's also visualize the input piano roll and compare it with the generated output piano roll to notice the notes added.\n", "_____no_output_____" ] ], [ [ "# Input Midi Metrics:\nprint(\"The input midi metrics are:\")\nget_music_metrics(\"sample_inputs/twinkle_twinkle.midi\", beat_resolution=4)\n\nprint(\"\\n\")\n# Generated Output Midi Metrics:\nprint(\"The generated output midi metrics are:\")\nget_music_metrics(\"outputs/output_0.mid\", beat_resolution=4)", "_____no_output_____" ], [ "# Convert The Input and Generated Midi To Tensors\ninput_pianoroll = process_midi(\"sample_inputs/twinkle_twinkle.midi\", beat_resolution=4)\noutput_pianoroll = process_midi(\"outputs/output_0.mid\", beat_resolution=4)", "_____no_output_____" ], [ "# Plot Input Piano Roll\nplot_pianoroll(input_pianoroll, beat_resolution=4)", "_____no_output_____" ], [ "# Plot Output Piano Roll\nplot_pianoroll(output_pianoroll, beat_resolution=4)", "_____no_output_____" ] ], [ [ "## Appendix", "_____no_output_____" ], [ "## Open Source Implementations\nFor more open-source implementations of generative models for music, check out:\n\n- [MuseNet](https://openai.com/blog/musenet/): Uses GPT2, a large-scale Transformer model to predict the next token in sequence\n- [Jukebox](https://openai.com/blog/jukebox/): Uses various neural nets to generate music, including rudimentary singing, as raw audio in a variety of genres and artist styles.\n- [Music Transformer](https://github.com/tensorflow/magenta/tree/master/magenta/models/score2perf): Uses transformers to generate music!", "_____no_output_____" ], [ "## References", "_____no_output_____" ], [ "<a id='references'></a>\n1. [MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment.](https://arxiv.org/abs/1709.06298)\n2. [MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation.](https://arxiv.org/abs/1703.10847)\n3. [A Hierarchical Recurrent Neural Network for Symbolic Melody Generation.](https://pubmed.ncbi.nlm.nih.gov/31796422/)\n4. [Counterpoint by Convolution](https://arxiv.org/abs/1903.07227)\n5. [MusicTransformer:Generating Music With Long-Term Structure](https://arxiv.org/abs/1809.04281)\n6. [Conditional Image Generation with PixelCNN Decoders](https://arxiv.org/abs/1606.05328)\n7. [Neural Autoregressive Distribution Estimation](https://arxiv.org/abs/1605.02226)\n", "_____no_output_____" ] ] ]
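The softmax "creativity" control described in the inference notes of the notebook above can be illustrated with a small, self-contained sketch. This is not the AR-CNN implementation: the logits array, its 32x128 shape, and the temperature values are hypothetical stand-ins, used only to show how dividing scores by a temperature before the softmax sharpens (low temperature) or flattens (high temperature) the distribution that an add/remove-note sampler draws from.

```python
import numpy as np

def sample_edit(logits, temperature=1.0, rng=None):
    """Sample one (time-step, pitch) cell to edit from a grid of model scores.

    Lower temperature -> sharper distribution -> the most confident edits dominate.
    Higher temperature -> flatter distribution -> riskier, more "creative" picks.
    """
    rng = np.random.default_rng() if rng is None else rng
    scaled = logits.ravel() / temperature
    scaled -= scaled.max()                      # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    flat_index = rng.choice(probs.size, p=probs)
    return np.unravel_index(flat_index, logits.shape)

# Hypothetical 32-step x 128-pitch grid of scores (a stand-in for model output).
rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 128))

for temperature in (0.5, 1.0, 2.0):
    step, pitch = sample_edit(logits, temperature, rng)
    print(f"temperature={temperature}: edit cell at step={step}, pitch={pitch}")
```

Run repeatedly, low-temperature draws concentrate on the few highest-scoring cells, while high-temperature draws spread across the whole grid.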
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
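The evaluation metrics quoted in the notebook above (empty bar rate, in-scale ratio) can also be computed directly from a boolean piano-roll array. The helpers below are a minimal sketch, not the notebook's `get_music_metrics` utility; the (time steps x 128 pitches) layout, the 16-steps-per-bar assumption, and the use of C major pitch classes are illustrative assumptions.

```python
import numpy as np

C_MAJOR_PITCH_CLASSES = [0, 2, 4, 5, 7, 9, 11]   # C D E F G A B

def empty_bar_rate(pianoroll, steps_per_bar=16):
    """Fraction of bars with no active notes; pianoroll is (time_steps, 128) booleans."""
    n_bars = pianoroll.shape[0] // steps_per_bar
    bars = pianoroll[:n_bars * steps_per_bar].reshape(n_bars, steps_per_bar, -1)
    return float(np.mean(bars.sum(axis=(1, 2)) == 0))

def in_scale_ratio(pianoroll):
    """Share of active notes whose pitch class belongs to C major."""
    _, pitches = np.nonzero(pianoroll)
    if pitches.size == 0:
        return 0.0
    return float(np.isin(pitches % 12, C_MAJOR_PITCH_CLASSES).mean())

# Two 16-step bars: a C4 (in scale) and an F#4 (out of scale); the second bar is empty.
roll = np.zeros((32, 128), dtype=bool)
roll[0, 60] = True
roll[4, 66] = True
print(empty_bar_rate(roll), in_scale_ratio(roll))   # 0.5 0.5
```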
e7a8bc59cce9a20326c34c074cf29b702d86d0dd
339,069
ipynb
Jupyter Notebook
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
38286379c91f80c72d09e13424f90d3333d43096
[ "MIT" ]
7
2016-01-14T20:33:30.000Z
2020-07-10T14:15:35.000Z
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
38286379c91f80c72d09e13424f90d3333d43096
[ "MIT" ]
null
null
null
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
38286379c91f80c72d09e13424f90d3333d43096
[ "MIT" ]
null
null
null
280.686258
64,450
0.904716
[ [ [ "# Redshift fitting\n\nJavier Sánchez, 06/09/2016", "_____no_output_____" ], [ "A big part of the astrophysical and cosmological information comes from geometry, i.e., we can infer a lot of properties of our observable Universe using the positions of stars, galaxies and other objects. The sky appears to us as a 2D projection of our 3D Universe. The angular position can be inferred straightforwardly, however, how far away is one object from us given its angular coordinates is quite challenging and encodes very valuable information.\n\nA simple way to compute the distance between us and a light source is by measuring its redshift $z$. If the source emits at wavelength $\\lambda_{em}$ and is observed by us at wavelength $\\lambda_{obs}$, $z$ is given by:\n\n$$z = \\lambda_{obs}/\\lambda_{em}-1$$\n\nWe saw this in Chapter 8.", "_____no_output_____" ] ], [ [ "%pylab inline", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "import time", "_____no_output_____" ], [ "import os\nimport urllib2\n\nimport numpy as np\nimport pylab as pl\nfrom matplotlib.patches import Arrow\n\nREFSPEC_URL = 'http://www.astro.washington.edu/users/ivezic/DMbook/data/1732526_nic_002.ascii'\nURL = 'http://www.sdss.org/dr7/instruments/imager/filters/%s.dat'\n\ndef fetch_filter(filt):\n assert filt in 'ugriz'\n url = URL % filt\n \n if not os.path.exists('downloads'):\n os.makedirs('downloads')\n\n loc = os.path.join('downloads', '%s.dat' % filt)\n if not os.path.exists(loc):\n print \"downloading from %s\" % url\n F = urllib2.urlopen(url)\n open(loc, 'w').write(F.read())\n\n F = open(loc)\n \n data = np.loadtxt(F)\n return data\n\ndef fetch_vega_spectrum():\n if not os.path.exists('downloads'):\n os.makedirs('downloads')\n\n refspec_file = os.path.join('downloads', REFSPEC_URL.split('/')[-1])\n\n if not os.path.exists(refspec_file):\n print \"downloading from %s\" % REFSPEC_URL\n F = urllib2.urlopen(REFSPEC_URL)\n open(refspec_file, 'w').write(F.read())\n\n F = open(refspec_file)\n\n data = np.loadtxt(F)\n return data\n\n\nXref = fetch_vega_spectrum()\nXref[:, 1] /= 2.1 * Xref[:, 1].max()\n\n#----------------------------------------------------------------------\n# Plot filters in color with a single spectrum\npl.figure()\npl.plot(Xref[:, 0], Xref[:, 1], '-k', lw=2)\n\nfor f,c in zip('ugriz', 'bgrmk'):\n X = fetch_filter(f)\n pl.fill(X[:, 0], X[:, 1], ec=c, fc=c, alpha=0.4)\n\nkwargs = dict(fontsize=20, ha='center', va='center', alpha=0.5)\npl.text(3500, 0.02, 'u', color='b', **kwargs)\npl.text(4600, 0.02, 'g', color='g', **kwargs)\npl.text(6100, 0.02, 'r', color='r', **kwargs)\npl.text(7500, 0.02, 'i', color='m', **kwargs)\npl.text(8800, 0.02, 'z', color='k', **kwargs)\n\npl.xlim(3000, 11000)\n\npl.title('SDSS Filters and Reference Spectrum')\npl.xlabel('Wavelength (Angstroms)')\npl.ylabel('normalized flux / filter transmission')\n\n#----------------------------------------------------------------------\n# Plot filters in gray with several redshifted spectra\npl.figure()\n\nredshifts = [0.0, 0.4, 0.8]\ncolors = 'bgr'\n\nfor z, c in zip(redshifts, colors):\n pl.plot((1. 
+ z) * Xref[:, 0], Xref[:, 1], color=c)\n\npl.gca().add_patch(Arrow(4200, 0.47, 1300, 0, lw=0, width=0.05, color='r'))\npl.gca().add_patch(Arrow(5800, 0.47, 1250, 0, lw=0, width=0.05, color='r'))\n\npl.text(3800, 0.49, 'z = 0.0', fontsize=14, color=colors[0])\npl.text(5500, 0.49, 'z = 0.4', fontsize=14, color=colors[1])\npl.text(7300, 0.49, 'z = 0.8', fontsize=14, color=colors[2])\n\nfor f in 'ugriz':\n    X = fetch_filter(f)\n    pl.fill(X[:, 0], X[:, 1], ec='k', fc='k', alpha=0.2)\n\nkwargs = dict(fontsize=20, color='gray', ha='center', va='center')\npl.text(3500, 0.02, 'u', **kwargs)\npl.text(4600, 0.02, 'g', **kwargs)\npl.text(6100, 0.02, 'r', **kwargs)\npl.text(7500, 0.02, 'i', **kwargs)\npl.text(8800, 0.02, 'z', **kwargs)\n\npl.xlim(3000, 11000)\npl.ylim(0, 0.55)\n\npl.title('Redshifting of a Spectrum')\npl.xlabel('Observed Wavelength (Angstroms)')\npl.ylabel('normalized flux / filter transmission')\n\npl.show()", "_____no_output_____" ] ], [ [ "### Idea: Measure light at different wavelengths from the sources to determine their redshift", "_____no_output_____" ], [ "### Spectra", "_____no_output_____" ], [ "If we measure the spectra at different wavelengths with a certain resolution we can compare with an object with the same characteristics and a known redshift and compute it.", "_____no_output_____" ], [ "### Photometry", "_____no_output_____" ], [ "Instead of using a spectrograph, we use filters and take images of the objects to build a low-resolution spectrum and infer the redshift.", "_____no_output_____" ], [ "Photometry has the advantage of speed: we can measure more objects simultaneously. The problem is that these objects have very low resolution spectra (5 points across the 3000-10000 Angstrom range for SDSS, DES and LSST). Spectroscopy gives a much higher resolution ($\\lambda/\\Delta \\lambda$ ~ 1500 in BOSS at 3800 Angstroms and 2500 at 9000 Angstroms $\\Rightarrow$ ~ 2.5/3.6 Angstrom pixels; 1 Angstrom pixels for DESI); the problem is that it requires more time.", "_____no_output_____" ], [ "## Redshift fitting techniques", "_____no_output_____" ], [ "There are a lot of different options to retrieve the redshift information from an astronomical source. All of them have their advantages and disadvantages and depend on the nature of the data.\n\nFor spectra, the most common technique is to compare with a collection of spectral templates and minimize a $\\chi^{2}$. 
For example, in SDSS-III/BOSS a PCA analysis is performed and then a $\\chi^{2}$ minimization of the principal components (http://www.sdss.org/dr12/algorithms/redshifts/ -- http://arxiv.org/pdf/1207.7326v2.pdf).\n\nOther approaches:\n \n * Cross-correlation with templates\n * Emission line fitting\n * Pure $\\chi^{2}$\n * Bayesian (bayez)\n \nFor photometric redshifts there is a wider variety of methods given that the number of inputs is lower and thus, a ML approach is easier to treat:\n \n * Artificial Neural Networks [Multilayer perceptron] (ANNz/Skynet)\n * Random forests/Boosted Decision Trees (TPZ/ArborZ)\n * Bayesian (BPZ)\n * $\\chi^{2}$ minimization using templates (LePhare)\n * Nearest neighbors (KNN)\n * Gaussian processes (http://arxiv.org/pdf/1505.05489v3.pdf)\n * Linear regression/polynomial regression (outdated)", "_____no_output_____" ], [ "## Examples", "_____no_output_____" ], [ "### Linear regression", "_____no_output_____" ] ], [ [ "\"\"\"\nPhotometric Redshifts via Linear Regression\n-------------------------------------------\nLinear Regression for photometric redshifts\nWe could use sklearn.linear_model.LinearRegression, but to be more\ntransparent, we'll do it by hand using linear algebra.\n\"\"\"\n# Author: Jake VanderPlas\n# License: BSD\n# The figure produced by this code is published in the textbook\n# \"Statistics, Data Mining, and Machine Learning in Astronomy\" (2013)\n# For more information, see http://astroML.github.com\n# To report a bug or issue, use the following forum:\n# https://groups.google.com/forum/#!forum/astroml-general\nimport itertools\n\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics.pairwise import euclidean_distances\n\nfrom astroML.datasets import fetch_sdss_specgals\n\n#----------------------------------------------------------------------\n# This function adjusts matplotlib settings for a uniform feel in the textbook.\n# Note that with usetex=True, fonts are rendered with LaTeX. This may\n# result in an error if LaTeX is not installed on your system. 
In that case,\n# you can set usetex to False.\nfrom astroML.plotting import setup_text_plots\nsetup_text_plots(fontsize=8, usetex=True)\n\nnp.random.seed(0)\n\ndata = fetch_sdss_specgals()\n\n# put magnitudes in a matrix\n# with a constant (for the intercept) at position zero\nmag = np.vstack([np.ones(data.shape)]\n + [data['modelMag_%s' % f] for f in 'ugriz']).T\nz = data['z']\n\n# train on ~60,000 points\nmag_train = mag[::10]\nz_train = z[::10]\n\n# test on ~6,000 distinct points\nmag_test = mag[1::100]\nz_test = z[1::100]\n\n\ndef plot_results(z, z_fit, plotlabel=None,\n xlabel=True, ylabel=True):\n plt.scatter(z, z_fit, s=1, lw=0, c='k')\n plt.plot([-0.1, 0.4], [-0.1, 0.4], ':k')\n plt.xlim(-0.05, 0.4001)\n plt.ylim(-0.05, 0.4001)\n plt.gca().xaxis.set_major_locator(plt.MultipleLocator(0.1))\n plt.gca().yaxis.set_major_locator(plt.MultipleLocator(0.1))\n\n if plotlabel:\n plt.text(0.03, 0.97, plotlabel,\n ha='left', va='top', transform=ax.transAxes)\n\n if xlabel:\n plt.xlabel(r'$\\rm z_{true}$')\n else:\n plt.gca().xaxis.set_major_formatter(plt.NullFormatter())\n\n if ylabel:\n plt.ylabel(r'$\\rm z_{fit}$')\n else:\n plt.gca().yaxis.set_major_formatter(plt.NullFormatter())\n\n\ndef combinations_with_replacement(iterable, r):\n pool = tuple(iterable)\n n = len(pool)\n for indices in itertools.product(range(n), repeat=r):\n if sorted(indices) == list(indices):\n yield tuple(pool[i] for i in indices)\n\n\ndef poly_features(X, p):\n \"\"\"Compute polynomial features\n Parameters\n ----------\n X: array_like\n shape (n_samples, n_features)\n p: int\n degree of polynomial\n Returns\n -------\n X_p: array\n polynomial feature matrix\n \"\"\"\n X = np.asarray(X)\n N, D = X.shape\n ind = list(combinations_with_replacement(range(D), p))\n X_poly = np.empty((X.shape[0], len(ind)))\n\n for i in range(len(ind)):\n X_poly[:, i] = X[:, ind[i]].prod(1)\n\n return X_poly\n\n\ndef gaussian_RBF_features(X, centers, widths):\n \"\"\"Compute gaussian Radial Basis Function features\n Parameters\n ----------\n X: array_like\n shape (n_samples, n_features)\n centers: array_like\n shape (n_centers, n_features)\n widths: array_like\n shape (n_centers, n_features) or (n_centers,)\n Returns\n -------\n X_RBF: array\n RBF feature matrix, shape=(n_samples, n_centers)\n \"\"\"\n X, centers, widths = map(np.asarray, (X, centers, widths))\n if widths.ndim == 1:\n widths = widths[:, np.newaxis]\n return np.exp(-0.5 * ((X[:, np.newaxis, :]\n - centers) / widths) ** 2).sum(-1)\n\nplt.figure(figsize=(10, 10))\nplt.subplots_adjust(hspace=0.05, wspace=0.05,\n left=0.1, right=0.95,\n bottom=0.1, top=0.95)\n\n#----------------------------------------------------------------------\n# first do a simple linear regression between the r-band and redshift,\n# ignoring uncertainties\nax = plt.subplot(221)\nX_train = mag_train[:, [0, 3]]\nX_test = mag_test[:, [0, 3]]\nz_fit = LinearRegression().fit(X_train, z_train).predict(X_test)\nplot_results(z_test, z_fit,\n plotlabel='Linear Regression:\\n r-band',\n xlabel=False)\n\n#----------------------------------------------------------------------\n# next do a linear regression with all bands\nax = plt.subplot(222)\nz_fit = LinearRegression().fit(mag_train, z_train).predict(mag_test)\nplot_results(z_test, z_fit, plotlabel=\"Linear Regression:\\n ugriz bands\",\n xlabel=False, ylabel=False)\n\n#----------------------------------------------------------------------\n# next do a 3rd-order polynomial regression with all bands\nax = plt.subplot(223)\nX_train = poly_features(mag_train, 
3)\nX_test = poly_features(mag_test, 3)\nz_fit = LinearRegression().fit(X_train, z_train).predict(X_test)\nplot_results(z_test, z_fit, plotlabel=\"3rd order Polynomial\\nRegression\")\n\n#----------------------------------------------------------------------\n# next do a radial basis function regression with all bands\nax = plt.subplot(224)\n\n# remove bias term\nmag = mag[:, 1:]\nmag_train = mag_train[:, 1:]\nmag_test = mag_test[:, 1:]\n\ncenters = mag[np.random.randint(mag.shape[0], size=100)]\ncenters_dist = euclidean_distances(centers, centers, squared=True)\nwidths = np.sqrt(centers_dist[:, :10].mean(1))\n\nX_train = gaussian_RBF_features(mag_train, centers, widths)\nX_test = gaussian_RBF_features(mag_test, centers, widths)\nz_fit = LinearRegression().fit(X_train, z_train).predict(X_test)\nplot_results(z_test, z_fit, plotlabel=\"Gaussian Basis Function\\nRegression\",\n ylabel=False)\n\nplt.show()", "/Users/javiers/anaconda/lib/python2.7/site-packages/scipy/linalg/basic.py:884: RuntimeWarning: internal gelsd driver lwork query error, required iwork dimension not returned. This is likely the result of LAPACK bug 0038, fixed in LAPACK 3.2.2 (released July 21, 2010). Falling back to 'gelss' driver.\n warnings.warn(mesg, RuntimeWarning)\n" ] ], [ [ "### Decision trees", "_____no_output_____" ] ], [ [ "\"\"\"\nPhotometric Redshifts by Decision Trees\n---------------------------------------\nFigure 9.14\nPhotometric redshift estimation using decision-tree regression. The data is\ndescribed in Section 1.5.5. The training set consists of u, g , r, i, z\nmagnitudes of 60,000 galaxies from the SDSS spectroscopic sample.\nCross-validation is performed on an additional 6000 galaxies. The left panel\nshows training error and cross-validation error as a function of the maximum\ndepth of the tree. For a number of nodes N > 13, overfitting is evident.\n\"\"\"\n# Author: Jake VanderPlas\n# License: BSD\n# The figure produced by this code is published in the textbook\n# \"Statistics, Data Mining, and Machine Learning in Astronomy\" (2013)\n# For more information, see http://astroML.github.com\n# To report a bug or issue, use the following forum:\n# https://groups.google.com/forum/#!forum/astroml-general\n\nfrom sklearn.tree import DecisionTreeRegressor\nfrom astroML.datasets import fetch_sdss_specgals\n\n#----------------------------------------------------------------------\n# This function adjusts matplotlib settings for a uniform feel in the textbook.\n# Note that with usetex=True, fonts are rendered with LaTeX. This may\n# result in an error if LaTeX is not installed on your system. 
In that case,\n# you can set usetex to False.\nfrom astroML.plotting import setup_text_plots\nsetup_text_plots(fontsize=8, usetex=True)\n\n#------------------------------------------------------------\n# Fetch data and prepare it for the computation\ndata = fetch_sdss_specgals()\n\n# put magnitudes in a matrix\nmag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T\nz = data['z']\n\n# train on ~60,000 points\nmag_train = mag[::10]\nz_train = z[::10]\n\n# test on ~6,000 separate points\nmag_test = mag[1::100]\nz_test = z[1::100]\n\n#------------------------------------------------------------\n# Compute the cross-validation scores for several tree depths\ndepth = np.arange(1, 21)\nrms_test = np.zeros(len(depth))\nrms_train = np.zeros(len(depth))\ni_best = 0\nz_fit_best = None\n\nfor i, d in enumerate(depth):\n clf = DecisionTreeRegressor(max_depth=d, random_state=0)\n clf.fit(mag_train, z_train)\n\n z_fit_train = clf.predict(mag_train)\n z_fit = clf.predict(mag_test)\n rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2))\n rms_test[i] = np.mean(np.sqrt((z_fit - z_test) ** 2))\n\n if rms_test[i] <= rms_test[i_best]:\n i_best = i\n z_fit_best = z_fit\n\nbest_depth = depth[i_best]\n\n#------------------------------------------------------------\n# Plot the results\nfig = plt.figure(figsize=(10, 5))\nfig.subplots_adjust(wspace=0.25,\n left=0.1, right=0.95,\n bottom=0.15, top=0.9)\n\n# first panel: cross-validation\nax = fig.add_subplot(121)\nax.plot(depth, rms_test, '-k', label='cross-validation')\nax.plot(depth, rms_train, '--k', label='training set')\nax.set_xlabel('depth of tree')\nax.set_ylabel('rms error')\nax.yaxis.set_major_locator(plt.MultipleLocator(0.01))\nax.set_xlim(0, 21)\nax.set_ylim(0.009, 0.04)\nax.legend(loc=1)\n\n# second panel: best-fit results\nax = fig.add_subplot(122)\nax.scatter(z_test, z_fit_best, s=1, lw=0, c='k')\nax.plot([-0.1, 0.4], [-0.1, 0.4], ':k')\nax.text(0.04, 0.96, \"depth = %i\\nrms = %.3f\" % (best_depth, rms_test[i_best]),\n ha='left', va='top', transform=ax.transAxes)\nax.set_xlabel(r'$z_{\\rm true}$')\nax.set_ylabel(r'$z_{\\rm fit}$')\n\nax.set_xlim(-0.02, 0.4001)\nax.set_ylim(-0.02, 0.4001)\nax.xaxis.set_major_locator(plt.MultipleLocator(0.1))\nax.yaxis.set_major_locator(plt.MultipleLocator(0.1))\n\nplt.show()", "_____no_output_____" ] ], [ [ "### Boosted decision trees", "_____no_output_____" ] ], [ [ "\"\"\"\nPhotometric Redshifts by Random Forests\n---------------------------------------\nFigure 9.16\nPhotometric redshift estimation using gradient-boosted decision trees, with 100\nboosting steps. As with random forests (figure 9.15), boosting allows for\nimproved results over the single tree case (figure 9.14). Note, however, that\nthe computational cost of boosted decision trees is such that it is\ncomputationally prohibitive to use very deep trees. 
By stringing together a\nlarge number of very naive estimators, boosted trees improve on the\nunderfitting of each individual estimator.\n\"\"\"\n# Author: Jake VanderPlas\n# License: BSD\n# The figure produced by this code is published in the textbook\n# \"Statistics, Data Mining, and Machine Learning in Astronomy\" (2013)\n# For more information, see http://astroML.github.com\n# To report a bug or issue, use the following forum:\n# https://groups.google.com/forum/#!forum/astroml-general\n\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom astroML.datasets import fetch_sdss_specgals\nfrom astroML.decorators import pickle_results\n\n\n#----------------------------------------------------------------------\n# This function adjusts matplotlib settings for a uniform feel in the textbook.\n# Note that with usetex=True, fonts are rendered with LaTeX. This may\n# result in an error if LaTeX is not installed on your system. In that case,\n# you can set usetex to False.\nfrom astroML.plotting import setup_text_plots\nsetup_text_plots(fontsize=8, usetex=True)\n\n#------------------------------------------------------------\n# Fetch and prepare the data\ndata = fetch_sdss_specgals()\n\n# put magnitudes in a matrix\nmag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T\nz = data['z']\n\n# train on ~60,000 points\nmag_train = mag[::10]\nz_train = z[::10]\n\n# test on ~6,000 distinct points\nmag_test = mag[1::100]\nz_test = z[1::100]\n\n\n#------------------------------------------------------------\n# Compute the results\n# This is a long computation, so we'll save the results to a pickle.\n@pickle_results('photoz_boosting.pkl')\ndef compute_photoz_forest(N_boosts):\n rms_test = np.zeros(len(N_boosts))\n rms_train = np.zeros(len(N_boosts))\n i_best = 0\n z_fit_best = None\n\n for i, Nb in enumerate(N_boosts):\n try:\n # older versions of scikit-learn\n clf = GradientBoostingRegressor(n_estimators=Nb, learn_rate=0.1,\n max_depth=3, random_state=0)\n except TypeError:\n clf = GradientBoostingRegressor(n_estimators=Nb, learning_rate=0.1,\n max_depth=3, random_state=0)\n clf.fit(mag_train, z_train)\n\n z_fit_train = clf.predict(mag_train)\n z_fit = clf.predict(mag_test)\n rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2))\n rms_test[i] = np.mean(np.sqrt((z_fit - z_test) ** 2))\n\n if rms_test[i] <= rms_test[i_best]:\n i_best = i\n z_fit_best = z_fit\n\n return rms_test, rms_train, i_best, z_fit_best\n\nN_boosts = (10, 100, 200, 300, 400, 500)\nrms_test, rms_train, i_best, z_fit_best = compute_photoz_forest(N_boosts)\nbest_N = N_boosts[i_best]\n\n#------------------------------------------------------------\n# Plot the results\nfig = plt.figure(figsize=(10, 5))\nfig.subplots_adjust(wspace=0.25,\n left=0.1, right=0.95,\n bottom=0.15, top=0.9)\n\n# left panel: plot cross-validation results\nax = fig.add_subplot(121)\nax.plot(N_boosts, rms_test, '-k', label='cross-validation')\nax.plot(N_boosts, rms_train, '--k', label='training set')\nax.legend(loc=1)\n\nax.set_xlabel('number of boosts')\nax.set_ylabel('rms error')\nax.set_xlim(0, 510)\nax.set_ylim(0.009, 0.032)\nax.yaxis.set_major_locator(plt.MultipleLocator(0.01))\n\nax.text(0.03, 0.03, \"Tree depth: 3\",\n ha='left', va='bottom', transform=ax.transAxes)\n\n# right panel: plot best fit\nax = fig.add_subplot(122)\nax.scatter(z_test, z_fit_best, s=1, lw=0, c='k')\nax.plot([-0.1, 0.4], [-0.1, 0.4], ':k')\nax.text(0.04, 0.96, \"N = %i\\nrms = %.3f\" % (best_N, rms_test[i_best]),\n ha='left', va='top', 
transform=ax.transAxes)\n\nax.set_xlabel(r'$z_{\\rm true}$')\nax.set_ylabel(r'$z_{\\rm fit}$')\n\nax.set_xlim(-0.02, 0.4001)\nax.set_ylim(-0.02, 0.4001)\nax.xaxis.set_major_locator(plt.MultipleLocator(0.1))\nax.yaxis.set_major_locator(plt.MultipleLocator(0.1))\n\nplt.show()", "@pickle_results: using precomputed results from 'photoz_boosting.pkl'\n" ] ], [ [ "### KNN", "_____no_output_____" ] ], [ [ "\"\"\"\nK-Neighbors for Photometric Redshifts\n-------------------------------------\nEstimate redshifts from the colors of sdss galaxies and quasars.\nThis uses colors from a sample of 50,000 objects with SDSS photometry\nand ugriz magnitudes. The example shows how far one can get with an\nextremely simple machine learning approach to the photometric redshift\nproblem.\nThe function :func:`fetch_sdss_galaxy_colors` used below actually queries\nthe SDSS CASjobs server for the colors of the 50,000 galaxies.\n\"\"\"\n# Author: Jake VanderPlas <[email protected]>\n# License: BSD\n# The figure is an example from astroML: see http://astroML.github.com\n\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom astroML.plotting import scatter_contour\nn_neighbors=10\nN = len(data)\n\n# shuffle data\nnp.random.seed(0)\nnp.random.shuffle(data)\n\n# put colors in a matrix\nX = np.zeros((N, 4))\nX[:, 0] = data['modelMag_u'] - data['modelMag_g']\nX[:, 1] = data['modelMag_g'] - data['modelMag_r']\nX[:, 2] = data['modelMag_r'] - data['modelMag_i']\nX[:, 3] = data['modelMag_i'] - data['modelMag_z']\nz = data['z']\n\n# divide into training and testing data\nNtrain = N // 2\nXtrain = X[:Ntrain]\nztrain = z[:Ntrain]\n\nXtest = X[Ntrain:]\nztest = z[Ntrain:]\n\nknn = KNeighborsRegressor(n_neighbors, weights='distance')\nzpred = knn.fit(Xtrain, ztrain).predict(Xtest)\n\naxis_lim = np.array([-0.1, 0.4])\n\nrms = np.sqrt(np.mean((ztest - zpred) ** 2))\nprint(\"RMS error = %.2g\" % rms)\n\nax = plt.axes()\nplt.scatter(ztest, zpred, c='k', lw=0, s=4)\nplt.plot(axis_lim, axis_lim, '--k')\nplt.plot(axis_lim, axis_lim + rms, ':k')\nplt.plot(axis_lim, axis_lim - rms, ':k')\nplt.xlim(axis_lim)\nplt.ylim(axis_lim)\n\nplt.text(0.99, 0.02, \"RMS error = %.2g\" % rms,\n ha='right', va='bottom', transform=ax.transAxes,\n bbox=dict(ec='w', fc='w'), fontsize=16)\n\nplt.title('Photo-z: Nearest Neigbor Regression')\nplt.xlabel(r'$\\mathrm{z_{spec}}$', fontsize=14)\nplt.ylabel(r'$\\mathrm{z_{phot}}$', fontsize=14)\nplt.show()", "RMS error = 0.024\n" ] ], [ [ "### Neural Network", "_____no_output_____" ], [ "In this case I am going to use a Recurrent Neural Network (Long Short Term Memory). 
More info on: http://colah.github.io/posts/2015-08-Understanding-LSTMs/", "_____no_output_____" ] ], [ [ "from keras.models import Sequential\nmodel = Sequential()\nfrom keras.layers import Dense, Activation\nfrom keras.layers.recurrent import GRU, SimpleRNN\nfrom keras.layers.recurrent import LSTM\nfrom keras.layers import Embedding\nmodel.add(LSTM(64,input_dim=4, return_sequences=False, activation='tanh'))\nmodel.add(Dense(64))\nmodel.add(Dense(32, init='normal', activation='tanh'))\nmodel.add(Dense(16, init='normal', activation='tanh'))\nmodel.add(Dense(8))\nmodel.add(Dense(4, init='normal', activation='tanh'))\nmodel.add(Dense(1, init='normal'))\nmodel.compile(loss='mse', optimizer='rmsprop')", "Using Theano backend.\n" ], [ "#model.train_on_batch(X[:60000].reshape(60000,4,1), z[:60000])\nbatch_size=60000\nmodel.fit(X[:batch_size].reshape(-1,1,4), z[:batch_size], batch_size=batch_size, nb_epoch=300, verbose=0, validation_split=0.5)", "_____no_output_____" ], [ "test_size=6000\npredicted_output = model.predict_on_batch(X[batch_size:batch_size+test_size].reshape(-1,1,4))", "_____no_output_____" ], [ "plt.hist(predicted_output)", "_____no_output_____" ], [ "print predicted_output[:,0].shape\nprint z.shape\ndiff = np.sqrt((predicted_output[:1000,0]-z[batch_size:1000+batch_size])**2)\nplt.hist(diff, bins=100, range=(0,0.15));\nnp.percentile(diff,68)", "(6000,)\n(661598,)\n" ], [ "axis_lim = np.array([-0.1, 0.4])\n\nrms = np.sqrt(np.mean((predicted_output - z[batch_size:batch_size+test_size]) ** 2))\nprint(\"RMS error = %.2g\" % rms)\n\nax = plt.axes()\nplt.scatter(z[batch_size:batch_size+test_size], predicted_output, c='k', lw=0, s=4)\nplt.plot(axis_lim, axis_lim, '--k')\nplt.plot(axis_lim, axis_lim + rms, ':k')\nplt.plot(axis_lim, axis_lim - rms, ':k')\nplt.xlim(axis_lim)\nplt.ylim(axis_lim)\n\nplt.text(0.99, 0.02, \"RMS error = %.2g\" % rms,\n ha='right', va='bottom', transform=ax.transAxes,\n bbox=dict(ec='w', fc='w'), fontsize=16)\n\nplt.title('Photo-z: Recurrent Neural Network')\nplt.xlabel(r'$\\mathrm{z_{spec}}$', fontsize=14)\nplt.ylabel(r'$\\mathrm{z_{phot}}$', fontsize=14)\nplt.show()", "RMS error = 0.072\n" ] ], [ [ "Long short-term memory units (LSTMs): One challenge affecting RNNs is that early models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem discussed in Chapter 5. Recall that the usual manifestation of this problem is that the gradient gets smaller and smaller as it is propagated back through layers. This makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time that can make the gradient extremely unstable and hard to learn from. Fortunately, it's possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. The units were introduced by Hochreiter and Schmidhuber in 1997 with the explicit purpose of helping address the unstable gradient problem. LSTMs make it much easier to get good results when training RNNs, and many recent papers (including many that I linked above) make use of LSTMs or related ideas.\n\nExtracted from (http://neuralnetworksanddeeplearning.com/chap6.html)\n", "_____no_output_____" ], [ "## Bayez (D. Kirkby, J. Sánchez, N. Kennamer) (https://github.com/dkirkby/bayez)", "_____no_output_____" ], [ "Bayez is a bayesian redshift estimator of spectroscopic data. 
It estimates the redshift given the spectra and the object type (STAR, QSO, ELG, LRG). Optionally it can use magnitude information to improve the accuracy. What we compute using bayez is:\n\n$$ P(z| D, M, C) = \\int\\int d\\theta dm P(\\theta,m,z|D,M,C) = P(D, M |C)^{-1} F(z) $$\n\nWhere $z$ is the redshift, $D$ the spectral information, $M$ the magnitude information, $C$ the object type/class, $m$ the magnitude, and $\\theta$ other class parameters.", "_____no_output_____" ], [ "$$ F(z) = \\int \\int d\\theta dm P(D,M|\\theta,m,z) P(\\theta, m,z|C) $$\n\n$P(D,M|\\theta,m,z) \\Leftrightarrow$ likelihood,\n\n$P(\\theta, m,z|C) \\Leftrightarrow$ prior (Luminosity function)\n\n$$P(D,M|C) = \\int dz F(z)$$\n", "_____no_output_____" ], [ "We generate a large number of priors called \"exemplars\" (we simulate them using the specsim package: https://github.com/desihub/specsim) and perform a MC estimate of the multidimensional integral. An advantage is that flux normalization (magnitude change) are fast to perform and we can separate the integral as follows:\n\n$$ F(z) \\approx \\frac{1}{N_{s}} \\delta_{D}(z-z_{i})\\int dm P(D,M|m,i)P(m|i)$$\n\nWhere $i=1,2,...,N_{s}$ are the samples (simulated templates)", "_____no_output_____" ], [ "Bayez then compares with the library of templates and computes the likelihood $e^{-\\chi^{2}/2}$. The package supports downsampling the spectra (it accelerates the estimation) and performs the $\\chi^{2}$ calculation in each part of the spectra before coadding (DESI has 3 cameras b,r,z covering different wavelength ranges)", "_____no_output_____" ], [ "### Example of usage: https://github.com/dkirkby/bayez/blob/master/docs/nb/BayezExamples.ipynb", "_____no_output_____" ], [ "### Performance: https://github.com/dkirkby/bayez/blob/master/docs/nb/BayezResults.ipynb", "_____no_output_____" ], [ "### Pros and cons of this approach\n\n#### Pros: It can be as precise as your simulations. Having realistic simulations/templates allows you to have very precise results. It is fast and highly parallel. You can use the full posterior for your analysis (as opposed to point estimators)\n#### Cons: It is limited by the accuracy of your simulations/templates (as good as your templates). As for now it needs prior information on the object type (we haven't implemented an object classifier yet).\n\nIt is intended to be used in DESI. We are taking part on the redshift data challenge -- We have to estimate the redshift from visually inspected eBOSS spectra (Noble)\n\nWe might port the code to use it in GPU", "_____no_output_____" ] ] ]
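The Bayesian estimate sketched in the notebook above reduces, per template, to a chi-squared likelihood exp(-chi^2/2) against the observed spectrum, accumulated over the redshift-tagged template library. The function below is a toy illustration of that idea only and is not the bayez implementation: the template library, inverse-variance noise model, and redshift binning are invented, and the marginalization over flux normalization is collapsed to a single best-fit amplitude per template.

```python
import numpy as np

def redshift_posterior(flux, ivar, template_fluxes, template_z, z_grid, dz=0.01):
    """Toy P(z|D): accumulate exp(-chi^2/2) of each template into its redshift bin.

    flux, ivar      : observed spectrum and inverse variance, shape (n_pix,)
    template_fluxes : library resampled onto the same pixels, shape (n_templates, n_pix)
    template_z      : redshift attached to each template, shape (n_templates,)
    z_grid          : centres of the reported redshift bins
    """
    # Best-fit amplitude A per template for the model A * t (weighted least squares).
    num = template_fluxes @ (flux * ivar)
    den = (template_fluxes ** 2) @ ivar
    amp = np.where(den > 0, num / den, 0.0)

    resid = flux[None, :] - amp[:, None] * template_fluxes
    chi2 = (resid ** 2 * ivar).sum(axis=1)
    weights = np.exp(-0.5 * (chi2 - chi2.min()))        # subtract min for stability

    posterior = np.zeros(len(z_grid))
    for k, z in enumerate(z_grid):
        in_bin = np.abs(template_z - z) < 0.5 * dz
        posterior[k] = weights[in_bin].sum()

    total = posterior.sum()
    return posterior / total if total > 0 else posterior
```

A point estimate is then `z_grid[np.argmax(posterior)]`, while keeping the full array preserves the posterior shape for downstream analysis, which is the main selling point of this family of estimators.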
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7a8c9bce529fc63e6bc213912c0e4e647f0dc44
103,138
ipynb
Jupyter Notebook
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
32b0992a58191723ef660e1de629193862b19f52
[ "MIT" ]
null
null
null
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
32b0992a58191723ef660e1de629193862b19f52
[ "MIT" ]
null
null
null
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
32b0992a58191723ef660e1de629193862b19f52
[ "MIT" ]
2
2021-06-26T01:52:28.000Z
2021-08-10T14:42:46.000Z
63.862539
44,104
0.785006
[ [ [ "# Lecture 06: Recap and overview", "_____no_output_____" ], [ "[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2021)\n\n[<img src=\"https://mybinder.org/badge_logo.svg\">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2021/master?urlpath=lab/tree/06/Examples_and_overview.ipynb)", "_____no_output_____" ], [ "1. [Lecture 02: Fundamentals](#Lecture-02:-Fundamentals)\n2. [Lecture 03: Optimize, print and plot](#Lecture-03:-Optimize,-print-and-plot)\n3. [Lecture 04: Random numbers and simulation](#Lecture-04:-Random-numbers-and-simulation)\n4. [Lecture 05: Workflow and debugging](#Lectue-05:-Workflow-and-debugging)\n5. [Summary](#Summary)\n", "_____no_output_____" ], [ "This lecture recaps and overviews central concepts and methods from lectures 1-5.\n\n**Note:**\n\n1. I will focus on answering **general questions** repeatedly asked in the survey.\n2. If your **more specific questions** are not covered, ask them here: https://github.com/NumEconCopenhagen/lectures-2020/issues.", "_____no_output_____" ] ], [ [ "import itertools as it\nimport numpy as np\nfrom scipy import optimize\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')", "_____no_output_____" ] ], [ [ "<a id=\"Lecture-02:-Fundamentals\"></a>\n\n# 1. Lecture 02: Fundamentals", "_____no_output_____" ], [ "**Abstract:** You will be given an in-depth introduction to the **fundamentals of Python** (objects, variables, operators, classes, methods, functions, conditionals, loops). You learn to discriminate between different **types** such as integers, floats, strings, lists, tuples and dictionaries, and determine whether they are **subscriptable** (slicable) and/or **mutable**. You will learn about **referencing** and **scope**. You will learn a tiny bit about **floating point arithmetic**.", "_____no_output_____" ], [ "## 1.1 For vs. while loops", "_____no_output_____" ], [ "**For loop**: A loop where you know beforehand when it will stop. 
", "_____no_output_____" ] ], [ [ "np.random.seed(1917)\nNx = 10\nx = np.random.uniform(0,1,size=(Nx,))", "_____no_output_____" ], [ "for i in range(Nx):\n print(x[i])", "0.15451797797720246\n0.20789496806883712\n0.0027198495778043563\n0.1729632542127988\n0.855555830200955\n0.584099749650399\n0.011903025078194518\n0.0682582385196221\n0.24917894776796679\n0.8936630858183269\n" ] ], [ [ "**While loop**: A loop which continues until some condition is met.", "_____no_output_____" ] ], [ [ "i = 0\nwhile i < Nx:\n print(x[i])\n i += 1", "0.15451797797720246\n0.20789496806883712\n0.0027198495778043563\n0.1729632542127988\n0.855555830200955\n0.584099749650399\n0.011903025078194518\n0.0682582385196221\n0.24917894776796679\n0.8936630858183269\n" ] ], [ [ "**Find first number less than 0.1:**", "_____no_output_____" ] ], [ [ "i = 0\nwhile i < Nx and x[i] >= 0.1:\n i += 1\nprint(x[i])", "0.0027198495778043563\n" ] ], [ [ "Using a break:", "_____no_output_____" ] ], [ [ "i = 0\nwhile i < Nx:\n i += 1\n if x[i] < 0.1:\n break\nprint(x[i])", "0.0027198495778043563\n" ], [ "for i in range(Nx):\n if x[i] < 0.1:\n break\nprint(x[i])", "0.0027198495778043563\n" ] ], [ [ "**Conclusion:** When you can use a for-loop it typically gives you more simple code.", "_____no_output_____" ], [ "## 1.2 Nested loops", "_____no_output_____" ] ], [ [ "Nx = 5\nNy = 5\nNz = 5\nx = np.random.uniform(0,1,size=(Nx))\ny = np.random.uniform(0,1,size=(Ny))\nz = np.random.uniform(0,1,size=(Nz))", "_____no_output_____" ], [ "mysum = 0\nfor i in range(Nx):\n for j in range(Ny):\n mysum += x[i]*y[j]\nprint(mysum)", "4.689237201743941\n" ], [ "mysum = 0\nfor i,j in it.product(range(Nx),range(Ny)):\n mysum += x[i]*y[j]\nprint(mysum)", "4.689237201743941\n" ] ], [ [ "**Meshgrid:**", "_____no_output_____" ] ], [ [ "xmat,ymat = np.meshgrid(x,y,indexing='ij')\nmysum = xmat*ymat\nprint(np.sum(mysum))", "4.689237201743942\n" ], [ "I,J = np.meshgrid(range(Nx),range(Ny),indexing='ij')\nmysum = x[I]*y[J]\nprint(np.sum(mysum))", "4.689237201743942\n" ] ], [ [ "## 1.3 Classes", "_____no_output_____" ] ], [ [ "class Fraction:\n \n def __init__(self,numerator,denominator): # called when created\n \n self.num = numerator\n self.denom = denominator\n \n def __str__(self): # called when using print\n \n return f'{self.num}/{self.denom}' # string = self.nom/self.denom\n \n def __add__(self,other): # called when using +\n \n new_num = self.num*other.denom + other.num*self.denom\n new_denom = self.denom*other.denom\n \n return Fraction(new_num,new_denom) \n \n def reduce(self):\n \n divisor = min(self.num,self.denom)\n \n while divisor >= 2:\n \n if self.num%divisor == 0 and self.denom%divisor == 0:\n \n self.num //= divisor\n self.denom //= divisor\n divisor = min(self.num,self.denom)\n \n else:\n divisor -= 1", "_____no_output_____" ] ], [ [ "In `__add__` we use\n\n$$\\frac{a}{b}+\\frac{c}{d}=\\frac{a \\cdot d+c \\cdot b}{b \\cdot d}$$", "_____no_output_____" ] ], [ [ "x = Fraction(1,3)\nprint(x)", "1/3\n" ], [ "x = Fraction(1,3) # 1/3 = 5/15\ny = Fraction(3,9) # 2/5 = 6/15\nz = x+y # 5/15 + 6/15 = 11/15\nprint(z)", "18/27\n" ], [ "z.reduce()\nprint(z)", "2/3\n" ] ], [ [ "**Check which methods a class have:**", "_____no_output_____" ] ], [ [ "dir(Fraction)", "_____no_output_____" ] ], [ [ "## 1.4 A consumer class", "_____no_output_____" ], [ "$$\n\\begin{aligned}\nV(p_{1},p_{2},I) & = \\max_{x_{1},x_{2}}x_1^{\\alpha}x_2^{1-\\alpha}\\\\\n \\text{s.t.}\\\\\np_{1}x_{1}+p_{2}x_{2} & \\leq I,\\,\\,\\,p_{1},p_{2},I>0\\\\\nx_{1},x_{2} & \\geq 
0\n\\end{aligned}\n$$", "_____no_output_____" ], [ "**Goal:** Create a model-class to solve this problem.", "_____no_output_____" ], [ "**Utility function:**", "_____no_output_____" ] ], [ [ "def u_func(model,x1,x2):\n return x1**model.alpha*x2**(1-model.alpha)", "_____no_output_____" ] ], [ [ "**Solution function:**", "_____no_output_____" ] ], [ [ "def solve(model):\n \n # a. objective function (to minimize) \n obj = lambda x: -model.u_func(x[0],x[1]) # minimize -> negtive of utility\n \n # b. constraints and bounds\n con = lambda x: model.I-model.p1*x[0]-model.p2*x[1] # violated if negative\n constraints = ({'type':'ineq','fun':con})\n bounds = ((0,model.I/model.p1),(0,model.I/model.p2))\n \n # c. call solver\n x0 = [(model.I/model.p1)/2,(model.I/model.p2)/2]\n sol = optimize.minimize(obj,x0,method='SLSQP',bounds=bounds,constraints=constraints)\n \n # d. save\n model.x1 = sol.x[0]\n model.x2 = sol.x[1]\n model.u = model.u_func(model.x1,model.x2)", "_____no_output_____" ] ], [ [ "**Create consumer class:**", "_____no_output_____" ] ], [ [ "class ConsumerClass:\n \n def __init__(self):\n \n self.alpha = 0.5\n self.p1 = 1\n self.p2 = 2\n self.I = 10\n \n u_func = u_func\n solve = solve", "_____no_output_____" ] ], [ [ "**Solve consumer problem**:", "_____no_output_____" ] ], [ [ "jeppe = ConsumerClass()\njeppe.alpha = 0.75\njeppe.solve()\nprint(f'(x1,x2) = ({jeppe.x1:.3f},{jeppe.x2:.3f}), u = {jeppe.u:.3f}')", "(x1,x2) = (7.500,1.250), u = 4.792\n" ] ], [ [ "Easy to loop over:", "_____no_output_____" ] ], [ [ "for alpha in np.linspace(0.1,0.9,10):\n jeppe.alpha = alpha\n jeppe.solve()\n print(f'alpha = {alpha:.3f} -> (x1,x2) = ({jeppe.x1:.3f},{jeppe.x2:.3f}), u = {jeppe.u:.3f}')", "alpha = 0.100 -> (x1,x2) = (1.000,4.500), u = 3.872\nalpha = 0.189 -> (x1,x2) = (1.890,4.055), u = 3.510\nalpha = 0.278 -> (x1,x2) = (2.778,3.611), u = 3.357\nalpha = 0.367 -> (x1,x2) = (3.667,3.167), u = 3.342\nalpha = 0.456 -> (x1,x2) = (4.554,2.723), u = 3.442\nalpha = 0.544 -> (x1,x2) = (5.446,2.277), u = 3.661\nalpha = 0.633 -> (x1,x2) = (6.331,1.834), u = 4.020\nalpha = 0.722 -> (x1,x2) = (7.221,1.389), u = 4.569\nalpha = 0.811 -> (x1,x2) = (8.111,0.945), u = 5.404\nalpha = 0.900 -> (x1,x2) = (9.001,0.499), u = 6.741\n" ] ], [ [ "<a id=\"Lecture-03:-Optimize,-print-and-plot\"></a>\n\n# 2. 
Lecture 03: Optimize, print and plot", "_____no_output_____" ], [ "**Abstract:** You will learn how to work with numerical data (**numpy**) and solve simple numerical optimization problems (**scipy.optimize**) and report the results both in text (**print**) and in figures (**matplotlib**).", "_____no_output_____" ], [ "## 2.1 Numpy", "_____no_output_____" ] ], [ [ "x = np.random.uniform(0,1,size=6)\nprint(x)", "[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n" ] ], [ [ "Consider the following code with loop:", "_____no_output_____" ] ], [ [ "y = np.empty(x.size*2)\nfor i in range(x.size):\n y[i] = x[i]\nfor i in range(x.size):\n y[x.size + i] = x[i]\nprint(y) ", "[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102\n 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n" ] ], [ [ "**Vertical extension of vector** (more columns)", "_____no_output_____" ] ], [ [ "y = np.tile(x,2) # tiling (same x repated)\nprint(y)", "[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102\n 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n" ], [ "y = np.hstack((x,x)) # stacking\nprint(y)", "[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102\n 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n" ], [ "y = np.insert(x,0,x) # insert vector at place 0\nprint(y)", "[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102\n 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n" ], [ "y = np.insert(x,6,x) # insert vector at place 0\nprint(y)\nprint(y.shape)", "[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102\n 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n(12,)\n" ] ], [ [ "**Horizontal extension of vector** (more columns)", "_____no_output_____" ] ], [ [ "y = np.vstack((x,x)) # stacking\nprint(y)\nprint(y.shape)", "[[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n [0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]]\n(2, 6)\n" ], [ "z = y.ravel()\nprint(z)\nprint(z.shape)", "[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102\n 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n(12,)\n" ], [ "y_ = np.tile(x,2) # tiling (same x repated)\nprint(y_)\nprint(y_.shape)\nprint('')\ny = np.reshape(y_,(2,6))\nprint(y)\nprint(y.shape)", "[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102\n 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n(12,)\n\n[[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n [0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]]\n(2, 6)\n" ], [ "y_ = np.repeat(x,2) # repeat each element\nprint(y_)\nprint('')\ny__ = np.reshape(y_,(6,2))\nprint(y__)\nprint('')\ny = np.transpose(y__)\nprint(y)", "[0.50162377 0.50162377 0.58786823 0.58786823 0.6692749 0.6692749\n 0.67937905 0.67937905 0.87084325 0.87084325 0.30623102 0.30623102]\n\n[[0.50162377 0.50162377]\n [0.58786823 0.58786823]\n [0.6692749 0.6692749 ]\n [0.67937905 0.67937905]\n [0.87084325 0.87084325]\n [0.30623102 0.30623102]]\n\n[[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]\n [0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]]\n" ] ], [ [ "## 2.2 Numpy vs. dictionary vs. list vs. tuple", "_____no_output_____" ] ], [ [ "x_np = np.zeros(0)\nx_list = []\nx_dict = {}\nx_tuple = ()", "_____no_output_____" ] ], [ [ "1. If you data is **numeric**, and is changing on the fly, use **numpy**\n2. 
If your data is **heterogenous**, and is changing on the fly, use a **list** or a **dictionary**\n3. If your data is **fixed** use a tuple", "_____no_output_____" ], [ "## 2.3 Optimizers", "_____no_output_____" ], [ "All **optimization problems** are characterized by:\n\n1. Control vector (choices), $\\boldsymbol{x} \\in \\mathbb{R}^k$\n2. Objective function (payoff) to minimize, $f:\\mathbb{R}^k \\rightarrow \\mathbb{R}$ (differentiable or not)\n3. Constraints, i.e. $\\boldsymbol{x} \\in C \\subseteq \\mathbb{R}^k$ (linear or non-linear interdependence)", "_____no_output_____" ], [ "**Maximization** is just **minimization** of $-f$. ", "_____no_output_____" ], [ "All **optimizers** (minimizers) have the follow steps:\n\n1. Make initial guess\n2. Evaluate the function (and perhaps gradients)\n3. Check for convergence\n4. Update guess and return to step 2", "_____no_output_____" ], [ "**Convergence:** \"Small\" change in function value since last iteration or zero gradient.", "_____no_output_____" ], [ "**Characteristics** of optimizers:\n\n1. Use gradients or not.\n2. Allow for specifying bounds.\n3. Allow for specifying general constraints.", "_____no_output_____" ], [ "**Gradients** provide useful information, but can be costly to compute (using analytical formula or numerically).", "_____no_output_____" ], [ "## 2.4 Loops vs. optimizer", "_____no_output_____" ], [ "**Define function:**", "_____no_output_____" ] ], [ [ "def f(x):\n return np.sin(x)+0.05*x**2", "_____no_output_____" ] ], [ [ "**Solution with loop:**", "_____no_output_____" ] ], [ [ "N = 100\nx_vec = np.linspace(-10,10,N)\nf_vec = np.empty(N)\n\nf_best = np.inf # initial maximum\nx_best = np.nan # not-a-number\n\nfor i,x in enumerate(x_vec):\n f_now = f_vec[i] = f(x)\n if f_now < f_best:\n x_best = x\n f_best = f_now\n\nprint(f'best with loop is {f_best:.8f} at x = {x_best:.8f}')", "best with loop is -0.88366802 at x = -1.51515152\n" ] ], [ [ "**Solution with scipy optimize:**", "_____no_output_____" ] ], [ [ "x_guess = [0] \nobj = lambda x: f(x[0])\nres = optimize.minimize(obj, x_guess, method='Nelder-Mead')\nx_best_scipy = res.x[0]\nf_best_scipy = res.fun\n\nprint(f'best with scipy.optimize is {f_best_scipy:.8f} at x = {x_best_scipy:.8f}')", "best with scipy.optimize is -0.88786283 at x = -1.42756250\n" ] ], [ [ "**Link:** [Scipy on the choice of optimizer](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html)", "_____no_output_____" ], [ "**Comparison:**", "_____no_output_____" ] ], [ [ "fig = plt.figure()\nax = fig.add_subplot(1,1,1)\n\nax.plot(x_vec,f_vec,ls='--',lw=2,color='black',label='$f(x)$')\nax.plot(x_best,f_best,ls='',marker='s',label='loop')\nax.plot(x_best_scipy,f_best_scipy,ls='',marker='o',\n markeredgecolor='red',label='scipy.optimize')\n\nax.set_xlabel('x')\nax.set_ylabel('f')\nax.legend(loc='upper center');", "_____no_output_____" ] ], [ [ "## 2.5 Gradient descent optimizer", "_____no_output_____" ], [ "**Algorithm:** `minimize_gradient_descent()`\n\n1. Choose tolerance $\\epsilon>0$, step size $\\alpha > 0$, and guess on $x_0$, set $n=0$.\n2. Compute $f(x_n)$ and $f^\\prime(x_n) \\approx \\frac{f(\\boldsymbol{x}_{n}+\\Delta)-f(\\boldsymbol{x}_{n})}{\\Delta}$.\n3. If $|f^\\prime(x_n)| < \\epsilon$ then stop.\n4. Compute new guess \"down the hill\":\n\n $$\n \\boldsymbol{x}_{n+1} = \\boldsymbol{x}_{n} - \\alpha f^\\prime(x_n)\n $$\n\n\n5. 
Set $n = n + 1$ and return to step 2.", "_____no_output_____" ], [ "**Code for algorithm:**", "_____no_output_____" ] ], [ [ "def gradient_descent(f,x0,alpha=1,Delta=1e-8,max_iter=500,eps=1e-8):\n \"\"\" minimize function with gradient descent\n \n Args:\n\n f (callable): function\n x0 (float): initial value\n alpha (float,optional): step size factor in search\n Delta (float,optional): step size in numerical derivative\n max_iter (int,optional): maximum number of iterations\n eps (float,optional): tolerance\n \n Returns:\n \n x (float): minimum\n fx (float): funciton value at minimum\n trials (list): list with tuple (x,value,derivative)\n \n \"\"\"\n \n # step 1: initialize\n x = x0\n n = 0\n trials = []\n \n # step 2-4:\n while n < max_iter:\n \n # step 2: compute function value and derivative\n fx = f(x)\n fp = (f(x+Delta)-fx)/Delta\n \n trials.append({'x':x,'fx':fx,'fp':fp}) \n \n # step 3: check convergence\n print(f'n = {n:3d}: x = {x:12.8f}, f = {fx:12.8f}, fp = {fp:12.8f}')\n if np.abs(fp) < eps:\n break\n \n # step 4: update x and n\n x -= alpha*fp\n n += 1\n \n return x,fx,trials", "_____no_output_____" ] ], [ [ "**Call the optimizer:**", "_____no_output_____" ] ], [ [ "x0 = 0\nalpha = 0.5\nx,fx,trials = gradient_descent(f,x0,alpha)\nprint(f'best with gradient_descent is {fx:.8f} at x = {x:.8f}')", "n = 0: x = 0.00000000, f = 0.00000000, fp = 1.00000000\nn = 1: x = -0.50000000, f = -0.46692554, fp = 0.82758257\nn = 2: x = -0.91379128, f = -0.75007422, fp = 0.51936899\nn = 3: x = -1.17347578, f = -0.85324884, fp = 0.26960144\nn = 4: x = -1.30827650, f = -0.88015974, fp = 0.12868722\nn = 5: x = -1.37262011, f = -0.88622298, fp = 0.05961955\nn = 6: x = -1.40242989, f = -0.88751934, fp = 0.02732913\nn = 7: x = -1.41609445, f = -0.88779134, fp = 0.01247611\nn = 8: x = -1.42233251, f = -0.88784799, fp = 0.00568579\nn = 9: x = -1.42517540, f = -0.88785975, fp = 0.00258927\nn = 10: x = -1.42647003, f = -0.88786219, fp = 0.00117876\nn = 11: x = -1.42705941, f = -0.88786269, fp = 0.00053655\nn = 12: x = -1.42732769, f = -0.88786280, fp = 0.00024420\nn = 13: x = -1.42744979, f = -0.88786282, fp = 0.00011114\nn = 14: x = -1.42750536, f = -0.88786283, fp = 0.00005058\nn = 15: x = -1.42753065, f = -0.88786283, fp = 0.00002303\nn = 16: x = -1.42754217, f = -0.88786283, fp = 0.00001048\nn = 17: x = -1.42754741, f = -0.88786283, fp = 0.00000477\nn = 18: x = -1.42754979, f = -0.88786283, fp = 0.00000218\nn = 19: x = -1.42755088, f = -0.88786283, fp = 0.00000099\nn = 20: x = -1.42755137, f = -0.88786283, fp = 0.00000043\nn = 21: x = -1.42755159, f = -0.88786283, fp = 0.00000021\nn = 22: x = -1.42755170, f = -0.88786283, fp = 0.00000010\nn = 23: x = -1.42755175, f = -0.88786283, fp = 0.00000004\nn = 24: x = -1.42755177, f = -0.88786283, fp = 0.00000001\nn = 25: x = -1.42755177, f = -0.88786283, fp = 0.00000002\nn = 26: x = -1.42755179, f = -0.88786283, fp = 0.00000000\nbest with gradient_descent is -0.88786283 at x = -1.42755179\n" ] ], [ [ "**Illusstration:**", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(10,10))\n\n# a. 
main figure\nax = fig.add_subplot(2,2,(1,2))\n\ntrial_x_vec = [trial['x'] for trial in trials]\ntrial_f_vec = [trial['fx'] for trial in trials]\ntrial_fp_vec = [trial['fp'] for trial in trials]\n\nax.plot(x_vec,f_vec,ls='--',lw=2,color='black',label='$f(x)$')\nax.plot(trial_x_vec,trial_f_vec,ls='',marker='s',ms=4,color='blue',label='iterations')\n\nax.set_xlabel('$x$')\nax.set_ylabel('$f$')\nax.legend(loc='upper center')\n\n# sub figure 1\nax = fig.add_subplot(2,2,3)\nax.plot(np.arange(len(trials)),trial_x_vec)\nax.set_xlabel('iteration')\nax.set_ylabel('x')\n\n# sub figure 2\nax = fig.add_subplot(2,2,4)\nax.plot(np.arange(len(trials)),trial_fp_vec)\nax.set_xlabel('iteration')\nax.set_ylabel('derivative of f');", "_____no_output_____" ] ], [ [ "<a id=\"Lecture-04:-Random-numbers-and-simulation\"></a>\n\n# 3. Lecture 04: Random numbers and simulation", "_____no_output_____" ], [ "**Abstract:** You will learn how to use a random number generator with a seed and produce simulation results (**numpy.random**, **scipy.stats**), and calcuate the expected value of a random variable through Monte Carlo integration. You will learn how to save your results for later use (**pickle**). Finally, you will learn how to make your figures interactive (**ipywidgets**).", "_____no_output_____" ], [ "**Baseline code:**", "_____no_output_____" ] ], [ [ "def f(x,y):\n return (np.var(x)-np.var(y))**2", "_____no_output_____" ], [ "np.random.seed(1917)\nx = np.random.normal(0,1,size=100)\nprint(f'mean(x) = {np.mean(x):.3f}')\n\nfor sigma in [0.5,1.0,0.5]:\n y = np.random.normal(0,sigma,size=x.size)\n print(f'sigma = {sigma:2f}: f = {f(x,y):.4f}')", "mean(x) = -0.007\nsigma = 0.500000: f = 0.5522\nsigma = 1.000000: f = 0.0001\nsigma = 0.500000: f = 0.4985\n" ] ], [ [ "**Question:** How can we make the loop give the same result for the same value of `sigma`?", "_____no_output_____" ], [ "**Option 1:** Reset seed", "_____no_output_____" ] ], [ [ "np.random.seed(1917)\nx = np.random.normal(0,1,size=100)\nprint(f'var(x) = {np.var(x):.3f}')\n\nfor sigma in [0.5,1.0,0.5]:\n np.random.seed(1918)\n y = np.random.normal(0,sigma,size=x.size)\n print(f'sigma = {sigma:2f}: f = {f(x,y):.4f}')", "var(x) = 0.951\nsigma = 0.500000: f = 0.4908\nsigma = 1.000000: f = 0.0025\nsigma = 0.500000: f = 0.4908\n" ] ], [ [ "**BAD SOLUTION:** Never reset the seed. Variables `x` and `y` are not ensured to be random relative to each other with this method.", "_____no_output_____" ], [ "**Option 2:** Set and get state", "_____no_output_____" ] ], [ [ "np.random.seed(1917)\nx = np.random.normal(0,1,size=100)\nprint(f'var(x) = {np.var(x):.3f}')\n\nstate = np.random.get_state()\nfor sigma in [0.5,1.0,0.5]:\n np.random.set_state(state)\n y = np.random.normal(0,sigma,size=x.size)\n print(f'sigma = {sigma:2f}: f = {f(x,y):.4f}')", "var(x) = 0.951\nsigma = 0.500000: f = 0.5522\nsigma = 1.000000: f = 0.0143\nsigma = 0.500000: f = 0.5522\n" ] ], [ [ "**Option 3:** Draw once before loop", "_____no_output_____" ] ], [ [ "np.random.seed(1917)\nx = np.random.normal(0,1,size=100)\nprint(f'var(x) = {np.var(x):.3f}')\n\ny_ = np.random.normal(0,1,size=x.size)\nfor sigma in [0.5,1.0,0.5]:\n y = sigma*y_\n print(f'sigma = {sigma:2f}: f = {f(x,y):.4f}')", "var(x) = 0.951\nsigma = 0.500000: f = 0.5522\nsigma = 1.000000: f = 0.0143\nsigma = 0.500000: f = 0.5522\n" ] ], [ [ "<a id=\"Lectue-05:-Workflow-and-debugging\"></a>\n\n# 4. 
Lecture 05: Workflow and debugging", "_____no_output_____" ], [ "**Abstract:** You will learn how to **structure** and **comment** your code and **document** it for later use. You will learn how to **debug** your code using print, **assert** and try/except statements. You will learn how to write **modules** and **run scripts** from a terminal in **VSCode** and how to share your code with others through **Git**.", "_____no_output_____" ], [ "1. **Jupyterlab vs VSCode:** When to use which?\n2. **Python modules:** Make your code clearer\n3. **Git:** Clone-commit-sync cycle", "_____no_output_____" ], [ "<a id=\"Summary\"></a>\n\n# 5. Summary", "_____no_output_____" ], [ "1. **More questions:** Ask them here https://github.com/NumEconCopenhagen/lectures-2020/issues.\n2. **Project 0:** Apply the methods we have talked about so far. Remember, you can revise it later.\n3. **Next time:** Pandas, the central Python package for working with data.", "_____no_output_____" ] ] ]
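A side note on the `Fraction` class shown in the notebook above: its `reduce` method searches for a common divisor by trial division, which is fine for teaching, but the standard library's `math.gcd` does the same job directly. The variant below is an optional sketch that assumes the same `num`/`denom` attributes as the notebook's class.

```python
from math import gcd

class ReducedFraction:
    """Same idea as the notebook's Fraction, but reduced with math.gcd."""

    def __init__(self, numerator, denominator):
        self.num = numerator
        self.denom = denominator

    def __str__(self):
        return f"{self.num}/{self.denom}"

    def reduce(self):
        divisor = gcd(self.num, self.denom)
        if divisor > 1:
            self.num //= divisor
            self.denom //= divisor

z = ReducedFraction(18, 27)
z.reduce()
print(z)   # 2/3
```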
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7a8ce1437666be8fe0600e90f80af89a9bd90a2
40,666
ipynb
Jupyter Notebook
docs/notebooks/image_combination.ipynb
yohei99/casadocs
9ff53c08d042ac5e5f580cc049de48378b7bd404
[ "Apache-2.0" ]
6
2020-07-31T12:43:58.000Z
2022-03-11T22:01:57.000Z
docs/notebooks/image_combination.ipynb
yohei99/casadocs
9ff53c08d042ac5e5f580cc049de48378b7bd404
[ "Apache-2.0" ]
15
2021-01-20T03:54:05.000Z
2022-03-21T19:15:33.000Z
docs/notebooks/image_combination.ipynb
yohei99/casadocs
9ff53c08d042ac5e5f580cc049de48378b7bd404
[ "Apache-2.0" ]
8
2020-10-16T06:34:05.000Z
2021-12-09T07:32:25.000Z
84.369295
1,139
0.670462
[ [ [ "# Image Combination \n\n\n\n\n", "_____no_output_____" ], [ "## Joint Single Dish and Interferometer Image Reconstruction \n\nThe SDINT imaging algorithm allows joint reconstruction of wideband single dish and interferometer data. This algorithm is available in the task [sdintimaging](../api/casatasks.rst#imaging) and described in [Rau, Naik & Braun (2019)](https://iopscience.iop.org/article/10.3847/1538-3881/ab1aa7/meta).\n\n<div class=\"alert alert-warning\">\nJoint reconstruction of wideband single dish and interferometer data in CASA is experimental. Please use at own discretion.\n</div>\n\nThe usage modes that have been tested are documented below.\n\n\n### SDINT Algorithm\n\nInterferometer data are gridded into an image cube (and corresponding PSF). The single dish image and PSF cubes are combined with the interferometer cubes in a feathering step. The joint image and PSF cubes then form inputs to any deconvolution algorithm (in either *cube* or *mfs/mtmfs* modes). Model images from the deconvolution algorithm are translated back to model image cubes prior to subtraction from both the single dish image cube as well as the interferometer data to form a new pair of residual image cubes to be feathered in the next iteration. In the case of mosaic imaging, primary beam corrections are performed per channel of the image cube, followed by a multiplication by a common primary beam, prior to deconvolution. Therefore, for mosaic imaging, this task always implements *conjbeams=True* and *normtype='flatnoise'*.\n\n![c914c39a74a69699c2ae1d84231e2133af6d7081](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/c914c39a74a69699c2ae1d84231e2133af6d7081.png?raw=1){.image-inline width=\"674\" height=\"378\"}\n\nThe input single dish data are the single dish image and psf cubes. The input interferometer data is a MeasurementSet. In addition to imaging and deconvolution parameters from interferometric imaging (task **tclean**), there are controls for a feathering step to combine interferometer and single dish cubes within the imaging iterations. Note that the above diagram shows only the \\'mtmfs\\' variant. Cube deconvolution proceeds directly with the cubes in the green box above, without the extra conversion back and forth to the multi-term basis. Primary beam handling is also not shown in this diagram, but full details (via pseudocode) are available in the [reference publication.](https://iopscience.iop.org/article/10.3847/1538-3881/ab1aa7)\n\nThe parameters used for controlling the joint deconvolution are described on the [sdintimaging](../api/casatasks.rst#imaging) task pages.\n\n### Usage Modes\n\nThe task **sdintimaging** contains the algorithm for joint reconstruction of wideband single dish and interferometer data. The **sdintimaging** task shares a significant number of parameters with the **tclean** task, but also contains unique parameters. A detailed overview of these parameters, and how to use them, can be found in the CASA Docs [task pages of sdintimaging](../api/casatasks.rst#imaging).\n\nAs seen from the diagram above and described on the **sdintimaging** task pages, there is considerable flexibility in usage modes. One can choose between interferometer-only, singledish-only and joint interferometer-singledish imaging. Outputs are restored images and associated data products (similar to task tclean).\n\nThe following usage modes are available in the (experimental) sdintimaging task. 
Tested modes include all 12 combinations of:\n\n- Cube Imaging : All combinations of the following options.\n - *specmode = 'cube'*\n - *deconvolver = 'multiscale', 'hogbom'*\n - *usedata = 'sdint', 'sd' , 'int'*\n - *gridder = 'standard', 'mosaic'*\n - *parallel = False, True*\n- Wideband Multi-Term Imaging : All combinations of the following options. \n - *specmode = 'mfs'*\n - *deconvolver = 'mtmfs'* ( *nterms=1* for a single-term MFS image, and *nterms>1* for multi-term MFS image. Tests use *nterms=2* )\n - *usedata = 'sdint', 'sd' , 'int'*\n - *gridder = 'standard', 'mosaic'*\n - *parallel = False, True*\n\n<div class=\"alert alert-info\">\n**NOTE**: When the INT and/or SD cubes have flagged (and therefore empty) channels, only those channels that have non-zero images in both the INT and SD cubes are used for the joint reconstruction.\n</div>\n\n<div class=\"alert alert-info\">\n**NOTE**: Single-plane joint imaging may be run with deconvolver='mtmfs' and nterms=1.\n</div>\n\n<div class=\"alert alert-info\">\n**NOTE**: All other modes allowed by the new sdintimaging task are currently untested. Tests will be added in subsequent releases. \n</div>\n\n\n### Examples/Demos\n\n#### Basic test results\n\nThe sdintimaging task was run on a pair of simulated test datasets. Both contain a flat spectrum extended emission feature plus three point sources, two of which have spectral index=-1.0 and one which is flat-spectrum (rightmost point). The scale of the top half of the extended structure was chosen to lie within the central hole in the spatial-frequency plane at the middle frequency of the band so as to generate a situation where the interferometer-only imaging is difficult.\n\nPlease refer to the [publication](https://iopscience.iop.org/article/10.3847/1538-3881/ab1aa7/meta) for a more detailed analysis of the imaging quality and comparisons of images without and with SD data. 
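For reference alongside the results that follow, here is a minimal sketch of how the wideband multi-term mode listed above might be invoked. Only parameters that appear elsewhere on this page are used; the file names, image geometry, `reffreq` and iteration controls are placeholders rather than recommended values.

```python
# sketch only: names and numerical values are placeholders
# CASA 6 modular import; in a monolithic CASA session the task is already available
from casatasks import sdintimaging

sdintimaging(usedata='sdint',
             sdimage='my_sd_cube.im', sdpsf='', sdgain=1.0, dishdia=12.0,
             vis='my_int_data.ms', imagename='try_sdint_mtmfs',
             imsize=1000, cell='0.5arcsec',
             specmode='mfs', deconvolver='mtmfs', nterms=2,
             reffreq='115.0GHz',      # must lie within the frequency range of the SD cube
             gridder='mosaic', pblimit=0.2,
             weighting='briggs', robust=0.5,
             niter=1000, nsigma=3.0,
             usemask='user', pbmask=0.3)
```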
\n\nImages from a run on the ALMA M100 12m+7m+TP Science Verification Data suite are also shown below.\n\n\n*Single Pointing Simulation :*\n\nWideband Multi-Term Imaging ( deconvolver=\\'mtmfs\\', specmode=\\'mfs\\' )\n\n- SD + INT\n\n A joint reconstruction accurately reconstructs both intensity and spectral index for the extended emission as well as the compact sources.\n\n![bbd9a1df-8307-451e-860f-1a4905a57e0c](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/bbd9a1df-8307-451e-860f-1a4905a57e0c.png?raw=1)\n\n- INT-only\n\n The intensity has negative bowls and the spectral index is overly steep, especially for the top half of the extended component.\n\n![62cc52d7-e720-45e4-ae6d-8f782189d7e0](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/62cc52d7-e720-45e4-ae6d-8f782189d7e0.png?raw=1)\n\n- SD-only\n\n The spectral index of the extended emission is accurate (at 0.0) and the point sources are barely visible at this SD angular resolution.\n\n![1ad3d419-8fd9-40e7-a348-9f6b1b2df8c6](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/1ad3d419-8fd9-40e7-a348-9f6b1b2df8c6.png?raw=1)\n\n\n\nCube Imaging ( deconvolver=\\'multiscale\\', specmode=\\'cube\\' )\n\n- SD + INT\n\n A joint reconstruction has lower artifacts and more accurate intensities in all three channels, compared to the int-only reconstructions below \n\n![246193bd-a11e-4179-88be-ce86edc778ea](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/246193bd-a11e-4179-88be-ce86edc778ea.png?raw=1)\n\n\n- INT-only\n\n The intensity has negative bowls in the lower frequency channels and the extended emission is largely absent at the higher frequencies.\n\n![3d45174e-67f7-4159-ad72-be67ff3c396e](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/3d45174e-67f7-4159-ad72-be67ff3c396e.png?raw=1)\n\n\n- SD-only\n\n A demonstration of single-dish cube imaging with deconvolution of the SD-PSF.\n\n In this example, iterations have not been run until full convergence, which is why the sources still contain signatures of the PSF.\n\n![bc98e892-dca1-4e0a-892f-e5a22e2dd2a6](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/bc98e892-dca1-4e0a-892f-e5a22e2dd2a6.png?raw=1)\n\n\n*Mosaic Simulation*\n\nAn observation of the same sky brightness was simulated with 25 pointings.\n\nWideband Multi-Term Mosaic Imaging ( deconvolver=\\'mtmfs\\', specmode=\\'mfs\\' , gridder=\\'mosaic\\' )\n\n- SD + INT\n\n A joint reconstruction accurately reconstructs both intensity and spectral index for the extended emission as well as the compact sources.\n\n This is a demonstration of joint mosaicing along with wideband single-dish and interferometer combination.\n\n![ae742ca7-bf5c-43b4-bf30-28c26bd51b50](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/ae742ca7-bf5c-43b4-bf30-28c26bd51b50.png?raw=1)\n\n\n- INT-only\n\n The intensity has negative bowls and the spectral index is strongly inaccurate. 
Note that the errors are slightly less than the situation with the single-pointing example (where there was only one pointing's worth of uv-coverage).\n\n![c583bb0c-0fb1-495d-bc9c-a281bf72789a](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/c583bb0c-0fb1-495d-bc9c-a281bf72789a.png?raw=1)\n\n\n\nCube Mosaic Imaging ( deconvolver='multiscale', specmode='cube', gridder='mosaic' )\n\n\n- SD + INT\n\n A joint reconstruction produces better per-channel reconstructions compared to the INT-only situation shown below.\n\n This is a demonstration of cube mosaic imaging along with SD+INT joint reconstruction. \n\n![f49f24e8-c3df-4a48-8290-c8d9ad620010](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/f49f24e8-c3df-4a48-8290-c8d9ad620010.png?raw=1)\n\n\n- INT-only\n\n Cube mosaic imaging with only interferometer data. This clearly shows negative bowls and artifacts arising from the missing flux.\n\n![cead63c1-af84-47b4-b7f2-91f8368b3e9c](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/cead63c1-af84-47b4-b7f2-91f8368b3e9c.png?raw=1)\n\n\n#### ALMA M100 Spectral Cube Imaging : 12m + 7m + TP\n\nThe sdintimaging task was run on the [ALMA M100 Science Verification Datasets](https://almascience.nrao.edu/alma-data/science-verification).\n\n\\(1\\) The single dish (TP) cube was pre-processed by adding per-plane restoringbeam information.\n\n\\(2\\) Cube specification parameters were obtained from the SD Image as follows\n\n```\nfrom sdint_helper import * \nsdintlib = SDINT_helper() \nsdintlib.setup_cube_params(sdcube='M100_TmP')\n\nOutput : Shape of SD cube : [90 90 1 70\\] \nCoordinate ordering : ['Direction', 'Direction', 'Stokes', 'Spectral']\nnchan = 70\nstart = 114732899312.0Hz\nwidth = -1922516.74324Hz\nFound 70 per-plane restoring beams\\#\n\n(For specmode='mfs' in sdintimaging, please remember to set 'reffreq' to a value within the freq range of the cube.\n\nReturned Dict : {'nchan': 70, 'start': '114732899312.0Hz', 'width': '-1922516.74324Hz'}\n```\n\n\\(3\\) Task sdintimaging was run with automatic SD-PSF generation, n-sigma stopping thresholds, a pb-based mask at the 0.3 gain level, and no other deconvolution masks (interactive=False).\n\n```\nsdintimaging(usedata=\"sdint\", sdimage=\"../M100_TP\", sdpsf=\"\",sdgain=3.0, \n dishdia=12.0, vis=\"../M100_12m_7m\", imagename=\"try_sdint_niter5k\", \n imsize=1000, cell=\"0.5arcsec\", phasecenter=\"J2000 12h22m54.936s +15d48m51.848s\", \n stokes=\"I\", specmode=\"cube\", reffreq=\"\", nchan=70, start=\"114732899312.0Hz\", \n width=\"-1922516.74324Hz\", outframe=\"LSRK\", veltype=\"radio\", \n restfreq=\"115.271201800GHz\", interpolation=\"linear\", \n perchanweightdensity=True, gridder=\"mosaic\", mosweight=True, \n pblimit=0.2, deconvolver=\"multiscale\", scales=[0, 5, 10, 15, 20],\n smallscalebias=0.0, pbcor=False, weighting=\"briggs\", robust=0.5, \n niter=5000, gain=0.1, threshold=0.0, nsigma=3.0, interactive=False, \n usemask=\"user\", mask=\"\", pbmask=0.3)\n```\n\n**Results from two channels are show below. **\n\nLEFT : INT only (12m+7m) and RIGHT : SD+INT (12m + 7m + TP)\n\nChannel 23\n\n![18445a5ddbc066530938f1b8712e3a68bf9b8e3a](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/18445a5ddbc066530938f1b8712e3a68bf9b8e3a.png?raw=1)\n\nChannel 43\n\n![f7c37345f62846af242938430ef9287b6b466fd4](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/f7c37345f62846af242938430ef9287b6b466fd4.png?raw=1)\n\n \nMoment 0 Maps : LEFT : INT only. 
MIDDLE : SD + INT with sdgain=1.0 RIGHT : SD + INT with sdgain=3.0\n\n\n![d38c8835a149a2f61fcbeb77ee3d4f3eb04d6962](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/d38c8835a149a2f61fcbeb77ee3d4f3eb04d6962.png?raw=1)\n\n\nMoment 1 Maps : LEFT : INT only. MIDDLE : SD + INT with sdgain=1.0 RIGHT : SD + INT with sdgain=3.0\n\n![24348b162f7e4fc3ab4b71d12f80f15f361954c6](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/24348b162f7e4fc3ab4b71d12f80f15f361954c6.png?raw=1)\n\n\nA comparison (shown for one channel) with and without masking is shown below.\n\n![6e766bca3645b467ecae383e948f7e688aeee11d](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/6e766bca3645b467ecae383e948f7e688aeee11d.png?raw=1)\n\n \n\nNotes : \n\n- In the reconstructed cubes, negative bowls have clearly been eliminated by using sdintimaging to combine interferometry + SD data. Residual images are close to noise-like too (not pictured above) suggesting a well-constrained and steadily converging imaging run. \n- The source structure is visibly different from the INT-only case, with high and low resolution structure appearing more well defined. However, the *high-resolution* peak flux in the SDINT image cube is almost a factor of 3 lower than the INT-only. While this may simply be because of deconvolution uncertainty in the ill-constrained INT-only reconstruction, it requires more investigation to evaluate absolute flux correctness. For example, it will be useful to evaluate if the INT-only reconstructed flux changes significantly with careful hand-masking.\n - Compare with a Feathered image : http://www.astroexplorer.org/details/apjaa60c2f1 : The reconstructed structure is consistent.\n- The middle and right panels compare reconstructions with different values of sdgain (1.0 and 3.0). The sdgain=3.0 run has a noticeable emphasis on the SD flux in the reconstructed moment maps, while the high resolution structures have the same are the same between sdgain=1 and 3. This is consistent with expectations from the algorithm, but requires further investigation to evaluate robustness in general.\n- Except for the last panel, no deconvolution masks were used (apart from a *pbmask* at the 0.3 gain level). The deconvolution quality even without masking is consistent with the expectation that when supplied with better data constraints in a joint reconstruction, the native algorithms are capable of converging on their own. In this example (same *niter* and *sdgain*), iterative cleaning with interactive and auto-masks (based mostly on interferometric peaks in the images) resulted in more artifacts compared to a run that allowed multi-scale clean to proceed on its own.\n- The results using sdintimaging on these ALMA data can be compared with performance results when [using feather](https://casaguides.nrao.edu/index.php?title=M100_Band3_Combine_5.4), and when [using tp2vis](https://science.nrao.edu/facilities/alma/alma-develop-old-022217/tp2vis_final_report.pdf) (ALMA study by J. Koda and P. Teuben).\n\n\n\n#### Fitting a new restoring beam to the Feathered PSF\nSince the deconvolution uses a joint SD+INT point spread function, the restoring beam is re-fitted after the feather step within the sdintimaging task. As a convenience feature, the corresponding tool method is also available to the user and may be used to invoke PSF refitting standalone, without needing an MS or any gridding of weights to make the PSF. 
This method will look for the imagename.psf (or imagename.psf.tt0), fit and set the new restoring beam. It is tied to the naming convention of tclean.\n```\nsynu = casac.synthesisutils();\nsynu.fitPsfBeam(imagename='qq', psfcutoff=0.3) # Cubes\nsynu.fitPsfBeam(imagename='qq', nterms=2, psfcutoff=0.3) # Multi-term\n\n```\n\n\n### Tested Use Cases\n\nThe following is a list of use cases that have simulation-based functional verification tests within CASA.\n\n1. Wideband mulit-term imaging (SD+Int)\n\n Wideband data single field imaging by joint-reconstruction from single dish and interferometric data to obtain the high resolution of the interferometer while account for the zero spacing information. Use multi-term multi-frequency synthesis (MTMFS) algorithm to properly account for spectral information of the source.\n\n2. Wideband multi-term imaging: Int only\n\n The same as #1 except for using interferometric data only, which is useful to make a comparison with #1 (i.e. effect of missing flux). This is equivalent to running 'mtmfs' with specmode='mfs' and gridder='standard' in tclean\n \n3. Wideband multi-term imaging: SD only\n\n The same as #1 expect for using single dish data only which is useful to make a comparison with #1 (i.e. to see how much high resolution information is missing). Also, sometimes, the SD PSF has significant sidelobes (Airy disk) and even single dish images can benefit from deconvolution. This is a use case where wideband multi-term imaging is applied to SD data alone to make images at the highest possible resolution as well as to derive spectral index information. \n\n4. Single field cube imaging: SD+Int\n\n Spectral cube single field imaging by joint reconstruction of single dish and interferometric data to obtain single field spectral cube image.\n \n Use multi-scale clean for deconvolution\n \n5. Single field cube imaging: Int only\n\n The same as #4 except for using the interferometric data only, which is useful to make a comparison with #4 (i.e. effect of missing flux). This is equivalent to running 'multiscale' with specmode='cube' and gridder='standard' in tclean.\n\n6. Single field cube imaging: SD only\n\n The same as #4 except for using the single dish data only, which is useful to make a comparison with #4\n \n (i.e. to see how much high resolution information is missing)\n \n Also, it addresses the use case where SD PSF sidelobes are significant and where the SD images could benefit from multiscale (or point source) deconvolution per channel.\n \n7. Wideband multi-term mosaic Imaging: SD+Int\n\n Wideband data mosaic imaging by joint-reconstruction from single dish and interferometric data to obtain the high resolution of the interferometer while account for the zero spacing information.\n \n Use multi-term multi-frequency synthesis (MTMFS) algorithm to properly account for spectral information of the source. Implement the concept of conjbeams (i.e. frequency dependent primary beam correction) for wideband mosaicing.\n \n8. Wideband multi-term mosaic imaging: Int only\n\n The same as #7 except for using interferometric data only, which is useful to make a comparison with #7 (i.e. effect of missing flux). Also, this is an alternate implementation of the concept of conjbeams ( frequency dependent primary beam correction) available via tclean, and which is likely to be more robust to uv-coverage variations (and sumwt) across frequency. \n \n9. 
Wideband multi-term mosaic imaging: SD only\n\n The same as #7 expect for using single dish data only which is useful to make a comparison with #7 (i.e. to see how much high resolution information is missing). This is the same situation as (3), but made on an image coordinate system that matches an interferometer mosaic mtmfs image.\n \n10. Cube mosaic imaging: SD+Int\n\n Spectral cube mosaic imaging by joint reconstruction of single dish and interferometric data. Use multi-scale clean for deconvolution. \n \n11. Cube mosaic imaging: Int only\n\n The same as #10 except for using the intererometric data only, which is useful to make a comparison with #10 (i.e. effect of missing flux). This is the same use case as gridder='mosaic' and deconvolver='multiscale' in tclean for specmode='cube'.\n\n12. Cube mosaic imaging: SD only\n\n The same as #10 except for using the single dish data only, which is useful to make a comparison with #10 (i.e. to see how much high resolution information is missing). This is the same situation as (6), but made on an image coordinate system that matches an interferometer mosaic cube image.\n \n13. Wideband MTMFS SD+INT with channel 2 flagged in INT\n\n The same as #1, but with partially flagged data in the cubes. This is a practical reality with real data where the INT and SD data are likely to have gaps in the data due to radio frequency interferenece or other weight variations. \n \n14. Cube SD+INT with channel 2 flagged\n\n The same as #4, but with partially flagged data in the cubes. This is a practical reality with real data where the INT and SD data are likely to have gaps in the data due to radio frequency interferenece or other weight variations. \n \n15. Wideband MTMFS SD+INT with sdpsf=\"\"\n\n The same as #1, but with an unspecified sdpsf. This triggers the auto-calculation of the SD PSF cube using restoring beam information from the regridded input sdimage.\n \n16. INT-only cube comparison between tclean and sdintimaging\n\n Compare cube imaging results for a functionally equivalent run.\n\n17. INT-only mtmfs comparison between tclean and sdintimaging\n\n Compare mtmfs imaging results for a functionally equivalent run. Note that the sdintimaging task implements wideband primary beam correction in the image domain on the cube residual image, whereas tclean uses the 'conjbeams' parameter to apply an approximation of this correction during the gridding step.\n\nNote : Serial and Parallel Runs for an ALMA test dataset have been shown to be consistent to a 1e+6 dynamic range, consistent with differences measured for our current implementation of cube parallelization. \n\n### References\n\n[Urvashi Rau, Nikhil Naik, and Timothy Braun 2019 AJ 158, 1](https://iopscience.iop.org/article/10.3847/1538-3881/ab1aa7/meta)\n\nhttps://github.com/urvashirau/WidebandSDINT\n\n\n***\n\n\n\n\n\n", "_____no_output_____" ], [ "## Feather & CASAfeather \n\nFeathering is a technique used to combine a Single Dish (SD) image with an interferometric image of the same field.The goal of this process is to reconstruct the source emission on all spatial scales, ranging from the small spatial scales measured by the interferometer to the large-scale structure measured by the single dish. To do this, feather combines the images in Fourier space, weighting them by the spatial frequency response of each image. This technique assumes that the spatial frequencies of the single dish and interferometric data partially overlap. 
The subject of interferometric and single dish data combination has a long history. See the introduction of Koda et al 2011 (and references therein) [\\[1\\]](#Bibliography) for a concise review, and Vogel et al 1984 [\\[2\\]](#Bibliography), Stanimirovic et al 1999 [\\[3\\]](#Bibliography), Stanimirovic 2002 [\\[4\\]](#Bibliography), Helfer et al 2003 [\\[5\\]](#Bibliography), and Weiss et al 2001 [\\[6\\]](#Bibliography), among other referenced papers, for other methods and discussions concerning the combination of single dish and interferometric data.\n\nThe feathering algorithm implemented in CASA is as follows: \n\n1. Regrid the single dish image to match the coordinate system, image shape, and pixel size of the high resolution image. \n2. Transform each image onto uniformly gridded spatial-frequency axes.\n3. Scale the Fourier-transformed low-resolution image by the ratio of the volumes of the two \\'clean beams\\' (high-res/low-res) to convert the single dish intensity (in Jy/beam) to that corresponding to the high resolution intensity (in Jy/beam). The volume of the beam is calculated as the volume under a two dimensional Gaussian with peak 1 and major and minor axes of the beam corresponding to the major and minor axes of the Gaussian. \n4. Add the Fourier-transformed data from the high-resolution image, scaled by $(1-wt)$ where $wt$ is the Fourier transform of the \\'clean beam\\' defined in the low-resolution image, to the scaled low resolution image from step 3.5. Transform back to the image plane.\n\nThe input images for feather must have the following characteristics:\n\n1. Both input images must have a well-defined beam shape for this task to work, which will be a \\'clean beam\\' for interferometric images and a \\'primary-beam\\' for a single-dish image. The beam for each image should be specified in the image header. If a beam is not defined in the header or feather cannot guess the beam based on the telescope parameter in the header, then you will need to add the beam size to the header using **imhead**. \n2. Both input images must have the same flux density normalization scale. If necessary, the SD image should be converted from temperature units to Jy/beam. Since measuring absolute flux levels is difficult with single dishes, the single dish data is likely to be the one with the most uncertain flux calibration. The SD image flux can be scaled using the parameter *sdfactor* to place it on the same scale as the interferometer data. The casafeather task (see below) can be used to investigate the relative flux scales of the images.\n\nFeather attemps to regrid the single dish image to the interferometric image. Given that the single dish image frequently originates from other data reduction packages, CASA may have trouble performing the necessary regridding steps. If that happens, one may try to regrid the single dish image manually to the interferometric image. CASA has a few tasks to perform individual steps, including **imregrid** for coordinate transformations, **imtrans** to swap and reverse coordinate axes, the tool **ia.adddegaxes()** for adding degenerate axes (e.g. a single Stokes axis). See the \\\"[Image Analysis](image_analysis.ipynb#image-analysis)\\\" chapter for additional options. 
If you have trouble changing image projections, you can try the [montage package](http://montage.ipac.caltech.edu/), which also has an [associated python wrapper](http://www.astropy.org/montage-wrapper/).\n\nIf you are feathering large images together, set the numbers of pixels along the X and Y axes to composite (non-prime) numbers in order to improve the algorithm speed. In general, FFTs work much faster on even and composite numbers. Then use the subimage task or tool to trim the number of pixels to something desirable.\n\n### Inputs for task feather\nThe inputs for **feather** are: \n\n```\n#feather :: Combine two images using their Fourier transforms\nimagename = '' #Name of output feathered image\nhighres = '' #Name of high resolution (interferometer) image\nlowres = '' #Name of low resolution (single dish) image\nsdfactor = 1.0 #Scale factor to apply to Single Dish image\neffdishdiam = -1.0 #New effective SingleDish diameter to use in m\nlowpassfiltersd = False #Filter out the high spatial frequencies of the SD image\n```\n\nThe SD data cube is specified by the *lowres* parameter and the interferometric data cube by the *highres* parameter. The combined, feathered output cube name is given by the *imagename* parameter. The parameter *sdfactor* can be used to scale the flux calibration of the SD cube. The parameter *effdishdiam* can be used to change the weighting of the single dish image.\n\nThe weighting functions for the data are usually the Fourier transform of the Single Dish beam FFT(PB~SD~) for the Single dish data, and the inverse, 1-FFT(PB~SD~), for the interferometric data. It is possible, however, to change the weighting functions by pretending that the SD is smaller in size via the *effdishdiam* parameter. This tapers the high spatial frequencies of the SD data and adds more weight to the interferometric data. The *lowpassfiltersd* can take out non-physical artifacts at very high spatial frequencies that are often present in SD data.\n\nNote that the only inputs are for images; **feather** will attempt to regrid the images to a common shape, i.e. pixel size, pixel numbers, and spectral channels. If you are having issues with the regridding inside feather, you may consider regridding using the **imregrid** and **specsmooth** tasks.\n\nThe **feather** task does not perform any deconvolution but combines the single dish image with a presumably deconvolved interferometric image. The short spacings of the interferometric image that are extrapolated by the deconvolution process will be those that are down-weighted the most when combined with the single dish data. The single dish image must have a well-defined beam shape and the correct flux units for a model image (Jy/beam instead of Jy/pixel). Use the tasks **imhead** and **immath** first to convert if needed.\n\nStarting with a cleaned synthesis image and a low resolution image from a single dish telescope, the following example shows how they can be feathered: \n\n```\nfeather(imagename ='feather.im', #Create an image called feather.im\n highres ='synth.im', #The synthesis image is called synth.im\n lowres ='single_dish.im') #The SD image is called single_dish.im\n```\n\n### Visual Interface for feather (casafeather)\n\nCASA also provides a visual interface to the **feather** task. The interface is run from a command line *outside* CASA by typing casafeather in a shell. An example of the interface is shown below. 
To start, one needs to specify a high and a low resolution image, typically an interferometric and a single dish map. Note that the single dish map needs to be in units of Jy/beam. The output image name can be specified. The non-deconvolved (dirty) interferometric image can also be specified to use as diagnostic of the relative flux scaling of the single dish and interferometer images. See below for more details. At the top of the display, the parameters *effdshdiameter* and *sdfactor* can be provided in the \"Effective Dish Diameter\" and \"Low Resolution Scale Factor\" input boxes. One you have specified the images and parameters, press the \"Feather\" button in the center of the GUI window to start the feathering process. The feathering process here includes regridding the low resolution image to the high resolution image.\n\n![c0ff299b0bd9c0afa9b65a93c6b02212362645d3](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/c0ff299b0bd9c0afa9b65a93c6b02212362645d3.png?raw=1)\n\n>Figure 1: The panel shows the \"Original Data Slice\", which are cuts through the u and v directions of the Fourier-transformed input images. Green is the single dish data (low resolution) and purple the interferometric data (high resolution). To bring them on the same flux scale, the low data were convolved to the high resolution beam and vice versa (selectable in color preferences). In addition, a single dish scaling of 1.2 was applied to adjust calibration differences. The weight functions are shown in yellow (for the low resolution data) and orange (for the high resolution data). The weighting functions were also applied to the green and purple slices. Image slices of the combined, feathered output image are shown in blue. The displays also show the location of the effective dish diameter by the vertical line. This value is kept at the original single dish diameter that is taken from the respective image header.\n \n\nThe initial casafeather display shows two rows of plots. The panel shows the \"Original Data Slice\", which are either cuts through the u and v directions of the Fourier-transformed input images or a radial average. A vertical line shows the location of the effective dish diameter(s). The blue lines are the combined, feathered slices.\n\n![c57e182275861b522d1e6836eab16a853d7aae7c](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/c57e182275861b522d1e6836eab16a853d7aae7c.png?raw=1)\n\n>Figure 2: The casafeather \"customize\" window.\n\n\nThe \\'Customize\\' button (gear icon on the top menu page) allows one to set the display parameters. Options are to show the slice plot, the scatter plot, or the legend. One can also select between logarithmic and linear axes; a good option is usually to make both axes logarithmic. You can also select whether the x-axis for the slices are in the u, or v, or both directions, or, alternatively a radial average in the uv-plane. For data cubes, one can also select a particular velocity plane, or to average the data across all velocity channels. The scatter plot can display any two data sets on the two axes, selected from the \\'Color Preferences\\' menu. The data can be the unmodified, original data, or data that have been convolved with the high or low resolution beams. 
One can also select to display data that were weighted and scaled by the functions discussed above.\n\n![df8181251aae5df396fe516f5befe53d616680da](https://github.com/casangi/casadocs/blob/master/docs/notebooks/media/df8181251aae5df396fe516f5befe53d616680da.png?raw=1)\n\n>Figure 3: The scatter plot in casafeather. The low data, convolved with high beam, weighted and scaled is still somewhat below the equality line (plotted against high data, convolved with low beam, weighted). In this case one can try to adjust the \\\"low resolution scale factor\\\" to bring the values closer to the line of equality, ie. to adjust the calibration scales. \n\nPlotting the data as a scatter plot is a useful diagnostic tool for checking for differences in flux scaling between the high and low resolution data sets.The dirty interferometer image contains the actual flux measurements made by the telescope. Therefore, if the single dish scaling is correct, the flux in the dirty image convolved with the low resolution beam and with the appropriate weighting applied should be the same as the flux of the low-resolution data convolved with the high resolution beam once weighted and scaled. If not, the *sdfactor* parameter can be adjusted until they are the same. One may also use the cleaned high resolution image instead of the dirty image, if the latter is not available. However, note that the cleaned high resolution image already contains extrapolations to larger spatial scales that may bias the comparison.\n\n\n***\n\n### Bibliography\n\n1. Koda et al 2011 (http://adsabs.harvard.edu/abs/2011ApJS..193...19K)\n2. Vogel et al 1984 (http://adsabs.harvard.edu/abs/1984ApJ...283..655V)\n3. Stanimirovic et al 1999 (http://adsabs.harvard.edu/abs/1999MNRAS.302..417S)\n4. Stanimirovic et al 2002 (http://adsabs.harvard.edu/abs/2002ASPC..278..375S)\n5. Helfer et al 2003 (http://adsabs.harvard.edu/abs/2003ApJS..145..259H)\n6. Weiss et al 2001 (http://adsabs.harvard.edu/abs/2001A%26A...365..571W)\n", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
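To make the Fourier-plane combination in the algorithm steps above concrete, the following is a rough NumPy-only sketch on toy arrays. The Gaussian beam, the beam-area ratio and the test images are invented stand-ins for quantities that feather derives from the image headers, so this illustrates the weighting logic only, not the task itself.

```python
import numpy as np

def feather_combine(int_img, sd_img, sd_beam, beam_area_ratio, sdfactor=1.0):
    """Toy feather combination for two images already on the same grid.

    sd_beam         : normalised single-dish beam image, peak 1, centred in the array
    beam_area_ratio : high-resolution clean-beam area / single-dish beam area (step 3)
    sdfactor        : optional flux-scale factor applied to the single-dish image
    """
    # step 2: transform both images onto the spatial-frequency plane
    ft_int = np.fft.fft2(int_img)
    ft_sd = np.fft.fft2(sd_img)

    # weight = Fourier transform of the single-dish beam, normalised to 1 at zero spacing
    wt = np.abs(np.fft.fft2(np.fft.ifftshift(sd_beam)))
    wt /= wt.max()

    # step 3: rescale the single-dish intensity to Jy per high-resolution beam
    ft_sd *= sdfactor * beam_area_ratio

    # step 4: the SD data already carry the low-spatial-frequency information,
    # the interferometer contribution is added with weight (1 - wt)
    combined = ft_sd + (1.0 - wt) * ft_int

    # step 5: transform back to the image plane
    return np.real(np.fft.ifft2(combined))

# toy inputs: 256x256 field, Gaussian "single-dish beam", noise-only interferometer map
n = 256
yy, xx = np.mgrid[:n, :n] - n // 2
sd_beam = np.exp(-(xx**2 + yy**2) / (2 * 15.0**2))
sd_img = np.exp(-(xx**2 + yy**2) / (2 * 40.0**2))
int_img = np.random.default_rng(0).normal(0.0, 0.1, (n, n))

feathered = feather_combine(int_img, sd_img, sd_beam, beam_area_ratio=0.01)
print(feathered.shape, feathered.mean())
```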
[ "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ] ]
e7a8d67610a55baca2cbffae6ffafa7c5a58d3da
1,639
ipynb
Jupyter Notebook
object-recog-lemon-custom-model/object-recog-lemon-custom-model.ipynb
roboteur/computer-vision-part-01
d7b09109b4c9d7acde0c94e4484c6771c74fbf25
[ "MIT" ]
3
2020-03-23T04:01:14.000Z
2020-08-29T16:27:31.000Z
object-recog-lemon-custom-model/object-recog-lemon-custom-model.ipynb
roboteur/computer-vision-part-01
d7b09109b4c9d7acde0c94e4484c6771c74fbf25
[ "MIT" ]
null
null
null
object-recog-lemon-custom-model/object-recog-lemon-custom-model.ipynb
roboteur/computer-vision-part-01
d7b09109b4c9d7acde0c94e4484c6771c74fbf25
[ "MIT" ]
null
null
null
19.987805
75
0.518609
[ [ [ "import cv2", "_____no_output_____" ], [ "image = cv2.imread(\"test-image-lemon.jpg\")\ngray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)", "_____no_output_____" ], [ "haarcascade_object1 = cv2.CascadeClassifier(\"cascade-lemon.xml\")\nobject1 = haarcascade_object1.detectMultiScale(gray, 1.3, 5)\n", "_____no_output_____" ], [ "for (x,y,w,h) in object1:\n cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2) ", "_____no_output_____" ], [ "print ((\"Number Of Object Detected =\"), len(object1))\ncv2.imshow(\"Detected\", image)\ncv2.waitKey(0)", "Number Of Object Detected = 2\n" ] ] ]
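The detection cell above passes the cascade parameters positionally; the variant below spells out the named `scaleFactor` and `minNeighbors` arguments and releases the display window afterwards. It assumes the same `cascade-lemon.xml` and `test-image-lemon.jpg` files are on disk.

```python
import cv2

image = cv2.imread("test-image-lemon.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier("cascade-lemon.xml")
# scaleFactor: image-pyramid step between scales; minNeighbors: overlapping hits needed to keep a detection
detections = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

for (x, y, w, h) in detections:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)

print("Number of objects detected =", len(detections))
cv2.imshow("Detected", image)
cv2.waitKey(0)
cv2.destroyAllWindows()   # close the HighGUI window once a key is pressed
```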
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
e7a8eb50da4d6f437e7de1a72bfbf4d7de1d2a9e
17,241
ipynb
Jupyter Notebook
Manipulating_Data_with_NumPy_Code_Along.ipynb
vidSanas/greyatom-python-for-data-science
7a4f747213df59a60e96b0f2081bb729bb039301
[ "MIT" ]
null
null
null
Manipulating_Data_with_NumPy_Code_Along.ipynb
vidSanas/greyatom-python-for-data-science
7a4f747213df59a60e96b0f2081bb729bb039301
[ "MIT" ]
null
null
null
Manipulating_Data_with_NumPy_Code_Along.ipynb
vidSanas/greyatom-python-for-data-science
7a4f747213df59a60e96b0f2081bb729bb039301
[ "MIT" ]
null
null
null
35.257669
1,299
0.492315
[ [ [ "<a href=\"https://colab.research.google.com/github/vidSanas/greyatom-python-for-data-science/blob/master/Manipulating_Data_with_NumPy_Code_Along.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# IPL Dataset Analysis\n\n## Problem Statement\nWe want to know as to what happens during an IPL match which raises several questions in our mind with our limited knowledge about the game called cricket on which it is based. This analysis is done to know as which factors led one of the team to win and how does it matter.", "_____no_output_____" ], [ "## About the Dataset :\nThe Indian Premier League (IPL) is a professional T20 cricket league in India contested during April-May of every year by teams representing Indian cities. It is the most-attended cricket league in the world and ranks sixth among all the sports leagues. It has teams with players from around the world and is very competitive and entertaining with a lot of close matches between teams.\n\nThe IPL and other cricket related datasets are available at [cricsheet.org](https://cricsheet.org/%c2%a0(data). Feel free to visit the website and explore the data by yourself as exploring new sources of data is one of the interesting activities a data scientist gets to do.\n\n## About the dataset:\nSnapshot of the data you will be working on:<br>\n<br>\nThe dataset 1452 data points and 23 features<br>\n\n|Features|Description|\n|-----|-----|\n|match_code|Code pertaining to individual match|\n|date|Date of the match played|\n|city|Location where the match was played|\n|team1|team1|\n|team2|team2|\n|toss_winner|Who won the toss out of two teams|\n|toss_decision|toss decision taken by toss winner|\n|winner|Winner of that match between two teams|\n|win_type|How did the team won(by wickets or runs etc.)|\n|win_margin|difference with which the team won| \n|inning|inning type(1st or 2nd)|\n|delivery|ball delivery|\n|batting_team|current team on batting|\n|batsman|current batsman on strike|\n|non_striker|batsman on non-strike|\n|bowler|Current bowler|\n|runs|runs scored|\n|extras|extra run scored|\n|total|total run scored on that delivery including runs and extras|\n|extras_type|extra run scored by wides or no ball or legby|\n|player_out|player that got out|\n|wicket_kind|How did the player got out|\n|wicket_fielders|Fielder who caught out the player by catch|\n", "_____no_output_____" ], [ "### Analysing data using numpy module", "_____no_output_____" ], [ "### Read the data using numpy module.", "_____no_output_____" ] ], [ [ "import numpy as np\n# Not every data format will be in csv there are other file formats also.\n# This exercise will help you deal with other file formats and how toa read it.\npath = './ipl_matches_small.csv'\ndata_ipl = np.genfromtxt(path, delimiter=',', skip_header=1, dtype=str)\n\n", "_____no_output_____" ], [ "print(data_ipl)", "[['392203' '2009-05-01' 'East London' ... '' '' '']\n ['392203' '2009-05-01' 'East London' ... '' '' '']\n ['392203' '2009-05-01' 'East London' ... '' '' '']\n ...\n ['335987' '2008-04-21' 'Jaipur' ... '' '' '']\n ['335987' '2008-04-21' 'Jaipur' ... '' '' '']\n ['335987' '2008-04-21' 'Jaipur' ... '' '' '']]\n" ] ], [ [ "### Calculate the unique no. 
of matches in the provided dataset ?", "_____no_output_____" ] ], [ [ "# How many matches were held in total we need to know so that we can analyze further statistics keeping that in mind.im\nimport numpy as np\nunique_match_code=np.unique(data_ipl[:,0])\nprint(unique_match_code)", "['335987' '392197' '392203' '392212' '501226' '729297']\n" ] ], [ [ "### Find the set of all unique teams that played in the matches in the data set.", "_____no_output_____" ] ], [ [ "# this exercise deals with you getting to know that which are all those six teams that played in the tournament.\nimport numpy as np\nunique_match_team3=np.unique(data_ipl[:,3])\nprint(unique_match_team3)\nunique_match_team4=np.unique(data_ipl[:,4])\nprint(unique_match_team4)\n\nunion=np.union1d(unique_match_team3,unique_match_team4)\nprint(union)\nunique=np.unique(union)\nprint(unique)", "['Chennai Super Kings' 'Deccan Chargers' 'Kolkata Knight Riders'\n 'Rajasthan Royals']\n['Chennai Super Kings' 'Kings XI Punjab' 'Mumbai Indians' 'Pune Warriors']\n['Chennai Super Kings' 'Deccan Chargers' 'Kings XI Punjab'\n 'Kolkata Knight Riders' 'Mumbai Indians' 'Pune Warriors'\n 'Rajasthan Royals']\n['Chennai Super Kings' 'Deccan Chargers' 'Kings XI Punjab'\n 'Kolkata Knight Riders' 'Mumbai Indians' 'Pune Warriors'\n 'Rajasthan Royals']\n" ], [ "", "_____no_output_____" ] ], [ [ "### Find sum of all extras in all deliveries in all matches in the dataset", "_____no_output_____" ] ], [ [ "# An exercise to make you familiar with indexing and slicing up within data.\nimport numpy as np\nextras=data_ipl[:,17]\ndata=extras.astype(np.int)\nprint(sum(data))\n", "88\n" ] ], [ [ "### Get the array of all delivery numbers when a given player got out. Also mention the wicket type.", "_____no_output_____" ] ], [ [ "import numpy as np\ndeliveries=[]\nwicket_type=[]\nfor i in data_ipl:\n if(i[20]!=\"\"):\n a=i[11]\n b=i[21]\n deliveries.append(a)\n wicket_type.append(b)\nprint(deliveries)\nprint(wicket_type)\n \n \n \n \n \n \n \n \n", "['3.2', '5.5', '7.6', '11.4', '15.6', '18.6', '0.4', '2.2', '14.5', '17.2', '18.6', '19.3', '12.2', '13.5', '14.4', '15.1', '16.6', '18.5', '1.7', '2.7', '10.2', '12.1', '12.3', '13.2', '14.5', '15.1', '15.2', '1.5', '5.3', '9.4', '12.6', '17.1', '19.1', '1.4', '1.5', '8.5', '14.1', '15.5', '15.6', '17.1', '17.3', '5.3', '7.2', '8.2', '10.1', '11.1', '14.5', '1.3', '5.2', '6.4', '6.5', '10.5', '12.6', '13.3', '14.2', '18.3', '19.5', '9.2', '9.6', '16.4', '17.2', '17.5', '19.6', '2.4', '3.6', '4.6', '5.3', '12.6', '18.3', '18.5', '19.1', '19.2', '4.5', '6.3', '7.4', '8.6', '16.5', '17.2', '17.4', '18.6', '1.1', '2.3', '4.5', '11.2']\n['caught', 'caught', 'caught', 'bowled', 'caught', 'caught', 'bowled', 'bowled', 'caught', 'bowled', 'run out', 'caught', 'lbw', 'caught', 'caught', 'run out', 'caught', 'caught', 'caught', 'caught', 'bowled', 'caught', 'caught', 'caught', 'caught', 'bowled', 'bowled', 'caught', 'caught', 'bowled', 'bowled', 'caught', 'run out', 'caught', 'bowled', 'caught', 'caught', 'bowled', 'bowled', 'caught', 'stumped', 'caught', 'caught', 'caught', 'run out', 'caught', 'caught', 'run out', 'caught', 'caught', 'caught and bowled', 'caught', 'caught', 'caught', 'bowled', 'caught', 'run out', 'caught', 'bowled', 'stumped', 'caught', 'caught', 'caught', 'bowled', 'bowled', 'bowled', 'bowled', 'caught', 'caught', 'run out', 'run out', 'caught', 'bowled', 'caught and bowled', 'stumped', 'lbw', 'lbw', 'bowled', 'caught', 'run out', 'caught', 'caught and bowled', 'caught', 'lbw']\n" ], [ "", "_____no_output_____" 
] ], [ [ "### How many matches the team `Mumbai Indians` has won the toss?", "_____no_output_____" ] ], [ [ "data_arr=[]\nfor i in data_ipl:\n if(i[5]==\"Mumbai Indians\"):\n data_arr.append(i[0])\nunique_match_id=np.unique(data_arr)\nprint(unique_match_id)\nprint(len(unique_match_id))\n \n\n\n", "['392197' '392203']\n2\n" ] ], [ [ "### Create a filter that filters only those records where the batsman scored 6 runs. Also who has scored the maximum no. of sixes overall ?", "_____no_output_____" ] ], [ [ "# An exercise to know who is the most aggresive player or maybe the scoring player \nimport numpy as np\ncounter=0\nrun_dict={}\narr=[]\nfor i in data_ipl:\n #print(i[13])\n #current_run = i[16]\n #prev_run = run_dict[batsman_nm]\n #batsman_nm = i[13]\n #if prev_run == None:\n #run_dict[batsman_nm] = current_run\n #else:\n #run_dict[batsman_nm] = run_dict[batsman_nm]current_run\n if i[13] in run_dict:\n run_dict[i[13]]=run_dict[i[13]]+int(i[16])\n else:\n run_dict[i[13]]=int(i[16])\nprint(run_dict)\n", "{'ST Jayasuriya': 63, 'SR Tendulkar': 104, 'Harbhajan Singh': 24, 'AM Nayar': 19, 'JP Duminy': 122, 'GR Napier': 15, 'AM Rahane': 25, 'Z Khan': 2, 'CH Gayle': 19, 'SC Ganguly': 34, 'BJ Hodge': 97, 'MN van Wyk': 32, 'LR Shukla': 12, 'BB McCullum': 12, 'WP Saha': 8, 'DJ Bravo': 16, 'S Dhawan': 12, 'SS Tiwary': 7, 'Yashpal Singh': 8, 'AN Ghosh': 0, 'I Sharma': 6, 'BAW Mendis': 0, 'AB Dinda': 0, 'AC Gilchrist': 25, 'HH Gibbs': 0, 'TL Suman': 20, 'RG Sharma': 38, 'DR Smith': 66, 'Y Venugopal Rao': 28, 'DB Ravi Teja': 4, 'RJ Harris': 5, 'PR Shah': 29, 'RR Raje': 11, 'DS Kulkarni': 35, 'SK Raina': 6, 'F du Plessis': 7, 'MS Dhoni': 31, 'RA Jadeja': 72, 'M Manhas': 30, 'R Ashwin': 9, 'SV Samson': 16, 'SR Watson': 83, 'SPD Smith': 19, 'STR Binny': 8, 'R Bhatia': 23, 'JP Faulkner': 4, 'TG Southee': 4, 'PV Tambe': 2, 'M Vijay': 31, 'MEK Hussey': 61, 'JA Morkel': 0, 'S Badrinath': 11, 'S Anirudha': 7, 'JD Ryder': 15, 'MD Mishra': 9, 'MK Pandey': 12, 'RV Uthappa': 0, 'Yuvraj Singh': 91, 'NL McCullum': 15, 'R Sharma': 1, 'JE Taylor': 2, 'M Kartik': 1, 'AC Thomas': 1, 'K Goel': 26, 'JR Hopes': 16, 'KC Sangakkara': 20, 'DPMD Jayawardene': 2, 'IK Pathan': 12, 'S Sohal': 4, 'B Lee': 0, 'PP Chawla': 24, 'WA Mota': 1, 'M Kaif': 5, 'YK Pathan': 7, 'Kamran Akmal': 15, 'DS Lehmann': 17}\n" ], [ "", "_____no_output_____" ] ] ]
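The final cell above totals all runs per batsman but never applies the six-run filter the exercise asks for. A short sketch of that missing step, reusing the `data_ipl` array loaded earlier and the column layout used throughout the notebook (column 13 = batsman, column 16 = runs scored):

```python
import numpy as np
from collections import Counter

# data_ipl comes from the np.genfromtxt cell at the top of the notebook
runs = data_ipl[:, 16].astype(int)

# keep only the deliveries on which the batsman scored exactly six runs
six_records = data_ipl[runs == 6]

# count sixes per batsman and report the biggest hitter
six_counts = Counter(six_records[:, 13])
batsman, n_sixes = six_counts.most_common(1)[0]
print(f"{batsman} hit the most sixes: {n_sixes}")
```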
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a8fd4750c9c32588ca8f9d21e7482ce634fe8f
11,920
ipynb
Jupyter Notebook
Pytorch/Task4.ipynb
asd55667/DateWhale
d45fe48d18943e960b97b1b1df8e074d5f6df414
[ "Apache-2.0" ]
null
null
null
Pytorch/Task4.ipynb
asd55667/DateWhale
d45fe48d18943e960b97b1b1df8e074d5f6df414
[ "Apache-2.0" ]
null
null
null
Pytorch/Task4.ipynb
asd55667/DateWhale
d45fe48d18943e960b97b1b1df8e074d5f6df414
[ "Apache-2.0" ]
null
null
null
33.389356
259
0.47047
[ [ [ "import torch as t\nimport os\nfrom torch.utils import data\nfrom PIL import Image\nimport numpy as np\nimport torch.nn as nn\nimport time\nimport random\n\nrandom.seed(20190412)\nt.manual_seed(20190412)\nt.cuda.manual_seed(20190412)\n\n\npath = '/media/wcw/SeaGate316G: Data/kaggle/data/Dogs_vs_Cats/data/'# + 'train'", "_____no_output_____" ] ], [ [ "# 读取数据", "_____no_output_____" ] ], [ [ "import torchvision.transforms as T\nimg_shape = (3, 224, 224)\n\ndef read_raw_img(path, resize, L=False):\n img = Image.open(path)\n if resize:\n img = img.resize(resize)\n if L:\n img = img.convert('L')\n return np.asarray(img)\n\nclass DogCat(data.Dataset):\n def __init__(self,path, img_shape):\n# self.batch_size = batch_size\n self.img_shape = img_shape\n imgs = os.listdir(path)\n random.shuffle(imgs)\n self.imgs = [os.path.join(path, img) for img in imgs]\n \n# normalize = T.Normalize(mean = [0.485, 0.456, 0.406],\n# std = [0.229, 0.224, 0.225])\n# self.transforms = T.Compose([T.Resize(224), \n# T.CenterCrop(224),\n# T.ToTensor(),\n# normalize])\n \n def __getitem__(self, index):\n# start = index * self.batch_size\n# end = min(start + self.batch_size, len(self.imgs))\n# size = end - start\n# assert size > 0\n \n# img_paths = self.imgs[start:end]\n# a = t.zeros((size,) + self.img_shape, requires_grad=True)\n# b = t.zeros((size, 1))\n \n# for i in range(size):\n img = read_raw_img(self.imgs[index], self.img_shape[1:], L=False).transpose((2,1,0))\n# img = Image.open(img_paths[i])\n x = t.from_numpy(img)\n# a[i] = self.transforms(img)\n y = 1 if 'dog' in self.imgs[index].split('/')[-1].split('.')[0] else 0\n return x, y\n \n def __len__(self):\n return len(self.imgs)\n \ntrain = DogCat(path+'train', img_shape) ", "_____no_output_____" ] ], [ [ "# 构建模型", "_____no_output_____" ] ], [ [ "import math\nclass Vgg16(nn.Module):\n\n def __init__(self, features, num_classes=1, init_weights=True):\n super(Vgg16, self).__init__()\n self.features = features\n self.classifier = nn.Sequential(\n nn.Linear(512 * 7 * 7, 4096),\n nn.ReLU(True),\n nn.Dropout(),\n nn.Linear(4096, 4096),\n nn.ReLU(True),\n nn.Dropout(),\n nn.Linear(4096, num_classes),\n )\n if init_weights:\n self._initialize_weights()\n\n def forward(self, x):\n x = self.features(x)\n x = x.view(x.size(0), -1)\n x = self.classifier(x)\n x = t.sigmoid(x)\n return x\n\n def _initialize_weights(self):\n for m in self.modules():\n if isinstance(m, nn.Conv2d):\n n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n m.weight.data.normal_(0, math.sqrt(2. 
/ n))\n if m.bias is not None:\n m.bias.data.zero_()\n elif isinstance(m, nn.BatchNorm2d):\n m.weight.data.fill_(1)\n m.bias.data.zero_()\n elif isinstance(m, nn.Linear):\n m.weight.data.normal_(0, 0.01)\n m.bias.data.zero_()\n\n\ndef make_layers(cfg, mode, batch_norm=False):\n layers = []\n if mode == 'RGB':\n in_channels = 3\n elif mode == 'L':\n in_channels = 1\n else:\n print('only RGB or L mode')\n \n for v in cfg:\n if v == 'M':\n layers += [nn.MaxPool2d(kernel_size=2, stride=2)]\n else:\n conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)\n if batch_norm:\n layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]\n else:\n layers += [conv2d, nn.ReLU(inplace=True)]\n in_channels = v\n return nn.Sequential(*layers)\n\ncfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M']", "_____no_output_____" ] ], [ [ "# 损失函数与优化器", "_____no_output_____" ] ], [ [ "vgg = Vgg16(make_layers(cfg, 'RGB')).cuda()\nprint(vgg)\ncriterion = nn.BCELoss()\noptimizer = t.optim.Adam(vgg.parameters(),lr=0.001)", "Vgg16(\n (features): Sequential(\n (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): ReLU(inplace)\n (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (3): ReLU(inplace)\n (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (6): ReLU(inplace)\n (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (8): ReLU(inplace)\n (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (11): ReLU(inplace)\n (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (13): ReLU(inplace)\n (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (15): ReLU(inplace)\n (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (18): ReLU(inplace)\n (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (20): ReLU(inplace)\n (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (22): ReLU(inplace)\n (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (25): ReLU(inplace)\n (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (27): ReLU(inplace)\n (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (29): ReLU(inplace)\n (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n )\n (classifier): Sequential(\n (0): Linear(in_features=25088, out_features=4096, bias=True)\n (1): ReLU(inplace)\n (2): Dropout(p=0.5)\n (3): Linear(in_features=4096, out_features=4096, bias=True)\n (4): ReLU(inplace)\n (5): Dropout(p=0.5)\n (6): Linear(in_features=4096, out_features=1, bias=True)\n )\n)\n" ] ], [ [ "# 模型训练", "_____no_output_____" ] ], [ [ "use_cuda = t.cuda.is_available()\ndevice = t.device(\"cuda:0\" if use_cuda else \"cpu\")\n# cudnn.benchmark = True\n\n# Parameters\nparams = {'batch_size': 64,\n 'shuffle': True,\n 'num_workers': 6}\nmax_epochs = 1\n\n# train = DogCat(path+'train', img_shape) \n\ntraining_generator = data.DataLoader(train, **params)\n\nfor epoch in range(max_epochs):\n for x, y_ in training_generator:\n x, y_ = x.float().to(device), 
y_.float().to(device)\n \n y = vgg(x)\n \n loss = criterion(y, y_)\n \n loss.backward()\n \n optimizer.step()\n ", "/home/wcw/anaconda3/envs/tf/lib/python3.6/site-packages/torch/nn/functional.py:2016: UserWarning: Using a target size (torch.Size([64])) that is different to the input size (torch.Size([64, 1])) is deprecated. Please ensure they have the same size.\n \"Please ensure they have the same size.\".format(target.size(), input.size()))\n/home/wcw/anaconda3/envs/tf/lib/python3.6/site-packages/torch/nn/functional.py:2016: UserWarning: Using a target size (torch.Size([28])) that is different to the input size (torch.Size([28, 1])) is deprecated. Please ensure they have the same size.\n \"Please ensure they have the same size.\".format(target.size(), input.size()))\n" ] ], [ [ "# 模型评估", "_____no_output_____" ] ], [ [ "accs = []\ntest = DogCat(path+'test', img_shape=img_shape)\ntest_loader = data.DataLoader(test, **params)\n\nwith t.set_grad_enabled(False):\n for x, y_ in test_loader: \n x, y_ = x.float().to(device), y_.float().to(device)\n \n y = vgg(x)\n \n acc = y.eq(y_).sum().item()/y.shape[0]\n# acc = t.max(y, 1)[1].eq(t.max(y_, 1)[1]).sum().item()/y.shape[0]\n\n accs.append(acc)\n \nnp.mean(accs) ", "_____no_output_____" ] ] ]
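Three issues in the cells above are worth flagging: the training loop never calls `optimizer.zero_grad()`, so gradients accumulate across batches; the targets are 1-D while the model output is `(batch, 1)`, which is what triggers the size warnings in the output; and the evaluation compares raw sigmoid probabilities to the labels with `eq`, which essentially never matches. A corrected sketch using the same `vgg`, `criterion`, `optimizer`, `training_generator` and `test_loader` objects defined above (the 0.5 decision threshold is an assumption, not taken from the original):

```python
# corrected training and evaluation steps for the objects defined above
for x, y_ in training_generator:
    x = x.float().to(device)
    y_ = y_.float().to(device).unsqueeze(1)   # (batch, 1) to match the model output

    optimizer.zero_grad()                     # clear gradients from the previous batch
    loss = criterion(vgg(x), y_)
    loss.backward()
    optimizer.step()

accs = []
with t.no_grad():                             # no autograd bookkeeping while evaluating
    for x, y_ in test_loader:
        x = x.float().to(device)
        y_ = y_.float().to(device).unsqueeze(1)
        preds = (vgg(x) > 0.5).float()        # threshold the sigmoid output
        accs.append(preds.eq(y_).float().mean().item())

print(np.mean(accs))
```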
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7a90eb3290b70965f48542b987f8642e386ea91
24,441
ipynb
Jupyter Notebook
examples/model_assisted_labeling/ner_mal.ipynb
Cyniikal/labelbox-python
526fb8235c245a3c6161af57c354a47d68385bab
[ "Apache-2.0" ]
null
null
null
examples/model_assisted_labeling/ner_mal.ipynb
Cyniikal/labelbox-python
526fb8235c245a3c6161af57c354a47d68385bab
[ "Apache-2.0" ]
null
null
null
examples/model_assisted_labeling/ner_mal.ipynb
Cyniikal/labelbox-python
526fb8235c245a3c6161af57c354a47d68385bab
[ "Apache-2.0" ]
null
null
null
35.942647
347
0.532548
[ [ [ "<td>\n <a target=\"_blank\" href=\"https://labelbox.com\" ><img src=\"https://labelbox.com/blog/content/images/2021/02/logo-v4.svg\" width=256/></a>\n</td>", "_____no_output_____" ], [ "<td>\n<a href=\"https://colab.research.google.com/github/Labelbox/labelbox-python/blob/develop/examples/model_assisted_labeling/image_mal.ipynb\" target=\"_blank\"><img\nsrc=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a>\n</td>\n\n<td>\n<a href=\"https://github.com/Labelbox/labelbox-python/tree/develop/examples/model_assisted_labeling/image_mal.ipynb\" target=\"_blank\"><img\nsrc=\"https://img.shields.io/badge/GitHub-100000?logo=github&logoColor=white\" alt=\"GitHub\"></a>\n</td>", "_____no_output_____" ], [ "# Text Annotation Import\n* This notebook will provide examples of each supported annotation type for text assets. It will cover the following:\n * Model-assisted labeling - used to provide pre-annotated data for your labelers. This will enable a reduction in the total amount of time to properly label your assets. Model-assisted labeling does not submit the labels automatically, and will need to be reviewed by a labeler for submission.\n * Label Import - used to provide ground truth labels. These can in turn be used and compared against prediction labels, or used as benchmarks to see how your labelers are doing.", "_____no_output_____" ], [ "* For information on what types of annotations are supported per data type, refer to this documentation:\n * https://docs.labelbox.com/docs/model-assisted-labeling#option-1-import-via-python-annotation-types-recommended", "_____no_output_____" ], [ "* Notes:\n * Wait until the import job is complete before opening the Editor to make sure all annotations are imported properly.", "_____no_output_____" ], [ "# Installs", "_____no_output_____" ] ], [ [ "!pip install -q 'labelbox[data]'", "_____no_output_____" ] ], [ [ "# Imports", "_____no_output_____" ] ], [ [ "from labelbox.schema.ontology import OntologyBuilder, Tool, Classification, Option\nfrom labelbox import Client, LabelingFrontend, LabelImport, MALPredictionImport\nfrom labelbox.data.annotation_types import (\n Label, TextData, Checklist, Radio, ObjectAnnotation, TextEntity,\n ClassificationAnnotation, ClassificationAnswer\n)\nfrom labelbox.data.serialization import NDJsonConverter\nimport uuid\nimport json\nimport numpy as np", "_____no_output_____" ] ], [ [ "# API Key and Client\nProvide a valid api key below in order to properly connect to the Labelbox Client.", "_____no_output_____" ] ], [ [ "# Add your api key\nAPI_KEY = None\nclient = Client(api_key=API_KEY)", "INFO:labelbox.client:Initializing Labelbox client at 'https://api.labelbox.com/graphql'\n" ] ], [ [ "---- \n### Steps\n1. Make sure project is setup\n2. Collect annotations\n3. 
Upload", "_____no_output_____" ], [ "### Project setup", "_____no_output_____" ], [ "We will be creating two projects, one for model-assisted labeling, and one for label imports", "_____no_output_____" ] ], [ [ "ontology_builder = OntologyBuilder(\n tools=[\n Tool(tool=Tool.Type.NER, name=\"named_entity\")\n ],\n classifications=[\n Classification(class_type=Classification.Type.CHECKLIST, instructions=\"checklist\", options=[\n Option(value=\"first_checklist_answer\"),\n Option(value=\"second_checklist_answer\") \n ]),\n Classification(class_type=Classification.Type.RADIO, instructions=\"radio\", options=[\n Option(value=\"first_radio_answer\"),\n Option(value=\"second_radio_answer\")\n ])])", "_____no_output_____" ], [ "mal_project = client.create_project(name=\"text_mal_project\")\nli_project = client.create_project(name=\"text_label_import_project\")\n\n\ndataset = client.create_dataset(name=\"text_annotation_import_demo_dataset\")\ntest_txt_url = \"https://storage.googleapis.com/labelbox-sample-datasets/nlp/lorem-ipsum.txt\"\ndata_row = dataset.create_data_row(row_data=test_txt_url)\neditor = next(client.get_labeling_frontends(where=LabelingFrontend.name == \"Editor\"))\n\nmal_project.setup(editor, ontology_builder.asdict())\nmal_project.datasets.connect(dataset)\n\nli_project.setup(editor, ontology_builder.asdict())\nli_project.datasets.connect(dataset)", "_____no_output_____" ] ], [ [ "### Create Label using Annotation Type Objects\n* It is recommended to use the Python SDK's annotation types for importing into Labelbox.", "_____no_output_____" ], [ "### Object Annotations", "_____no_output_____" ] ], [ [ "def create_objects():\n named_enity = TextEntity(start=10,end=20)\n named_enity_annotation = ObjectAnnotation(value=named_enity, name=\"named_entity\")\n return named_enity_annotation", "_____no_output_____" ] ], [ [ "### Classification Annotations", "_____no_output_____" ] ], [ [ "def create_classifications():\n checklist = Checklist(answer=[ClassificationAnswer(name=\"first_checklist_answer\"),ClassificationAnswer(name=\"second_checklist_answer\")])\n checklist_annotation = ClassificationAnnotation(value=checklist, name=\"checklist\")\n radio = Radio(answer = ClassificationAnswer(name = \"second_radio_answer\"))\n radio_annotation = ClassificationAnnotation(value=radio, name=\"radio\")\n return checklist_annotation, radio_annotation", "_____no_output_____" ] ], [ [ "### Create a Label object with all of our annotations", "_____no_output_____" ] ], [ [ "image_data = TextData(uid=data_row.uid)\n\nnamed_enity_annotation = create_objects()\nchecklist_annotation, radio_annotation = create_classifications()\n\nlabel = Label(\n data=image_data,\n annotations = [\n named_enity_annotation, checklist_annotation, radio_annotation\n ]\n)\n\nlabel.__dict__", "_____no_output_____" ] ], [ [ "### Model Assisted Labeling ", "_____no_output_____" ], [ "To do model-assisted labeling, we need to convert a Label object into an NDJSON. \n\nThis is easily done with using the NDJSONConverter class\n\nWe will create a Label called mal_label which has the same original structure as the label above\n\nNotes:\n* Each label requires a valid feature schema id. 
We will assign it using our built in `assign_feature_schema_ids` method\n* the NDJsonConverter takes in a list of labels", "_____no_output_____" ] ], [ [ "mal_label = Label(\n data=image_data,\n annotations = [\n named_enity_annotation, checklist_annotation, radio_annotation\n ]\n)\n\nmal_label.assign_feature_schema_ids(ontology_builder.from_project(mal_project))\n\nndjson_labels = list(NDJsonConverter.serialize([mal_label]))\n\nndjson_labels", "_____no_output_____" ], [ "upload_job = MALPredictionImport.create_from_objects(\n client = client, \n project_id = mal_project.uid, \n name=\"upload_label_import_job\", \n predictions=ndjson_labels)", "_____no_output_____" ], [ "# Errors will appear for each annotation that failed.\n# Empty list means that there were no errors\n# This will provide information only after the upload_job is complete, so we do not need to worry about having to rerun\nprint(\"Errors:\", upload_job.errors)", "INFO:labelbox.schema.annotation_import:Sleeping for 10 seconds...\n" ] ], [ [ "### Label Import", "_____no_output_____" ], [ "Label import is very similar to model-assisted labeling. We will need to re-assign the feature schema before continuing, \nbut we can continue to use our NDJSonConverter\n\nWe will create a Label called li_label which has the same original structure as the label above", "_____no_output_____" ] ], [ [ "#for the purpose of this notebook, we will need to reset the schema ids of our checklist and radio answers\nimage_data = TextData(uid=data_row.uid)\n\nnamed_enity_annotation = create_objects()\nchecklist_annotation, radio_annotation = create_classifications()\n\nli_label = Label(\n data=image_data,\n annotations = [\n named_enity_annotation, checklist_annotation, radio_annotation\n ]\n)\n\nli_label.assign_feature_schema_ids(ontology_builder.from_project(li_project))\n\nndjson_labels = list(NDJsonConverter.serialize([li_label]))\n\nndjson_labels, li_project.ontology().normalized", "_____no_output_____" ], [ "upload_job = LabelImport.create_from_objects(\n client = client, \n project_id = li_project.uid, \n name=\"upload_label_import_job\", \n labels=ndjson_labels)", "_____no_output_____" ], [ "print(\"Errors:\", upload_job.errors)", "INFO:labelbox.schema.annotation_import:Sleeping for 10 seconds...\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
e7a91a6210abb90a6a09b996b01982816b1b8ef8
906,183
ipynb
Jupyter Notebook
neos.ipynb
gradhep/neos-scratch
032518d41ff085a44e1e069be441da8ba32a3b34
[ "Apache-2.0" ]
1
2021-06-30T17:41:10.000Z
2021-06-30T17:41:10.000Z
neos.ipynb
gradhep/neos-scratch
032518d41ff085a44e1e069be441da8ba32a3b34
[ "Apache-2.0" ]
null
null
null
neos.ipynb
gradhep/neos-scratch
032518d41ff085a44e1e069be441da8ba32a3b34
[ "Apache-2.0" ]
1
2021-05-05T18:17:13.000Z
2021-05-05T18:17:13.000Z
1,076.226841
443,098
0.947255
[ [ [ "from __future__ import annotations\n\nimport relaxed\nimport jax\nfrom jax import jit\nfrom chex import Array\nfrom typing import NamedTuple, Callable, Any\nimport pyhf\nfrom sklearn.model_selection import train_test_split\nfrom jax.random import PRNGKey, multivariate_normal\nimport numpy.random as npr\nimport optax\nimport jaxopt\nimport time\nfrom functools import partial\nfrom pprint import pprint\n\ndef make_model(s, b_nom, b_up, b_down):\n m = {\n \"channels\": [\n {\n \"name\": \"singlechannel\",\n \"samples\": [\n {\n \"name\": \"signal\",\n \"data\": s,\n \"modifiers\": [\n {\"name\": \"mu\", \"type\": \"normfactor\", \"data\": None},\n ],\n },\n {\n \"name\": \"background\",\n \"data\": b_nom,\n \"modifiers\": [\n {\n \"name\": \"correlated_bkg_uncertainty\",\n \"type\": \"histosys\",\n \"data\": {\"hi_data\": b_up, \"lo_data\": b_down},\n },\n ],\n },\n ],\n },\n ],\n }\n return pyhf.Model(m, validate=False)\n\ndef nn_summary_stat(pars, data, nn, bandwidth, bins, reflect=False, sig_scale=2,\n bkg_scale=10, LUMI=10):\n s_data, b_nom_data, b_up_data, b_down_data = data\n\n nn_s, nn_b_nom, nn_b_up, nn_b_down = (\n nn(pars, s_data).ravel(),\n nn(pars, b_nom_data).ravel(),\n nn(pars, b_up_data).ravel(),\n nn(pars, b_down_data).ravel(),\n )\n\n num_points = len(s_data)\n\n yields =s, b_nom, b_up, b_down = [\n relaxed.hist(nn_s, bins, bandwidth, reflect_infinities=reflect)\n * sig_scale\n / num_points\n * LUMI,\n relaxed.hist(nn_b_nom, bins, bandwidth, reflect_infinities=reflect)\n * bkg_scale\n / num_points\n * LUMI,\n relaxed.hist(nn_b_up, bins, bandwidth, reflect_infinities=reflect)\n * bkg_scale\n / num_points\n * LUMI,\n relaxed.hist(\n nn_b_down,\n bins,\n bandwidth,\n reflect_infinities=reflect,\n )\n * bkg_scale\n / num_points\n * LUMI,\n ]\n\n return yields\n\n\n@partial(jit, static_argnames=[\"model\", \"return_mle_pars\", \"return_constrained_pars\"]) # forward pass\ndef hypotest(\n test_poi: float,\n data: Array,\n model: pyhf.Model,\n lr: float,\n bonly_pars: Array,\n return_constrained_pars: bool = False,\n) -> tuple[Array, Array] | Array:\n # hard-code 1 as inits for now\n # TODO: need to parse different inits for constrained and global fits\n init_pars = jnp.asarray(model.config.suggested_init())[model.config.par_slice('correlated_bkg_uncertainty')]\n conditional_pars = relaxed.mle.fixed_poi_fit(\n data, model, poi_condition=test_poi, init_pars=init_pars, lr=lr\n )\n mle_pars = bonly_pars\n profile_likelihood = -2 * (\n model.logpdf(conditional_pars, data)[0] - model.logpdf(mle_pars, data)[0]\n )\n\n poi_hat = mle_pars[model.config.poi_index]\n qmu = jnp.where(poi_hat < test_poi, profile_likelihood, 0.0)\n\n CLsb = 1 - pyhf.tensorlib.normal_cdf(jnp.sqrt(qmu))\n altval = 0.0\n CLb = 1 - pyhf.tensorlib.normal_cdf(altval)\n CLs = CLsb / CLb\n if return_constrained_pars:\n return CLs, conditional_pars\n else:\n return CLs\n\nclass Pipeline(NamedTuple):\n \"\"\"Class to compose the pipeline for training a learnable summary statistic.\"\"\"\n yields_from_pars: Callable[..., tuple[Array, ...]]\n model_from_yields: Callable[..., pyhf.Model]\n init_pars: Array\n data: Array | None = None\n yield_kwargs: dict[str, Any] | None = None\n nuisance_parname: str = 'correlated_bkg_uncertainty'\n random_state: int = 0\n num_epochs: int = 20\n batch_size: int = 500\n learning_rate: float = 0.001\n optimizer: str = 'adam'\n loss: Callable[[dict], float] = lambda x: x['CLs']\n test_size: float = 0.2\n per_epoch_callback: Callable = lambda x: None\n first_epoch_callback: Callable = 
lambda x: None \n last_epoch_callback: Callable = lambda x: None \n post_training_callback: Callable = lambda x: None \n plot_setup: Callable = lambda x: None \n possible_metrics: tuple[str, ...] = ('CLs', 'mu_uncert', '1-pull_width**2', 'gaussianity')\n animate: bool = True\n\n def run(self):\n pyhf.set_backend(\"jax\", default=True)\n\n def pipeline(pars, data):\n yields = self.yields_from_pars(pars, data, **self.yield_kwargs)\n model = self.model_from_yields(*yields)\n state: dict[str, Any] = {}\n state[\"yields\"] = yields\n bonly_pars = jnp.asarray(model.config.suggested_init()).at[model.config.poi_index].set(0.0)\n data = jnp.asarray(model.expected_data(bonly_pars))\n state[\"CLs\"], constrained = hypotest(1.0, data, model, return_constrained_pars=True, bonly_pars=bonly_pars, lr=1e-2)\n uncerts = relaxed.cramer_rao_uncert(model, bonly_pars, data)\n state[\"mu_uncert\"] = uncerts[model.config.poi_index]\n pull_width = uncerts[model.config.par_slice(self.nuisance_parname)][0]\n state[\"pull_width\"] = pull_width\n state[\"1-pull_width**2\"] = (1-pull_width) **2\n #state[\"gaussianity\"] = relaxed.gaussianity(model, bonly_pars, data, rng_key=PRNGKey(self.random_state))\n state[\"pull\"] = jnp.array(\n [\n (constrained - jnp.array(model.config.suggested_init()))[\n model.config.par_order.index(k)\n ]\n / model.config.param_set(k).width()[0]\n for k in model.config.par_order\n if model.config.param_set(k).constrained\n ]\n )\n loss = self.loss(state)\n state[\"loss\"] = loss\n return loss, state\n \n if self.data is not None:\n split = train_test_split(\n *self.data, \n test_size=self.test_size, \n random_state=self.random_state\n )\n train, test = split[::2], split[1::2]\n\n num_train = train[0].shape[0]\n num_complete_batches, leftover = divmod(num_train, self.batch_size)\n num_batches = num_complete_batches + bool(leftover)\n\n # batching mechanism\n def data_stream():\n rng = npr.RandomState(self.random_state)\n while True:\n perm = rng.permutation(num_train)\n for i in range(num_batches):\n batch_idx = perm[i * self.batch_size : (i + 1) * self.batch_size]\n yield [points[batch_idx] for points in train]\n\n batches = data_stream()\n else:\n def blank_data():\n while True:\n yield None\n batches = blank_data()\n\n solver = jaxopt.OptaxSolver(fun=pipeline, opt=optax.adam(self.learning_rate), has_aux=True)\n params, state = solver.init(self.init_pars)\n\n plot_kwargs = self.plot_setup(self)\n\n for epoch_num in range(self.num_epochs):\n batch_data = next(batches)\n print(f'{epoch_num=}: ', end=\"\")\n start = time.perf_counter()\n params, state = solver.update(params=params, state=state, data=batch_data)\n end = time.perf_counter()\n t = end-start\n print(f'took {t:.4f}s. 
state:')\n pprint(state.aux)\n if epoch_num == 0:\n plot_kwargs[\"camera\"] = self.first_epoch_callback(\n params,\n this_batch=batch_data, \n metrics=state.aux, \n maxN = self.num_epochs,\n **self.yield_kwargs, \n **plot_kwargs\n )\n elif epoch_num == self.num_epochs-1:\n plot_kwargs[\"camera\"] = self.last_epoch_callback(\n params,\n this_batch=batch_data, \n metrics=state.aux, \n maxN = self.num_epochs,\n **self.yield_kwargs, \n **plot_kwargs\n )\n else:\n plot_kwargs[\"camera\"] = self.per_epoch_callback(\n params,\n this_batch=batch_data, \n metrics=state.aux,\n maxN = self.num_epochs,\n **self.yield_kwargs, \n **plot_kwargs\n )\n if self.animate:\n plot_kwargs[\"camera\"].animate().save(\"animation.gif\", writer=\"imagemagick\", fps=8)\n\n\nfrom jax.example_libraries import stax\nimport jax.numpy as jnp\n\nrng_state = 0\n\ndef gen_blobs(rng = PRNGKey(rng_state), num_points=10000, sig_mean=jnp.asarray([-1, 1]),\n bup_mean=jnp.asarray([2.5, 2]),\n bdown_mean=jnp.asarray([-2.5, -1.5]),\n b_mean=jnp.asarray([1, -1])):\n sig = multivariate_normal(\n rng, sig_mean, jnp.asarray([[1, 0], [0, 1]]), shape=(num_points,)\n )\n bkg_up = multivariate_normal(\n rng, bup_mean, jnp.asarray([[1, 0], [0, 1]]), shape=(num_points,)\n )\n bkg_down = multivariate_normal(\n rng, bdown_mean, jnp.asarray([[1, 0], [0, 1]]), shape=(num_points,)\n )\n \n bkg_nom = multivariate_normal(\n rng, b_mean, jnp.asarray([[1, 0], [0, 1]]), shape=(num_points,)\n )\n return sig, bkg_nom, bkg_up, bkg_down\n \ninit_random_params, nn = stax.serial(\n stax.Dense(1024),\n stax.Relu,\n stax.Dense(1024),\n stax.Relu,\n stax.Dense(1),\n stax.Sigmoid,\n)\n\n_, init_pars = init_random_params(PRNGKey(rng_state), (-1, 2))\n\np = Pipeline(\n yields_from_pars=nn_summary_stat,\n model_from_yields=make_model,\n init_pars=init_pars, \n data=gen_blobs(),\n yield_kwargs=dict(nn=nn, bandwidth=1e-1, bins=jnp.linspace(0,1,5)),\n random_state=rng_state,\n loss=lambda x: x[\"CLs\"],\n first_epoch_callback=first_epoch,\n last_epoch_callback=last_epoch,\n per_epoch_callback=per_epoch,\n plot_setup=mpl_setup,\n num_epochs=5\n)\np.run()\n", "epoch_num=0: took 5.9741s. state:\n{'1-pull_width**2': DeviceArray(0.14997934, dtype=float64),\n 'CLs': DeviceArray(0.05885904, dtype=float64),\n 'loss': DeviceArray(0.24811496, dtype=float64),\n 'mu_uncert': DeviceArray(0.49811139, dtype=float64),\n 'pull': DeviceArray([-0.09171364], dtype=float64),\n 'pull_width': DeviceArray(0.61272833, dtype=float64),\n 'yields': [DeviceArray([ 0.16475093, 10.57287479, 9.16472435, 0.0976359 ], dtype=float64),\n DeviceArray([ 0.51705293, 46.4369292 , 52.25306637, 0.79288196], dtype=float64),\n DeviceArray([ 1.36530154, 59.64468863, 38.70605926, 0.28380903], dtype=float64),\n DeviceArray([ 0.57750525, 48.32141814, 50.42568632, 0.67533104], dtype=float64)]}\nepoch_num=1: took 5.7300s. state:\n{'1-pull_width**2': DeviceArray(0.79476904, dtype=float64),\n 'CLs': DeviceArray(0.02191363, dtype=float64),\n 'loss': DeviceArray(0.20570452, dtype=float64),\n 'mu_uncert': DeviceArray(0.4535466, dtype=float64),\n 'pull': DeviceArray([0.30172747], dtype=float64),\n 'pull_width': DeviceArray(0.1085018, dtype=float64),\n 'yields': [DeviceArray([ 3.22576709, 12.93245445, 3.74116951, 0.07230144], dtype=float64),\n DeviceArray([ 0.45081388, 18.96185441, 64.93389865, 15.54710792], dtype=float64),\n DeviceArray([ 0.07982958, 8.13931729, 60.93510452, 30.40814852], dtype=float64),\n DeviceArray([30.03139818, 61.25151494, 8.13015328, 0.08596997], dtype=float64)]}\nepoch_num=2: took 5.8115s. 
state:\n{'1-pull_width**2': DeviceArray(0.92098163, dtype=float64),\n 'CLs': DeviceArray(0.01223595, dtype=float64),\n 'loss': DeviceArray(0.10613498, dtype=float64),\n 'mu_uncert': DeviceArray(0.32578363, dtype=float64),\n 'pull': DeviceArray([0.12454629], dtype=float64),\n 'pull_width': DeviceArray(0.04032212, dtype=float64),\n 'yields': [DeviceArray([ 2.59544927, 11.75652546, 5.27097494, 0.34695239], dtype=float64),\n DeviceArray([ 0.47487847, 12.40612351, 50.77970904, 34.48755191], dtype=float64),\n DeviceArray([3.12655592e-03, 1.29395825e+00, 2.91357967e+01,\n 6.20538662e+01], dtype=float64),\n DeviceArray([55.71883605, 35.20045565, 4.28831376, 0.1459004 ], dtype=float64)]}\nepoch_num=3: took 5.7535s. state:\n{'1-pull_width**2': DeviceArray(0.93663564, dtype=float64),\n 'CLs': DeviceArray(0.00458725, dtype=float64),\n 'loss': DeviceArray(0.07637111, dtype=float64),\n 'mu_uncert': DeviceArray(0.27635323, dtype=float64),\n 'pull': DeviceArray([0.07723285], dtype=float64),\n 'pull_width': DeviceArray(0.03220062, dtype=float64),\n 'yields': [DeviceArray([ 1.72334426, 10.76860924, 6.9683772 , 0.50518273], dtype=float64),\n DeviceArray([ 0.50710083, 8.28461167, 32.85070488, 51.24147678], dtype=float64),\n DeviceArray([9.35376492e-03, 8.78489345e-01, 1.81812208e+01,\n 6.54469072e+01], dtype=float64),\n DeviceArray([55.45835376, 33.7735184 , 4.43699893, 0.27284942], dtype=float64)]}\nepoch_num=4: took 5.7482s. state:\n{'1-pull_width**2': DeviceArray(0.93695934, dtype=float64),\n 'CLs': DeviceArray(0.00247475, dtype=float64),\n 'loss': DeviceArray(0.06331699, dtype=float64),\n 'mu_uncert': DeviceArray(0.25162867, dtype=float64),\n 'pull': DeviceArray([0.06784392], dtype=float64),\n 'pull_width': DeviceArray(0.0320334, dtype=float64),\n 'yields': [DeviceArray([ 1.53105892, 10.48100383, 7.20816275, 0.7061442 ], dtype=float64),\n DeviceArray([ 0.59081864, 6.83754364, 25.52879309, 54.80150393], dtype=float64),\n DeviceArray([6.97085559e-03, 1.10858713e+00, 1.73259656e+01,\n 6.31026260e+01], dtype=float64),\n DeviceArray([55.67467927, 30.39093957, 5.64962225, 1.45037212], dtype=float64)]}\n" ], [ "import jax.scipy as jsp\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef make_kde(data, bw):\n @jax.jit\n def get_kde(x):\n return jnp.mean(\n jsp.stats.norm.pdf(x, loc=data.reshape(-1, 1), scale=bw), axis=0\n )\n\n return get_kde\n\ndef bar_plot(ax, data, colors=None, total_width=0.8, single_width=1, legend=True, bins=None):\n \"\"\"Draws a bar plot with multiple bars per data point.\n\n Parameters\n ----------\n ax : matplotlib.pyplot.axis\n The axis we want to draw our plot on.\n\n data: dictionary\n A dictionary containing the data we want to plot. Keys are the names of the\n data, the items is a list of the values.\n\n Example:\n data = {\n \"x\":[1,2,3],\n \"y\":[1,2,3],\n \"z\":[1,2,3],\n }\n\n colors : array-like, optional\n A list of colors which are used for the bars. If None, the colors\n will be the standard matplotlib color cyle. (default: None)\n\n total_width : float, optional, default: 0.8\n The width of a bar group. 0.8 means that 80% of the x-axis is covered\n by bars and 20% will be spaces between the bars.\n\n single_width: float, optional, default: 1\n The relative width of a single bar within a group. 
1 means the bars\n will touch eachother within a group, values less than 1 will make\n these bars thinner.\n\n legend: bool, optional, default: True\n If this is set to true, a legend will be added to the axis.\n \"\"\"\n\n # Check if colors where provided, otherwhise use the default color cycle\n if colors is None:\n colors = plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"]\n\n # Number of bars per group\n n_bars = len(data)\n\n # The width of a single bar\n bar_width = total_width / n_bars\n\n # List containing handles for the drawn bars, used for the legend\n bars = []\n\n # Iterate over all data\n for i, (name, values) in enumerate(data.items()):\n # The offset in x direction of that bar\n x_offset = (i - n_bars / 2) * bar_width + bar_width / 2\n\n # Draw a bar for every value of that type\n for x, y in enumerate(values):\n bar = ax.bar(\n x + x_offset,\n y,\n width=bar_width * single_width,\n color=colors[i % len(colors)],\n )\n\n # Add a handle to the last drawn bar, which we'll need for the legend\n bars.append(bar[0])\n\n labels = [f\"[{a:.1g},{b:.1g}]\" for a, b in zip(bins[:-1], bins[1:])]\n ax.set_xticks(range(len(labels)))\n ax.set_xticklabels(labels)\n\n # Draw legend if we need\n if legend:\n ax.legend(bars, data.keys(), fontsize=\"x-small\")\n\ndef plot(network, axs, axins, metrics, maxN, this_batch, nn, bins, bandwidth, legend=False, reflect=False):\n ax = axs[\"Data space\"]\n g = np.mgrid[-5:5:101j, -5:5:101j]\n if jnp.inf in bins:\n levels = bins[1:-1] # infinite\n else:\n levels = bins\n ax.contourf(\n g[0],\n g[1],\n nn(network, np.moveaxis(g, 0, -1)).reshape(101, 101, 1)[:, :, 0],\n levels=levels,\n cmap=\"binary\",\n )\n ax.contour(\n g[0],\n g[1],\n nn(network, np.moveaxis(g, 0, -1)).reshape(101, 101, 1)[:, :, 0],\n colors=\"w\",\n levels=levels,\n )\n sig, bkg_nom, bkg_up, bkg_down = this_batch\n\n ax.scatter(sig[:, 0], sig[:, 1], alpha=0.3, c=\"C9\", label=\"signal\")\n ax.scatter(\n bkg_up[:, 0], bkg_up[:, 1], alpha=0.1, c=\"orangered\", marker=6, label=\"bkg up\"\n )\n ax.scatter(\n bkg_down[:, 0], bkg_down[:, 1], alpha=0.1, c=\"gold\", marker=7, label=\"bkg down\"\n )\n ax.scatter(bkg_nom[:, 0], bkg_nom[:, 1], alpha=0.3, c=\"C1\", label=\"bkg\")\n\n ax.set_xlim(-5, 5)\n ax.set_ylim(-5, 5)\n ax.set_xlabel(\"x\")\n ax.set_ylabel(\"y\")\n if legend:\n ax.legend(fontsize=\"x-small\", loc=\"upper right\")\n ax = axs[\"Losses\"]\n # ax.axhline(0.05, c=\"slategray\", linestyle=\"--\")\n ax.plot(metrics[\"loss\"], c=\"C9\", linewidth=2.0, label=r\"train $\\log(CL_s)$\")\n #ax.plot(metrics[\"test_loss\"], c=\"C4\", linewidth=2.0, label=r\"test $\\log(CL_s)$\") \n #ax.set_yscale(\"log\")\n # ax.set_ylim(1e-4, 0.06)\n ax.set_xlim(0, maxN)\n ax.set_xlabel(\"epoch\")\n ax.set_ylabel(r\"loss value\")\n if legend:\n ax.legend(fontsize=\"x-small\", loc=\"upper right\")\n\n ax = axs[\"Metrics\"]\n ax.plot(\n metrics[\"1-pull_width**2\"],\n c=\"slategray\",\n linewidth=2.0,\n label=r\"$\\sigma_{\\mathsf{nuisance}}$\",\n )\n ax.plot(metrics[\"mu_uncert\"], c=\"steelblue\", linewidth=2.0, label=r\"$\\sigma_\\mu$\")\n ax.plot(metrics[\"CLs\"], c=\"C9\", linewidth=2, label=r'$CL_s$')\n # ax.set_ylim(1e-4, 0.06)\n ax.set_xlim(0, maxN)\n ax.set_xlabel(\"epoch\")\n ax.set_yscale(\"log\")\n ax.set_ylabel(r\"metric value\")\n if legend:\n ax.legend(fontsize=\"x-small\", loc=\"upper right\")\n\n ax = axs[\"Histogram model\"]\n s, b, bup, bdown = metrics[\"yields\"]\n\n if jnp.inf in bins:\n noinf = bins[1:-1]\n bin_width = 1 / (len(noinf) - 1)\n centers = noinf[:-1] + np.diff(noinf) 
/ 2.0\n centers = jnp.array([noinf[0] - bin_width, *centers, noinf[-1] + bin_width])\n\n dct = {\n \"signal\": s,\n \"bkg up\": bup,\n \"bkg\": b,\n \"bkg down\": bdown,\n }\n\n bar_plot(\n ax,\n dct,\n colors=[\"C9\", \"orangered\", \"C1\", \"gold\"],\n total_width=0.8,\n single_width=1,\n legend=legend,\n bins=bins\n )\n ax.set_ylabel(\"frequency\")\n ax.set_xlabel(\"interval over nn output\")\n\n ax = axs[\"Nuisance pull\"]\n\n pulls = metrics[\"pull\"]\n pullerr = metrics[\"pull_width\"]\n\n ax.set_ylabel(r\"$(\\theta - \\hat{\\theta})\\,/ \\Delta \\theta$\", fontsize=18)\n\n # draw the +/- 2.0 horizontal lines\n ax.hlines([-2, 2], -0.5, len(pulls) - 0.5, colors=\"black\", linestyles=\"dotted\")\n # draw the +/- 1.0 horizontal lines\n ax.hlines([-1, 1], -0.5, len(pulls) - 0.5, colors=\"black\", linestyles=\"dashdot\")\n # draw the +/- 2.0 sigma band\n ax.fill_between([-0.5, len(pulls) - 0.5], [-2, -2], [2, 2], facecolor=\"yellow\")\n # drawe the +/- 1.0 sigma band\n ax.fill_between([-0.5, len(pulls) - 0.5], [-1, -1], [1, 1], facecolor=\"green\")\n # draw a horizontal line at pull=0.0\n ax.hlines([0], -0.5, len(pulls) - 0.5, colors=\"black\", linestyles=\"dashed\")\n\n ax.scatter(range(len(pulls)), pulls, color=\"black\")\n # and their uncertainties\n ax.errorbar(\n range(len(pulls)),\n pulls,\n color=\"black\",\n xerr=0,\n yerr=pullerr,\n marker=\".\",\n fmt=\"none\",\n )\n\n ax = axs[\"Example KDE\"]\n b_data = bkg_nom\n d = np.array(nn(network, b_data).ravel().tolist())\n kde = make_kde(d, bandwidth)\n yields = b\n ls = [-1, 2]\n x = np.linspace(ls[0], ls[1], 300)\n db = jnp.array(jnp.diff(bins), float) # bin spacing\n yields = yields / db / yields.sum(axis=0) # normalize to bin width\n if jnp.inf in bins:\n pbins = [ls[0], *noinf, ls[1]]\n else:\n pbins = bins\n ax.stairs(yields, pbins, label=\"KDE hist\", color=\"C1\")\n if reflect:\n ax.plot(x, 2*jnp.abs(kde(x)), label=\"KDE\", color=\"C0\")\n else:\n ax.plot(x, kde(x), label=\"KDE\", color=\"C0\")\n\n ax.set_xlim(*ls)\n\n # rug plot of the data\n ax.plot(\n d,\n jnp.zeros_like(d) - 0.01,\n \"|\",\n linewidth=3,\n alpha=0.4,\n color=\"black\",\n label=\"data\",\n )\n\n if legend:\n if jnp.inf in bins:\n\n width = jnp.diff(noinf)[0]\n else:\n width = jnp.diff(bins)[0] \n xlim = (\n [(width / 2) - (1.1 * bandwidth), (width / 2) + (1.1 * bandwidth)]\n if (width / 2) - bandwidth < 0\n else [-width / 3, width + width / 3]\n )\n axins.stairs([1], [0, width], color=\"C1\")\n y = jnp.linspace(xlim[0], xlim[1], 300)\n demo = jsp.stats.norm.pdf(y, loc=width / 2, scale=bandwidth)\n axins.plot(y, demo / max(demo), color=\"C0\", linestyle=\"dashed\", label=\"kernel\")\n # draw two vertical lines at ((width/2)-bandwidth)/2 and ((width/2)+bandwidth)/2\n axins.vlines(\n [(width / 2) - bandwidth, (width / 2) + bandwidth],\n 0,\n 1,\n colors=\"black\",\n linestyles=\"dotted\",\n label=r\"$\\pm$bandwidth\",\n )\n # write text in the middle of the vertical lines with the value of the bandwidth\n ratio = bandwidth / width\n axins.text(\n width / 2,\n -0.3,\n r\"$\\mathsf{\\frac{bandwidth}{bin\\,width}}=$\" + f\"{ratio:.2f}\",\n ha=\"center\",\n va=\"center\",\n size=\"x-small\",\n )\n\n axins.set_xlim(*xlim)\n\n handles, labels = ax.get_legend_handles_labels()\n handles1, labels1 = axins.get_legend_handles_labels()\n ax.legend(\n handles + handles1, labels + labels1, loc=\"upper right\", fontsize=\"x-small\"\n )", "_____no_output_____" ], [ "def first_epoch(network, camera, axs, axins, metrics, maxN, this_batch, nn, bins, bandwidth, **kwargs):\n 
plot(\n axs=axs, \n axins=axins, \n network=network, \n metrics=metrics, \n maxN=maxN, \n this_batch=this_batch, \n nn=nn,\n bins=bins, \n bandwidth=bandwidth, \n legend=True\n )\n plt.tight_layout()\n camera.snap()\n return camera\n\ndef last_epoch(\n network, camera, axs, axins, metrics, maxN, this_batch, nn, bins, bandwidth, **kwargs \n):\n plot(\n axs=axs, \n axins=axins, \n network=network, \n metrics=metrics, \n maxN=maxN, \n this_batch=this_batch, \n nn=nn,\n bins=bins, \n bandwidth=bandwidth, \n ) \n plt.tight_layout()\n camera.snap()\n fig2, axs2 = plt.subplot_mosaic(\n [\n [\"Data space\", \"Histogram model\", \"Example KDE\"],\n [\"Losses\", \"Metrics\", \"Nuisance pull\"],\n ]\n )\n\n for label, ax in axs2.items():\n ax.set_title(label, fontstyle=\"italic\")\n axins2 = axs2[\"Example KDE\"].inset_axes([0.01, 0.79, 0.3, 0.2])\n axins2.axis(\"off\")\n plot(\n axs=axs2, \n axins=axins2, \n network=network, \n metrics=metrics, \n maxN=maxN, \n this_batch=this_batch, \n nn=nn,\n bins=bins, \n bandwidth=bandwidth,\n legend=True \n ) \n plt.tight_layout()\n fig2.savefig(\n f\"random.pdf\"\n )\n return camera\n\ndef per_epoch(network, camera, axs, axins, metrics, maxN, this_batch, nn, bins, bandwidth, **kwargs):\n plot(\n axs=axs, \n axins=axins, \n network=network, \n metrics=metrics, \n maxN=maxN, \n this_batch=this_batch, \n nn=nn,\n bins=bins, \n bandwidth=bandwidth, \n )\n plt.tight_layout()\n camera.snap()\n return camera\n\nfrom celluloid import Camera\ndef mpl_setup(pipeline):\n plt.style.use(\"default\")\n\n plt.rcParams.update(\n {\n \"axes.labelsize\": 13,\n \"axes.linewidth\": 1.2,\n \"xtick.labelsize\": 13,\n \"ytick.labelsize\": 13,\n \"figure.figsize\": [16.0, 9.0],\n \"font.size\": 13,\n \"xtick.major.size\": 3,\n \"ytick.major.size\": 3,\n \"legend.fontsize\": 11,\n }\n )\n\n plt.rc(\"figure\", dpi=120)\n\n fig, axs = plt.subplot_mosaic(\n [\n [\"Data space\", \"Histogram model\", \"Example KDE\"],\n [\"Losses\", \"Metrics\", \"Nuisance pull\"],\n ]\n )\n\n for label, ax in axs.items():\n ax.set_title(label, fontstyle=\"italic\")\n axins = axs[\"Example KDE\"].inset_axes([0.01, 0.79, 0.3, 0.2])\n axins.axis(\"off\")\n ax_cpy = axs\n axins_cpy = axins\n if pipeline.animate:\n camera = Camera(fig)\n return dict(camera=camera, axs=axs, axins=axins, ax_cpy=ax_cpy, axins_cpy=axins_cpy, fig=fig)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
e7a92282114e026b78c63089e7e56efa1ea8e3e6
153,205
ipynb
Jupyter Notebook
linear regression.ipynb
lchloride/paper_review
a68a3bde7374389143397842dfe2ced343b4e83a
[ "MIT" ]
null
null
null
linear regression.ipynb
lchloride/paper_review
a68a3bde7374389143397842dfe2ced343b4e83a
[ "MIT" ]
null
null
null
linear regression.ipynb
lchloride/paper_review
a68a3bde7374389143397842dfe2ced343b4e83a
[ "MIT" ]
null
null
null
75.17419
56,442
0.693248
[ [ [ "import pandas as pd\nimport numpy as np\nimport json", "_____no_output_____" ], [ "with open('./data_zx/prediction_data_normal.json', 'r') as f:\n pred = json.load(f)", "_____no_output_____" ], [ "for i, idx in enumerate(pred):\n for j in range(4):\n pred[idx]['cite_%d'%(2014+j)] = pred[idx]['cite'][j]\n pred[idx]['kw_mean_%d'%(2014+j)] = pred[idx]['kw_mean'][j]\n pred[idx]['author_score_%d'%(2014+j)] = pred[idx]['author_score'][j]\n del pred[idx]['cite']\n del pred[idx]['kw_mean']\n del pred[idx]['author_score']\n if i % 1_000_000 == 0:\n print(i)", "0\n1000000\n2000000\n3000000\n4000000\n5000000\n6000000\n7000000\n8000000\n9000000\n10000000\n11000000\n" ], [ "for i, idx in enumerate(pred):\n print(pred[idx])\n if i > 3:\n break", "{'cite_2014': 12, 'kw_mean_2014': 29040.222222222223, 'author_score_2014': 1.0162432414641644, 'cite_2015': 3, 'kw_mean_2015': 28599.444444444445, 'author_score_2015': 0.9934141887023189, 'cite_2016': 4, 'kw_mean_2016': 22924.88888888889, 'author_score_2016': 0.37411916711921805, 'cite_2017': 0, 'kw_mean_2017': 5704.444444444444, 'author_score_2017': 0.5900868895320555, 'cite_2017_predi': 0}\n{'cite_2014': 0, 'kw_mean_2014': 63007.5, 'author_score_2014': -0.2192276383151145, 'cite_2015': 0, 'kw_mean_2015': 62395.5, 'author_score_2015': -0.2817034355560821, 'cite_2016': 0, 'kw_mean_2016': 50758.5, 'author_score_2016': -0.22305333874067942, 'cite_2017': 0, 'kw_mean_2017': 12445.5, 'author_score_2017': -0.09232853196101516, 'cite_2017_predi': 0}\n{'cite_2014': 0, 'kw_mean_2014': 37642.28571428572, 'author_score_2014': 0.17501728737653768, 'cite_2015': 0, 'kw_mean_2015': 37914.57142857143, 'author_score_2015': -0.2817034355560821, 'cite_2016': 0, 'kw_mean_2016': 33353.857142857145, 'author_score_2016': 0.48045427862205253, 'cite_2017': 0, 'kw_mean_2017': 14270.142857142857, 'author_score_2017': -0.09232853196101516, 'cite_2017_predi': 0}\n{'cite_2014': 1, 'kw_mean_2014': 26045.636363636364, 'author_score_2014': 0.37466764216526177, 'cite_2015': 0, 'kw_mean_2015': 26330.909090909092, 'author_score_2015': -0.20423981041691153, 'cite_2016': 0, 'kw_mean_2016': 21744.090909090908, 'author_score_2016': 0.23866989397455957, 'cite_2017': 0, 'kw_mean_2017': 5644.272727272727, 'author_score_2017': 0.15074147520394748, 'cite_2017_predi': 0}\n{'cite_2014': 0, 'kw_mean_2014': 27728.2, 'author_score_2014': -0.17004391294767615, 'cite_2015': 0, 'kw_mean_2015': 27791.8, 'author_score_2015': -0.17363944945015042, 'cite_2016': 0, 'kw_mean_2016': 24303.0, 'author_score_2016': -0.2090632095972783, 'cite_2017': 0, 'kw_mean_2017': 10460.6, 'author_score_2017': -0.07442523436267859, 'cite_2017_predi': 0}\n" ], [ "cnt = 0\nfor i in pred.keys():\n if pred[i]['cite_2014']+pred[i]['cite_2015']+pred[i]['cite_2016']+pred[i]['cite_2017'] > 100:\n print(pred[i], i)\n cnt+=1\n if cnt > 5:\n break", "{'cite_2014': 48, 'kw_mean_2014': 23086.333333333332, 'author_score_2014': 1.3511861685310278, 'cite_2015': 45, 'kw_mean_2015': 23044.666666666668, 'author_score_2015': 0.412412976958436, 'cite_2016': 41, 'kw_mean_2016': 20446.25, 'author_score_2016': -0.14502271349954685, 'cite_2017': 20, 'kw_mean_2017': 9080.416666666666, 'author_score_2017': 0.011037513563218476, 'cite_2017_predi': 0} 36\n{'cite_2014': 25, 'kw_mean_2014': 33308.5, 'author_score_2014': -0.08045300011114784, 'cite_2015': 35, 'kw_mean_2015': 33237.125, 'author_score_2015': -0.022557850930107076, 'cite_2016': 31, 'kw_mean_2016': 29700.25, 'author_score_2016': -0.03193085013961058, 'cite_2017': 20, 'kw_mean_2017': 12693.0, 
'author_score_2017': -0.05784047095159591, 'cite_2017_predi': 0} 344\n{'cite_2014': 56, 'kw_mean_2014': 16366.142857142857, 'author_score_2014': -0.05737829317707273, 'cite_2015': 48, 'kw_mean_2015': 16598.714285714286, 'author_score_2015': 0.12480035441196048, 'cite_2016': 40, 'kw_mean_2016': 14444.714285714286, 'author_score_2016': 0.27034640436477225, 'cite_2017': 36, 'kw_mean_2017': 6701.928571428572, 'author_score_2017': 0.013285199110697463, 'cite_2017_predi': 0} 798\n{'cite_2014': 29, 'kw_mean_2014': 42936.142857142855, 'author_score_2014': 0.8451762681964544, 'cite_2015': 38, 'kw_mean_2015': 43055.57142857143, 'author_score_2015': 2.0778480176721006, 'cite_2016': 39, 'kw_mean_2016': 36638.142857142855, 'author_score_2016': 0.5710227135640341, 'cite_2017': 14, 'kw_mean_2017': 9255.857142857143, 'author_score_2017': 0.1837629852672738, 'cite_2017_predi': 0} 3164\n{'cite_2014': 67, 'kw_mean_2014': 34645.88888888889, 'author_score_2014': 0.2356264361508409, 'cite_2015': 45, 'kw_mean_2015': 34831.444444444445, 'author_score_2015': 0.2088716811100677, 'cite_2016': 42, 'kw_mean_2016': 29301.444444444445, 'author_score_2016': -0.1494924345100728, 'cite_2017': 10, 'kw_mean_2017': 7501.888888888889, 'author_score_2017': -0.09232853196101516, 'cite_2017_predi': 0} 3920\n{'cite_2014': 35, 'kw_mean_2014': 37865.833333333336, 'author_score_2014': 0.1298111083263711, 'cite_2015': 34, 'kw_mean_2015': 38052.333333333336, 'author_score_2015': -0.06082241050037764, 'cite_2016': 25, 'kw_mean_2016': 33503.0, 'author_score_2016': -0.0017585934969079418, 'cite_2017': 10, 'kw_mean_2017': 15146.833333333334, 'author_score_2017': -0.06222047017265627, 'cite_2017_predi': 0} 3990\n" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "import seaborn as sns", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(12, 6))\nmask = np.zeros_like(corr)\nmask[np.triu_indices_from(mask)] = True\nhm = sns.heatmap(round(corr, 2), mask = mask, annot = True, ax = ax, fmt = '.2f', linewidths=.05, cmap = 'Spectral_r')\nfig.subplots_adjust(top = 0.9)\nt = fig.suptitle('Correlation Heatmap', fontsize = 14)", "_____no_output_____" ], [ "list_5 = []\nfor i, idx in enumerate(pred):\n if i==36 or i==344 or i==798 or i==3164 or i==3920:\n list_5.append(pred[idx])\n if i > 3930:\n break", "_____no_output_____" ], [ "list_5", "_____no_output_____" ], [ "dic_5 = dict((idx, {}) for idx in [36, 344, 798, 3164, 3920])\nfor i, idx in enumerate(dic_5):\n dic_5[idx] = list_5[i]", "_____no_output_____" ], [ "dic_5", "_____no_output_____" ], [ "df_5 = pd.DataFrame.from_dict(dic_5, orient='index')\ndf_5.index.rename('nid', inplace=True)", "_____no_output_____" ], [ "df_5 = df_5.loc[:, ['cite_2014', 'cite_2015', 'cite_2016', 'cite_2017']]\ndf_5", "_____no_output_____" ], [ "x = range(2014, 2018)\ncolor = ['b', 'g', 'r', 'y', 'c']\ncnt = 0\nfor idx, row in df_5.iterrows():\n y = [row['cite_2014'], row['cite_2015'], row['cite_2016'], row['cite_2017']]\n print()\n plt.plot(x, y, color[cnt])\n cnt += 1\nplt.show()", "\n\n\n\n\n" ], [ "df = pd.DataFrame.from_dict(pred, orient='index').sort_index()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.index = df.index.astype(int)", "_____no_output_____" ], [ "df.sort_index(inplace = True)\ndf.head()", "_____no_output_____" ], [ "df_test = df.loc[:10000]\ndf_test.head()", "_____no_output_____" ], [ "features = ['cite_2014', 'kw_mean_2014', 'author_score_2014', 'cite_2015', 'kw_mean_2015', 'author_score_2015','cite_2016', 'kw_mean_2016', 
'author_score_2016']\nX = df[features].values\ny = df['cite_2017'].values", "_____no_output_____" ], [ "X.shape, y.shape", "_____no_output_____" ], [ "for col in df.columns:\n print(col, df[col].isnull().sum())", "cite_2014 0\nkw_mean_2014 0\nauthor_score_2014 0\ncite_2015 0\nkw_mean_2015 0\nauthor_score_2015 0\ncite_2016 0\nkw_mean_2016 0\nauthor_score_2016 0\ncite_2017 0\nkw_mean_2017 0\nauthor_score_2017 0\ncite_2017_predi 0\n" ], [ "df.corr()", "_____no_output_____" ], [ "from sklearn.linear_model import LinearRegression", "_____no_output_____" ], [ "y = df['cite_2017'].values", "_____no_output_____" ], [ "y = np.array([y]).T\nX = df.loc[:, 'cite_2014': 'author_score_2016'].as_matrix(columns = None)", "_____no_output_____" ], [ "l = LinearRegression()\nl.fit(X, y)", "_____no_output_____" ], [ "l.coef_", "_____no_output_____" ], [ "features = ['cite_2014', 'kw_mean_2014', 'author_score_2014', 'cite_2015', 'kw_mean_2015', 'author_score_2015','cite_2016', 'kw_mean_2016', 'author_score_2016']\ndf_test['cite_2017_predi'] = l.predict(df_test.loc[:, features])\ndf_test.head(10)", "/home/sherlockshaw1024/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \n" ], [ "print(l.predict([[12, 29040.222222, 1.016243, 3, 28599.444444, 0.993414, 4, 22924.888889, 0.374119]]))", "[[ 1.37774274]]\n" ], [ "df_test.head(10)", "_____no_output_____" ], [ "dic_test = df_test.to_dict(orient='index')", "_____no_output_____" ], [ "for i, idx in enumerate(dic_test):\n row = dic_test[idx]\n row['cite_2017_predi'] = l.predict([[row['cite_2014'], row['cite_2015'], row['cite_2016'], row['kw_mean_2014'],row['kw_mean_2015'],row['kw_mean_2016'], row['author_score_2014'], row['author_score_2015'], row['author_score_2016']]])[0][0]\n ", "_____no_output_____" ], [ "for i, idx in enumerate(dic_test):\n print(dic_test[idx])\n if i > 5:\n break", "{'cite_2014': 12.0, 'kw_mean_2014': 29040.222222222223, 'author_score_2014': 1.0162432414641644, 'cite_2015': 3.0, 'kw_mean_2015': 28599.444444444445, 'author_score_2015': 0.9934141887023189, 'cite_2016': 4.0, 'kw_mean_2016': 22924.888888888891, 'author_score_2016': 0.37411916711921805, 'cite_2017': 0.0, 'kw_mean_2017': 5704.4444444444443, 'author_score_2017': 0.59008688953205546, 'cite_2017_predi': 2819.4033878732766}\n{'cite_2014': 0.0, 'kw_mean_2014': 63007.5, 'author_score_2014': -0.21922763831511449, 'cite_2015': 0.0, 'kw_mean_2015': 62395.5, 'author_score_2015': -0.28170343555608213, 'cite_2016': 0.0, 'kw_mean_2016': 50758.5, 'author_score_2016': -0.22305333874067942, 'cite_2017': 0.0, 'kw_mean_2017': 12445.5, 'author_score_2017': -0.092328531961015162, 'cite_2017_predi': 6116.7858511094937}\n{'cite_2014': 0.0, 'kw_mean_2014': 37642.285714285717, 'author_score_2014': 0.17501728737653768, 'cite_2015': 0.0, 'kw_mean_2015': 37914.571428571428, 'author_score_2015': -0.28170343555608213, 'cite_2016': 0.0, 'kw_mean_2016': 33353.857142857145, 'author_score_2016': 0.48045427862205253, 'cite_2017': 0.0, 'kw_mean_2017': 14270.142857142857, 'author_score_2017': -0.092328531961015162, 'cite_2017_predi': 3654.5439822856197}\n{'cite_2014': 1.0, 'kw_mean_2014': 26045.636363636364, 'author_score_2014': 0.37466764216526177, 'cite_2015': 0.0, 'kw_mean_2015': 26330.909090909092, 'author_score_2015': 
-0.20423981041691153, 'cite_2016': 0.0, 'kw_mean_2016': 21744.090909090908, 'author_score_2016': 0.23866989397455957, 'cite_2017': 0.0, 'kw_mean_2017': 5644.272727272727, 'author_score_2017': 0.15074147520394748, 'cite_2017_predi': 2528.6930927211392}\n{'cite_2014': 0.0, 'kw_mean_2014': 27728.200000000001, 'author_score_2014': -0.17004391294767615, 'cite_2015': 0.0, 'kw_mean_2015': 27791.799999999999, 'author_score_2015': -0.17363944945015042, 'cite_2016': 0.0, 'kw_mean_2016': 24303.0, 'author_score_2016': -0.20906320959727831, 'cite_2017': 0.0, 'kw_mean_2017': 10460.6, 'author_score_2017': -0.074425234362678588, 'cite_2017_predi': 2691.9298262811926}\n{'cite_2014': 0.0, 'kw_mean_2014': 34501.0, 'author_score_2014': -0.097567267370258448, 'cite_2015': 0.0, 'kw_mean_2015': 34586.875, 'author_score_2015': -0.119934151419383, 'cite_2016': 0.0, 'kw_mean_2016': 30246.25, 'author_score_2016': -0.15662425916277734, 'cite_2017': 0.0, 'kw_mean_2017': 13025.375, 'author_score_2017': -0.039840692782747142, 'cite_2017_predi': 3349.4796252653491}\n{'cite_2014': 0.0, 'kw_mean_2014': 27307.5, 'author_score_2014': 3.5114280784242098, 'cite_2015': 0.0, 'kw_mean_2015': 27289.916666666668, 'author_score_2015': 1.3992194116186674, 'cite_2016': 0.0, 'kw_mean_2016': 22731.333333333332, 'author_score_2016': 0.50831582721057478, 'cite_2017': 0.0, 'kw_mean_2017': 5729.333333333333, 'author_score_2017': 0.16012701182322106, 'cite_2017_predi': 2652.1722600633716}\n" ], [ "df_test.to_csv('./data_zx/perdi_result_1w.csv')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a92b9e8926274ffc18d32b12f2e174fb8145db
73,801
ipynb
Jupyter Notebook
1_2_Convolutional_Filters_Edge_Detection/.ipynb_checkpoints/6_1. Hough lines-checkpoint.ipynb
sxtien/CVND_Exercises
e28cc7ce6c34976322176390048a58f4b4804f7a
[ "MIT" ]
null
null
null
1_2_Convolutional_Filters_Edge_Detection/.ipynb_checkpoints/6_1. Hough lines-checkpoint.ipynb
sxtien/CVND_Exercises
e28cc7ce6c34976322176390048a58f4b4804f7a
[ "MIT" ]
null
null
null
1_2_Convolutional_Filters_Edge_Detection/.ipynb_checkpoints/6_1. Hough lines-checkpoint.ipynb
sxtien/CVND_Exercises
e28cc7ce6c34976322176390048a58f4b4804f7a
[ "MIT" ]
null
null
null
452.766871
64,224
0.946058
[ [ [ "# Hough Lines", "_____no_output_____" ], [ "### Import resources and display the image", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport cv2\n\n%matplotlib inline\n\n# Read in the image\nimage = cv2.imread('images/phone.jpg')\n\n# Change color to RGB (from BGR)\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n\nplt.imshow(image)", "_____no_output_____" ] ], [ [ "### Perform edge detection", "_____no_output_____" ] ], [ [ "# Convert image to grayscale\ngray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n\n# Define our parameters for Canny\nlow_threshold = 50\nhigh_threshold = 100\nedges = cv2.Canny(gray, low_threshold, high_threshold)\n\nplt.imshow(edges, cmap='gray')", "_____no_output_____" ] ], [ [ "### Find lines using a Hough transform", "_____no_output_____" ] ], [ [ "# Define the Hough transform parameters\n# Make a blank the same size as our image to draw on\nrho = 1\ntheta = np.pi/180\nthreshold = 60\nmin_line_length = 50\nmax_line_gap = 5\n\nline_image = np.copy(image) #creating an image copy to draw lines on\n\n# Run Hough on the edge-detected image\nlines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),\n min_line_length, max_line_gap)\n\n\n# Iterate over the output \"lines\" and draw lines on the image copy\nfor line in lines:\n for x1,y1,x2,y2 in line:\n cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),5)\n \nplt.imshow(line_image)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7a935de4b1f2a36eccd5bea5023fa91a9fe6b9e
3,225
ipynb
Jupyter Notebook
Chapter3_Slicing_3D_Tensors.ipynb
SokichiFujita/PyTorch-for-Deep-Learning-and-Computer-Vision
0c37d0869c07ca96ed1032c5a3220b9a675ed77e
[ "MIT" ]
1
2020-07-26T15:09:42.000Z
2020-07-26T15:09:42.000Z
Chapter3_Slicing_3D_Tensors.ipynb
SokichiFujita/PyTorch-for-Deep-Learning-and-Computer-Vision
0c37d0869c07ca96ed1032c5a3220b9a675ed77e
[ "MIT" ]
null
null
null
Chapter3_Slicing_3D_Tensors.ipynb
SokichiFujita/PyTorch-for-Deep-Learning-and-Computer-Vision
0c37d0869c07ca96ed1032c5a3220b9a675ed77e
[ "MIT" ]
null
null
null
24.067164
285
0.40186
[ [ [ "<a href=\"https://colab.research.google.com/github/SokichiFujita/PyTorch-for-Deep-Learning-and-Computer-Vision/blob/master/Chapter3_Slicing_3D_Tensors.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import torch", "_____no_output_____" ], [ "x = torch.arange(18).view(3,2,3)\nprint(x)", "tensor([[[ 0, 1, 2],\n [ 3, 4, 5]],\n\n [[ 6, 7, 8],\n [ 9, 10, 11]],\n\n [[12, 13, 14],\n [15, 16, 17]]])\n" ], [ "print(x[0,0,0])\nprint(x[1,0,0])\nprint(x[1,1,1])", "tensor(0)\ntensor(6)\ntensor(10)\n" ], [ "x[1,0:2,0:2]", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7a94d4f530029632ade28312b2f241d14351637
2,880
ipynb
Jupyter Notebook
.ipynb_checkpoints/PublicationList-checkpoint.ipynb
raux/R-G-Kula
1ec35666af17d4ed64b191d9009bb648c657a807
[ "MIT" ]
null
null
null
.ipynb_checkpoints/PublicationList-checkpoint.ipynb
raux/R-G-Kula
1ec35666af17d4ed64b191d9009bb648c657a807
[ "MIT" ]
1
2021-07-14T00:52:41.000Z
2021-07-14T00:52:41.000Z
.ipynb_checkpoints/PublicationList-checkpoint.ipynb
raux/R-G-Kula
1ec35666af17d4ed64b191d9009bb648c657a807
[ "MIT" ]
null
null
null
22.677165
268
0.535069
[ [ [ "# Research Interests", "_____no_output_____" ], [ "## Software Reuse\nMy interest includes how developers manage and update their third-party libraries. Related Publications in include work on Library Updates and Vulnerabilties <cite data-cite=\"4542931/DBCEME7K\"></cite>, Library Aging <cite data-cite=\"4542931/MV39Q277\"></cite>", "_____no_output_____" ], [ "## Software Ecosystems", "_____no_output_____" ], [ "## Software Visualizations", "_____no_output_____" ], [ "## Software Process\n### Code Maintenance (Code Review and Bug Fixing Processes)", "_____no_output_____" ], [ "# References\n<div class=\"cite2c-biblio\"></div>", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7a9621e169788cba616613b6e3a3ca041f8c33a
85,622
ipynb
Jupyter Notebook
17 - Deep Learning with TensorFlow 2.0/5_Introduction to TensorFlow 2/8_Exercises/TensorFlow_Minimal_example_All_exercises.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
17 - Deep Learning with TensorFlow 2.0/5_Introduction to TensorFlow 2/8_Exercises/TensorFlow_Minimal_example_All_exercises.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
17 - Deep Learning with TensorFlow 2.0/5_Introduction to TensorFlow 2/8_Exercises/TensorFlow_Minimal_example_All_exercises.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
33.472244
10,284
0.325255
[ [ [ "# Using the same code as before, please solve the following exercises\n 1. Change the number of observations to 100,000 and see what happens.\n 2. Play around with the learning rate. Values like 0.0001, 0.001, 0.1, 1 are all interesting to observe. \n 3. Change the loss function. An alternative loss for regressions is the Huber loss. \n The Huber loss is more appropriate than the L2-norm when we have outliers, as it is less sensitive to them (in our example we don't have outliers, but you will surely stumble upon a dataset with outliers in the future). The L2-norm loss puts all differences *to the square*, so outliers have a lot of influence on the outcome. \n The proper syntax of the Huber loss is 'huber_loss'\n \n \nUseful tip: When you change something, don't forget to RERUN all cells. This can be done easily by clicking:\nKernel -> Restart & Run All\nIf you don't do that, your algorithm will keep the OLD values of all parameters.\n\nYou can either use this file for all the exercises, or check the solutions of EACH ONE of them in the separate files we have provided. All other files are solutions of each problem. If you feel confident enough, you can simply change values in this file. Please note that it will be nice, if you return the file to starting position after you have solved a problem, so you can use the lecture as a basis for comparison.", "_____no_output_____" ], [ "## Import the relevant libraries", "_____no_output_____" ] ], [ [ "# We must always import the relevant libraries for our problem at hand. NumPy and TensorFlow are required for this example.\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf", "_____no_output_____" ] ], [ [ "## Data generation\n\nWe generate data using the exact same logic and code as the example from the previous notebook. The only difference now is that we save it to an npz file. Npz is numpy's file type which allows you to save numpy arrays into a single .npz file. We introduce this change because in machine learning most often: \n\n* you are given some data (csv, database, etc.)\n* you preprocess it into a desired format (later on we will see methods for preprocesing)\n* you save it into npz files (if you're working in Python) to access later\n\nNothing to worry about - this is literally saving your NumPy arrays into a file that you can later access, nothing more.", "_____no_output_____" ] ], [ [ "# First, we should declare a variable containing the size of the training set we want to generate.\nobservations = 1000\n\n# We will work with two variables as inputs. You can think about them as x1 and x2 in our previous examples.\n# We have picked x and z, since it is easier to differentiate them.\n# We generate them randomly, drawing from an uniform distribution. There are 3 arguments of this method (low, high, size).\n# The size of xs and zs is observations x 1. In this case: 1000 x 1.\nxs = np.random.uniform(low=-10, high=10, size=(observations,1))\nzs = np.random.uniform(-10, 10, (observations,1))\n\n# Combine the two dimensions of the input into one input matrix. \n# This is the X matrix from the linear model y = x*w + b.\n# column_stack is a Numpy method, which combines two matrices (vectors) into one.\ngenerated_inputs = np.column_stack((xs,zs))\n\n# We add a random small noise to the function i.e. 
f(x,z) = 2x - 3z + 5 + <small noise>\nnoise = np.random.uniform(-1, 1, (observations,1))\n\n# Produce the targets according to our f(x,z) = 2x - 3z + 5 + noise definition.\n# In this way, we are basically saying: the weights should be 2 and -3, while the bias is 5.\ngenerated_targets = 2*xs - 3*zs + 5 + noise\n\n# save into an npz file called \"TF_intro\"\nnp.savez('TF_intro', inputs=generated_inputs, targets=generated_targets)", "_____no_output_____" ] ], [ [ "## Solving with TensorFlow\n\n<i/>Note: This intro is just the basics of TensorFlow which has way more capabilities and depth than that.<i>", "_____no_output_____" ] ], [ [ "# Load the training data from the NPZ\ntraining_data = np.load('TF_intro.npz')", "_____no_output_____" ], [ "# Declare a variable where we will store the input size of our model\n# It should be equal to the number of variables you have\ninput_size = 2\n# Declare the output size of the model\n# It should be equal to the number of outputs you've got (for regressions that's usually 1)\noutput_size = 1\n\n# Outline the model\n# We lay out the model in 'Sequential'\n# Note that there are no calculations involved - we are just describing our network\nmodel = tf.keras.Sequential([\n # Each 'layer' is listed here\n # The method 'Dense' indicates, our mathematical operation to be (xw + b)\n tf.keras.layers.Dense(output_size,\n # there are extra arguments you can include to customize your model\n # in our case we are just trying to create a solution that is \n # as close as possible to our NumPy model\n kernel_initializer=tf.random_uniform_initializer(minval=-0.1, maxval=0.1),\n bias_initializer=tf.random_uniform_initializer(minval=-0.1, maxval=0.1)\n )\n ])\n\n# We can also define a custom optimizer, where we can specify the learning rate\ncustom_optimizer = tf.keras.optimizers.SGD(learning_rate=0.02)\n# Note that sometimes you may also need a custom loss function \n# That's much harder to implement and won't be covered in this course though\n\n# 'compile' is the place where you select and indicate the optimizers and the loss\nmodel.compile(optimizer=custom_optimizer, loss='mean_squared_error')\n\n# finally we fit the model, indicating the inputs and targets\n# if they are not otherwise specified the number of epochs will be 1 (a single epoch of training), \n# so the number of epochs is 'kind of' mandatory, too\n# we can play around with verbose; we prefer verbose=2\nmodel.fit(training_data['inputs'], training_data['targets'], epochs=100, verbose=2)", "Epoch 1/100\n1000/1000 - 0s - loss: 24.5755\nEpoch 2/100\n1000/1000 - 0s - loss: 1.1773\nEpoch 3/100\n1000/1000 - 0s - loss: 0.4253\nEpoch 4/100\n1000/1000 - 0s - loss: 0.3853\nEpoch 5/100\n1000/1000 - 0s - loss: 0.3727\nEpoch 6/100\n1000/1000 - 0s - loss: 0.3932\nEpoch 7/100\n1000/1000 - 0s - loss: 0.3817\nEpoch 8/100\n1000/1000 - 0s - loss: 0.3877\nEpoch 9/100\n1000/1000 - 0s - loss: 0.3729\nEpoch 10/100\n1000/1000 - 0s - loss: 0.3982\nEpoch 11/100\n1000/1000 - 0s - loss: 0.3809\nEpoch 12/100\n1000/1000 - 0s - loss: 0.3788\nEpoch 13/100\n1000/1000 - 0s - loss: 0.3714\nEpoch 14/100\n1000/1000 - 0s - loss: 0.3608\nEpoch 15/100\n1000/1000 - 0s - loss: 0.3507\nEpoch 16/100\n1000/1000 - 0s - loss: 0.3918\nEpoch 17/100\n1000/1000 - 0s - loss: 0.3697\nEpoch 18/100\n1000/1000 - 0s - loss: 0.3811\nEpoch 19/100\n1000/1000 - 0s - loss: 0.3781\nEpoch 20/100\n1000/1000 - 0s - loss: 0.3974\nEpoch 21/100\n1000/1000 - 0s - loss: 0.3974\nEpoch 22/100\n1000/1000 - 0s - loss: 0.3724\nEpoch 23/100\n1000/1000 - 0s - loss: 0.3561\nEpoch 
24/100\n1000/1000 - 0s - loss: 0.3691\nEpoch 25/100\n1000/1000 - 0s - loss: 0.3650\nEpoch 26/100\n1000/1000 - 0s - loss: 0.3569\nEpoch 27/100\n1000/1000 - 0s - loss: 0.3707\nEpoch 28/100\n1000/1000 - 0s - loss: 0.4100\nEpoch 29/100\n1000/1000 - 0s - loss: 0.3703\nEpoch 30/100\n1000/1000 - 0s - loss: 0.3598\nEpoch 31/100\n1000/1000 - 0s - loss: 0.3775\nEpoch 32/100\n1000/1000 - 0s - loss: 0.3936\nEpoch 33/100\n1000/1000 - 0s - loss: 0.3968\nEpoch 34/100\n1000/1000 - 0s - loss: 0.3614\nEpoch 35/100\n1000/1000 - 0s - loss: 0.3588\nEpoch 36/100\n1000/1000 - 0s - loss: 0.3777\nEpoch 37/100\n1000/1000 - 0s - loss: 0.3637\nEpoch 38/100\n1000/1000 - 0s - loss: 0.3662\nEpoch 39/100\n1000/1000 - 0s - loss: 0.3655\nEpoch 40/100\n1000/1000 - 0s - loss: 0.3582\nEpoch 41/100\n1000/1000 - 0s - loss: 0.3759\nEpoch 42/100\n1000/1000 - 0s - loss: 0.4468\nEpoch 43/100\n1000/1000 - 0s - loss: 0.3613\nEpoch 44/100\n1000/1000 - 0s - loss: 0.3905\nEpoch 45/100\n1000/1000 - 0s - loss: 0.3825\nEpoch 46/100\n1000/1000 - 0s - loss: 0.3810\nEpoch 47/100\n1000/1000 - 0s - loss: 0.3546\nEpoch 48/100\n1000/1000 - 0s - loss: 0.3520\nEpoch 49/100\n1000/1000 - 0s - loss: 0.3878\nEpoch 50/100\n1000/1000 - 0s - loss: 0.3748\nEpoch 51/100\n1000/1000 - 0s - loss: 0.3978\nEpoch 52/100\n1000/1000 - 0s - loss: 0.3669\nEpoch 53/100\n1000/1000 - 0s - loss: 0.3650\nEpoch 54/100\n1000/1000 - 0s - loss: 0.3869\nEpoch 55/100\n1000/1000 - 0s - loss: 0.3952\nEpoch 56/100\n1000/1000 - 0s - loss: 0.3897\nEpoch 57/100\n1000/1000 - 0s - loss: 0.3698\nEpoch 58/100\n1000/1000 - 0s - loss: 0.3655\nEpoch 59/100\n1000/1000 - 0s - loss: 0.3717\nEpoch 60/100\n1000/1000 - 0s - loss: 0.3942\nEpoch 61/100\n1000/1000 - 0s - loss: 0.4334\nEpoch 62/100\n1000/1000 - 0s - loss: 0.3836\nEpoch 63/100\n1000/1000 - 0s - loss: 0.3631\nEpoch 64/100\n1000/1000 - 0s - loss: 0.3804\nEpoch 65/100\n1000/1000 - 0s - loss: 0.3671\nEpoch 66/100\n1000/1000 - 0s - loss: 0.3801\nEpoch 67/100\n1000/1000 - 0s - loss: 0.4032\nEpoch 68/100\n1000/1000 - 0s - loss: 0.3764\nEpoch 69/100\n1000/1000 - 0s - loss: 0.3549\nEpoch 70/100\n1000/1000 - 0s - loss: 0.3585\nEpoch 71/100\n1000/1000 - 0s - loss: 0.3747\nEpoch 72/100\n1000/1000 - 0s - loss: 0.3633\nEpoch 73/100\n1000/1000 - 0s - loss: 0.3493\nEpoch 74/100\n1000/1000 - 0s - loss: 0.3924\nEpoch 75/100\n1000/1000 - 0s - loss: 0.4246\nEpoch 76/100\n1000/1000 - 0s - loss: 0.3701\nEpoch 77/100\n1000/1000 - 0s - loss: 0.3959\nEpoch 78/100\n1000/1000 - 0s - loss: 0.3923\nEpoch 79/100\n1000/1000 - 0s - loss: 0.3587\nEpoch 80/100\n1000/1000 - 0s - loss: 0.3729\nEpoch 81/100\n1000/1000 - 0s - loss: 0.3649\nEpoch 82/100\n1000/1000 - 0s - loss: 0.3611\nEpoch 83/100\n1000/1000 - 0s - loss: 0.3701\nEpoch 84/100\n1000/1000 - 0s - loss: 0.3699\nEpoch 85/100\n1000/1000 - 0s - loss: 0.3494\nEpoch 86/100\n1000/1000 - 0s - loss: 0.3613\nEpoch 87/100\n1000/1000 - 0s - loss: 0.3933\nEpoch 88/100\n1000/1000 - 0s - loss: 0.4031\nEpoch 89/100\n1000/1000 - 0s - loss: 0.3814\nEpoch 90/100\n1000/1000 - 0s - loss: 0.3481\nEpoch 91/100\n1000/1000 - 0s - loss: 0.3664\nEpoch 92/100\n1000/1000 - 0s - loss: 0.3691\nEpoch 93/100\n1000/1000 - 0s - loss: 0.3599\nEpoch 94/100\n1000/1000 - 0s - loss: 0.3817\nEpoch 95/100\n1000/1000 - 0s - loss: 0.3572\nEpoch 96/100\n1000/1000 - 0s - loss: 0.3699\nEpoch 97/100\n1000/1000 - 0s - loss: 0.3666\nEpoch 98/100\n1000/1000 - 0s - loss: 0.3667\nEpoch 99/100\n1000/1000 - 0s - loss: 0.4198\nEpoch 100/100\n1000/1000 - 0s - loss: 0.3667\n" ] ], [ [ "## Extract the weights and bias\nExtracting the weight(s) and bias(es) of a model 
is not an essential step for the machine learning process. In fact, usually they would not tell us much in a deep learning context. However, this simple example was set up in a way, which allows us to verify if the answers we get are correct.", "_____no_output_____" ] ], [ [ "# Extracting the weights and biases is achieved quite easily\nmodel.layers[0].get_weights()", "_____no_output_____" ], [ "# We can save the weights and biases in separate variables for easier examination\n# Note that there can be hundreds or thousands of them!\nweights = model.layers[0].get_weights()[0]\nweights", "_____no_output_____" ], [ "# We can save the weights and biases in separate variables for easier examination\n# Note that there can be hundreds or thousands of them!\nbias = model.layers[0].get_weights()[1]\nbias", "_____no_output_____" ] ], [ [ "## Extract the outputs (make predictions)\nOnce more, this is not an essential step, however, we usually want to be able to make predictions.", "_____no_output_____" ] ], [ [ "# We can predict new values in order to actually make use of the model\n# Sometimes it is useful to round the values to be able to read the output\n# Usually we use this method on NEW DATA, rather than our original training data\nmodel.predict_on_batch(training_data['inputs']).round(1)", "_____no_output_____" ], [ "# If we display our targets (actual observed values), we can manually compare the outputs and the targets\ntraining_data['targets'].round(1)", "_____no_output_____" ] ], [ [ "## Plotting the data", "_____no_output_____" ] ], [ [ "# The model is optimized, so the outputs are calculated based on the last form of the model\n\n# We have to np.squeeze the arrays in order to fit them to what the plot function expects.\n# Doesn't change anything as we cut dimensions of size 1 - just a technicality.\nplt.plot(np.squeeze(model.predict_on_batch(training_data['inputs'])), np.squeeze(training_data['targets']))\nplt.xlabel('outputs')\nplt.ylabel('targets')\nplt.show()\n\n# Voila - what you see should be exactly the same as in the previous notebook!\n# You probably don't see the point of TensorFlow now - it took us the same number of lines of code\n# to achieve this simple result. However, once we go deeper in the next chapter,\n# TensorFlow will save us hundreds of lines of code.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e7a965770d670d1fd80201a281e7fd5fe34953d4
683,215
ipynb
Jupyter Notebook
notebooks/ExploratoryDataAnalysis.ipynb
geracharu/DataScienceProject
79351e5962fb5a26e920ac2a166e6b07085f9226
[ "MIT" ]
null
null
null
notebooks/ExploratoryDataAnalysis.ipynb
geracharu/DataScienceProject
79351e5962fb5a26e920ac2a166e6b07085f9226
[ "MIT" ]
null
null
null
notebooks/ExploratoryDataAnalysis.ipynb
geracharu/DataScienceProject
79351e5962fb5a26e920ac2a166e6b07085f9226
[ "MIT" ]
null
null
null
163.761985
137,148
0.82837
[ [ [ "## Loan Default Risk - Exploratory Data Analysis\n\n#### This notebook is focused on data exploration. The key objective is to familiarise myself with the data and to identify any issues. This could lead to data cleaning or feature engineering. <br> \n\n\n\n<u> Contents </u> \n\n 1. Importing Relevant Libraries, Reading In Data\n\n 2. Anomly Detection and Correction\n \n 3. Data Exploration\n \n 4. Summary\n \n 5. Distribution of New Datasets", "_____no_output_____" ], [ "#### 1.1 Importing Relevant Libraries", "_____no_output_____" ] ], [ [ "#Importing data wrangling library\nimport pandas as pd #Data Wrangling/Cleaning package for mixed data\nimport numpy as np #Data wrangling & manipulation for numerical data\nimport os\n\n#Importing visulization libraries\nfrom matplotlib import pyplot as plt #Importing visulization libraries\nimport seaborn as sns\n\n\n#Importing Machine Learning Libraries(Preprocessing)\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import LabelEncoder # sklearn preprocessing for dealing with categorical variables\n\n\n#Importing Machine Learning Libraries(Modelling And Evaluation) \nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.metrics import accuracy_score\n", "_____no_output_____" ], [ "os.getcwd() # Get working Directory", "_____no_output_____" ] ], [ [ "#### 1.2 Reading In Data", "_____no_output_____" ] ], [ [ "rawfilepath = 'C:/Users/chara.geru/OneDrive - Avanade/DataScienceProject/HomeCreditModel/data/raw/'\nfilename = 'application_train.csv'\n\ninterimfilepath1 = 'C:/Users/chara.geru/OneDrive - Avanade/DataScienceProject/HomeCreditModel/data/interim/'\nfilename1 = 'df1.csv'\nfilename2 = 'df2.csv'\n \napplication_train = pd.read_csv(rawfilepath + filename)\ndf1 = pd.read_csv(interimfilepath1 + filename1)\ndf2 = pd.read_csv(interimfilepath1 + filename2)", "_____no_output_____" ], [ "print('Size of application_train data:', application_train.shape) #Printing shape of datasets", "Size of application_train data: (307511, 122)\n" ] ], [ [ "This dataset has: \n- 122 columns (features)\n- 307511 rows", "_____no_output_____" ] ], [ [ "application_train.columns.values #Printing all column names", "_____no_output_____" ], [ "pd.set_option('display.max_columns', None) #Display all columns\napplication_train.describe() #Get summary statistics for all columns", "_____no_output_____" ], [ "application_train.head() #View first 5 rows of the dataset", "_____no_output_____" ] ], [ [ "Generally the data looks good based on the statistics shown from the describe method. \n\n<u>Potentional issues </u> <br>\n- Values in DAYS_BIRTH column are negative. They represent number of days a person was before they applied for a loan. 
For a better representation, I will convert them to positive values and convert days to years.<br>\n- DAYS_EMPLOYED will be given the same treatment for the same reasons.", "_____no_output_____" ], [ "#### 2.1 Anomaly Detection", "_____no_output_____" ] ], [ [ "(application_train['DAYS_BIRTH']).describe()", "_____no_output_____" ], [ "(application_train['DAYS_EMPLOYED']).describe()", "_____no_output_____" ] ], [ [ "#### 3.1 Check for Nulls", "_____no_output_____" ] ], [ [ "# \"This function creates a table to summarize the null values\"\n\ndef nulltable(df):\n \"\"\"\n This function creates a table to summarize the null values\n \"\"\"\n \n total = df.isnull().sum().sort_values(ascending = False)\n percent = (df.isnull().sum()/df.isnull().count()*100).sort_values(ascending = False)\n missing_df_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])\n return missing_df_data.head(30) ", "_____no_output_____" ], [ "nulltable(application_train)", "_____no_output_____" ] ], [ [ "- There are a large number of columns (features) with more than 50% NULLs.\n- I've decided to drop these columns as they will not provide much information for training the model.\n- If a feature has less than 50% NULLs, these may be filled using an appropriate calculation such as the mean, median or mode", "_____no_output_____" ], [ "#### 3.2 Data balanced or imbalanced", "_____no_output_____" ] ], [ [ "#Data balanced or imbalanced\ntemp = application_train[\"TARGET\"].value_counts()\nfig1, ax1 = plt.subplots()\nax1.pie(temp, labels=['Loan Repayed','Loan Not Repayed'], autopct='%1.1f%%',wedgeprops={'edgecolor':'black'})\nax1.axis('equal')\nplt.title('Loan Repayed or Not')\nplt.show()", "_____no_output_____" ] ], [ [ "Data is highly imbalanced <br>\n- This emphasises the importance of assessing the precision/recall to evaluate results. For example, predicting all rows as not defaulted would lead to an accuracy of 91.9%.\n- Consider rebalancing the training data\n", "_____no_output_____" ], [ "#### 3.3 Number of each type of column", "_____no_output_____" ] ], [ [ "# Number of each type of column\napplication_train.dtypes.value_counts()", "_____no_output_____" ] ], [ [ "- There are 16 object columns.\n- These will need to be encoded when building the model (using a label encoder or one-hot encoder)", "_____no_output_____" ], [ "#### 3.4 Number of unique classes in each object column", "_____no_output_____" ] ], [ [ "# Number of unique classes in each object column\napplication_train.select_dtypes('object').apply(pd.Series.nunique, axis = 0)", "_____no_output_____" ] ], [ [ "- Gender has 3 values. This needs investigation and correction.", "_____no_output_____" ], [ "#### 4. Summary\nBased on the analysis so far we have identified the need to handle:\n \n - skewed data\n - removing columns with a large number of NULLs\n - removing rows with the third gender value\n \nThis led to the development of 2 new datasets, as we can't be sure which iterations would lead to the best model performance. 
This is a good opportunity for trial and error where I compare the performance of different permutations.\n\nHere are some visualisations to describe the new datasets\n\ndf1<br>\n- Negative DAYS_BIRTH converted to positive YEARS_BIRTH\n- Rows with third gender dropped\n- Dealt with features with a large number of NULLs\n\ndf2<br>\n- df2 has all the changes implemented to df1\n- In addition to that, in df2 the data for the skewed columns ('AMT_CREDIT', 'AMT_INCOME_TOTAL', 'AMT_GOODS_PRICE') has been log-transformed\n", "_____no_output_____" ], [ "### 5. Distributions of New Datasets", "_____no_output_____" ], [ " 5.1 Plotting distribution of Datasets <br>\n -I had initially plotted histograms for features like AMT_CREDIT, AMT_INCOME_TOTAL and AMT_GOODS_PRICE using df1. But I found that the distributions were skewed. So I logged these features to create df2 and plotted these features again. I show the comparisons of the same below: ", "_____no_output_____" ] ], [ [ "# Set the style of plots\nplt.style.use('fivethirtyeight')\n\nplt.figure(figsize = (10, 12))\n\nplt.subplot(2, 1, 1)\n\n\nplt.title(\"Distribution of AMT_CREDIT\")\nplt.hist(df1[\"AMT_CREDIT\"], bins =20) \nplt.xlabel(\"AMT_CREDIT\")\n\n\nplt.subplot(2, 1, 2)\n\n\nplt.title(\" Log Distribution of AMT_CREDIT\")\n\nplt.hist(df2[\"AMT_CREDIT\"], bins =20)\nplt.xlabel(\"Log_AMT_CREDIT\")", "_____no_output_____" ], [ "plt.figure(figsize = (10, 12))\n\nplt.subplot(2, 1, 1)\nplt.title(\"Distribution of AMT_INCOME_TOTAL\")\nplt.hist(df1[\"AMT_INCOME_TOTAL\"].dropna(), bins =25)\n\n\nplt.subplot(2, 1, 2)\nplt.title(\" Log Distribution of AMT_INCOME_TOTAL\")\n\nplt.hist(df2[\"AMT_INCOME_TOTAL\"].dropna(), bins =25)\nplt.xlabel(\"Log_INCOME_TOTAL\")", "_____no_output_____" ], [ "plt.figure(figsize = (10, 12))\n\nplt.subplot(2, 1, 1)\nplt.title(\"Distribution of AMT_GOODS_PRICE\")\nplt.hist(df1[\"AMT_GOODS_PRICE\"].dropna(), bins = 20)\n\n\nplt.subplot(2, 1, 2)\nplt.title(\" Log Distribution of AMT_GOODS_PRICE\")\n\nplt.hist(df2[\"AMT_GOODS_PRICE\"].dropna(), bins = 20)\nplt.xlabel(\"Log_GOODS_PRICE\")", "_____no_output_____" ], [ "plt.hist(df1['YEARS_EMPLOYED'])\nplt.xlabel('Years of Employment')", "_____no_output_____" ] ], [ [ "- It is not reasonable to have such high years of employment (> 40-60 years).\n- As there are many rows, I would try and replace the values with the average years of employment\n- I would plot the distribution of the reasonable values to get a clearer picture of the distribution.\n- Based on the distribution, I would decide to take a log of the values to reduce the skew.", "_____no_output_____" ] ], [ [ "less_years = df1[df1.YEARS_EMPLOYED <= 80]\nmore_years = df1[df1.YEARS_EMPLOYED >80]", "_____no_output_____" ], [ "plt.hist(less_years['YEARS_EMPLOYED'])\nplt.xlabel('Distribution of Lesser Years of Employment')", "_____no_output_____" ] ], [ [ "#### log this data and change it in df1", "_____no_output_____" ] ], [ [ "less_years['YEARS_EMPLOYED'].mean()", "_____no_output_____" ], [ "df1['YEARS_EMPLOYED'] = np.where(df1['YEARS_EMPLOYED'] > 80, 7, df1['YEARS_EMPLOYED'])", "_____no_output_____" ], [ "plt.hist(df1['YEARS_EMPLOYED'])\nplt.xlabel('Years of Employment')", "_____no_output_____" ], [ "plt.hist(df2['YEARS_EMPLOYED'])\nplt.xlabel('Years of Employment')", "_____no_output_____" ], [ "##Defaul = df1[df1['TARGET'] == 1]\nNot_defaul = df1[df1['TARGET'] == 0]", "_____no_output_____" ], [ "# Find correlations with the target and sort\ncorrelations = df1.corr()['TARGET'].sort_values()\n\n# Display correlations\nprint('\\nMost Positive 
Correlations: \\n ', correlations.tail(15))\nprint('\\nMost Negative Correlations:\\n', correlations.head(15))", "\nMost Positive Correlations: \n FLAG_WORK_PHONE 0.027335\nLIVE_CITY_NOT_WORK_CITY 0.032575\nDEF_60_CNT_SOCIAL_CIRCLE 0.032796\nDEF_30_CNT_SOCIAL_CIRCLE 0.032987\nOWN_CAR_AGE 0.039432\nDAYS_REGISTRATION 0.041980\nREG_CITY_NOT_LIVE_CITY 0.044239\nFLAG_DOCUMENT_3 0.044920\nFLAG_EMP_PHONE 0.045508\nREG_CITY_NOT_WORK_CITY 0.050967\nDAYS_ID_PUBLISH 0.052650\nDAYS_LAST_PHONE_CHANGE 0.054597\nREGION_RATING_CLIENT 0.059188\nREGION_RATING_CLIENT_W_CITY 0.061188\nTARGET 1.000000\nName: TARGET, dtype: float64\n\nMost Negative Correlations:\n EXT_SOURCE_3 -0.178404\nEXT_SOURCE_2 -0.161311\nEXT_SOURCE_1 -0.154455\nYEARS_BIRTH -0.078049\nYEARS_EMPLOYED -0.071547\nFLOORSMAX_AVG -0.045667\nFLOORSMAX_MEDI -0.045520\nFLOORSMAX_MODE -0.044567\nAMT_GOODS_PRICE -0.039177\nREGION_POPULATION_RELATIVE -0.037834\nFLOORSMIN_AVG -0.037009\nFLOORSMIN_MEDI -0.036770\nELEVATORS_AVG -0.036300\nELEVATORS_MEDI -0.035934\nFLOORSMIN_MODE -0.035415\nName: TARGET, dtype: float64\n" ], [ "# Find the correlation of the positive days since birth and target\ndf1['YEARS_BIRTH'] = abs(df1['YEARS_BIRTH'])\ndf1['YEARS_BIRTH'].corr(df1['TARGET'])", "_____no_output_____" ], [ "# Set the style of plots\nplt.style.use('fivethirtyeight')\n\n# Plot the distribution of ages in years\nplt.hist(df1['YEARS_BIRTH'], edgecolor = 'k', bins = 25)\nplt.title('Age of Client'); plt.xlabel('Age (years)'); plt.ylabel('Count');", "_____no_output_____" ], [ "# KDE plot of loans that were repaid on time\nsns.kdeplot(df1.loc[df1['TARGET'] == 0, 'YEARS_BIRTH'] / 365, label = 'target == 0')\n\n# KDE plot of loans which were not repaid on time\nsns.kdeplot(df1.loc[df1['TARGET'] == 1, 'YEARS_BIRTH'] / 365, label = 'target == 1')\n\n# Labeling of plot\nplt.xlabel('Age (years)'); plt.ylabel('Density'); plt.title('Distribution of Ages');", "_____no_output_____" ], [ "age_data = df1[['TARGET', 'YEARS_BIRTH']]\nage_data['YEARS_BIRTH'] = age_data['YEARS_BIRTH']\n\n# Bin the age data\nage_data['YEARS_BINNED'] = pd.cut(age_data['YEARS_BIRTH'], bins = np.linspace(20, 70, num = 11))\nage_data.head(10)", "<ipython-input-30-f08e4b57f1df>:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n age_data['YEARS_BIRTH'] = age_data['YEARS_BIRTH']\n<ipython-input-30-f08e4b57f1df>:5: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n age_data['YEARS_BINNED'] = pd.cut(age_data['YEARS_BIRTH'], bins = np.linspace(20, 70, num = 11))\n" ], [ "# Group by the bin and calculate averages\nage_groups = age_data.groupby('YEARS_BINNED').mean()\nage_groups", "_____no_output_____" ], [ "plt.figure(figsize = (8, 8))\n\n# Graph the age bins and the average of the target as a bar plot\nplt.bar(age_groups.index.astype(str), 100 * age_groups['TARGET'])\n\n# Plot labeling\nplt.xticks(rotation = 75); plt.xlabel('Age Group (years)'); plt.ylabel('Failure to Repay (%)')\nplt.title('Failure to Repay by Age Group');", "_____no_output_____" ], [ "# Extract the EXT_SOURCE variables and show correlations\next_data = 
df1[['TARGET', 'EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'YEARS_BIRTH']]\next_data_corrs = ext_data.corr()\next_data_corrs", "_____no_output_____" ], [ "# Extract the EXT_SOURCE variables and show correlations\next_data = application_train[['TARGET', 'EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'DAYS_BIRTH']]\next_data_corrs = ext_data.corr()\next_data_corrs", "_____no_output_____" ], [ "plt.figure(figsize = (8, 6))\n\n# Heatmap of correlations\nsns.heatmap(ext_data_corrs, cmap = plt.cm.RdYlBu_r, vmin = -0.25, annot = True, vmax = 0.6)\nplt.title('Correlation Heatmap');", "_____no_output_____" ] ], [ [ "#DAYS_BIRTH is positively correlated with EXT_SOURCE_1 indicating that maybe one of the factors in this score is the client age.\n\n#so try build model with EXT_SOURCE_1 and/or DAYS_BIRTH", "_____no_output_____" ] ], [ [ "plt.figure(figsize = (10, 12))\n\n# iterate through the sources\nfor i, source in enumerate(['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3']):\n \n # create a new subplot for each source\n #\n plt.subplot(3, 1, i+1)\n # plot repaid loans\n sns.kdeplot(df1.loc[df1['TARGET'] == 0, source], label = 'target == 0')\n # plot loans that were not repaid\n sns.kdeplot(df1.loc[application_train['TARGET'] == 1, source], label = 'target == 1')\n \n # Label the plots\n plt.title('Distribution of %s by Target Value' % source)\n plt.xlabel('%s' % source); plt.ylabel('Density');\n \nplt.tight_layout(h_pad = 2.5)", "_____no_output_____" ], [ "# Make a new dataframe for polynomial features\npoly_features = df1[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'YEARS_BIRTH', 'TARGET']]\npoly_features_test = df1[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'YEARS_BIRTH']]\n\n# imputer for handling missing values\nfrom sklearn.preprocessing import Imputer\nimputer = Imputer(strategy = 'median')\n\npoly_target = poly_features['TARGET']\n\npoly_features = poly_features.drop(columns = ['TARGET'])\n\n# Need to impute missing values\npoly_features = imputer.fit_transform(poly_features)\npoly_features_test = imputer.transform(poly_features_test)\n\nfrom sklearn.preprocessing import PolynomialFeatures\n \n# Create the polynomial object with specified degree\npoly_transformer = PolynomialFeatures(degree = 3)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a96a88bc18ff27e5d314f916e226d89e7a8f78
2,086
ipynb
Jupyter Notebook
Determinant_of_Matrix.ipynb
jnrtnan/Linear-Algebra-58020
77ac5f7a5f0c03879a308b6a36df57a7ce852ce2
[ "Apache-2.0" ]
null
null
null
Determinant_of_Matrix.ipynb
jnrtnan/Linear-Algebra-58020
77ac5f7a5f0c03879a308b6a36df57a7ce852ce2
[ "Apache-2.0" ]
null
null
null
Determinant_of_Matrix.ipynb
jnrtnan/Linear-Algebra-58020
77ac5f7a5f0c03879a308b6a36df57a7ce852ce2
[ "Apache-2.0" ]
null
null
null
22.923077
246
0.439597
[ [ [ "<a href=\"https://colab.research.google.com/github/jnrtnan/Linear-Algebra-58020/blob/main/Determinant_of_Matrix.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import numpy as np\n\nA=([1,2,-1],[4,6,-2],[-1,3,3])\nprint(A)", "([1, 2, -1], [4, 6, -2], [-1, 3, 3])\n" ], [ "print(round(np.linalg.det(A)))", "-14\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
e7a9746d1616557ee10c2c9bc932fef807c623e9
18,354
ipynb
Jupyter Notebook
examples/mnist_gan.ipynb
tirkarthi/dm-haiku
803671cf6ce5bc35fca7e6af89938579407e12ff
[ "Apache-2.0" ]
1
2020-06-25T13:19:17.000Z
2020-06-25T13:19:17.000Z
examples/mnist_gan.ipynb
tirkarthi/dm-haiku
803671cf6ce5bc35fca7e6af89938579407e12ff
[ "Apache-2.0" ]
null
null
null
examples/mnist_gan.ipynb
tirkarthi/dm-haiku
803671cf6ce5bc35fca7e6af89938579407e12ff
[ "Apache-2.0" ]
null
null
null
35.917808
147
0.487687
[ [ [ "#### Copyright 2020 DeepMind Technologies Limited. All Rights Reserved.\n\n#### Licensed under the Apache License, Version 2.0 (the \"License\");", "_____no_output_____" ], [ "#### Full license text", "_____no_output_____" ] ], [ [ "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# A (very) basic GAN for MNIST in JAX/Haiku\n\nBased on a TensorFlow tutorial written by Mihaela Rosca.\n\nOriginal GAN paper: https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "# Uncomment the line below if running on colab.research.google.com.\n# !pip install dm-haiku\n\nimport functools\nfrom typing import Any, NamedTuple\n\nimport haiku as hk\nimport jax\nfrom jax.experimental import optix\nimport jax.numpy as jnp\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nimport tensorflow as tf\nimport tensorflow_datasets as tfds", "_____no_output_____" ] ], [ [ "## Define the dataset", "_____no_output_____" ] ], [ [ "# Download the data once.\nmnist = tfds.load(\"mnist\")\n\n\ndef make_dataset(batch_size, seed=1):\n def _preprocess(sample):\n # Convert to floats in [0, 1].\n image = tf.image.convert_image_dtype(sample[\"image\"], tf.float32)\n # Scale the data to [-1, 1] to stabilize training.\n return 2.0 * image - 1.0\n\n ds = mnist[\"train\"]\n ds = ds.map(map_func=_preprocess, \n num_parallel_calls=tf.data.experimental.AUTOTUNE)\n ds = ds.cache()\n ds = ds.shuffle(10 * batch_size, seed=seed).repeat().batch(batch_size)\n return tfds.as_numpy(ds)", "_____no_output_____" ] ], [ [ "## Define the model", "_____no_output_____" ] ], [ [ "class Generator(hk.Module):\n \"\"\"Generator network.\"\"\"\n\n def __init__(self, output_channels=(32, 1), name=None):\n super().__init__(name=name)\n self.output_channels = output_channels\n\n def __call__(self, x):\n \"\"\"Maps noise latents to images.\"\"\"\n x = hk.Linear(7 * 7 * 64)(x)\n x = jnp.reshape(x, x.shape[:1] + (7, 7, 64)) \n for output_channels in self.output_channels:\n x = jax.nn.relu(x)\n x = hk.Conv2DTranspose(output_channels=output_channels,\n kernel_shape=[5, 5],\n stride=2,\n padding=\"SAME\")(x)\n # We use a tanh to ensure that the generated samples are in the same\n # range as the data.\n return jnp.tanh(x)\n\n\nclass Discriminator(hk.Module):\n \"\"\"Discriminator network.\"\"\"\n\n def __init__(self,\n output_channels=(8, 16, 32, 64, 128),\n strides=(2, 1, 2, 1, 2),\n name=None): \n super().__init__(name=name)\n self.output_channels = output_channels\n self.strides = strides\n\n def __call__(self, x):\n \"\"\"Classifies images as real or fake.\"\"\"\n for output_channels, stride in zip(self.output_channels, self.strides):\n x = hk.Conv2D(output_channels=output_channels,\n kernel_shape=[5, 5],\n stride=stride,\n padding=\"SAME\")(x)\n x = jax.nn.leaky_relu(x, negative_slope=0.2)\n x = hk.Flatten()(x) \n # We have two classes: 0 = input is fake, 1 = input is real.\n logits = hk.Linear(2)(x)\n return 
logits\n\n\ndef tree_shape(xs):\n return jax.tree_map(lambda x: x.shape, xs)\n\n\ndef sparse_softmax_cross_entropy(logits, labels):\n one_hot_labels = jax.nn.one_hot(labels, logits.shape[-1])\n return -jnp.sum(one_hot_labels * jax.nn.log_softmax(logits), axis=-1)\n\n\nclass GANTuple(NamedTuple):\n gen: Any\n disc: Any\n\n\nclass GANState(NamedTuple):\n params: GANTuple\n opt_state: GANTuple\n\n\nclass GAN:\n \"\"\"A basic GAN.\"\"\"\n\n def __init__(self, num_latents):\n self.num_latents = num_latents\n \n # Define the Haiku network transforms.\n # We don't use BatchNorm so we don't use `with_state`.\n self.gen_transform = hk.transform(lambda *args: Generator()(*args))\n self.disc_transform = hk.transform(lambda *args: Discriminator()(*args))\n \n # Build the optimizers.\n self.optimizers = GANTuple(gen=optix.adam(1e-4, b1=0.5, b2=0.9),\n disc=optix.adam(1e-4, b1=0.5, b2=0.9))\n\n @functools.partial(jax.jit, static_argnums=0)\n def initial_state(self, rng, batch):\n \"\"\"Returns the initial parameters and optimize states.\"\"\"\n # Generate dummy latents for the generator.\n dummy_latents = jnp.zeros((batch.shape[0], self.num_latents))\n\n # Get initial network parameters.\n rng_gen, rng_disc = jax.random.split(rng)\n params = GANTuple(gen=self.gen_transform.init(rng_gen, dummy_latents),\n disc=self.disc_transform.init(rng_disc, batch))\n print(\"Generator: \\n\\n{}\\n\".format(tree_shape(params.gen)))\n print(\"Discriminator: \\n\\n{}\\n\".format(tree_shape(params.disc)))\n \n # Initialize the optimizers.\n opt_state = GANTuple(gen=self.optimizers.gen.init(params.gen),\n disc=self.optimizers.disc.init(params.disc))\n \n return GANState(params=params, opt_state=opt_state)\n\n def sample(self, rng, gen_params, num_samples):\n \"\"\"Generates images from noise latents.\"\"\"\n latents = jax.random.normal(rng, shape=(num_samples, self.num_latents))\n return self.gen_transform.apply(gen_params, latents)\n\n def gen_loss(self, gen_params, rng, disc_params, batch):\n \"\"\"Generator loss.\"\"\"\n # Sample from the generator.\n fake_batch = self.sample(rng, gen_params, num_samples=batch.shape[0])\n\n # Evaluate using the discriminator. 
Recall class 1 is real.\n fake_logits = self.disc_transform.apply(disc_params, fake_batch)\n fake_probs = jax.nn.softmax(fake_logits)[:, 1]\n loss = -jnp.log(fake_probs)\n \n return jnp.mean(loss)\n\n def disc_loss(self, disc_params, rng, gen_params, batch):\n \"\"\"Discriminator loss.\"\"\"\n # Sample from the generator.\n fake_batch = self.sample(rng, gen_params, num_samples=batch.shape[0])\n\n # For efficiency we process both the real and fake data in one pass.\n real_and_fake_batch = jnp.concatenate([batch, fake_batch], axis=0)\n real_and_fake_logits = self.disc_transform.apply(disc_params, \n real_and_fake_batch)\n real_logits, fake_logits = jnp.split(real_and_fake_logits, 2, axis=0)\n\n # Class 1 is real.\n real_labels = jnp.ones((batch.shape[0],), dtype=jnp.int32)\n real_loss = sparse_softmax_cross_entropy(real_logits, real_labels)\n\n # Class 0 is fake.\n fake_labels = jnp.zeros((batch.shape[0],), dtype=jnp.int32)\n fake_loss = sparse_softmax_cross_entropy(fake_logits, fake_labels)\n\n return jnp.mean(real_loss + fake_loss)\n\n @functools.partial(jax.jit, static_argnums=0)\n def update(self, rng, gan_state, batch):\n \"\"\"Performs a parameter update.\"\"\"\n rng, rng_gen, rng_disc = jax.random.split(rng, 3)\n \n # Update the discriminator.\n disc_loss, disc_grads = jax.value_and_grad(self.disc_loss)(\n gan_state.params.disc,\n rng_disc, \n gan_state.params.gen,\n batch)\n disc_update, disc_opt_state = self.optimizers.disc.update(\n disc_grads, gan_state.opt_state.disc)\n disc_params = optix.apply_updates(gan_state.params.disc, disc_update)\n\n # Update the generator.\n gen_loss, gen_grads = jax.value_and_grad(self.gen_loss)(\n gan_state.params.gen,\n rng_gen, \n gan_state.params.disc,\n batch)\n gen_update, gen_opt_state = self.optimizers.gen.update(\n gen_grads, gan_state.opt_state.gen)\n gen_params = optix.apply_updates(gan_state.params.gen, gen_update)\n \n params = GANTuple(gen=gen_params, disc=disc_params)\n opt_state = GANTuple(gen=gen_opt_state, disc=disc_opt_state)\n gan_state = GANState(params=params, opt_state=opt_state)\n log = {\n \"gen_loss\": gen_loss,\n \"disc_loss\": disc_loss,\n }\n\n return rng, gan_state, log", "_____no_output_____" ] ], [ [ "## Train the model", "_____no_output_____" ] ], [ [ "#@title {vertical-output: true}\n\nnum_steps = 20001\nlog_every = num_steps // 100\n\n# Let's see what hardware we're working with. 
The training takes a few\n# minutes on a GPU, a bit longer on CPU.\nprint(f\"Number of devices: {jax.device_count()}\")\nprint(\"Device:\", jax.devices()[0].device_kind)\nprint(\"\")\n\n# Make the dataset.\ndataset = make_dataset(batch_size=64)\n\n# The model.\ngan = GAN(num_latents=20)\n\n# Top-level RNG.\nrng = jax.random.PRNGKey(1729)\n\n# Initialize the network and optimizer.\nrng, rng1 = jax.random.split(rng)\ngan_state = gan.initial_state(rng1, next(dataset))\n\nsteps = []\ngen_losses = []\ndisc_losses = []\n\nfor step in range(num_steps):\n rng, gan_state, log = gan.update(rng, gan_state, next(dataset))\n\n # Log the losses.\n if step % log_every == 0: \n # It's important to call `device_get` here so we don't take up device\n # memory by saving the losses.\n log = jax.device_get(log)\n gen_loss = log[\"gen_loss\"]\n disc_loss = log[\"disc_loss\"]\n print(f\"Step {step}: \"\n f\"gen_loss = {gen_loss:.3f}, disc_loss = {disc_loss:.3f}\")\n steps.append(step)\n gen_losses.append(gen_loss)\n disc_losses.append(disc_loss)", "_____no_output_____" ] ], [ [ "## Visualize the losses\nUnlike losses for classifiers or VAEs, GAN losses do not decrease steadily, instead going up and down depending on the training dynamics.", "_____no_output_____" ] ], [ [ "sns.set_style(\"whitegrid\")\n\nfig, axes = plt.subplots(1, 2, figsize=(20, 6))\n\n# Plot the discriminator loss.\naxes[0].plot(steps, disc_losses, \"-\")\naxes[0].plot(steps, np.log(2) * np.ones_like(steps), \"r--\", \n label=\"Discriminator is being fooled\")\naxes[0].legend(fontsize=20)\naxes[0].set_title(\"Discriminator loss\", fontsize=20)\n\n# Plot the generator loss.\naxes[1].plot(steps, gen_losses, '-')\naxes[1].set_title(\"Generator loss\", fontsize=20);", "_____no_output_____" ] ], [ [ "## Visualize samples", "_____no_output_____" ] ], [ [ "#@title {vertical-output: true}\n\ndef make_grid(samples, num_cols=8, rescale=True): \n batch_size, height, width = samples.shape\n assert batch_size % num_cols == 0\n num_rows = batch_size // num_cols\n # We want samples.shape == (height * num_rows, width * num_cols).\n samples = samples.reshape(num_rows, num_cols, height, width)\n samples = samples.swapaxes(1, 2)\n samples = samples.reshape(height * num_rows, width * num_cols)\n return samples\n\n\n# Generate samples from the trained generator.\nrng = jax.random.PRNGKey(12)\nsamples = gan.sample(rng, gan_state.params.gen, num_samples=64)\nsamples = jax.device_get(samples)\nsamples = samples.squeeze(axis=-1)\n# Our model outputs values in [-1, 1] so scale it back to [0, 1].\nsamples = (samples + 1.0) / 2.0\n\nplt.gray()\nplt.axis(\"off\")\nsamples_grid = make_grid(samples)\nplt.imshow(samples_grid);", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7a98c28b4e5f75414886324aa4169b3f971d032
12,202
ipynb
Jupyter Notebook
Code/4.-Cómo calcular los autovalores y autovectores.ipynb
DataEngel/Linear-algebra-applied-to-ML-with-Python
3c52105acf9f6b5089bb3e80ad05ae31dd7a28a0
[ "MIT" ]
null
null
null
Code/4.-Cómo calcular los autovalores y autovectores.ipynb
DataEngel/Linear-algebra-applied-to-ML-with-Python
3c52105acf9f6b5089bb3e80ad05ae31dd7a28a0
[ "MIT" ]
null
null
null
Code/4.-Cómo calcular los autovalores y autovectores.ipynb
DataEngel/Linear-algebra-applied-to-ML-with-Python
3c52105acf9f6b5089bb3e80ad05ae31dd7a28a0
[ "MIT" ]
null
null
null
51.485232
7,308
0.783068
[ [ [ "## ¿Cómo podemos calcular con las funciones de Python los autovectores y los autovalores? ", "_____no_output_____" ] ], [ [ "# Importamos las bibliotecas\n\n%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# Creamos una matriz\nX = np.array([[3, 2], [4, 1]])\nprint(X)", "[[3 2]\n [4 1]]\n" ], [ "# Vemos la biblioteca para calcular los autovectores y autovalores de Numpy\nprint(np.linalg.eig(X))", "(array([ 5., -1.]), array([[ 0.70710678, -0.4472136 ],\n [ 0.70710678, 0.89442719]]))\n" ], [ "# Pedimos que muestre los autovalores\nautovalores, autovectores = np.linalg.eig(X)\nprint(autovalores)", "[ 5. -1.]\n" ], [ "# Pedimos cual es el autovalor asociado a cada autovector \nprint(autovectores[:, 0])", "[0.70710678 0.70710678]\n" ], [ "# Mostramos el autovector numero 1\nprint(autovectores[:, 1])", "[-0.4472136 0.89442719]\n" ], [ "# Importamos nuestra función para graficar. \n%run \".\\\\Funciones auxiliares\\graficarVectores.ipynb\"", "_____no_output_____" ], [ "# Definamos un array \nv = np.array([[-1], [2]])\n# Calculamos la tranformacion con el calculo del producto interno \nXv = X.dot(v)\n# Y lo comparamos con el autovector anterior \nv_np = autovectores[:, 1]\nprint(Xv)\nprint(v_np)", "[[ 1]\n [-2]]\n[-0.4472136 0.89442719]\n" ], [ "# Graficamos al que calculamos con el producto interno, el vector original y el que nos devolvió el método \ngraficarVectores([Xv.flatten(), v.flatten(), v_np], cols = ['green','orange','blue'])\n\nplt.ylim(-4,2)\nplt.xlim(-7,3)", "_____no_output_____" ] ], [ [ "\n## Conclusión: \n\nEntonces, podemos ver que el autovector encontrado por Numpy es un múltiplo del autovector que nosotros propusimos. Eso quiere decir que los autovectores son lo mismo, solo que pueden variar en amplitud o en sentido, pero la dirección se mantiene. Si tenemos un autovector, menos ese autovector también es autovector. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7a99790dc7891aa26d9b1dd80255f6e2441ff6d
37,265
ipynb
Jupyter Notebook
es/notebooks/intro/describing-quantum-computers.ipynb
gitlocalize/platypus
b281bb13ba4666f1931523d985e51ea6a955049c
[ "Apache-2.0" ]
2
2022-03-09T13:39:05.000Z
2022-03-24T16:35:55.000Z
es/notebooks/intro/describing-quantum-computers.ipynb
gitlocalize/platypus
b281bb13ba4666f1931523d985e51ea6a955049c
[ "Apache-2.0" ]
null
null
null
es/notebooks/intro/describing-quantum-computers.ipynb
gitlocalize/platypus
b281bb13ba4666f1931523d985e51ea6a955049c
[ "Apache-2.0" ]
null
null
null
55.12574
882
0.537394
[ [ [ "# Descripción de las computadoras cuánticas.\n\nEste capítulo presentará los diferentes objetos matemáticos y notaciones que usaremos para describir las computadoras cuánticas. Los símbolos, las ecuaciones y el vocabulario especializado nos permiten comunicarnos y trabajar con las matemáticas de una manera muy concisa, son herramientas increíblemente poderosas, pero también tienen un costo; son difíciles de entender si no sabes lo que significan todos los símbolos, y esto puede alienar a las personas. Para contrarrestar esto, en este libro de texto, las ecuaciones son interactivas. Puede mover el mouse sobre los símbolos para ver qué significan. También agregaremos lentamente algunas palabras [esotéricas](gloss:esoteric) para que pueda comenzar a hablar el lenguaje de las matemáticas y la computación cuántica, y también puede ver las explicaciones de estas palabras moviendo el mouse sobre estas palabras.\n\n## amplitudes\n\nUna probabilidad clásica a menudo se representa mediante un [número real](gloss:real-number) entre 0 y 1, pero las amplitudes también tienen una dirección. Un candidato natural para representar una amplitud es un [número complejo](gloss:complex-number) , ya que un número complejo también se puede describir completamente tanto por una magnitud como por una dirección, pero en este curso solo trabajaremos con amplitudes que pueden apuntar en dos direcciones (por ejemplo, izquierda y derecha). ) y no nos preocuparemos de nada más.\n\n![Imagen que compara amplitudes y probabilidades](images/quantum-states/prob-vs-amp.svg)\n\nEsto simplifica mucho las matemáticas, ya que ahora podemos describir cualquier amplitud como un número entre -1 y +1; si el número es positivo, la amplitud está mirando hacia adelante, y si es negativo, está mirando hacia atrás. ¡Resulta que esto todavía es suficiente para hacer cosas interesantes!\n\n<!-- ::: q-block.exercise -->\n\n### Test rápido\n\n<!-- ::: q-quiz(goal=\"intro-describing-0\") -->\n\n<!-- ::: .question -->\n\n¿Cuál de estas es una amplitud válida pero *no* una probabilidad válida?\n\n<!-- ::: -->\n\n<!-- ::: .option(correct) -->\n\n1. $-1$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $1/3$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $1.01$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $\\sqrt{-2}$\n\n<!-- ::: -->\n\n<!-- ::: -->\n\n<!-- ::: -->", "_____no_output_____" ], [ "## Vectores de estado\n\nVimos en la última página que podemos predecir el comportamiento de un sistema cuántico haciendo un seguimiento de las amplitudes de probabilidad para cada resultado en cada punto de nuestro cálculo. También vimos que, para n qubits, hay $2^n$ resultados posibles, y podemos almacenar estas amplitudes en listas de longitud $2^n$ que llamamos vectores. 
Dado que estos vectores describen el estado de nuestros qubits, los llamamos \"vectores de estado\".\n\nAquí hay un ejemplo de un vector de estado para una computadora cuántica con dos qubits:\n\n$$\\class{x-ket}{|x\\rangle} \\class{def-equal}{:=} \\begin{bmatrix}\\cssId{_amp-0-0}{\\sqrt{\\tfrac{1}{ 2}}} \\ \\cssId{_amp-1-0}{\\sqrt{\\tfrac{1}{2}}} \\ \\cssId{_amp-2-0}{0} \\ \\cssId{_amp-3-0 }{0} \\end{bmatriz}$$\n\nDedique algún tiempo a leer la información sobre herramientas en la ecuación anterior, luego responda las preguntas a continuación.\n\n<!-- ::: q-block.exercise -->\n\n### Test rápido\n\n<!-- ::: q-quiz(goal=\"intro-describing-1\") -->\n\n<!-- ::: .question -->\n\nEn el vector de estado anterior, ¿cuál es la *amplitud* del resultado '01'?\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $1$\n\n<!-- ::: -->\n\n<!-- ::: .option(correct) -->\n\n1. $\\sqrt{\\tfrac{1}{2}}$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $1/2$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $0$\n\n<!-- ::: -->\n\n<!-- ::: -->\n\n---\n\n<!-- ::: q-quiz(goal=\"intro-describing-2\") -->\n\n<!-- ::: .question -->\n\nSi el vector de estado anterior describiera el estado de algunos qubits, ¿cuál sería la *probabilidad* de medir '00'?\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $1$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $\\sqrt{\\tfrac{1}{2}}$\n\n<!-- ::: -->\n\n<!-- ::: .option(correct) -->\n\n1. $1/2$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $0$\n\n<!-- ::: -->\n\n<!-- ::: -->\n\n<!-- ::: -->", "_____no_output_____" ], [ "## Sumar y multiplicar vectores\n\nSi estudias otras áreas de las matemáticas, encontrarás que muchas cosas pueden considerarse vectores. Hemos introducido los vectores como 'listas de números', porque así es como los consideraremos tanto en este libro de texto como en Qiskit. Pero lo que separa a un vector de cualquier lista antigua de números es que los matemáticos decidieron algunas reglas bien definidas para sumar dos vectores y para multiplicar vectores por [escalares](gloss:scalar-gloss) .\n\n### Multiplicar vectores por escalares\n\nPor ejemplo, aquí hay un vector multiplicado por un escalar:\n\n$$ \\cssId{_number-tres}{3} \\begin{bmatrix} \\class{_vec-el-0}{1} \\ \\class{_vec-el-1}{2} \\ \\class{_vec-el- 2}{-1} \\ \\class{_vec-el-3}{\\tfrac{1}{2}} \\ \\end{bmatrix} \\class{equals}{=} \\begin{bmatrix} \\class{_vec- el-0}{3} \\ \\class{_vec-el-1}{6} \\ \\class{_vec-el-2}{-3} \\ \\class{_vec-el-3}{\\tfrac{3} {2}} \\ \\end{bmatriz} $$\n\nPodemos ver que cada elemento del vector ha sido multiplicado por 3. La regla más general para un vector con $N$ elementos es:\n\n$$ \\class{scalar}{s} \\begin{bmatrix} \\class{_vec-el-0}{e_0} \\ \\class{_vec-el-1}{e_1} \\ \\class{_vec-el-2} {e_2} \\ \\class{puntos}{\\vdots} \\ \\class{ *vec-el-n}{e* {N-1}} \\ \\end{bmatrix} \\class{equals}{=} \\begin{bmatrix} \\class{_vec-el-0}{s\\times e_0} \\ \\class{_vec-el-1}{s\\times e_1} \\ \\class{_vec-el-2}{s\\times e_2} \\ \\class {puntos}{\\vpuntos} \\ \\class{ *vec-el-n}{s\\times e* {N-1}} \\ \\end{bmatrix} $$\n\nAsí que podríamos haber escrito el vector de estado $|x\\rangle$ que definimos arriba más claramente así:\n\n$$ \\class{x-ket}{|x\\rangle} = \\class{scalar}{\\sqrt{\\tfrac{1}{2}}} \\begin{bmatrix} \\cssId{_amp-0-1}{ 1} \\ \\cssId{_amp-1-1}{1} \\ \\cssId{_amp-2-1}{0} \\ \\cssId{_amp-3-1}{0} \\ \\end{bmatrix} $$\n\n### Sumar dos vectores\n\nLa segunda regla es para sumar dos vectores. 
Esto solo se define cuando los dos vectores tienen el mismo número de elementos y da un nuevo vector con el mismo número de elementos. Esta es la regla general:\n\n$$ \\begin{bmatrix} \\class{_vec-el-0}{a_0} \\ \\class{_vec-el-1}{a_1} \\ \\class{_vec-el-2}{a_2} \\ \\class{_vec -el-3}{a_3} \\ \\class{puntos}{\\vdots} \\ \\class{ *vec-el-n}{a* {N-1}} \\ \\end{bmatrix} + \\begin{bmatrix} \\class {_vec-el-0}{b_0} \\ \\class{_vec-el-1}{b_1} \\ \\class{_vec-el-2}{b_2} \\ \\class{_vec-el-3}{b_3} \\ \\class{puntos}{\\vpuntos} \\ \\class{ *vec-el-n}{b* {N-1}} \\ \\end{bmatrix} \\class{equals}{=} \\begin{bmatrix} \\class{_vec -el-0}{a_0 + b_0} \\ \\class{_vec-el-1}{a_1 + b_1} \\ \\class{_vec-el-2}{a_2 + b_2} \\ \\class{ <em data-md-type=\"raw_html\">vec-el-3} {a_3 + b_3} \\ \\class{puntos}{\\vdots} \\ \\class{ *vec-el-n}{a* {N-1} + b</em> {N-1}} \\ \\end{bmatrix} $$\n\nEsto significa que podemos sumar y restar vectores para hacer nuevos vectores. Por ejemplo, si definimos los vectores $|00\\rangle$ y $|01\\rangle$ así:\n\n$$ \\class{def-00}{|00\\rangle} \\class{def-equal}{:=} \\begin{bmatrix} \\class{_amp-0-general}{1} \\ \\class{_amp-1 -general}{0} \\class{_amp-2-general}{0} \\class{_amp-3-general}{0} \\end{bmatrix}, \\quad \\class{def-01}{|01 \\rangle} \\class{def-equal}{:=} \\begin{bmatrix} \\class{_amp-0-general}{0} \\ \\class{_amp-1-general}{1} \\ \\class{_amp- 2-general}{0} \\class{_amp-3-general}{0} \\end{bmatrix} $$\n\nPodemos escribir $\\class{x-ket}{|x\\rangle}$ en la forma:\n\n$$\\class{x-ket}{|x\\rangle} = \\sqrt{\\tfrac{1}{2}}(\\class{def-00}{|00\\rangle} + \\class{def-01} {|01\\ángulo})$$\n\nLlamamos a agregar estados cuánticos como este \"superponerlos\", por lo que podemos decir que \"$|x\\rangle$ es una superposición de los estados $|00\\rangle$ y $|01\\rangle$\". 
De hecho, es una convención en la computación cuántica definir los estados básicos computacionales de la siguiente manera:\n\n$$ \\class{def-00}{|00\\rangle} \\class{def-equal}{:=} \\begin{bmatrix} \\class{_amp-0-general}{1} \\ \\class{_amp-1 -general}{0} \\class{_amp-2-general}{0} \\class{_amp-3-general}{0} \\end{bmatrix}, \\quad \\class{def-01}{|01 \\rangle} \\class{def-equal}{:=} \\begin{bmatrix} \\class{_amp-0-general}{0} \\ \\class{_amp-1-general}{1} \\ \\class{_amp- 2-general}{0} \\class{_amp-3-general}{0} \\end{bmatrix}, \\quad \\class{def-10}{|10\\rangle} \\class{def-equal}{: =} \\begin{bmatrix} \\class{_amp-0-general}{0} \\ \\class{_amp-1-general}{0} \\ \\class{_amp-2-general}{1} \\ \\class{_amp -3-general}{0} \\end{bmatrix}, \\quad \\class{def-11}{|11\\rangle} \\class{def-equal}{:=} \\begin{bmatrix} \\class{_amp- 0-general}{0} \\ \\class{_amp-1-general}{0} \\ \\class{_amp-2-general}{0} \\ \\class{_amp-3-general}{1} \\end{bmatrix } $$\n\nY podemos escribir cualquier estado cuántico como una superposición de estos vectores de estado, si multiplicamos cada vector por el número correcto y los sumamos:\n\n$$ \\cssId{_psi-ket}{|\\psi\\rangle} = \\class{ *amp-0-general}{a* {00}}\\class{def-00}{|00\\rangle}\n\n- \\class{ *amp-1-general}{a* {01}}\\class{def-01}{|01\\rangle}\n- \\class{ *amp-2-general}{a* {10}}\\class{def-10}{|10\\rangle}\n- \\class{ *amp-3-general}{a* {11}}\\class{def-11}{|11\\rangle} \\class{equals}{=} \\begin{bmatrix} \\class{ *amp-0-general} {a* {00}} \\ \\class{ *amp-1-general}{a* {01}} \\ \\class{ *amp-2-general}{a* {10}} \\ \\class{ *amp-3-general}{a* {11}} \\ \\end{bmatriz} $$\n\nComo podemos escribir cualquier vector como una combinación de estos cuatro vectores, decimos que estos cuatro vectores forman una base, a la que llamaremos *base computacional* . La base de cálculo no es la única base. Para qubits individuales, una base popular está formada por los vectores $\\class{plus-ket}{|{+}\\rangle}$ y $\\class{minus-ket}{|{-}\\rangle}$:\n\n<!-- ::: column -->\n\n![imagen que muestra la base |0>, |1> y la base |+>, |-> en el mismo plano](images/quantum-states/basis.svg)\n\n<!-- ::: column -->\n\n$$ \\class{plus-ket}{|{+}\\rangle} = \\sqrt{\\tfrac{1}{2}} \\begin{bmatrix} \\class{_sq-amp0}{1} \\ \\class{_sq -amp1}{1} \\end{bmatrix} $$ $$ \\class{menos ket}{|{-}\\rangle} = \\sqrt{\\tfrac{1}{2}} \\begin{bmatrix} \\class {_sq-amp0}{1} \\ \\class{_sq-amp1}{-1} \\end{bmatrix} $$\n\n<!-- ::: -->\n\n<!-- ::: q-block.exercise -->\n\n### Intentalo\n\nEncuentre valores para $\\alpha$, $\\beta$, $\\gamma$ y $\\delta$ tales que estas ecuaciones sean verdaderas:\n\n- $\\alpha|{+}\\rangle + \\beta|{-}\\rangle = |0\\rangle$\n- $\\gamma|{+}\\rangle + \\delta|{-}\\rangle = |1\\rangle$\n\n<!-- ::: -->", "_____no_output_____" ], [ "## ¿Cuántos vectores de estado diferentes hay?\n\nSabemos que podemos representar cualquier estado cuántico usando vectores, pero ¿cualquier vector es un estado cuántico válido? En nuestro caso, no; dado que elevamos al cuadrado nuestras amplitudes para encontrar la probabilidad de que ocurran los resultados, necesitamos que estos cuadrados sumen uno, de lo contrario no tiene sentido.\n\n$$ \\cssId{suma}{\\sum^{N-1}_{i=0}} \\cssId{_amp-i}{a_i}^2 = 1 $$\n\n<!-- ::: q-block.exercise -->\n\n### Test rápido\n\n<!-- ::: q-quiz(goal=\"quiz2\") -->\n\n<!-- ::: .question -->\n\n¿Cuál de estos es un estado cuántico válido? (Intente sumar las amplitudes al cuadrado).\n\n<!-- ::: -->\n\n<!-- ::: .option(correct) -->\n\n1. 
$\\sqrt{\\tfrac{1}{3}}\\begin{bmatrix} 1 \\ -1 \\ 1 \\ 0 \\end{bmatrix}$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $\\sqrt{\\tfrac{1}{2}}\\begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \\end{bmatrix}$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $\\tfrac{1}{2}\\begin{bmatrix} 1 \\ 1 \\end{bmatrix}$\n\n<!-- ::: -->\n\n<!-- ::: -->\n\n<!-- ::: -->\n\nOtro factor es algo que llamamos \"fases globales\" del vector de estado. Dado que solo sabemos que la fase existe debido a los efectos de interferencia que produce, solo podemos medir las *diferencias* de fase. Si rotamos todas las amplitudes en un vector de estado por la misma cantidad, aún veríamos exactamente el mismo comportamiento.\n\n<!-- ::: column -->\n\n![imagen que muestra efectos de interferencia con diferentes fases iniciales](images/quantum-states/global-phase-L.svg)\n\n<!-- ::: column -->\n\n![imagen que muestra efectos de interferencia con diferentes fases iniciales](images/quantum-states/global-phase-R.svg)\n\n<!-- ::: -->\n\nPor ejemplo, no hay ningún experimento que podamos realizar que sea capaz de distinguir entre estos dos estados:\n\n<!-- ::: column -->\n\n$$ |a\\rangle = \\sqrt{\\tfrac{1}{2}}\\begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \\end{bmatrix} $$\n\n<!-- ::: column -->\n\n$$ -|a\\rangle = \\sqrt{\\tfrac{1}{2}}\\begin{bmatrix} -1 \\ 0 \\ 0 \\ -1 \\end{bmatrix} $$\n\n<!-- ::: -->\n\nPorque las diferencias entre cada una de las amplitudes es la misma. Se podría decir que estos dos vectores son diferentes *matemáticamente* , pero *físicamente* iguales.\n\n## Operaciones cuánticas\n\nEntonces, ahora que sabemos todo sobre los diferentes estados en los que pueden estar nuestros qubits, es hora de ver cómo representamos las operaciones que transforman un estado en otro.\n\nDe la misma manera que existe una probabilidad de transición de que una determinada acción transforme una moneda de cara a cruz, existe una amplitud de transición para cada estado inicial y final de nuestros qubits. Podemos describir cualquier operación cuántica a través de estas amplitudes de transición.\n\n![Imagen que muestra dos vectores de estado antes y después de una operación](images/quantum-states/quantum-operation.svg)\n\nEntonces, ¿qué transformaciones posibles hay? Digamos que tenemos un estado inicial $|a\\rangle$ que se transforma en un nuevo estado $|b\\rangle$. Si queremos que nuestra representación cubra todas las transformaciones posibles, entonces cada amplitud en $|a\\rangle$ debe tener una amplitud de transición para cada amplitud en $|b\\rangle$.\n\n<!-- ::: q-block.exercise -->\n\n### Cuestionario rápido\n\n<!-- ::: q-quiz(goal=\"intro-describing-3\") -->\n\n<!-- ::: .question -->\n\nUn vector de estado $n$-qubit puede contener hasta $2^n$ amplitudes. ¿Cuál es el mayor número de amplitudes de transición que necesitaríamos para representar cualquier operación cuántica en $n$ qubits?\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $2\\cdot 2^n$\n\n<!-- ::: -->\n\n<!-- ::: .option(correct) -->\n\n1. $(2^n)^2$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $4^n$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. 
$2^{2^n}$\n\n<!-- ::: -->\n\n<!-- ::: -->\n\n<!-- ::: -->\n\nDibujar líneas como esta es una forma un poco complicada de hacerlo, por lo que podemos poner todos estos números en una [matriz](gloss:matrix) :", "_____no_output_____" ], [ "$$ \\cssId{u-gate}{U} = \\begin{bmatrix} \\class{ *t_amp_00_00}{t* {00\\to 00}} &amp; \\class{ *t_amp_01_00}{t* {01\\to 00}} &amp; \\class { *t_amp_10_00}{t* {10\\to 00}} &amp; \\class{ *t_amp_11_00}{t* {11\\to 00}} \\ \\class{ *t_amp_00_01}{t* {00\\to 01}} &amp; \\class{ *t_amp_01_01}{t* {01\\a 01}} &amp; \\class{ *t_amp_10_01}{t* {10\\a 01}} &amp; \\class{ *t_amp_11_01}{t* {11\\a 01}} \\ \\class{ *t_amp_00_10}{t* {00\\a 10 }} &amp; \\class{ *t_amp_01_10}{t* {01\\to 10}} &amp; \\class{ *t_amp_10_10}{t* {10\\to 10}} &amp; \\class{ *t_amp_11_10}{t* {11\\to 10}} \\ \\class { *t_amp_00_11}{t* {00\\to 11}} &amp; \\class{ *t_amp_01_11}{t* {01\\to 11}} &amp; \\class{ *t_amp_10_11}{t* {10\\to 11}} &amp; \\class{ *t_amp_11_11}{t* {11\\a 11}} \\ \\end{bmatriz} $$\n\nPor ejemplo, aquí está la matriz que representa la operación CNOT que vimos en los átomos de computación:\n\n$$ \\cssId{_cnot-gate}{\\text{CNOT}} = \\begin{bmatrix} \\class{_t_amp_00_00}{1} &amp; \\class{_t_amp_01_00}{0} &amp; \\class{_t_amp_10_00}{0} &amp; \\ clase{_t_amp_11_00}{0} \\ \\class{_t_amp_00_01}{0} &amp; \\class{_t_amp_01_01}{0} &amp; \\class{_t_amp_10_01}{0} &amp; \\class{_t_amp_11_01}{1} \\ \\class{_t_amp_00_10}{ 0} &amp; \\class{_t_amp_01_10}{0} &amp; \\class{_t_amp_10_10}{1} &amp; \\class{_t_amp_11_10}{0} \\ \\class{_t_amp_00_11}{0} &amp; \\class{_t_amp_01_11}{1} &amp; \\class {_t_amp_10_11}{0} &amp; \\class{_t_amp_11_11}{0} \\ \\end{bmatriz} $$\n\n<!-- ::: q-block.exercise -->\n\n### Test rápido\n\n<!-- ::: q-quiz(goal=\"intro-maths-0\") -->\n\n<!-- ::: .question -->\n\n¿Cuál es la amplitud de transición de la operación CNOT (como se muestra arriba) que transforma el estado $|10\\rangle$ en $|01\\rangle$?\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $1$\n\n<!-- ::: -->\n\n<!-- ::: .option(correct) -->\n\n1. $0$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $\\begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0\\end{bmatrix}$\n\n<!-- ::: -->\n\n<!-- ::: .option -->\n\n1. $\\begin{bmatriz} 0 \\ 0 \\ 1 \\ 0\\end{bmatriz}$\n\n<!-- ::: -->\n\n<!-- ::: -->\n\n<!-- ::: -->\n\nY aquí está la matriz de la puerta H que vimos en la página anterior:\n\n$$ H = \\sqrt{\\tfrac{1}{2}} \\begin{bmatrix} \\class{_t_amp_0_0}{1} &amp; \\class{_t_amp_1_0}{1} \\ \\class{_t_amp_0_1}{1} &amp; \\class {_t_amp_1_1}{-1} \\ \\end{bmatriz} $$\n\n(usamos la misma regla para multiplicar una matriz por un escalar como lo hacemos con los vectores). Y cuando queremos ver qué efecto tendrá una operación en algunos qubits, multiplicamos cada amplitud de transición por la amplitud de cada estado en nuestro vector de estado de entrada y luego sumamos las amplitudes de cada estado para obtener nuestro vector de estado de salida. 
Esto es exactamente lo mismo que multiplicar a lo largo de cada rama en un árbol de probabilidad (o amplitud) y sumar las probabilidades totales (o amplitudes) al final.\n\nPara cualquier matemático en la audiencia, esto es solo una multiplicación de matrices estándar.\n\n$$ H|0\\rangle = \\sqrt{\\tfrac{1}{2}} \\begin{bmatrix} \\class{_t_amp_0_0}{1} &amp; \\class{_t_amp_1_0}{ 1} \\ \\class{_t_amp_0_1}{1 } &amp; \\class{_t_amp_1_1}{-1} \\ \\end{bmatrix} \\begin{bmatrix} \\class{_sq-amp0}{1} \\ \\class{_sq-amp1}{0} \\ \\end{bmatrix} = \\sqrt{\\tfrac{1}{2}} \\begin{bmatrix} (1 \\class{dot}{\\cdot} 1) &amp; + &amp; (1 \\class{dot}{\\cdot} 0) \\ (1 \\ clase{punto}{\\cdot} 1) &amp; + &amp; (-1 \\class{punto}{\\cdot} 0) \\ \\end{bmatrix} = \\sqrt{\\tfrac{1}{2}} \\begin{bmatrix } \\class{_sq-amp0}{1} \\ \\class{_sq-amp1}{1} \\ \\end{bmatrix} $$\n\n![imagen que muestra cómo la puerta H transforma el estado |0> en el estado |+>](images/quantum-states/h-gate.svg)", "_____no_output_____" ], [ "## Reglas de operaciones cuánticas\n\nDe la misma manera que no todo vector es un vector de estado válido, no toda matriz es una operación cuántica válida. Para que una matriz tenga sentido como una operación real, debe mantener la probabilidad total de los estados de salida igual a 1. Entonces, por ejemplo, esto no podría ser una operación real:\n\n$$ \\begin{bmatrix} \\class{_t_amp_0_0}{1} &amp; \\class{_t_amp_1_0}{0} \\ \\class{_t_amp_0_1}{1} &amp; \\class{_t_amp_1_1}{0} \\ \\end{bmatrix} $$\n\nPorque si actúa sobre el estado $|0\\rangle$ obtenemos:\n\n# $$ \\begin{bmatrix} \\class{_t_amp_0_0}{1} &amp; \\class{_t_amp_1_0}{0} \\ \\class{_t_amp_0_1}{1} &amp; \\class{_t_amp_1_1}{0} \\ \\end{bmatrix}\\begin {bmatriz} 1 \\ 0 \\end{bmatriz}\n\n\\begin{bmatrix} \\class{_sq-amp0}{1} \\ \\class{_sq-amp1}{1} \\end{bmatrix} $$\n\ny las probabilidades totales suman dos, lo cual no tiene ningún sentido. Alternativamente, si actuara sobre el estado $|1\\rangle$, entonces las probabilidades totales sumarían cero, lo que tampoco tiene sentido. Para preservar la probabilidad total en todos los casos, nuestras operaciones deben ser reversibles. Esto significa que podemos realizar nuestras puertas cuánticas al revés para 'deshacerlas' (recordando invertir las rotaciones) y quedarnos con el estado con el que comenzamos. Decimos que las matrices con esta propiedad son *unitarias* . A menudo verá puertas cuánticas denominadas 'unitarias' o 'puertas unitarias'.", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7a99d62dc451cd15d4fb184a08c6da4e6a6e3af
385,828
ipynb
Jupyter Notebook
notebooks/stats_alerts.ipynb
robertdstein/nuztfpaper
34ee9326f55c37d25327519639b8edee26e44770
[ "MIT" ]
null
null
null
notebooks/stats_alerts.ipynb
robertdstein/nuztfpaper
34ee9326f55c37d25327519639b8edee26e44770
[ "MIT" ]
null
null
null
notebooks/stats_alerts.ipynb
robertdstein/nuztfpaper
34ee9326f55c37d25327519639b8edee26e44770
[ "MIT" ]
null
null
null
442.463303
100,944
0.917857
[ [ [ "%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport os\nfrom astropy.time import Time\nfrom astropy.table import Table\nfrom nuztfpaper.style import output_folder, big_fontsize, base_width, base_height, dpi, plot_dir\nfrom nuztfpaper.alerts import obs, non, joint\nimport seaborn as sns\nimport json\nfrom astropy.time import Time", "/Users/robertstein/Code/ztf_nu_paper_code/nuztfpaper/alerts.py:23: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n non[\"Rejection reason\"][mask] = new\n" ] ], [ [ "# Alert Statistics", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(base_width, base_height), dpi=dpi)\nax1 = plt.subplot(111)\n\nreasons = [x if x != \"Poor Signalness and Localisation\" else \"Poor Signalness \\n and Localisation\" for x in non[\"Rejection reason\"]]\nreasons = [x if x != \"Separation from Galactic Plane\" else \"Separation from \\n Galactic Plane\" for x in reasons]\n\nt_min = 1803\nt_max = 3303\n\nreasons = [x for i, x in enumerate(reasons) if np.logical_and(float(non[\"Event\"][i][2:6]) > t_min, float(non[\"Event\"][i][2:6]) < t_max)]\nlabels = sorted(list(set(reasons)))\n\nsizes = []\n\nfor l in labels:\n sizes.append(list(reasons).count(l))\n \nexplode =[0.1] + [0.0 for _ in labels]\n\nlabels = [\"Observed\"] + labels\nsizes = [len(obs)] + sizes\n\ndef absolute_value(val):\n a = np.round(val/100.*np.sum(sizes), 0)\n return int(a)\n\nprint(labels)\n\npatches, texts, autotexts = ax1.pie(sizes, \n explode=explode, \n labels=labels, \n autopct=absolute_value,\n pctdistance=0.9,\n textprops={'fontsize': big_fontsize}\n )\n\n[autotext.set_color('white') for autotext in autotexts]\n\n\nax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.\n\nfilename = \"pie.pdf\"\n\noutput_path = os.path.join(output_folder, filename)\nplt.savefig(os.path.join(plot_dir, filename))\nplt.savefig(output_path, bbox_inches='tight', pad_inches=0)", "['Observed', 'Alert Retraction', 'Low Altitude', 'Poor Signalness \\n and Localisation', 'Proximity to Sun', 'Separation from \\n Galactic Plane', 'Southern Sky', 'Telescope Maintenance']\n" ] ], [ [ "# Observed alerts (Table 1)", "_____no_output_____" ] ], [ [ "text = r\"\"\"\n\\begin{table*}\n\\centering\n \\begin{tabular}{||c | c c c c c c ||} \n \\hline\n \\textbf{Event} & \\textbf{R.A. (J2000)} & \\textbf{Dec (J2000)} & \\textbf{90\\% area} & \\textbf{ZTF obs} &~ \\textbf{Signalness}& \\textbf{Refs}\\\\\n & \\textbf{[deg]}&\\textbf{[deg]}& \\textbf{[sq. deg.]}& \\textbf{[sq. 
deg.]} &&\\\\\n \\hline\n\"\"\"\n\ntot_area = 0.\n\nfor index, row in obs.iterrows():\n \n name = str(row[\"Event\"].lower())\n \n ras = json.loads(row[\"RA Unc (rectangle)\"])\n \n decs = json.loads(row[\"Dec Unc (rectangle)\"]) \n \n delta_r = ras[0] - ras[1]\n delta_d = decs[0] - decs[1]\n \n area = delta_r * delta_d * np.cos(np.radians(float(row[\"Dec\"])))\n \n if np.isnan(float(row[\"Signalness\"])):\n s = \"-\"\n else:\n s = f'{100.*row[\"Signalness\"]:.0f}\\%'\n \n text += f'\\t {row[\"Event\"]} & {row[\"RA\"]} & {row[\"Dec\"]:+.2f} & {area:.1f} & {row[\"Observed area (corrected for chip gaps)\"]:.1f} & {s} & \\cite{{{name}}} \\\\\\\\\\ \\n'\n\n text += f'\\t &&&&&& \\cite{{{name}_ztf}} \\\\\\\\ \\n'\n if not isinstance(row[\"Additional ZTF GCN\"], float):\n text += f'\\t &&&&&& \\cite{{{name}_ztf_2}} \\\\\\\\ \\n'\n \n text += \"\\t \\hline\"\n \n tot_area += row[\"Observed area (corrected for chip gaps)\"]\n\ntext += f\"\"\"\n \\end{{tabular}}\n \\caption{{Summary of the {len(obs)} neutrino alerts followed up by ZTF since survey start on 2018 March 20.}}\n \\label{{tab:nu_alerts}}\n\\end{{table*}}\n\"\"\"\n\nprint(text)", "\n\\begin{table*}\n\\centering\n \\begin{tabular}{||c | c c c c c c ||} \n \\hline\n \\textbf{Event} & \\textbf{R.A. (J2000)} & \\textbf{Dec (J2000)} & \\textbf{90\\% area} & \\textbf{ZTF obs} &~ \\textbf{Signalness}& \\textbf{Refs}\\\\\n & \\textbf{[deg]}&\\textbf{[deg]}& \\textbf{[sq. deg.]}& \\textbf{[sq. deg.]} &&\\\\\n \\hline\n\t IC190503A & 120.28 & +6.35 & 1.9 & 1.4 & 36\\% & \\cite{ic190503a} \\\\\\ \n\t &&&&&& \\cite{ic190503a_ztf} \\\\ \n\t \\hline\t IC190619A & 343.26 & +10.73 & 27.2 & 21.6 & 55\\% & \\cite{ic190619a} \\\\\\ \n\t &&&&&& \\cite{ic190619a_ztf} \\\\ \n\t \\hline\t IC190730A & 225.79 & +10.47 & 5.4 & 4.5 & 67\\% & \\cite{ic190730a} \\\\\\ \n\t &&&&&& \\cite{ic190730a_ztf} \\\\ \n\t \\hline\t IC190922B & 5.76 & -1.57 & 4.5 & 4.1 & 51\\% & \\cite{ic190922b} \\\\\\ \n\t &&&&&& \\cite{ic190922b_ztf} \\\\ \n\t \\hline\t IC191001A & 314.08 & +12.94 & 25.5 & 23.1 & 59\\% & \\cite{ic191001a} \\\\\\ \n\t &&&&&& \\cite{ic191001a_ztf} \\\\ \n\t \\hline\t IC200107A & 148.18 & +35.46 & 7.6 & 6.3 & - & \\cite{ic200107a} \\\\\\ \n\t &&&&&& \\cite{ic200107a_ztf} \\\\ \n\t \\hline\t IC200109A & 164.49 & +11.87 & 22.5 & 22.4 & 77\\% & \\cite{ic200109a} \\\\\\ \n\t &&&&&& \\cite{ic200109a_ztf} \\\\ \n\t \\hline\t IC200117A & 116.24 & +29.14 & 2.9 & 2.7 & 38\\% & \\cite{ic200117a} \\\\\\ \n\t &&&&&& \\cite{ic200117a_ztf} \\\\ \n\t &&&&&& \\cite{ic200117a_ztf_2} \\\\ \n\t \\hline\t IC200512A & 295.18 & +15.79 & 9.8 & 9.3 & 32\\% & \\cite{ic200512a} \\\\\\ \n\t &&&&&& \\cite{ic200512a_ztf} \\\\ \n\t \\hline\t IC200530A & 255.37 & +26.61 & 25.3 & 22.0 & 59\\% & \\cite{ic200530a} \\\\\\ \n\t &&&&&& \\cite{ic200530a_ztf} \\\\ \n\t &&&&&& \\cite{ic200530a_ztf_2} \\\\ \n\t \\hline\t IC200620A & 162.11 & +11.95 & 1.7 & 1.2 & 32\\% & \\cite{ic200620a} \\\\\\ \n\t &&&&&& \\cite{ic200620a_ztf} \\\\ \n\t \\hline\t IC200916A & 109.78 & +14.36 & 4.2 & 3.6 & 32\\% & \\cite{ic200916a} \\\\\\ \n\t &&&&&& \\cite{ic200916a_ztf} \\\\ \n\t &&&&&& \\cite{ic200916a_ztf_2} \\\\ \n\t \\hline\t IC200926A & 96.46 & -4.33 & 1.7 & 1.3 & 44\\% & \\cite{ic200926a} \\\\\\ \n\t &&&&&& \\cite{ic200926a_ztf} \\\\ \n\t \\hline\t IC200929A & 29.53 & +3.47 & 1.1 & 0.9 & 47\\% & \\cite{ic200929a} \\\\\\ \n\t &&&&&& \\cite{ic200929a_ztf} \\\\ \n\t \\hline\t IC201007A & 265.17 & +5.34 & 0.6 & 0.6 & 88\\% & \\cite{ic201007a} \\\\\\ \n\t &&&&&& \\cite{ic201007a_ztf} \\\\ \n\t \\hline\t IC201021A & 
260.82 & +14.55 & 6.9 & 6.3 & 30\\% & \\cite{ic201021a} \\\\\\ \n\t &&&&&& \\cite{ic201021a_ztf} \\\\ \n\t \\hline\t IC201130A & 30.54 & -12.10 & 5.4 & 4.5 & 15\\% & \\cite{ic201130a} \\\\\\ \n\t &&&&&& \\cite{ic201130a_ztf} \\\\ \n\t \\hline\t IC201209A & 6.86 & -9.25 & 4.7 & 3.2 & 19\\% & \\cite{ic201209a} \\\\\\ \n\t &&&&&& \\cite{ic201209a_ztf} \\\\ \n\t \\hline\t IC201222A & 206.37 & +13.44 & 1.5 & 1.4 & 53\\% & \\cite{ic201222a} \\\\\\ \n\t &&&&&& \\cite{ic201222a_ztf} \\\\ \n\t \\hline\t IC210210A & 206.06 & +4.78 & 2.8 & 2.1 & 65\\% & \\cite{ic210210a} \\\\\\ \n\t &&&&&& \\cite{ic210210a_ztf} \\\\ \n\t \\hline\t IC210510A & 268.42 & +3.81 & 4.0 & 3.7 & 28\\% & \\cite{ic210510a} \\\\\\ \n\t &&&&&& \\cite{ic210510a_ztf} \\\\ \n\t \\hline\t IC210629A & 340.75 & +12.94 & 6.0 & 4.6 & 35\\% & \\cite{ic210629a} \\\\\\ \n\t &&&&&& \\cite{ic210629a_ztf} \\\\ \n\t \\hline\t IC210811A & 270.79 & +25.28 & 3.2 & 2.7 & 66\\% & \\cite{ic210811a} \\\\\\ \n\t &&&&&& \\cite{ic210811a_ztf} \\\\ \n\t \\hline\t IC210922A & 60.73 & -4.18 & 1.6 & 1.2 & 92\\% & \\cite{ic210922a} \\\\\\ \n\t &&&&&& \\cite{ic210922a_ztf} \\\\ \n\t \\hline\n \\end{tabular}\n \\caption{Summary of the 24 neutrino alerts followed up by ZTF since survey start on 2018 March 20.}\n \\label{tab:nu_alerts}\n\\end{table*}\n\n" ] ], [ [ "# Not observed", "_____no_output_____" ] ], [ [ "reasons = [\"Alert Retraction\", \"Proximity to Sun\", \"Low Altitude\", \"Southern Sky\", \"Separation from Galactic Plane\", \"Poor Signalness and Localisation\", \"Telescope Maintenance\"]\nseps = [1, 0, 0, 0, 1, 1, 1]\n\nfull_mask = np.array([float(x[2:6]) > 1802 for x in non[\"Event\"]])\n\n\ntext = r\"\"\"\n\\begin{table*}\n \\centering\n \\begin{tabular}{||c c ||} \n \\hline\n \\textbf{Cause} & \\textbf{Events} \\\\\n \\hline\n\"\"\"\n\nfor i, reason in enumerate(reasons):\n mask = non[\"Rejection reason\"] == reason\n \n names = list(non[\"Event\"][full_mask][mask])\n \n for j, name in enumerate(names):\n names[j] = f'{name} \\citep{{{name.lower()}}}'\n \n text += f'\\t {reason} & '\n \n n_int = 2\n \n while len(names) > n_int:\n text += f'{\", \".join(names[:n_int])} \\\\\\\\ \\n \\t & '\n names = names[n_int:]\n \n text += f'{\", \".join(names)} \\\\\\\\ \\n'\n\n# if seps[i]:\n if True:\n text += \"\\t \\hline \\n\"\n \ntext +=f\"\"\"\n \\end{{tabular}}\n \\caption{{Summary of the {np.sum(full_mask)} neutrino alerts that were not followed up by ZTF since survey start on 2018 March 20.}}\n \\label{{tab:nu_non_observed}}\n\\end{{table*}}\n\"\"\"\n\nprint(text)", "\n\\begin{table*}\n \\centering\n \\begin{tabular}{||c c ||} \n \\hline\n \\textbf{Cause} & \\textbf{Events} \\\\\n \\hline\n\t Alert Retraction & IC180423A \\citep{ic180423a}, IC181031A \\citep{ic181031a} \\\\ \n \t & IC190205A \\citep{ic190205a}, IC190529A \\citep{ic190529a} \\\\ \n \t & IC200120A \\citep{ic200120a}, IC200728A \\citep{ic200728a} \\\\ \n \t & IC201115B \\citep{ic201115b}, IC210213A \\citep{ic210213a} \\\\ \n \t & IC210322A \\citep{ic210322a}, IC210519A \\citep{ic210519a} \\\\ \n\t \\hline \n\t Proximity to Sun & IC180908A \\citep{ic180908a}, IC181014A \\citep{ic181014a} \\\\ \n \t & IC190124A \\citep{ic190124a}, IC190704A \\citep{ic190704a} \\\\ \n \t & IC190712A \\citep{ic190712a}, IC190819A \\citep{ic190819a} \\\\ \n \t & IC191119A \\citep{ic191119a}, IC200227A \\citep{ic200227a} \\\\ \n \t & IC200421A \\citep{ic200421a}, IC200615A \\citep{ic200615a} \\\\ \n \t & IC200806A \\citep{ic200806a}, IC200921A \\citep{ic200921a} \\\\ \n \t & IC200926B \\citep{ic200926b}, 
IC201014A \\citep{ic201014a} \\\\ \n \t & IC201115A \\citep{ic201115a}, IC201221A \\citep{ic201221a} \\\\ \n \t & IC211117A \\citep{ic211117a}, IC211123A \\citep{ic211123a} \\\\ \n\t \\hline \n\t Low Altitude & IC191215A \\citep{ic191215a}, IC211023A \\citep{ic211023a} \\\\ \n\t \\hline \n\t Southern Sky & IC190104A \\citep{ic190104a}, IC190331A \\citep{ic190331a} \\\\ \n \t & IC190504A \\citep{ic190504a} \\\\ \n\t \\hline \n\t Separation from Galactic Plane & IC201114A \\citep{ic201114a}, IC201120A \\citep{ic201120a} \\\\ \n \t & IC210516A \\citep{ic210516a}, IC210730A \\citep{ic210730a} \\\\ \n\t \\hline \n\t Poor Signalness and Localisation & IC190221A \\citep{ic190221a}, IC190629A \\citep{ic190629a} \\\\ \n \t & IC190922A \\citep{ic190922a}, IC191122A \\citep{ic191122a} \\\\ \n \t & IC191204A \\citep{ic191204a}, IC191231A \\citep{ic191231a} \\\\ \n \t & IC200410A \\citep{ic200410a}, IC200425A \\citep{ic200425a} \\\\ \n \t & IC200523A \\citep{ic200523a}, IC200614A \\citep{ic200614a} \\\\ \n \t & IC200911A \\citep{ic200911a}, IC210503A \\citep{ic210503a} \\\\ \n \t & IC210608A \\citep{ic210608a}, IC210717A \\citep{ic210717a} \\\\ \n \t & IC211125A \\citep{ic211125a} \\\\ \n\t \\hline \n\t Telescope Maintenance & IC181023A \\citep{ic181023a}, IC211116A \\citep{ic211116a} \\\\ \n \t & IC211208A \\citep{ic211208a} \\\\ \n\t \\hline \n\n \\end{tabular}\n \\caption{Summary of the 55 neutrino alerts that were not followed up by ZTF since survey start on 2018 March 20.}\n \\label{tab:nu_non_observed}\n\\end{table*}\n\n" ] ], [ [ "# Full Neutrino List", "_____no_output_____" ] ], [ [ "text = fr\"\"\"\n\\begin{{longtable}}[c]{{||c c c c c c ||}}\n\\caption{{Summary of all {len(joint)} neutrino alerts issued since under the IceCube Realtime Program. Directions are not indicated for retracted events.}} \\label{{tab:all_nu_alerts}} \\\\\n \\hline\n \\textbf{{Event}} & \\textbf{{R.A. (J2000)}} & \\textbf{{Dec (J2000)}} & \\textbf{{90\\% area}} &~ \\textbf{{Signalness}}& \\textbf{{Ref}}\\\\\n & \\textbf{{[deg]}}&\\textbf{{[deg]}} & \\textbf{{[sq. deg.]}} &&\\\\\n \\hline\n\\endfirsthead\n \\hline\n\\textbf{{Event}} & \\textbf{{R.A. (J2000)}} & \\textbf{{Dec (J2000)}} & \\textbf{{90\\% area}} &~ \\textbf{{Signalness}}& \\textbf{{Ref}}\\\\\n & \\textbf{{[deg]}}&\\textbf{{[deg]}} & \\textbf{{[sq. deg.]}} &&\\\\\n \\hline\n\\endhead\n\\hline\n\\endfoot\n\\hline\n\\endlastfoot\n\\hline%\n\"\"\"\n\nfor index, row in joint.iterrows():\n \n name = str(row[\"Event\"].lower())\n \n if not isinstance(row[\"RA Unc (rectangle)\"], float):\n \n ras = json.loads(str(row[\"RA Unc (rectangle)\"]))\n\n decs = json.loads(row[\"Dec Unc (rectangle)\"]) \n\n delta_r = ras[0] - ras[1]\n delta_d = decs[0] - decs[1]\n\n area = f'{delta_r * delta_d * np.cos(np.radians(float(row[\"Dec\"]))):.1f}'\n\n else:\n area = \"-\"\n\n if np.isnan(float(row[\"Signalness\"])):\n s = \"-\"\n else:\n s = f'{100.*row[\"Signalness\"]:.0f}\\%'\n\n if np.isnan(float(row[\"Dec\"])):\n r = \"-\"\n d = \"-\"\n else:\n r = f'{row[\"RA\"]}'\n d = f'{row[\"Dec\"]:+.2f}'\n \n if name not in [\"ic160731a\", \"ic160814a\", \"ic170312a\"]:\n c = name\n else:\n c = \"ic_txs_mm_18\"\n\n text += f'\\t {row[\"Event\"]} & {r} & {d} & {area} & {s} & \\cite{{{c}}} \\\\\\\\ \\n'\n \ntext += f\"\"\"\n\\end{{longtable}}\n\n\"\"\"\n\nprint(text)", "\n\\begin{longtable}[c]{||c c c c c c ||}\n\\caption{Summary of all 92 neutrino alerts issued since under the IceCube Realtime Program. 
Directions are not indicated for retracted events.} \\label{tab:all_nu_alerts} \\\\\n \\hline\n \\textbf{Event} & \\textbf{R.A. (J2000)} & \\textbf{Dec (J2000)} & \\textbf{90\\% area} &~ \\textbf{Signalness}& \\textbf{Ref}\\\\\n & \\textbf{[deg]}&\\textbf{[deg]} & \\textbf{[sq. deg.]} &&\\\\\n \\hline\n\\endfirsthead\n \\hline\n\\textbf{Event} & \\textbf{R.A. (J2000)} & \\textbf{Dec (J2000)} & \\textbf{90\\% area} &~ \\textbf{Signalness}& \\textbf{Ref}\\\\\n & \\textbf{[deg]}&\\textbf{[deg]} & \\textbf{[sq. deg.]} &&\\\\\n \\hline\n\\endhead\n\\hline\n\\endfoot\n\\hline\n\\endlastfoot\n\\hline%\n\t IC160427A & 240.57 & +9.34 & 1.4 & - & \\cite{ic160427a} \\\\ \n\t IC160731A & 214.5 & -0.33 & 2.2 & 85\\% & \\cite{ic_txs_mm_18} \\\\ \n\t IC160806A & 122.81 & -0.81 & 0.0 & 28\\% & \\cite{ic160806a} \\\\ \n\t IC160814A & 200.3 & -32.40 & 12.0 & - & \\cite{ic_txs_mm_18} \\\\ \n\t IC161103A & 40.83 & +12.56 & 3.1 & - & \\cite{ic161103a} \\\\ \n\t IC161210A & 46.58 & +14.98 & 1.7 & 49\\% & \\cite{ic161210a} \\\\ \n\t IC170312A & 305.15 & -26.61 & 0.9 & - & \\cite{ic_txs_mm_18} \\\\ \n\t IC170321A & 98.3 & -15.02 & 5.6 & 28\\% & \\cite{ic170321a} \\\\ \n\t IC170506A & 221.8 & -26.00 & 21.6 & - & \\cite{ic170506a} \\\\ \n\t IC170922A & 77.43 & +5.72 & 0.3 & 57\\% & \\cite{ic170922a} \\\\ \n\t IC171015A & 162.86 & -15.44 & 14.9 & - & \\cite{ic171015a} \\\\ \n\t IC171028A & - & - & - & - & \\cite{ic171028a} \\\\ \n\t IC171106A & 340.0 & +7.40 & 0.7 & 75\\% & \\cite{ic171106a} \\\\ \n\t IC180423A & - & - & - & - & \\cite{ic180423a} \\\\ \n\t IC180908A & 144.58 & -2.13 & 6.3 & 34\\% & \\cite{ic180908a} \\\\ \n\t IC181014A & 225.15 & -34.80 & 10.5 & - & \\cite{ic181014a} \\\\ \n\t IC181023A & 270.18 & -8.57 & 9.3 & 28\\% & \\cite{ic181023a} \\\\ \n\t IC181031A & - & - & - & - & \\cite{ic181031a} \\\\ \n\t IC190104A & 357.98 & -26.65 & 18.5 & - & \\cite{ic190104a} \\\\ \n\t IC190124A & 307.4 & -32.18 & 2.0 & - & \\cite{ic190124a} \\\\ \n\t IC190205A & - & - & - & - & \\cite{ic190205a} \\\\ \n\t IC190221A & 268.81 & -17.04 & 5.2 & - & \\cite{ic190221a} \\\\ \n\t IC190331A & 337.68 & -20.70 & 0.4 & - & \\cite{ic190331a} \\\\ \n\t IC190503A & 120.28 & +6.35 & 1.9 & 36\\% & \\cite{ic190503a} \\\\ \n\t IC190504A & 65.7866 & -37.44 & - & - & \\cite{ic190504a} \\\\ \n\t IC190529A & - & - & - & - & \\cite{ic190529a} \\\\ \n\t IC190619A & 343.26 & +10.73 & 27.2 & 55\\% & \\cite{ic190619a} \\\\ \n\t IC190629A & 27.22 & +84.33 & - & 34\\% & \\cite{ic190629a} \\\\ \n\t IC190704A & 161.85 & +27.11 & 21.0 & 49\\% & \\cite{ic190704a} \\\\ \n\t IC190712A & 76.46 & +13.06 & 92.0 & 30\\% & \\cite{ic190712a} \\\\ \n\t IC190730A & 225.79 & +10.47 & 5.4 & 67\\% & \\cite{ic190730a} \\\\ \n\t IC190819A & 148.8 & +1.38 & 9.3 & 29\\% & \\cite{ic190819a} \\\\ \n\t IC190922A & 167.43 & -22.39 & 32.2 & 20\\% & \\cite{ic190922a} \\\\ \n\t IC190922B & 5.76 & -1.57 & 4.5 & 51\\% & \\cite{ic190922b} \\\\ \n\t IC191001A & 314.08 & +12.94 & 25.5 & 59\\% & \\cite{ic191001a} \\\\ \n\t IC191119A & 230.1 & +3.17 & 61.2 & 45\\% & \\cite{ic191119a} \\\\ \n\t IC191122A & 27.25 & -0.04 & 12.2 & 33\\% & \\cite{ic191122a} \\\\ \n\t IC191204A & 79.72 & +2.80 & 11.6 & 33\\% & \\cite{ic191204a} \\\\ \n\t IC191215A & 285.87 & +58.92 & 12.8 & 47\\% & \\cite{ic191215a} \\\\ \n\t IC191231A & 46.36 & +20.42 & 35.6 & 46\\% & \\cite{ic191231a} \\\\ \n\t IC200107A & 148.18 & +35.46 & 7.6 & - & \\cite{ic200107a} \\\\ \n\t IC200109A & 164.49 & +11.87 & 22.5 & 77\\% & \\cite{ic200109a} \\\\ \n\t IC200117A & 116.24 & +29.14 & 2.9 & 38\\% & \\cite{ic200117a} \\\\ 
\n\t IC200120A & - & - & - & - & \\cite{ic200120a} \\\\ \n\t IC200227A & 348.26 & +21.32 & - & 35\\% & \\cite{ic200227a} \\\\ \n\t IC200410A & 242.58 & +11.61 & 377.9 & 31\\% & \\cite{ic200410a} \\\\ \n\t IC200421A & 87.93 & +8.23 & 24.4 & 33\\% & \\cite{ic200421a} \\\\ \n\t IC200425A & 100.1 & +53.57 & 18.8 & 48\\% & \\cite{ic200425a} \\\\ \n\t IC200512A & 295.18 & +15.79 & 9.8 & 32\\% & \\cite{ic200512a} \\\\ \n\t IC200523A & 338.64 & +1.75 & 90.6 & 25\\% & \\cite{ic200523a} \\\\ \n\t IC200530A & 255.37 & +26.61 & 25.3 & 59\\% & \\cite{ic200530a} \\\\ \n\t IC200614A & 33.84 & +31.61 & 47.8 & 42\\% & \\cite{ic200614a} \\\\ \n\t IC200615A & 142.95 & +3.66 & 5.9 & 83\\% & \\cite{ic200615a} \\\\ \n\t IC200620A & 162.11 & +11.95 & 1.7 & 32\\% & \\cite{ic200620a} \\\\ \n\t IC200728A & - & - & - & - & \\cite{ic200728a} \\\\ \n\t IC200806A & 157.25 & +47.75 & 1.8 & 40\\% & \\cite{ic200806a} \\\\ \n\t IC200911A & 51.11 & +38.11 & 52.7 & 41\\% & \\cite{ic200911a} \\\\ \n\t IC200916A & 109.78 & +14.36 & 4.2 & 32\\% & \\cite{ic200916a} \\\\ \n\t IC200921A & 195.29 & +26.24 & 12.0 & 41\\% & \\cite{ic200921a} \\\\ \n\t IC200926A & 96.46 & -4.33 & 1.7 & 44\\% & \\cite{ic200926a} \\\\ \n\t IC200926B & 184.75 & +32.93 & 9.0 & 43\\% & \\cite{ic200926b} \\\\ \n\t IC200929A & 29.53 & +3.47 & 1.1 & 47\\% & \\cite{ic200929a} \\\\ \n\t IC201007A & 265.17 & +5.34 & 0.6 & 88\\% & \\cite{ic201007a} \\\\ \n\t IC201014A & 221.22 & +14.44 & 1.9 & 41\\% & \\cite{ic201014a} \\\\ \n\t IC201021A & 260.82 & +14.55 & 6.9 & 30\\% & \\cite{ic201021a} \\\\ \n\t IC201114A & 105.25 & +6.05 & 4.5 & 56\\% & \\cite{ic201114a} \\\\ \n\t IC201115A & 195.12 & +1.38 & 6.6 & 46\\% & \\cite{ic201115a} \\\\ \n\t IC201115B & - & - & - & - & \\cite{ic201115b} \\\\ \n\t IC201120A & 307.53 & +40.77 & 64.3 & 50\\% & \\cite{ic201120a} \\\\ \n\t IC201130A & 30.54 & -12.10 & 5.4 & 15\\% & \\cite{ic201130a} \\\\ \n\t IC201209A & 6.86 & -9.25 & 4.7 & 19\\% & \\cite{ic201209a} \\\\ \n\t IC201221A & 261.69 & +41.81 & 8.9 & 56\\% & \\cite{ic201221a} \\\\ \n\t IC201222A & 206.37 & +13.44 & 1.5 & 53\\% & \\cite{ic201222a} \\\\ \n\t IC210210A & 206.06 & +4.78 & 2.8 & 65\\% & \\cite{ic210210a} \\\\ \n\t IC210213A & - & - & - & - & \\cite{ic210213a} \\\\ \n\t IC210322A & - & - & - & - & \\cite{ic210322a} \\\\ \n\t IC210503A & 143.53 & +41.81 & 102.6 & 41\\% & \\cite{ic210503a} \\\\ \n\t IC210510A & 268.42 & +3.81 & 4.0 & 28\\% & \\cite{ic210510a} \\\\ \n\t IC210516A & 91.76 & +9.52 & 2.2 & 29\\% & \\cite{ic210516a} \\\\ \n\t IC210519A & - & - & - & - & \\cite{ic210519a} \\\\ \n\t IC210608A & 337.41 & +18.37 & 109.7 & 31\\% & \\cite{ic210608a} \\\\ \n\t IC210629A & 340.75 & +12.94 & 6.0 & 35\\% & \\cite{ic210629a} \\\\ \n\t IC210717A & 46.49 & -1.34 & 30.0 & - & \\cite{ic210717a} \\\\ \n\t IC210730A & 105.73 & +14.79 & 6.6 & 32\\% & \\cite{ic210730a} \\\\ \n\t IC210811A & 270.79 & +25.28 & 3.2 & 66\\% & \\cite{ic210811a} \\\\ \n\t IC210922A & 60.73 & -4.18 & 1.6 & 92\\% & \\cite{ic210922a} \\\\ \n\t IC211023A & 253.3 & -1.72 & 4.8 & 33\\% & \\cite{ic211023a} \\\\ \n\t IC211116A & 42.45 & +0.15 & 5.5 & 40\\% & \\cite{ic211116a} \\\\ \n\t IC211117A & 225.93 & -0.20 & 1.0 & 53\\% & \\cite{ic211117a} \\\\ \n\t IC211123A & 265.52 & +7.33 & 28.8 & 36\\% & \\cite{ic211123a} \\\\ \n\t IC211125A & 43.59 & +22.59 & 21.9 & 39\\% & \\cite{ic211125a} \\\\ \n\t IC211208A & 114.52 & +15.56 & 16.4 & 50\\% & \\cite{ic211208a} \\\\ \n\n\\end{longtable}\n\n\n" ], [ "# Neutrino stats", "_____no_output_____" ], [ "dates = [Time(f\"20{x[2:4]}-{x[4:6]}-{x[6:8]}T00:00:01\") for 
x in joint[\"Event\"]]\n\nplt.figure(figsize=(base_width, base_height), dpi=dpi)\nax1 = plt.subplot(111)\n\nmjds = []\nlabs = []\nbins = []\n\nfor year in range(2016, 2022):\n for k, month in enumerate([1, 4, 7, 10]):\n \n t = Time(f\"{year}-{month}-01T00:00:00.01\", format='isot', scale='utc').mjd\n \n bins.append(t)\n \n if (k - 1) % 2 > 0:\n \n mjds.append(t)\n labs.append([\"Jan\", \"July\"][int(k/2)] + f\" {year}\")\n \nt_0 = Time(f\"2016-04-01T00:00:00.01\", format='isot', scale='utc').mjd \n \nv1_t = Time(f\"2019-06-17T00:00:00.01\", format='isot', scale='utc').mjd\n\nt_now = Time.now().mjd\n\nalerts_v1 = [x.mjd for i, x in enumerate(dates) if np.logical_and(x.mjd < v1_t, not np.isnan(joint.iloc[i][\"Dec\"]))]\n\nalerts_v2 = [x.mjd for i, x in enumerate(dates) if np.logical_and(\n x.mjd > v1_t, not np.isnan(joint.iloc[i][\"Dec\"]))]\n\nprint(f'{len(alerts_v1)} V1 alerts, {len(alerts_v2)} V2 alerts')\n\nmod = 7.\n\nv1_rate = mod * float(len(alerts_v1))/(v1_t - t_0)\nv2_rate = mod * float(len(alerts_v2))/(t_now - v1_t)\n\nlabels = []\n\nfor (name, rate) in [(\"V1\", v1_rate), (\"V2\", v2_rate)]:\n labels.append(f'{name} ({rate:.2f} per week)')\n \nplt.xticks(mjds, labs, rotation=80)\nplt.locator_params(axis=\"y\", nbins=6)\nplt.hist([alerts_v1, alerts_v2], bins=bins, stacked=True, label=labels)\n\nplt.axvline(v1_t, linestyle=\":\", color=\"k\")\n\nplt.tick_params(axis='both', which='major', labelsize=big_fontsize)\nplt.legend(fontsize=big_fontsize, loc=\"upper left\")\n\nplt.ylabel(\"Alerts (excluding retractions)\", fontsize=big_fontsize)\n\nsns.despine()\n\nplt.ylim(0., 12.)\nplt.tight_layout()\n\nfilename = \"alert_hist.pdf\"\n\noutput_path = os.path.join(output_folder, filename)\nplt.savefig(os.path.join(plot_dir, filename))\nplt.savefig(output_path, bbox_inches='tight', pad_inches=0)", "21 V1 alerts, 60 V2 alerts\n" ], [ "plt.figure(figsize=(base_width, base_height), dpi=dpi)\nax1 = plt.subplot(111)\n\ndates = [Time(f\"20{x[2:4]}-{x[4:6]}-{x[6:8]}T00:00:01\") for x in joint[\"Event\"]]\n\nmjds = []\nlabs = []\nbins = []\n\nfor year in range(2016, 2022):\n for k, month in enumerate([1, 4, 7, 10]):\n \n t = Time(f\"{year}-{month}-01T00:00:00.01\", format='isot', scale='utc').mjd\n \n bins.append(t)\n \n if (k - 1) % 2 > 0:\n \n mjds.append(t)\n labs.append([\"Jan\", \"July\"][int(k/2)] + f\" {year}\")\n \nt_0 = Time(f\"2016-04-01T00:00:00.01\", format='isot', scale='utc').mjd \n \nv1_t = Time(f\"2019-06-17T00:00:00.01\", format='isot', scale='utc').mjd\n\nt_now = Time.now().mjd\n\nalerts_v1 = [x.mjd for i, x in enumerate(dates) if np.logical_and(x.mjd < v1_t, not np.isnan(joint.iloc[i][\"Dec\"]))]\n\nalerts_v2 = [x.mjd for i, x in enumerate(dates) if np.logical_and(\n x.mjd > v1_t, not np.isnan(joint.iloc[i][\"Dec\"]))]\n\nprint(f'{len(alerts_v1)} V1 alerts, {len(alerts_v2)} V2 alerts')\n\nmod = 7.\n\nv1_rate = mod * float(len(alerts_v1))/(v1_t - t_0)\nv2_rate = mod * float(len(alerts_v2))/(t_now - v1_t)\n\nlabels = []\n\nfor (name, rate) in [(\"HESE/EHE\", v1_rate), (\"Gold/Bronze\", v2_rate)]:\n labels.append(f'{name} ({rate:.2f} per week)')\n \nplt.xticks(mjds, labs, rotation=80)\nplt.locator_params(axis=\"y\", nbins=6)\nplt.hist([alerts_v1, alerts_v2], bins=bins[:-1], stacked=True, label=labels, cumulative=True)\n\nplt.axvline(v1_t, linestyle=\":\", color=\"k\")\n\nplt.tick_params(axis='both', which='major', labelsize=big_fontsize)\nplt.legend(fontsize=big_fontsize, loc=\"upper left\")\n\nsns.despine()\n\n# plt.ylim(0., 12.)\nplt.ylabel(\"Alerts (excluding retractions)\", 
fontsize=big_fontsize)\nplt.tight_layout()\n\nfilename = \"alert_cdf.pdf\"\n\noutput_path = os.path.join(output_folder, filename)\nplt.savefig(os.path.join(plot_dir, filename))\nplt.savefig(output_path, bbox_inches='tight', pad_inches=0)", "21 V1 alerts, 60 V2 alerts\n" ], [ "plt.figure(figsize=(base_width, base_height), dpi=dpi)\nax1 = plt.subplot(111)\n\ndates = [Time(f\"20{x[2:4]}-{x[4:6]}-{x[6:8]}T00:00:01\") for x in obs[\"Event\"]]\n\nmjds = []\nlabs = []\nbins = []\n\nfor year in range(2018, 2022):\n for k, month in enumerate([1, 4, 7, 10]):\n \n t = Time(f\"{year}-{month}-01T00:00:00.01\", format='isot', scale='utc').mjd\n \n bins.append(t)\n \n if (k - 1) % 2 > 0:\n \n mjds.append(t)\n labs.append([\"Jan\", \"July\"][int(k/2)] + f\" {year}\")\n \nt_0 = Time(f\"2018-04-01T00:00:00.01\", format='isot', scale='utc').mjd \n \nv1_t = Time(f\"2019-06-17T00:00:00.01\", format='isot', scale='utc').mjd\n\nt_now = Time(f\"2021-07-01T00:00:00.01\", format='isot', scale='utc').mjd\n\nt_bran_cut = Time(f\"2020-02-01T00:00:00.01\", format='isot', scale='utc').mjd\n\nalerts_v1 = [x.mjd for x in dates if x.mjd < v1_t]\n\nalerts_v2 = [x.mjd for x in dates if x.mjd > v1_t]\n\nprint(f'{len(alerts_v1)} V1 alerts, {len(alerts_v2)} V2 alerts')\n\nmod = 7.\n\nv1_rate = mod * float(len(alerts_v1))/(v1_t - t_0)\nv2_rate = mod * float(len(alerts_v2))/(t_now - v1_t)\n\nlabels = []\n\nfor (name, rate) in [(\"HESE/EHE\", v1_rate), (\"Gold/Bronze\", v2_rate)]:\n labels.append(f'{name} ({rate:.2f} per week)')\n \nplt.xticks(mjds, labs, rotation=80)\nplt.locator_params(axis=\"y\", nbins=6)\nplt.hist([alerts_v1, alerts_v2], bins=bins[:-1], stacked=True, label=labels, cumulative=True)\n\nplt.axvline(v1_t, linestyle=\":\", color=\"k\")\n# plt.axvline(t_bran_cut, linestyle=\"--\", color=\"k\")\n\nplt.tick_params(axis='both', which='major', labelsize=big_fontsize)\nplt.legend(fontsize=big_fontsize, loc=\"upper left\")\n\nsns.despine()\n\n# plt.ylim(0., 12.)\nplt.ylabel(r\"ZTF $\\nu$ follow-up campaigns\", fontsize=big_fontsize)\nplt.tight_layout()\n\nfilename = \"ztf_cdf.pdf\"\n\noutput_path = os.path.join(output_folder, filename)\nplt.savefig(os.path.join(plot_dir, filename))\nplt.savefig(output_path, bbox_inches='tight', pad_inches=0)", "1 V1 alerts, 23 V2 alerts\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7a9a05d5ad94467a9ac766cb0f1f60091ec148c
6,405
ipynb
Jupyter Notebook
src/tide_constituents/water_level_prediction.ipynb
slawler/SI_2019_Coastal
4064d323bc62ce2f47a7af41b9a11ea5538ad181
[ "MIT" ]
1
2020-03-13T07:51:44.000Z
2020-03-13T07:51:44.000Z
src/tide_constituents/water_level_prediction.ipynb
cheginit/SI_2019_Coastal
4064d323bc62ce2f47a7af41b9a11ea5538ad181
[ "MIT" ]
null
null
null
src/tide_constituents/water_level_prediction.ipynb
cheginit/SI_2019_Coastal
4064d323bc62ce2f47a7af41b9a11ea5538ad181
[ "MIT" ]
1
2020-03-13T14:44:57.000Z
2020-03-13T14:44:57.000Z
25.722892
127
0.528025
[ [ [ "import tide_constituents as tc\nimport noaa_coops as nc\nimport pandas as pd\nimport numpy as np\nimport datetime\nimport tappy\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\npd.plotting.register_matplotlib_converters()", "_____no_output_____" ], [ "import matplotlib as mpl\nmpl.rcParams['figure.dpi'] = 200", "_____no_output_____" ], [ "start = '20180201'\nend = '20180228'\ninterval = 1", "_____no_output_____" ], [ "pr = tc.get_tides('20180101', '20180120', -88.2, 30.4)", "_____no_output_____" ], [ "noaa_predict = tc.get_tides(start, end, -88.2, 30.4)", "_____no_output_____" ], [ "start = pd.to_datetime(start)\nend = pd.to_datetime(end)\nd = start\nw, t, p, r = [], [], [], []\n\nwhile d < end:\n start_ = d\n end_ = start_ + pd.DateOffset(interval)\n end_ = end_ if end_ < end else end\n water_level = tc.get_water_levels(start_.strftime('%Y%m%d'),\n end_.strftime('%Y%m%d'),\n -88.2, 30.4)\n tide = tc.tide_constituents(water_level)\n water_level = water_level.water_level.astype('float')\n prediction = 0.0 if 'Z0' not in list(tide.speed_dict.keys()) else tide.speed_dict['Z0']\n prediction += tc.sum_signals(tide.key_list, tide.dates, tide.speed_dict, tide.r, tide.phase)\n residual = water_level - prediction\n w.append(water_level)\n t.append(tide)\n p.append(prediction)\n r.append(residual)\n d = end_", "_____no_output_____" ], [ "df_cons = pd.DataFrame({'amp': list(t[0].r.values()),\n 'phase': list(t[0].phase.values()),\n 'speed': [t[0].speed_dict[i]['speed'] for i in list(t[0].speed_dict.keys()) if not i == 'P1'],\n 'VAU': [t[0].speed_dict[i]['VAU'] for i in list(t[0].speed_dict.keys()) if not i == 'P1']})", "_____no_output_____" ], [ "df_cons.to_csv('constituents_claw.csv', float_format='%3.10f', index=False, header=None)", "_____no_output_____" ], [ "df_FF = pd.DataFrame([t[0].speed_dict[i]['FF'] for i in list(t[0].speed_dict.keys()) if not i == 'P1']).T", "_____no_output_____" ], [ "df_FF.to_csv('constituents_FF_claw.csv', index=False, header=None)", "_____no_output_____" ], [ "water_level = pd.concat(w).to_frame()\nwater_level.columns = ['observation']\nwater_level['prediction'] = np.hstack(p)\nwater_level['residual'] = np.hstack(r)\n#water_level = water_level[['water_level', 'prediction', 'residual']]\n#water_level.columns = ['observation', 'prediction', 'residual']\nwater_level = water_level[['observation', 'prediction']]", "_____no_output_____" ], [ "ax = water_level.plot()\nnoaa_predict.plot(ax=ax)\nax.legend(['observation', 'prediction', 'NOAA prediction'], loc='best')\nax.set_xlabel('');", "_____no_output_____" ], [ "year = '2018'", "_____no_output_____" ], [ "start = year + '0101'\nend = year + '1231'\ndata = tc.get_water_levels(start, end, -88.2, 30.4)", "_____no_output_____" ], [ "pr.head()", "_____no_output_____" ], [ "wl = data.water_level.copy()\ngrouped = wl.groupby(pd.Grouper(freq='M'))\n\ndef f(group):\n return pd.DataFrame({'original': group, 'demeaned': group - group.mean()})\n\nwl_demeaned = grouped.apply(f)", "_____no_output_____" ], [ "ax = wl_demeaned.plot()\nax.set_xlabel('');", "_____no_output_____" ], [ "min_month = wl_demeaned.rolling(30).min().groupby(pd.Grouper(freq='M')).last()\nmax_month = wl_demeaned.rolling(30).max().groupby(pd.Grouper(freq='M')).last()\nmonthly_minmax = min_month.copy()\nmonthly_minmax['high'] = max_month['demeaned']\nmonthly_minmax = monthly_minmax[['demeaned', 'high']]\nmonthly_minmax.columns = ['low', 'high']\nmonthly_minmax['range'] = monthly_minmax.high - monthly_minmax.low\nranked = 
monthly_minmax.sort_values('range')\nranked", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a9a841a39bf45745cee6d147125056e31c0417
539,187
ipynb
Jupyter Notebook
src/NeuronBlock.ipynb
sazio/NMAs
e5e817363f2f65e6f3e8b37293b4b6ac97b43f8c
[ "MIT" ]
null
null
null
src/NeuronBlock.ipynb
sazio/NMAs
e5e817363f2f65e6f3e8b37293b4b6ac97b43f8c
[ "MIT" ]
null
null
null
src/NeuronBlock.ipynb
sazio/NMAs
e5e817363f2f65e6f3e8b37293b4b6ac97b43f8c
[ "MIT" ]
null
null
null
395.008791
298,574
0.927526
[ [ [ "<a href=\"https://colab.research.google.com/github/sazio/NMAs/blob/main/src/NeuronBlock.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "#Exploratory Data Analysis of Stringer Dataset \n@authors: Simone Azeglio, Chetan Dhulipalla , Khalid Saifullah \n\n\nPart of the code here has been taken from [Neuromatch Academy's Computational Neuroscience Course](https://compneuro.neuromatch.io/projects/neurons/README.html), and specifically from [this notebook](https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/projects/neurons/load_stringer_spontaneous.ipynb)", "_____no_output_____" ], [ "# to do list\n\n1. custom normalization: dividing by mean value per neuron\n1a. downsampling: convolve then downsample by 5\n2. training validation split: withhold last 20 percent of time series for testing\n3. RNN for each layer: a way to capture the dynamics inside each layer instead of capturing extra dynamics from inter-layer interactions. it will be OK to compare the different RNNs. maintain same neuron count in each layer to reduce potential bias \n4. layer weight regularization: L2 \n5. early stopping , dropout?", "_____no_output_____" ], [ "## Loading of Stringer spontaneous data\n\n", "_____no_output_____" ] ], [ [ "#@title Data retrieval\nimport os, requests\n\nfname = \"stringer_spontaneous.npy\"\nurl = \"https://osf.io/dpqaj/download\"\n\nif not os.path.isfile(fname):\n try:\n r = requests.get(url)\n except requests.ConnectionError:\n print(\"!!! Failed to download data !!!\")\n else:\n if r.status_code != requests.codes.ok:\n print(\"!!! Failed to download data !!!\")\n else:\n with open(fname, \"wb\") as fid:\n fid.write(r.content)", "_____no_output_____" ], [ "#@title Import matplotlib and set defaults\nfrom matplotlib import rcParams \nfrom matplotlib import pyplot as plt\nrcParams['figure.figsize'] = [20, 4]\nrcParams['font.size'] =15\nrcParams['axes.spines.top'] = False\nrcParams['axes.spines.right'] = False\nrcParams['figure.autolayout'] = True", "_____no_output_____" ] ], [ [ "## Exploratory Data Analysis (EDA)", "_____no_output_____" ] ], [ [ "#@title Data loading\nimport numpy as np\ndat = np.load('stringer_spontaneous.npy', allow_pickle=True).item()\nprint(dat.keys())", "dict_keys(['sresp', 'run', 'beh_svd_time', 'beh_svd_mask', 'stat', 'pupilArea', 'pupilCOM', 'xyz'])\n" ], [ "# functions \n\ndef moving_avg(array, factor = 5):\n \"\"\"Reducing the number of compontents by averaging of N = factor\n subsequent elements of array\"\"\"\n zeros_ = np.zeros((array.shape[0], 2))\n array = np.hstack((array, zeros_))\n\n array = np.reshape(array, (array.shape[0], int(array.shape[1]/factor), factor))\n array = np.mean(array, axis = 2)\n\n return array", "_____no_output_____" ] ], [ [ "## Extracting Data for RNN (or LFADS)\nThe first problem to address is that for each layer we don't have the exact same number of neurons. We'd like to have a single RNN encoding all the different layers activities, to make it easier we can take the number of neurons ($N_{neurons} = 1131$ of the least represented class (layer) and level out each remaining class. 
", "_____no_output_____" ] ], [ [ "# Extract labels from z - coordinate\nfrom sklearn import preprocessing\nx, y, z = dat['xyz']\n\nle = preprocessing.LabelEncoder()\nlabels = le.fit_transform(z)\n### least represented class (layer with less neurons)\nn_samples = np.histogram(labels, bins=9)[0][-1]", "_____no_output_____" ], [ "\nresp = np.array(dat['sresp'])\nxyz = np.array(dat['xyz'])\nprint(resp.shape, xyz[0].shape)", "(11983, 7018) (11983,)\n" ], [ "# Extracting x,y blocks\n\nn_blocks = 9\n\nx_range, y_range, _ = np.ptp(dat['xyz'], axis = 1)\n\nx_block_starts = np.arange(min(x), max(x), x_range // (n_blocks ** 0.5))\ny_block_starts = np.arange(min(y), max(y)+0.01, y_range // (n_blocks ** 0.5))\n\ndata_blocks = list()\nlabels = np.zeros(resp.shape[0])\ncounter = 0\n\nfor i in range(int(n_blocks ** 0.5)):\n for k in range(int(n_blocks ** 0.5)):\n tempx, = np.where((x >= x_block_starts[i]) & (x <= x_block_starts[i+1]))\n tempy, = np.where((y >= y_block_starts[k]) & (y <= y_block_starts[k+1]))\n idx = np.intersect1d(tempx, tempy)\n labels[idx] = counter\n counter += 1\n data_blocks.append(resp[idx])\n \n\nprint(labels.shape)\n\nlabels = np.array(labels)\n", "(11983,)\n" ], [ "print(y_block_starts)\nprint(x_block_starts)", "[ 4. 340. 676. 1012.]\n[ 4. 339. 674. 1009.]\n" ], [ "type(iunq[0])\nprint(iunq)\nprint(labels)\nlabels = np.int64(labels)\nprint(type(iunq[0]), type(labels[0]))", "[8 8 8 ... 0 0 0]\n[0. 0. 0. ... 7. 7. 8.]\n<class 'numpy.int64'> <class 'numpy.int64'>\n" ], [ "from matplotlib import cm\n\nzunq, iunq = np.unique(z, return_inverse=True)\nxc = np.linspace(0.0, 1.0, len(zunq))\ncmap = cm.get_cmap('jet')(xc)\n\nfig = plt.figure(figsize=(6,6))\nax = fig.add_subplot(111, projection='3d')\nax.scatter(x[::-1],y[::-1],z[::-1], 'o', s = 4, c = cmap[labels])\nax.set(xlabel='horizontal(um)', ylabel = 'vertical(um)', zlabel='depth (um)');", "_____no_output_____" ], [ "for i in data_blocks:\n print(i.shape)", "(1314, 7018)\n(768, 7018)\n(802, 7018)\n(1710, 7018)\n(1534, 7018)\n(1177, 7018)\n(1882, 7018)\n(1749, 7018)\n(1129, 7018)\n" ], [ "print(x_range, y_range)\nprint(y_block_starts)\nprint(x_block_starts)", "1006.0 1008.0\n[ 4. 340. 676. 1012.]\n[ 4. 339. 674. 1009.]\n" ], [ "x_block_starts.shape\nprint(int(n_blocks ** 0.5))\n\nfor i in range(3):\n print(i)\nprint(y_block_starts.shape)", "3\n0\n1\n2\n(3,)\n" ], [ "1314 * 9 # roughly 200 neurons are lost at the boundary ", "_____no_output_____" ], [ "### Data for LFADS / RNN \nimport pandas as pd \ndataSet = pd.DataFrame(dat[\"sresp\"])\ndataSet[\"label\"] = labels ", "_____no_output_____" ], [ "# it can be done in one loop ... \ndata_ = []\nfor i in range(0, 9):\n data_.append(dataSet[dataSet[\"label\"] == i].sample(n = n_samples).iloc[:,:-1])\n\ndataRNN = np.zeros((n_samples*9, dataSet.shape[1]-1))\nfor i in range(0,9):\n dataRNN[n_samples*i:n_samples*(i+1), :] = data_[i]\n\n## shuffling for training purposes\n\n#np.random.shuffle(dataRNN)", "_____no_output_____" ], [ "plt.plot(dataRNN[0, :600])\nprint(dataRNN.shape)", "(10179, 7018)\n" ], [ "#@title PCA \n\nfrom sklearn.decomposition import PCA\n\n#pca developed seperately for blocks and layers. 
note: the number of neurons and timepoints are constrained\n\nblock_pca = PCA(n_components = 500)\nblock_pca = block_pca.fit(data_blocks[0,:1131,:600].T)\ncompress_blocks = block_pca.transform(data_blocks[0,:1131,:1200].T)\n\nprint(compress_blocks.shape)\n\nlayer_pca = PCA(n_components = 500)\nlayer_pca = layer_pca.fit(dataRNN[:1131,:600].T)\ncompress_layers = layer_pca.transform(dataRNN[:1131,:1200].T)\n\nprint(compress_layers.shape)", "_____no_output_____" ], [ "var_block = np.cumsum(pca.explained_variance_ratio_)\nvar_layer = np.cumsum(pca2.explained_variance_ratio_)\n\nplt.plot(var_block)\nplt.title('var exp for blocks')\nplt.figure()\nplt.plot(var_layer)\nplt.title('var exp for layers')\n\n\nprint(var_block[75], var_layer[75])\n\n\n\n", "0.74739975 0.6650082930536177\n" ], [ "unshuffled = np.array(data_)", "_____no_output_____" ], [ "#@title Convolutions code\n\n# convolution moving average\n\n# kernel_length = 50\n# averaging_kernel = np.ones(kernel_length) / kernel_length\n\n# dataRNN.shape\n\n# avgd_dataRNN = list()\n\n# for neuron in dataRNN:\n# avgd_dataRNN.append(np.convolve(neuron, averaging_kernel))\n\n# avg_dataRNN = np.array(avgd_dataRNN)\n\n# print(avg_dataRNN.shape)", "_____no_output_____" ], [ "# @title Z Score Code \n\n\n# from scipy.stats import zscore\n\n\n# neuron = 500\n\n# scaled_all = zscore(avg_dataRNN)\n# scaled_per_neuron = zscore(avg_dataRNN[neuron, :])\n\n# scaled_per_layer = list()\n\n# for layer in unshuffled:\n# scaled_per_layer.append(zscore(layer))\n\n# scaled_per_layer = np.array(scaled_per_layer)\n\n\n\n# plt.plot(avg_dataRNN[neuron, :])\n# plt.plot(avg_dataRNN[2500, :])\n# plt.figure()\n# plt.plot(dataRNN[neuron, :])\n# plt.figure()\n# plt.plot(scaled_all[neuron, :])\n# plt.plot(scaled_per_neuron)\n# plt.figure()\n# plt.plot(scaled_per_layer[0,neuron,:])\n", "_____no_output_____" ], [ "# custom normalization\n\nnormed_dataRNN = list()\nfor neuron in dataRNN:\n normed_dataRNN.append(neuron / neuron.mean())\nnormed_dataRNN = np.array(normed_dataRNN)\n\n# downsampling and averaging \n\navgd_normed_dataRNN = moving_avg(normed_dataRNN, factor=5)", "_____no_output_____" ] ], [ [ "issue: does the individual scaling by layer introduce bias that may artificially increase performance of the network?", "_____no_output_____" ], [ "## Data Loader \n", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F", "_____no_output_____" ], [ "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')", "_____no_output_____" ], [ "# set the seed\nnp.random.seed(42)\n\n# number of neurons \nNN = dataRNN.shape[0]\n\n# let's use 270 latent components\nncomp = 10", "_____no_output_____" ], [ "# swapping the axes to maintain consistency with seq2seq notebook in the following code - the network takes all the neurons at a time step as input, not just one neuron\n\navgd_normed_dataRNN = np.swapaxes(avgd_normed_dataRNN, 0, 1)\navgd_normed_dataRNN.shape", "_____no_output_____" ], [ "frac = 5/6\n#x1 = torch.from_numpy(dataRNN[:,:int(frac*dataRNN.shape[1])]).to(device).float().unsqueeze(0)\n#x2 = torch.from_numpy(dataRNN[:,int(frac*dataRNN.shape[1]):]).to(device).float().unsqueeze(0)\nx1 = torch.from_numpy(avgd_normed_dataRNN[:,:50]).to(device).float().unsqueeze(0)\nx2 = torch.from_numpy(avgd_normed_dataRNN[:,:50]).to(device).float().unsqueeze(0)\n\nNN1 = x1.shape[-1]\nNN2 = x2.shape[-1]", "_____no_output_____" ], [ "class Net(nn.Module):\n def __init__(self, ncomp, NN1, NN2, bidi=True):\n super(Net, self).__init__()\n\n # play 
with some of the options in the RNN!\n self.rnn = nn.RNN(NN1, ncomp, num_layers = 1, dropout = 0,\n bidirectional = bidi, nonlinearity = 'tanh')\n self.fc = nn.Linear(ncomp, NN2)\n\n def forward(self, x):\n\n y = self.rnn(x)[0]\n\n if self.rnn.bidirectional:\n # if the rnn is bidirectional, it concatenates the activations from the forward and backward pass\n # we want to add them instead, so as to enforce the latents to match between the forward and backward pass\n q = (y[:, :, :ncomp] + y[:, :, ncomp:])/2\n else:\n q = y\n\n # the softplus function is just like a relu but it's smoothed out so we can't predict 0\n # if we predict 0 and there was a spike, that's an instant Inf in the Poisson log-likelihood which leads to failure\n #z = F.softplus(self.fc(q), 10)\n z = self.fc(q)\n\n return z, q", "_____no_output_____" ], [ "# we initialize the neural network\nnet = Net(ncomp, NN1, NN2, bidi = True).to(device)\n\n# special thing: we initialize the biases of the last layer in the neural network\n# we set them as the mean firing rates of the neurons.\n# this should make the initial predictions close to the mean, because the latents don't contribute much\nnet.fc.bias.data[:] = x1.mean(axis = (1,2))\n\n# we set up the optimizer. Adjust the learning rate if the training is slow or if it explodes.\noptimizer = torch.optim.Adam(net.parameters(), lr=.02)", "_____no_output_____" ], [ "# forward check \n# net(x1)", "_____no_output_____" ] ], [ [ "## Training ", "_____no_output_____" ] ], [ [ "from tqdm.notebook import tqdm", "_____no_output_____" ], [ "# you can keep re-running this cell if you think the cost might decrease further\n\ncost = nn.MSELoss()\n\nniter = 100000\nfor k in tqdm(range(niter)):\n # the network outputs the single-neuron prediction and the latents\n z, y = net(x1)\n\n # our cost\n loss = cost(z, x2)\n\n # train the network as usual\n loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n\n if k % 250 == 0:\n print(f' iteration {k}, cost {loss.item():.4f}')", "_____no_output_____" ], [ "test = net()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7a9b3bc350be44875082242b64b82e6c9329c0f
65,297
ipynb
Jupyter Notebook
MylogReader.ipynb
Telexine/colorizeSketch
f0281d41a9daf30d51b0277ed2fd0fdbfa29e92f
[ "MIT" ]
3
2019-01-30T05:16:29.000Z
2020-04-09T10:31:48.000Z
MylogReader.ipynb
Telexine/colorizeSketch
f0281d41a9daf30d51b0277ed2fd0fdbfa29e92f
[ "MIT" ]
null
null
null
MylogReader.ipynb
Telexine/colorizeSketch
f0281d41a9daf30d51b0277ed2fd0fdbfa29e92f
[ "MIT" ]
null
null
null
181.885794
23,052
0.871878
[ [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd \n\ndef readfile(f):\n fr = open(f,\"r\",encoding=\"utf-8\")\n data = []\n data = fr.readlines()\n fr.close()\n return data\n", "_____no_output_____" ], [ "ls", " Volume in drive C is Samsung SSD 850 EVO\n Volume Serial Number is 558D-6670\n\n Directory of C:\\Users\\me\\Documents\\colorizeSketch\n\n09/02/2018 04:38 PM <DIR> .\n09/02/2018 04:38 PM <DIR> ..\n08/22/2018 09:51 PM 1,344 .gitignore\n08/26/2018 10:09 PM <DIR> .ipynb_checkpoints\n08/26/2018 08:24 PM <DIR> __pycache__\n07/14/2018 03:35 PM 32,883 1.jpg\n08/27/2018 11:11 PM 85,363 2.jpg\n08/26/2018 07:00 PM <DIR> anime\n09/02/2018 10:17 AM 2,151,596 ANNGAN.ipynb\n08/22/2018 09:43 PM 8,137 cc.png\n09/01/2018 06:26 PM 980 combine.py\n07/07/2018 06:56 PM 2,538 data_loader.py\n08/26/2018 07:00 PM <DIR> datasets\n09/02/2018 04:02 PM 1,015,899 gen1.csv\n08/22/2018 09:51 PM 1,932,961 gen1log.txt\n09/02/2018 04:13 PM 1,015,899 gen2.csv\n08/27/2018 11:22 PM 129,267 guide.jpg\n08/26/2018 07:02 PM <DIR> images\n08/22/2018 09:51 PM 1,094 LICENSE\n08/27/2018 11:37 PM 144,763 maxresdefault.jpg\n09/02/2018 01:51 PM <DIR> models\n09/02/2018 04:38 PM 86,285 MylogReader.ipynb\n09/02/2018 01:53 PM 227,806 Predict.ipynb\n08/22/2018 09:51 PM 16 README.md\n08/22/2018 09:51 PM 47 requirements.txt\n08/27/2018 11:37 PM 127,362 sk.jpg\n09/02/2018 01:54 PM 1,382,475 v2log.txt\n 19 File(s) 8,346,715 bytes\n 8 Dir(s) 106,717,573,120 bytes free\n" ] ], [ [ "## Import CSV log", "_____no_output_____" ] ], [ [ "data = readfile(\"ver3.txt\")", "_____no_output_____" ], [ "## normal discogan\ndloss = []\ngloss = []\ntime= []\nepouch = []\nfor i in range(len(data)):\n try:\n t = data[i][data[i].find('time: ')+5: data[i].find(', [d_loss:') - len(data[i])].replace(\" \",\"\") \n if len(t)<3: continue\n time.append(t)\n glossA = (float(data[i][data[i].find('g_loss: ')+7:len(data[i])-2].replace(\" \",\"\")))\n \n dlossA = float(data[i][data[i].find('d_loss: ')+7: data[i].find('g_loss:') - len(data[i])-2].replace(\" \",\"\") )\n\n if(dlossA<5):\n dloss.append(100-dlossA)\n else: dloss.append(95)\n \n if (glossA<5): gloss.append(100-glossA)\n else: gloss.append(95)\n epouch.append(int(data[i][1:data[i].find(']')]))\n except:\n continue", "_____no_output_____" ], [ "# TO rapidmine\ndf = pd.DataFrame({'time':time,'epouch':epouch,'dis_loss':dloss,'gen_loss':gloss })\ndf.to_csv('gen2.csv', index=False)", "_____no_output_____" ], [ "x_max = max(gloss2)\nx_min = min(gloss2)\nx_maxmin = x_max - x_min\nx_norm = []\nx2_norm=[]", "_____no_output_____" ], [ "#normalize\nfor C in range(len(gloss)):\n x2_norm.append((gloss[C]-x_min)/x_maxmin)\n", "_____no_output_____" ], [ "df = pd.DataFrame(x_norm)", "_____no_output_____" ], [ "df2 = pd.DataFrame(x2_norm)", "_____no_output_____" ], [ "plt.rcParams[\"figure.figsize\"] = [16,9]\nplt.plot(df,'r', alpha=0.3)\nplt.plot(df2,'g', alpha=0.3)\nplt.grid()\nplt\n\nplt.show()\n", "_____no_output_____" ], [ "df = pd.DataFrame(dloss)\nplt.rcParams[\"figure.figsize\"] = [16,9]\nplt.plot(df,'r')\nplt.grid()\nplt\nplt.show()\n\n", "_____no_output_____" ], [ "plt.hist(df)\nplt.grid()\n\nplt.show()", "_____no_output_____" ], [ "df.hist(figsize=(10,10))\nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a9b6c521d1604586713274797eabfe80e7fc31
122,837
ipynb
Jupyter Notebook
Neil/IsolationForest_Neil.ipynb
nandini192911/nandini
5a8fc9f94724d5b05ac12239c61a6b290dc501a1
[ "Apache-2.0" ]
1
2019-07-12T16:44:30.000Z
2019-07-12T16:44:30.000Z
Neil/IsolationForest_Neil.ipynb
agnimish/Credit_Card_Fraud_Detection
5a8fc9f94724d5b05ac12239c61a6b290dc501a1
[ "Apache-2.0" ]
null
null
null
Neil/IsolationForest_Neil.ipynb
agnimish/Credit_Card_Fraud_Detection
5a8fc9f94724d5b05ac12239c61a6b290dc501a1
[ "Apache-2.0" ]
null
null
null
144.684335
20,216
0.84407
[ [ [ "#Libraries\nimport numpy as np\nimport pandas as pd \nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec # to do the grid of plots", "_____no_output_____" ], [ "# reading data from csv file\ndf = pd.read_csv('creditcard.csv')", "_____no_output_____" ], [ "df = df[df.Amount < 10000]", "_____no_output_____" ], [ "# Reason: robust scaler is immune to outliers, as median is chosen as the central tendancy.\nfrom sklearn.preprocessing import StandardScaler, RobustScaler\n\nrob_scaler = RobustScaler()\n\ndf['scaled_amount'] = rob_scaler.fit_transform(df['Amount'].values.reshape(-1,1))\ndf['scaled_time'] = rob_scaler.fit_transform(df['Time'].values.reshape(-1,1))\n\ndf.drop(['Time','Amount'], axis=1, inplace=True)\ndf = df[['scaled_time','scaled_amount', 'V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10',\n 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20',\n 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28',\n 'Class']]\ndf.to_csv(\"scaled_data.csv\")\nprint('Scaled Data\\n')\ndf.head(10)", "Scaled Data\n\n" ], [ "from sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import StratifiedKFold\n\n\nX = df.drop('Class', axis=1)\ny = df['Class']\n\nX.to_csv(\"X.csv\")\ny.to_csv(\"y.csv\")\n\nsss = StratifiedKFold(n_splits=5, random_state=None, shuffle=False)\n\nfor train_index, test_index in sss.split(X, y):\n # print(\"Train:\", train_index, \"Test:\", test_index)\n original_Xtrain, original_Xtest = X.iloc[train_index], X.iloc[test_index]\n original_ytrain, original_ytest = y.iloc[train_index], y.iloc[test_index]\n\n# We already have X_train and y_train for undersample data thats why I am using original to distinguish and to not overwrite these variables.\n# original_Xtrain, original_Xtest, original_ytrain, original_ytest = train_test_split(X, y, test_size=0.2, random_state=42)\n\n# Check the Distribution of the labels\noriginal_Xtrain.to_csv(\"X_train.csv\")\noriginal_ytrain.to_csv(\"y_train.csv\")\noriginal_Xtest.to_csv(\"X_test.csv\")\noriginal_ytest.to_csv(\"y_test.csv\")\n\n# Turn into an array\noriginal_Xtrain = original_Xtrain.values\noriginal_Xtest = original_Xtest.values\noriginal_ytrain = original_ytrain.values\noriginal_ytest = original_ytest.values\n\n# See if both the train and test label distribution are similarly distributed\ntrain_unique_label, train_counts_label = np.unique(original_ytrain, return_counts=True)\ntest_unique_label, test_counts_label = np.unique(original_ytest, return_counts=True)\nprint('-' * 100)\n\nprint('Label Distributions: \\n')\nprint(train_counts_label/ len(original_ytrain))\nprint(test_counts_label/ len(original_ytest))\n", "----------------------------------------------------------------------------------------------------\nLabel Distributions: \n\n[0.99827076 0.00172924]\n[0.99827952 0.00172048]\n" ], [ "fraud = df[df.Class == 1]\nfraud.to_csv(\"Fraudulant_Transactions.csv\")\nnonfraud = df[df.Class == 0]\nnonfraud.to_csv(\"Non-Fraudulant_Transactions.csv\")\n", "_____no_output_____" ], [ "X_fraud = fraud.drop('Class', axis=1)\nX_fraud.to_csv(\"X_fraud.csv\")", "_____no_output_____" ], [ "from sklearn.ensemble import IsolationForest\nfrom sklearn.metrics import confusion_matrix, classification_report, accuracy_score", "_____no_output_____" ], [ "nonfraud_sample = nonfraud.sample(n=1000)\ndf_outlier=pd.concat([fraud,nonfraud_sample])\ndf_outlier.to_csv(\"df_sample.csv\")\nX_train = df_outlier.drop('Class', axis=1)\ny_train = 
df_outlier['Class']\n\nX_train.to_csv(\"X_outlier.csv\")\ny_train.to_csv(\"y_outlier.csv\")\n\nstate = 1\noutlier_fraction = len(fraud)/float(len(nonfraud_sample))\n\nclf = IsolationForest(max_samples=len(X), contamination = outlier_fraction, random_state = state)\nclf.fit(X_train)\ny_pred = clf.predict(X)\ny_pred[y_pred == 1] = 0\ny_pred[y_pred == -1] = 1\ny_pred = pd.DataFrame(y_pred)\n\nmat = confusion_matrix(y,y_pred)\nprint(mat)\nprint(accuracy_score(y,y_pred))\nprint(classification_report(y,y_pred))\nprint(\"Total number of Transactions classified as Fraudulent: \", mat[1][1]+mat[0][1])\nprint(\"Number of Fraudulent Transactions classified as Non-fraudulent: \", mat[1][0], \"out of 492\")\n\nfrom sklearn.metrics import roc_curve\nfrom matplotlib import pyplot\nfpr, tpr, thresholds = roc_curve(y, y_pred)\npyplot.plot([0, 1], [0, 1], linestyle='--')\n# plot the roc curve for the model\npyplot.plot(fpr, tpr, marker='.')\npyplot.xlabel('False Positive Rate')\npyplot.ylabel('True Positive Rate')\npyplot.title('ROC curve when n = 1000')\n# show the plot\npyplot.show()", "D:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:223: FutureWarning: behaviour=\"old\" is deprecated and will be removed in version 0.22. Please use behaviour=\"new\", which makes the decision_function change to match other anomaly detection algorithm API.\n FutureWarning)\nD:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:250: UserWarning: max_samples (284807) is greater than the total number of samples (1492). max_samples will be set to n_samples for estimation.\n % (self.max_samples, n_samples))\nD:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:417: DeprecationWarning: threshold_ attribute is deprecated in 0.20 and will be removed in 0.22.\n \" be removed in 0.22.\", DeprecationWarning)\n" ], [ "nonfraud_sample = nonfraud.sample(n=45000)\ndf_outlier=pd.concat([fraud,nonfraud_sample])\ndf_outlier.to_csv(\"df_sample.csv\")\nX_train = df_outlier.drop('Class', axis=1)\ny_train = df_outlier['Class']\n\nX_train.to_csv(\"X_outlier.csv\")\ny_train.to_csv(\"y_outlier.csv\")\n\nstate = 1\noutlier_fraction = len(fraud)/float(len(nonfraud_sample))\n\nclf = IsolationForest(max_samples=len(X), contamination = outlier_fraction, random_state = state)\nclf.fit(X_train)\ny_pred = clf.predict(X)\ny_pred[y_pred == 1] = 0\ny_pred[y_pred == -1] = 1\ny_pred = pd.DataFrame(y_pred)\n\nmat = confusion_matrix(y,y_pred)\nprint(mat)\nprint(accuracy_score(y,y_pred))\nprint(classification_report(y,y_pred))\nprint(\"Total number of Transactions classified as Fraudulent: \", mat[1][1]+mat[0][1])\nprint(\"Number of Fraudulent Transactions classified as Non-fraudulent: \", mat[1][0], \"out of 492\")\n\nfrom sklearn.metrics import roc_curve\nfrom matplotlib import pyplot\nfpr, tpr, thresholds = roc_curve(y, y_pred)\npyplot.plot([0, 1], [0, 1], linestyle='--')\n# plot the roc curve for the model\npyplot.plot(fpr, tpr, marker='.')\npyplot.xlabel('False Positive Rate')\npyplot.ylabel('True Positive Rate')\npyplot.title('ROC curve when n = 45000')\n# show the plot\npyplot.show()", "D:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:223: FutureWarning: behaviour=\"old\" is deprecated and will be removed in version 0.22. 
Please use behaviour=\"new\", which makes the decision_function change to match other anomaly detection algorithm API.\n FutureWarning)\nD:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:250: UserWarning: max_samples (284807) is greater than the total number of samples (45492). max_samples will be set to n_samples for estimation.\n % (self.max_samples, n_samples))\nD:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:417: DeprecationWarning: threshold_ attribute is deprecated in 0.20 and will be removed in 0.22.\n \" be removed in 0.22.\", DeprecationWarning)\n" ], [ "nonfraud_sample = nonfraud.sample(n=5000)\ndf_outlier=pd.concat([fraud,nonfraud_sample])\ndf_outlier.to_csv(\"df_sample.csv\")\nX_train = df_outlier.drop('Class', axis=1)\ny_train = df_outlier['Class']\n\nX_train.to_csv(\"X_outlier.csv\")\ny_train.to_csv(\"y_outlier.csv\")\n\nstate = 1\noutlier_fraction = len(fraud)/float(len(nonfraud_sample))\n\nclf = IsolationForest(max_samples=len(X), contamination = outlier_fraction, random_state = state)\nclf.fit(X_train)\ny_pred = clf.predict(X)\ny_pred[y_pred == 1] = 0\ny_pred[y_pred == -1] = 1\ny_pred = pd.DataFrame(y_pred)\n\nmat = confusion_matrix(y,y_pred)\nprint(mat)\nprint(accuracy_score(y,y_pred))\nprint(classification_report(y,y_pred))\nprint(\"Total number of Transactions classified as Fraudulent: \", mat[1][1]+mat[0][1])\nprint(\"Number of Fraudulent Transactions classified as Non-fraudulent: \", mat[1][0], \"out of 492\")\n\nfrom sklearn.metrics import roc_curve\nfrom matplotlib import pyplot\nfpr, tpr, thresholds = roc_curve(y, y_pred)\npyplot.plot([0, 1], [0, 1], linestyle='--')\n# plot the roc curve for the model\npyplot.plot(fpr, tpr, marker='.')\npyplot.xlabel('False Positive Rate')\npyplot.ylabel('True Positive Rate')\npyplot.title('ROC curve when n = 5000')\n# show the plot\npyplot.show()\n\n", "D:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:223: FutureWarning: behaviour=\"old\" is deprecated and will be removed in version 0.22. Please use behaviour=\"new\", which makes the decision_function change to match other anomaly detection algorithm API.\n FutureWarning)\nD:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:250: UserWarning: max_samples (284807) is greater than the total number of samples (5492). max_samples will be set to n_samples for estimation.\n % (self.max_samples, n_samples))\nD:\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\iforest.py:417: DeprecationWarning: threshold_ attribute is deprecated in 0.20 and will be removed in 0.22.\n \" be removed in 0.22.\", DeprecationWarning)\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a9bf7276f5c0e0e2f7091b86cf7da74843fe9a
678,682
ipynb
Jupyter Notebook
ipython/epsilon_1_find_diameter/varyParticleFraction/diameter_to_alpha.ipynb
kolbt/whingdingdilly
4c17b594ebc583750fe7565d6414f08678ea7882
[ "BSD-3-Clause" ]
4
2017-09-04T14:36:57.000Z
2022-03-28T23:24:58.000Z
ipython/epsilon_1_find_diameter/varyParticleFraction/diameter_to_alpha.ipynb
kolbt/whingdingdilly
4c17b594ebc583750fe7565d6414f08678ea7882
[ "BSD-3-Clause" ]
null
null
null
ipython/epsilon_1_find_diameter/varyParticleFraction/diameter_to_alpha.ipynb
kolbt/whingdingdilly
4c17b594ebc583750fe7565d6414f08678ea7882
[ "BSD-3-Clause" ]
null
null
null
249.882916
240,420
0.876512
[ [ [ "# Import libraries\nimport os\nimport sys\nimport numpy as np\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom scipy import stats", "_____no_output_____" ], [ "# Here are my rc parameters for matplotlib\nmpl.rc('font', serif='Helvetica Neue') \nmpl.rcParams.update({'font.size': 9})\nmpl.rcParams['figure.figsize'] = 3.2, 2.8\nmpl.rcParams['figure.dpi'] = 200\nmpl.rcParams['xtick.direction'] = 'in'\nmpl.rcParams['ytick.direction'] = 'in'\nmpl.rcParams['lines.linewidth'] = 0.5\n\n# This link shows you how to greyscale a cmap\n# https://jakevdp.github.io/PythonDataScienceHandbook/04.07-customizing-colorbars.html", "_____no_output_____" ], [ "# First let's find all of our data\nbasePath = '/Users/kolbt/Desktop/compiled/whingdingdilly/ipython/epsilon_1_find_diameter/varyParticleFraction'\ndataPath = basePath + '/diamTxts'\n\n# Go to the correct parent directory\nos.chdir(basePath)\ntxtFiles = os.listdir(dataPath)\nnumFiles = len(txtFiles)", "_____no_output_____" ], [ "# Functions to sort my data with\ndef getFromTxt(fname, first, last):\n \"\"\"Takes a string, text before and after desired text, outs text between\"\"\"\n start = fname.index( first ) + len( first )\n end = fname.index( last, start )\n myTxt = fname[start:end]\n return float(myTxt)\n \ndef varSort(arr):\n \"\"\"Sort an array the slow (but certain) way, returns original indices in sorted order\"\"\"\n # Doing this for alpha\n cpy = np.copy(arr)\n ind = np.arange(0, len(arr))\n for i in xrange(len(cpy)):\n for j in xrange(len(cpy)):\n # Sort by first variable\n if cpy[i] > cpy[j] and i < j:\n # Swap copy array values\n cpy[i], cpy[j] = cpy[j], cpy[i]\n # Swap the corresponding indices\n ind[i], ind[j] = ind[j], ind[i] \n return ind\n\ndef indSort(arr1, arr2):\n \"\"\"Take sorted index array, use to sort array\"\"\"\n # arr1 is array to sort\n # arr2 is index array\n cpy = np.copy(arr1)\n for i in xrange(len(arr1)):\n arr1[i] = cpy[arr2[i]]", "_____no_output_____" ], [ "# You want to load the data in so that it's sorted to begin with\nos.chdir(dataPath)\nxAs = []\nfor i in xrange(numFiles):\n xAs.append(getFromTxt(txtFiles[i], \"_xa\", \".txt\"))\n xAs[i] /= 100.0\n# Now sort the array of txtFile names\nindArr = varSort(xAs)\nindSort(xAs, indArr)\nindSort(txtFiles, indArr)", "_____no_output_____" ], [ "# Read in the data in pandas dataframes\nall_sims = []\nos.chdir(dataPath)\nfor i in xrange(numFiles):\n df = pd.read_csv(txtFiles[i], sep='\\s+', header=0)\n all_sims.append(df)", "_____no_output_____" ], [ "# Make sure all data is chronilogical\ndef chkSort(array):\n \"\"\"Make sure array is chronilogical\"\"\"\n for i in xrange(len(array)-2):\n if array[i] > array[i+1]:\n print(\"{} is not greater than {} for indices=({},{})\").format(array[i+1], array[i], i, i+1)\n return False\n return True\n\n# Check to see if timesteps are in order\nfor i in xrange(numFiles):\n myBool = chkSort(all_sims[i]['Timestep'])\n if myBool is False:\n print(\"{} is not chronilogically sorted!\").format(txtFiles[i])\n exit(1)\n else:\n print(\"{} sorted... \").format(txtFiles[i])", "diam_pa150_pb500_xa0.txt sorted... \ndiam_pa150_pb500_xa10.txt sorted... \ndiam_pa150_pb500_xa20.txt sorted... \ndiam_pa150_pb500_xa30.txt sorted... \ndiam_pa150_pb500_xa40.txt sorted... \ndiam_pa150_pb500_xa50.txt sorted... \ndiam_pa150_pb500_xa60.txt sorted... \ndiam_pa150_pb500_xa70.txt sorted... \ndiam_pa150_pb500_xa80.txt sorted... \ndiam_pa150_pb500_xa90.txt sorted... \ndiam_pa150_pb500_xa100.txt sorted... 
\n" ], [ "display(all_sims[0])", "_____no_output_____" ], [ "# Make sure I haven't messed up my data\n\n\nfig, ax = plt.subplots(1, 4, sharey=True, figsize=(8, 2))\n\nfor i in xrange(numFiles):\n all_xs = np.arange(0, len(all_sims[i]['Timestep']))\n ax[0].plot(all_xs, all_sims[i]['sigALL'], label=xAs[i])\n ax[1].plot(all_xs, all_sims[i]['sigAA'], label=xAs[i])\n ax[2].plot(all_xs, all_sims[i]['sigAB'], label=xAs[i])\n ax[3].plot(all_xs, all_sims[i]['sigBB'], label=xAs[i])\nax[0].set_ylim(0.7, 0.95)\nax[0].set_ylabel(r'Particle Diameter $(\\sigma)$')\n\nax[0].set_title('All interactions')\nax[1].set_title('Slow-slow')\nax[2].set_title('Slow-Fast')\nax[3].set_title('Fast-Fast')\nplt.legend(title=r'$x_{S}$', loc = 4, bbox_to_anchor=(1.75, -0.3))\nplt.show()\n", "_____no_output_____" ], [ "# Now get time-based steady state values\n\n# Make list of steady state column headers\nheaders = list(all_sims[0])\nheaders.remove('Timestep')\nSS = pd.DataFrame(columns=headers)\nstdErr = pd.DataFrame(columns=headers)\nvar = pd.DataFrame(columns=headers)\n# Initialize dataframes\nfor i in xrange(numFiles):\n SS.loc[i] = [0] * len(headers)\n stdErr.loc[i] = [0] * len(headers)\n var.loc[i] = [0] * len(headers)\n \n# Make dataframe of steady-state data\nfor i in xrange(numFiles):\n # Loop through each column (aside from tstep column)\n for j in range(1, len(all_sims[i].iloc[1])):\n # Compute mean of data after steady-state time (25tb) in jth column of ith file\n avg = np.mean(all_sims[i].iloc[-20:-1, j])\n SS[headers[j-1]][i] = avg\n # Compute the standard deviation and variance in this data\n stdError = np.std(all_sims[i].iloc[-20:-1, j])\n stdErr[headers[j-1]][i] = stdError\n var[headers[j-1]][i] = stdError ** 2\n\npd.set_option('display.max_rows', 2)\ndisplay(SS)\ndisplay(stdErr)\ndisplay(var)\n\n# Correct the SS dataframe at the extremes\nfor i in xrange(numFiles):\n if xAs[i] == 0.0: # Issue with slow particle value\n SS['sigAA'][i] = SS['sigALL'][i]\n SS['sigAB'][i] = SS['sigALL'][i]\n if xAs[i] == 1.0: # Issue with fast particle value\n SS['sigBB'][i] = SS['sigALL'][i]\n SS['sigAB'][i] = SS['sigALL'][i]", "_____no_output_____" ], [ "# Now we plot the steady state diameters\nfig, ax = plt.subplots(1, 4, sharey=True, figsize=(8, 2))\n\nfor i in xrange(numFiles):\n ax[0].scatter(xAs[i], SS['sigALL'][i], label=xAs[i])\n ax[1].scatter(xAs[i], SS['sigAA'][i], label=xAs[i])\n ax[2].scatter(xAs[i], SS['sigAB'][i], label=xAs[i])\n ax[3].scatter(xAs[i], SS['sigBB'][i], label=xAs[i])\nax[0].set_ylim(0.7, 0.9)\nax[1].set_ylim(0.7, 0.9)\nax[2].set_ylim(0.7, 0.9)\nax[3].set_ylim(0.7, 0.9)\nax[0].set_ylabel(r'Particle Diameter $(\\sigma)$')\nax[0].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[1].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[2].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[3].set_xlabel(r'Particle Fraction $(x_{S})$')\n\nax[0].set_title('All interactions')\nax[1].set_title('Slow-slow')\nax[2].set_title('Slow-Fast')\nax[3].set_title('Fast-Fast')\nplt.legend(title=r'$x_{S}$', loc = 4, bbox_to_anchor=(1.75, -0.3))\nplt.show()", "_____no_output_____" ], [ "# Now compute the force experienced (F_LJ has to match this at sigma = 1)\ndef sigmaToForce(r):\n '''Take in the distance get out the equilibrium force'''\n epsilon = 1.0\n sigma = 1.0\n experiencedForce = 24.0 * epsilon * ( ((2*sigma**12)/r**13) - ((sigma**6)/r**7) )\n return experiencedForce\n\n# Now we plot the steady state diameters\nfig, ax = plt.subplots(1, 4, sharey=True, figsize=(8, 2))\n\nfor i in xrange(numFiles):\n ax[0].scatter(xAs[i], 
sigmaToForce(SS['sigALL'][i]), label=xAs[i])\n ax[1].scatter(xAs[i], sigmaToForce(SS['sigAA'][i]), label=xAs[i])\n ax[2].scatter(xAs[i], sigmaToForce(SS['sigAB'][i]), label=xAs[i])\n ax[3].scatter(xAs[i], sigmaToForce(SS['sigBB'][i]), label=xAs[i])\n# ax[0].set_ylim(0.99, 1.01)\nax[0].set_ylabel(r'$(F_{Equilibrium})$')\nax[0].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[1].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[2].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[3].set_xlabel(r'Particle Fraction $(x_{S})$')\n\nax[0].set_title('All interactions')\nax[1].set_title('Slow-slow')\nax[2].set_title('Slow-Fast')\nax[3].set_title('Fast-Fast')\nplt.legend(title=r'$Pe_{S}$', loc = 4, bbox_to_anchor=(1.75, -0.1))\nplt.show()", "_____no_output_____" ], [ "# Let's plot this as a function of epsilon\nfig, ax = plt.subplots(1, 4, sharey=True, figsize=(8, 2))\n\n# Let's overlay the monodisperse computation of epsilon\ndef overlayRatio(Fa):\n sigma = 1.0\n ratio = (4 * Fa * sigma / 24.0)\n return ratio\nupper = overlayRatio(500)\nlower = overlayRatio(150)\nfor i in xrange(4):\n ax[i].axhline(y=upper, linestyle='--', lw=2, zorder=0)\n ax[i].axhline(y=lower, linestyle='--', lw=2, zorder=0)\n\nfor i in xrange(numFiles):\n ax[0].scatter(xAs[i], sigmaToForce(SS['sigALL'][i])/24.0, label=xAs[i])\n ax[1].scatter(xAs[i], sigmaToForce(SS['sigAA'][i])/24.0, label=xAs[i])\n ax[2].scatter(xAs[i], sigmaToForce(SS['sigAB'][i])/24.0, label=xAs[i])\n ax[3].scatter(xAs[i], sigmaToForce(SS['sigBB'][i])/24.0, label=xAs[i])\nax[0].set_ylabel(r'$\\epsilon_{required}$')\nax[0].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[1].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[2].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[3].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[0].set_xlim(0, 1)\nax[1].set_xlim(0, 1)\nax[2].set_xlim(0, 1)\nax[3].set_xlim(0, 1)\n\nax[0].set_title('All interactions')\nax[1].set_title('Slow-slow')\nax[2].set_title('Slow-Fast')\nax[3].set_title('Fast-Fast')\n\nplt.legend(title=r'$x_{S}$', loc = 4, bbox_to_anchor=(1.75, -0.29))\nplt.show()", "_____no_output_____" ], [ "# Now let's fit this, linearly\ndef fitDataToLine(xdat, ydat):\n slope, intercept, r_value, p_value, std_err = stats.linregress(xdat, ydat)\n return slope, intercept, r_value, p_value, std_err\n\nALLy = []\nAAy = []\nABy = []\nBBy = []\nfor i in xrange(numFiles):\n ALLy.append(sigmaToForce(SS['sigALL'][i])/24.0)\n AAy.append(sigmaToForce(SS['sigAA'][i])/24.0)\n ABy.append(sigmaToForce(SS['sigAB'][i])/24.0)\n BBy.append(sigmaToForce(SS['sigBB'][i])/24.0)\n\nallFit = fitDataToLine(xAs, ALLy)\nAAFit = fitDataToLine(xAs, AAy)\nABFit = fitDataToLine(xAs, ABy)\nBBFit = fitDataToLine(xAs, BBy)\nprint(\"Predicted coefficient in numerator: {}, intercept: {}\").format(allFit[0] * 24.0, allFit[1])\nprint(\"Predicted coefficient in numerator: {}, intercept: {}\").format(AAFit[0] * 24.0, AAFit[1])\nprint(\"Predicted coefficient in numerator: {}, intercept: {}\").format(ABFit[0] * 24.0, ABFit[1])\nprint(\"Predicted coefficient in numerator: {}, intercept: {}\").format(BBFit[0] * 24.0, BBFit[1])\n", "Predicted coefficient in numerator: -1449.10756312, intercept: 80.9853468423\nPredicted coefficient in numerator: -1448.1341126, intercept: 80.3185094339\nPredicted coefficient in numerator: -1413.65083929, intercept: 80.3946532907\nPredicted coefficient in numerator: -1376.27153449, intercept: 80.5938307965\n" ], [ "# Here's an idea: just weight the ratio evaluated at the extremes by the particle fraction\ndef epsFinder(PeS, PeF, xS):\n if xS > 1.0:\n 
xS /= 100.0\n xF = 1.0 - xS\n sigma = 1.0\n epsBrown = 10.0\n epsNet = (4.0 * ((xS*PeS)+(xF*PeF)) / 24.0) + epsBrown\n# monoFast = (4 * PeF * sigma) / 24.0\n# monoSlow = (4 * PeS * sigma) / 24.0\n# return monoFast * xF + monoSlow * xS\n return epsNet\n\nxvals = np.arange(0, 1.0, 0.001)\nyvals = np.zeros_like(xvals)\nfor i in xrange(len(xvals)):\n yvals[i] = epsFinder(150.0, 500.0, xvals[i])\n\n# Let's evaluate this guess\nfig, ax = plt.subplots(1, 4, sharey=True, figsize=(8, 2))\n\n# Let's overlay the monodisperse computation of epsilon\nupper = overlayRatio(500)\nlower = overlayRatio(150)\nfor i in xrange(4):\n ax[i].axhline(y=upper, linestyle='--', lw=2, zorder=0)\n ax[i].axhline(y=lower, linestyle='--', lw=2, zorder=0)\n\nfor i in xrange(numFiles):\n ax[0].scatter(xAs[i], sigmaToForce(SS['sigALL'][i])/24.0, label=xAs[i])\n ax[1].scatter(xAs[i], sigmaToForce(SS['sigAA'][i])/24.0, label=xAs[i])\n ax[2].scatter(xAs[i], sigmaToForce(SS['sigAB'][i])/24.0, label=xAs[i])\n ax[3].scatter(xAs[i], sigmaToForce(SS['sigBB'][i])/24.0, label=xAs[i])\nax[0].set_ylabel(r'$\\epsilon_{required}$')\nax[0].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[1].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[2].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[3].set_xlabel(r'Particle Fraction $(x_{S})$')\nax[0].set_xlim(0, 1)\nax[1].set_xlim(0, 1)\nax[2].set_xlim(0, 1)\nax[3].set_xlim(0, 1)\n\nax[0].plot(xvals, yvals, lw=2, zorder=0)\nax[1].plot(xvals, yvals, lw=2, zorder=0)\nax[2].plot(xvals, yvals, lw=2, zorder=0)\nax[3].plot(xvals, yvals, lw=2, zorder=0)\n\nax[0].set_title('All interactions')\nax[1].set_title('Slow-slow')\nax[2].set_title('Slow-Fast')\nax[3].set_title('Fast-Fast')\n\nplt.legend(title=r'$x_{S}$', loc = 4, bbox_to_anchor=(1.75, -0.29))\nplt.show()\n ", "_____no_output_____" ], [ "# This means that the best way to do this is with a weighted average\n# epsilon_ALL = (4*PeF*sigma*xF + 4*PeS*sigma*xS) / 24.0", "_____no_output_____" ], [ "# I just need one of the above plots\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7a9d95731bd6233032cf0f06a1acafee50e8ca1
31,322
ipynb
Jupyter Notebook
Poc_ETL.ipynb
charlesLavignasse/POC-Python-ETL
00a3c10de5eae8e2c080699f24624f4d4cff6d6e
[ "CNRI-Python" ]
null
null
null
Poc_ETL.ipynb
charlesLavignasse/POC-Python-ETL
00a3c10de5eae8e2c080699f24624f4d4cff6d6e
[ "CNRI-Python" ]
null
null
null
Poc_ETL.ipynb
charlesLavignasse/POC-Python-ETL
00a3c10de5eae8e2c080699f24624f4d4cff6d6e
[ "CNRI-Python" ]
null
null
null
31.135189
260
0.465168
[ [ [ "# L'objectif de ce notebook sera de réaliser une ébauche de solution ETL en python", "_____no_output_____" ] ], [ [ "#Import des packages necessaires à la réalisation du projet\nimport pyodbc \nimport pyspark\nimport pandas as pd\nimport pandasql as ps\nimport sqlalchemy", "_____no_output_____" ], [ "#creation des variables depuis le fichier de configuration qui doit être placé dans le même dossier que le notebook\nfrom configparser import ConfigParser\n#recuperation de la configuration de la connexion stg_babilou pour la donnee procare\nconfig = ConfigParser()\nconfig.read('config.ini')\ndatabase_procare = config['procare']['database']\nsqlsUrl = config['procare']['host']\nusername = config['procare']['username']\npassword = config['procare']['password']\nport = config['procare']['port']\n\n#recuperation de la configuration de la connexion stg_babilou pour la donnee date\nconfig = ConfigParser()\nconfig.read('config.ini')\ndatabase_date = config['date']['database']\nsqlsUrl = config['date']['host']\nusername = config['date']['username']\npassword = config['date']['password']\nport = config['date']['port']", "_____no_output_____" ] ], [ [ "## Utilisation de pyodbc pour la connexion à la base de donnée SQL SERVER", "_____no_output_____" ] ], [ [ "#String de connexion procare\nconnection_string_procare ='DRIVER={SQL Server Native Client 11.0};SERVER='+sqlsUrl+';DATABASE='+database_procare+';UID='+username+';PWD='+password+';Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;Authentication=ActiveDirectoryIntegrated'\n#String de connexion date\nconnection_string_date ='DRIVER={SQL Server Native Client 11.0};SERVER='+sqlsUrl+';DATABASE='+database_date+';UID='+username+';PWD='+password+';Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;Authentication=ActiveDirectoryIntegrated'", "_____no_output_____" ], [ "#connexion a la base STG_BABILOU pour recuperer les donnees procare\ncnxn_procare:pyodbc.Connection= pyodbc.connect(connection_string_procare)\n\n#connexion a la base DWH_BABILOU pour recuperer les donnees de dates\ncnxn_date:pyodbc.Connection= pyodbc.connect(connection_string_date)", "_____no_output_____" ], [ "#Requetes pour interroger les bases DW_BABILOU et STG_BABILOU\n\n#select_query= 'Select * from STG_PROCARE_AR_SCHEDULE'\n\nselect_procare_query = '''\nWITH SCHOOL AS\n(SELECT \n\t SCO.[SchoolID]\n ,SCO.[Code]\n ,SCO.[SchoolName]\n\t ,SCO.[Database]\n\t ,CHI.[ChildSchoolID]\n\t ,CHI.PersonID\n FROM [dbo].[STG_PROCARE_G_SCHOOLS] SCO\n INNER JOIN STG_PROCARE_AR_CHILDSCHOOL CHI\n\tON (SCO.SchoolID = CHI.SchoolID))\n\nSELECT \nSCH.ScheduleKeyID\n,SCO.PersonID\n,SCO.SchoolID\n,SCH.[Database]\n,(SCD.OutMinute-SCD.InMinute)/60.00 AS HoursWorked\n,SCH.StartAppliesTo\n,SCH.EndAppliesTo\n,SCD.DayNumber\n,ENR.StartDate\n,ENR.EndDate\n,SCH.ScheduleID\n,GETDATE() AS ExtractDate \n\nFROM STG_PROCARE_AR_SCHEDULE SCH\nINNER JOIN STG_PROCARE_AR_SCHEDULEDETAIL SCD\n\tON (SCH.ScheduleKeyID=SCD.ScheduleKeyID\n\t\tand SCH.[Database]=SCD.[Database])\nINNER JOIN STG_PROCARE_G_TYPESTABLE TYP\n\tON (SCH.ChildSchoolID=TYP.TypeID\n\t\tand SCH.[Database]=TYP.[Database])\nINNER JOIN STG_PROCARE_AR_ENROLLMENT ENR\n\tON (SCH.ChildSchoolID=ENR.ChildSchoolID\n\t\tand SCH.[Database]=ENR.[Database])\n/*INNER JOIN STG_PROCARE_G_SCHOOLS SCO\n\tON (SCH.ChildSchoolID=SCO.SchoolID\n\t\tand SCH.[Database]=SCO.[Database]) */\nINNER JOIN SCHOOL SCO\n\tON (SCH.ChildSchoolID=SCO.ChildSchoolID)\n'''\n\nselect_date_query = '''\nSELECT \n [DT_DATE]\n ,[DAY_OF_WEEK]\n ,[LB_DAY_NAME]\n FROM [dbo].[D_TIME]\n 
'''", "_____no_output_____" ], [ "# Creation du dataframe procare\ndata_procare = pd.read_sql(select_procare_query,cnxn_procare)\n\n# Creation du dataframe date\ndata_date = pd.read_sql(select_date_query,cnxn_date)\n\n#fermeture des connexions\ncnxn_procare.close()\ncnxn_date.close()", "_____no_output_____" ] ], [ [ "## Utilisation des packages pandas et pandasql pour respecter les règles de gestion", "_____no_output_____" ], [ "* On cree la query qui sert à faire la jointure entre les deux dataframes precedemment crees\n* On fait une jointure glissante sur les dates de début et de fin de contrats\n* Le filtre nous permet de récupérer les données cohérentes (le premier jour de la semaine est forcément lundi)", "_____no_output_____" ] ], [ [ "psql_join_query ='''\nSELECT * FROM data_procare pro\nINNER JOIN data_date DAT\nON DAT.DT_DATE BETWEEN PRO.StartAppliesTo AND PRO.EndAppliesTo\nWHERE (PRO.DayNumber = 1 AND DAT.LB_DAY_NAME ='LUNDI')\nOR (PRO.DayNumber = 2 AND DAT.LB_DAY_NAME ='MARDI')\nOR (PRO.DayNumber = 3 AND DAT.LB_DAY_NAME ='MERCREDI')\nOR (PRO.DayNumber = 4 AND DAT.LB_DAY_NAME ='JEUDI')\nOR (PRO.DayNumber = 5 AND DAT.LB_DAY_NAME ='VENDREDI')\nOR (PRO.DayNumber = 6 AND DAT.LB_DAY_NAME ='SAMEDI')\nOR (PRO.DayNumber = 7 AND DAT.LB_DAY_NAME ='DIMANCHE')\n;\n'''", "_____no_output_____" ] ], [ [ "* Execution de la query via pandasql pui affichage des 5 premières lignes", "_____no_output_____" ] ], [ [ "df_psql = ps.sqldf(psql_join_query)\ndf_psql.head(5)", "_____no_output_____" ] ], [ [ "#### On recupere les donnees etp dans un dataframe pour les rajouter dans le dataframe final", "_____no_output_____" ] ], [ [ "df_etp = pd.read_csv('etp.csv',sep=';')", "_____no_output_____" ] ], [ [ "* Conversion des donnees en decimal", "_____no_output_____" ] ], [ [ "df_etp_int = df_etp.astype('float64')", "_____no_output_____" ], [ "sql_etp_query = '''\nSELECT \nPRO.*,\nETP.[Correspondance ETP] \nFROM df_psql PRO\nINNER JOIN df_etp ETP\nON PRO.HoursWorked BETWEEN ETP.MinDailyHour AND ETP.MaxDailyHour\n'''", "_____no_output_____" ], [ "df_join_etp = ps.sqldf(sql_etp_query)\ndf_join_etp.head(2)", "_____no_output_____" ] ], [ [ "## Creation de la table dans SQL Server via Pyodbc", "_____no_output_____" ] ], [ [ "#on établit une nouvelle connexion avec la base DW_BABILOU\nconnexion_dwh = pyodbc.connect(connection_string_date)\ndwh_crusor = connexion_dwh.cursor()", "_____no_output_____" ], [ "#Table déjà installée\n# dwh_crusor.execute('''\n# CREATE TABLE [dbo].PROCARE_ETP(\n# Jour DATE,\n# DayNumber INT,\n# [Database] VARCHAR(40),\n# PersonID INT,\n# SchoolID INT,\n# ETP DECIMAL (3,2),\n# ExtractDate DATE\n# );\n# ''')\n#connexion_dwh.commit() #sert à confirmer les changements dans la base", "_____no_output_____" ] ], [ [ "#### on efface les données de la table dont l'extractdate est à la date du jour ", "_____no_output_____" ] ], [ [ "dwh_crusor.execute('''DELETE FROM PROCARE_ETP WHERE ExtractDate = CONVERT (date, GETDATE())''')\nconnexion_dwh.commit()", "_____no_output_____" ] ], [ [ "## Insertion des donnees dans la table PROCATE_ETP nouvellement creee", "_____no_output_____" ] ], [ [ "# On créé un nouveau DataFrame à l'image de la table finale\ndf_insert_procareETP = ps.sqldf('''SELECT \ndate(DT_DATE) AS jour,\nDayNumber,\n[Database],\nPersonID,\nSchoolID,\n[Correspondance ETP] AS ETP,\ndate(ExtractDate) as ExtractDate\nFROM df_join_etp''')", "_____no_output_____" ], [ "df_insert_procareETP.head(2)", "_____no_output_____" ] ], [ [ "* quelques tests de connexion infructeux", "_____no_output_____" ] 
], [ [ "for index,row in df_insert_procareETP.iterrows():\n dwh_crusor.execute('''INSERT INTO PROCATE_ETP(\n [jour],\n [DayNumber],\n [Database],\n PersonID,\n SchoolID,\n ETP,\n ExtractDate) \n values (?,?,?,?,?,?,?)''', \n row['jour'], \n row['DayNumber'], \n row['Database'],\n row['PersonID'],\n row['SchoolID'],\n row['ETP'],\n row['ExtractDate'])", "_____no_output_____" ], [ "df_insert_procareETP.to_sql('PROCARE_ETP',connexion_dwh,)", "_____no_output_____" ], [ "dwh_crusor.execute('''\nINSERT INTO [dbo].PROCATE_ETP(\njour,\nDayNumber,\n[Database],\nPersonID,\nSchoolID,\nETP,\nExtractDate)\n\nSELECT \nCAST(DT_DATE AS DATE) AS JOUR,\nDayNumber,\n[Database],\nPersonID,\nSchoolID,\n[Correspondance ETP] AS ETP,\nExtractDate\nFROM df_join_etp\n''')", "_____no_output_____" ], [ "connexion_dwh.commit() #sert à enregistrer les modifications dans la base", "_____no_output_____" ] ], [ [ "## Script d'insertion dans la table, pandas.to_sql et sqlachemy", "_____no_output_____" ], [ "* On cree les informations de connexion à la table en utilisant le connexion string utilise precedemment dans pyodbc", "_____no_output_____" ] ], [ [ "from sqlalchemy.engine import URL,create_engine\n\nconnection_url = URL.create(\"mssql+pyodbc\", query={\"odbc_connect\": connection_string_date})\n\nengine = create_engine(connection_url,fast_executemany=True)", "_____no_output_____" ], [ "#https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pyodbc\n#https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html\n#df_insert_procareETP_reduced = df_insert_procareETP.head(500)\ndf_insert_procareETP.to_sql('PROCARE_ETP',engine,if_exists='append',index=False,chunksize=1000)", "_____no_output_____" ], [ "#vérification de l'insertion des lignes \npd.read_sql('PROCARE_ETP',engine)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
e7a9e89b80a3a3eb9f1cf3212be85ace9e182099
20,076
ipynb
Jupyter Notebook
04_Evaluation_Methods/06_Classification_Report/.ipynb_checkpoints/Classification_Report-checkpoint.ipynb
CrispenGari/keras-api
948b47500814de17ec67ca093976758736331095
[ "MIT" ]
3
2021-11-08T07:37:10.000Z
2021-11-09T11:01:24.000Z
04_Evaluation_Methods/06_Classification_Report/Classification_Report.ipynb
CrispenGari/keras-api
948b47500814de17ec67ca093976758736331095
[ "MIT" ]
null
null
null
04_Evaluation_Methods/06_Classification_Report/Classification_Report.ipynb
CrispenGari/keras-api
948b47500814de17ec67ca093976758736331095
[ "MIT" ]
null
null
null
138.455172
16,580
0.889619
[ [ [ "### Classification report\n* A Classification report is used to measure the quality of predictions from a classification algorithm. ... The report shows the main classification metrics precision, recall and f1-score on a per-class basis. The metrics are calculated by using true and false positives, true and false negatives.\n\n```\nsklearn.metrics.classification_report()\n```", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom sklearn.metrics import classification_report\nfrom matplotlib import pyplot as plt", "_____no_output_____" ], [ "y_true = np.array([1., 0., 1, 1, 0, 0, 1])\ny_pred = np.array([1., 1., 1., 0., 0. ,1, 0])", "_____no_output_____" ] ], [ [ "### Using `scikit-learn` to generate the classification report for our predictions\n ", "_____no_output_____" ] ], [ [ "labels = np.array([0., 1])\nreport = classification_report(y_true, y_pred, labels=labels)\nprint(report)", " precision recall f1-score support\n\n 0.0 0.33 0.33 0.33 3\n 1.0 0.50 0.50 0.50 4\n\n accuracy 0.43 7\n macro avg 0.42 0.42 0.42 7\nweighted avg 0.43 0.43 0.43 7\n\n" ] ], [ [ "### Plotting the ``classification_report``", "_____no_output_____" ] ], [ [ "import numpy as np\nimport seaborn as sns\nfrom sklearn.metrics import classification_report\nimport pandas as pd\nreport = classification_report(y_true, y_pred, labels=labels, output_dict=True)\nsns.heatmap(pd.DataFrame(report).iloc[:-1, :].T, annot=True)\nplt.title(\"Classification Report\")\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7a9ef0d85bfb61fd1d6e969d5fcc4d65bc8ff93
19,799
ipynb
Jupyter Notebook
valid.ipynb
xiaomanai/Average-Pooling-Layer-for-Chinese-Summary
0346ba13c2d1464a451e2b2589eda3f57c270ca5
[ "MIT" ]
1
2021-06-14T23:20:27.000Z
2021-06-14T23:20:27.000Z
valid.ipynb
xiaomanai/Average-Pooling-Layer-for-Chinese-Summary
0346ba13c2d1464a451e2b2589eda3f57c270ca5
[ "MIT" ]
null
null
null
valid.ipynb
xiaomanai/Average-Pooling-Layer-for-Chinese-Summary
0346ba13c2d1464a451e2b2589eda3f57c270ca5
[ "MIT" ]
null
null
null
54.392857
1,212
0.619122
[ [ [ "from __future__ import division\n\nimport argparse\nimport os\nfrom others.logging import init_logger\nfrom train_abstractive import validate_abs, train_abs, baseline, test_abs, test_text_abs\nfrom train_extractive import train_ext, validate_ext, test_ext\n\nmodel_flags = ['hidden_size', 'ff_size', 'heads', 'emb_size', 'enc_layers', 'enc_hidden_size', 'enc_ff_size',\n 'dec_layers', 'dec_hidden_size', 'dec_ff_size', 'encoder', 'ff_actv', 'use_interval']\n\n\ndef str2bool(v):\n if v.lower() in ('yes', 'true', 't', 'y', '1'):\n return True\n elif v.lower() in ('no', 'false', 'f', 'n', '0'):\n return False\n else:\n raise argparse.ArgumentTypeError('Boolean value expected.')", "_____no_output_____" ], [ "if __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-task\", default='abs', type=str, choices=['ext', 'abs'])\n parser.add_argument(\"-encoder\", default='bert', type=str, choices=['bert', 'baseline'])\n parser.add_argument(\"-mode\", default='test', type=str, choices=['train', 'validate', 'test'])\n parser.add_argument(\"-bert_data_path\", default='./bert.pt_data/data')\n parser.add_argument(\"-model_path\", default='./models/')\n parser.add_argument(\"-result_path\", default='./results/cnndm')\n parser.add_argument(\"-temp_dir\", default='./temp')\n\n parser.add_argument(\"-batch_size\", default=140, type=int)\n parser.add_argument(\"-test_batch_size\", default=200, type=int)\n\n parser.add_argument(\"-max_pos\", default=512, type=int)\n parser.add_argument(\"-use_interval\", type=str2bool, nargs='?',const=True,default=True)\n parser.add_argument(\"-large\", type=str2bool, nargs='?',const=True,default=False)\n parser.add_argument(\"-load_from_extractive\", default='', type=str)\n\n parser.add_argument(\"-sep_optim\", type=str2bool, nargs='?',const=True,default=False)\n parser.add_argument(\"-lr_bert\", default=2e-3, type=float)\n parser.add_argument(\"-lr_dec\", default=2e-3, type=float)\n parser.add_argument(\"-use_bert_emb\", type=str2bool, nargs='?',const=True,default=False)\n\n parser.add_argument(\"-share_emb\", type=str2bool, nargs='?', const=True, default=False)\n parser.add_argument(\"-finetune_bert\", type=str2bool, nargs='?', const=True, default=True)\n parser.add_argument(\"-dec_dropout\", default=0.2, type=float)\n parser.add_argument(\"-dec_layers\", default=6, type=int)\n parser.add_argument(\"-dec_hidden_size\", default=768, type=int)\n parser.add_argument(\"-dec_heads\", default=8, type=int)\n parser.add_argument(\"-dec_ff_size\", default=2048, type=int)\n parser.add_argument(\"-enc_hidden_size\", default=512, type=int)\n parser.add_argument(\"-enc_ff_size\", default=512, type=int)\n parser.add_argument(\"-enc_dropout\", default=0.2, type=float)\n parser.add_argument(\"-enc_layers\", default=6, type=int)\n\n # params for EXT\n parser.add_argument(\"-ext_dropout\", default=0.2, type=float)\n parser.add_argument(\"-ext_layers\", default=2, type=int)\n parser.add_argument(\"-ext_hidden_size\", default=768, type=int)\n parser.add_argument(\"-ext_heads\", default=8, type=int)\n parser.add_argument(\"-ext_ff_size\", default=2048, type=int)\n\n parser.add_argument(\"-label_smoothing\", default=0.1, type=float)\n parser.add_argument(\"-generator_shard_size\", default=32, type=int)\n parser.add_argument(\"-alpha\", default=0.9, type=float)\n parser.add_argument(\"-beam_size\", default=5, type=int)\n parser.add_argument(\"-min_length\", default=15, type=int)\n parser.add_argument(\"-max_length\", default=150, type=int)\n 
parser.add_argument(\"-max_tgt_len\", default=140, type=int)\n\n\n\n parser.add_argument(\"-param_init\", default=0, type=float)\n parser.add_argument(\"-param_init_glorot\", type=str2bool, nargs='?',const=True,default=True)\n parser.add_argument(\"-optim\", default='adam', type=str)\n parser.add_argument(\"-lr\", default=1, type=float)\n parser.add_argument(\"-beta1\", default= 0.9, type=float)\n parser.add_argument(\"-beta2\", default=0.999, type=float)\n parser.add_argument(\"-warmup_steps\", default=8000, type=int)\n parser.add_argument(\"-warmup_steps_bert\", default=8000, type=int)\n parser.add_argument(\"-warmup_steps_dec\", default=8000, type=int)\n parser.add_argument(\"-max_grad_norm\", default=0, type=float)\n\n parser.add_argument(\"-save_checkpoint_steps\", default=5, type=int)\n parser.add_argument(\"-accum_count\", default=1, type=int)\n parser.add_argument(\"-report_every\", default=1, type=int)\n parser.add_argument(\"-train_steps\", default=200, type=int)\n parser.add_argument(\"-recall_eval\", type=str2bool, nargs='?',const=True,default=False)\n\n\n parser.add_argument('-visible_gpus', default='-1', type=str)\n parser.add_argument('-gpu_ranks', default='0', type=str)\n parser.add_argument('-log_file', default='./logs/cnndm.log')\n parser.add_argument('-seed', default=666, type=int)\n\n parser.add_argument(\"-test_all\", type=str2bool, nargs='?',const=True,default=False)\n parser.add_argument(\"-test_from\", default='./test_from/model_step_200.pt')\n parser.add_argument(\"-test_start_from\", default=-1, type=int)\n\n parser.add_argument(\"-train_from\", default='')\n parser.add_argument(\"-report_rouge\", type=str2bool, nargs='?',const=True,default=True)\n parser.add_argument(\"-block_trigram\", type=str2bool, nargs='?', const=True, default=True)\n\n args = args = parser.parse_args(args=['-task', 'abs'])\n args.gpu_ranks = [int(i) for i in range(len(args.visible_gpus.split(',')))]\n args.world_size = len(args.gpu_ranks)\n os.environ[\"CUDA_VISIBLE_DEVICES\"] = args.visible_gpus\n\n init_logger(args.log_file)\n device = \"cpu\" if args.visible_gpus == '-1' else \"cuda\"\n device_id = 0 if device == \"cuda\" else -1\n\n if (args.task == 'abs'):\n if (args.mode == 'train'):\n train_abs(args, device_id)\n elif (args.mode == 'validate'):\n validate_abs(args, device_id)\n elif (args.mode == 'lead'):\n baseline(args, cal_lead=True)\n elif (args.mode == 'oracle'):\n baseline(args, cal_oracle=True)\n if (args.mode == 'test'):\n cp = args.test_from\n try:\n step = int(cp.split('.')[-2].split('_')[-1])\n except:\n step = 0\n test_abs(args, device_id, cp, step)\n elif (args.mode == 'test_text'):\n cp = args.test_from\n try:\n step = int(cp.split('.')[-2].split('_')[-1])\n except:\n step = 0\n test_text_abs(args, device_id, cp, step)\n\n elif (args.task == 'ext'):\n if (args.mode == 'train'):\n train_ext(args, device_id)\n elif (args.mode == 'validate'):\n validate_ext(args, device_id)\n if (args.mode == 'test'):\n cp = args.test_from\n try:\n step = int(cp.split('.')[-2].split('_')[-1])\n except:\n step = 0\n test_ext(args, device_id, cp, step)\n elif (args.mode == 'test_text'):\n cp = args.test_from\n try:\n step = int(cp.split('.')[-2].split('_')[-1])\n except:\n step = 0\n test_text_abs(args, device_id, cp, step)\n init_logger(args.log_file) \n test_abs(args, device_id, cp, step)", "[2020-06-21 10:21:12,660 INFO] Loading checkpoint from ./test_from/model_step_200.pt\n" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
e7a9fd5d5795af12ef938b867248222023edb8f3
58,023
ipynb
Jupyter Notebook
sem20-synchronizing/synchronizing.ipynb
Disadvantaged/caos_2019-2020
135a0af9608ce4019b00db273b1204561093c743
[ "MIT" ]
32
2019-10-04T20:02:32.000Z
2020-07-21T17:18:06.000Z
sem20-synchronizing/synchronizing.ipynb
Disadvantaged/caos_2019-2020
135a0af9608ce4019b00db273b1204561093c743
[ "MIT" ]
2
2020-04-05T12:52:13.000Z
2020-05-04T23:42:02.000Z
sem20-synchronizing/synchronizing.ipynb
Disadvantaged/caos_2019-2020
135a0af9608ce4019b00db273b1204561093c743
[ "MIT" ]
10
2020-09-03T17:25:42.000Z
2022-02-18T23:36:51.000Z
47.058394
15,438
0.536494
[ [ [ "# look at tools/set_up_magics.ipynb\nyandex_metrica_allowed = True ; get_ipython().run_cell('# one_liner_str\\n\\nget_ipython().run_cell_magic(\\'javascript\\', \\'\\', \\'// setup cpp code highlighting\\\\nIPython.CodeCell.options_default.highlight_modes[\"text/x-c++src\"] = {\\\\\\'reg\\\\\\':[/^%%cpp/]} ;\\')\\n\\n# creating magics\\nfrom IPython.core.magic import register_cell_magic, register_line_magic\\nfrom IPython.display import display, Markdown, HTML\\nimport argparse\\nfrom subprocess import Popen, PIPE\\nimport random\\nimport sys\\nimport os\\nimport re\\nimport signal\\nimport shutil\\nimport shlex\\nimport glob\\n\\n@register_cell_magic\\ndef save_file(args_str, cell, line_comment_start=\"#\"):\\n parser = argparse.ArgumentParser()\\n parser.add_argument(\"fname\")\\n parser.add_argument(\"--ejudge-style\", action=\"store_true\")\\n args = parser.parse_args(args_str.split())\\n \\n cell = cell if cell[-1] == \\'\\\\n\\' or args.no_eof_newline else cell + \"\\\\n\"\\n cmds = []\\n with open(args.fname, \"w\") as f:\\n f.write(line_comment_start + \" %%cpp \" + args_str + \"\\\\n\")\\n for line in cell.split(\"\\\\n\"):\\n line_to_write = (line if not args.ejudge_style else line.rstrip()) + \"\\\\n\"\\n if line.startswith(\"%\"):\\n run_prefix = \"%run \"\\n if line.startswith(run_prefix):\\n cmds.append(line[len(run_prefix):].strip())\\n f.write(line_comment_start + \" \" + line_to_write)\\n continue\\n run_prefix = \"%# \"\\n if line.startswith(run_prefix):\\n f.write(line_comment_start + \" \" + line_to_write)\\n continue\\n raise Exception(\"Unknown %%save_file subcommand: \\'%s\\'\" % line)\\n else:\\n f.write(line_to_write)\\n f.write(\"\" if not args.ejudge_style else line_comment_start + r\" line without \\\\n\")\\n for cmd in cmds:\\n display(Markdown(\"Run: `%s`\" % cmd))\\n get_ipython().system(cmd)\\n\\n@register_cell_magic\\ndef cpp(fname, cell):\\n save_file(fname, cell, \"//\")\\n\\n@register_cell_magic\\ndef asm(fname, cell):\\n save_file(fname, cell, \"//\")\\n \\n@register_cell_magic\\ndef makefile(fname, cell):\\n assert not fname\\n save_file(\"makefile\", cell.replace(\" \" * 4, \"\\\\t\"))\\n \\n@register_line_magic\\ndef p(line):\\n try:\\n expr, comment = line.split(\" #\")\\n display(Markdown(\"`{} = {}` # {}\".format(expr.strip(), eval(expr), comment.strip())))\\n except:\\n display(Markdown(\"{} = {}\".format(line, eval(line))))\\n \\ndef show_file(file, clear_at_begin=True, return_html_string=False):\\n if clear_at_begin:\\n get_ipython().system(\"truncate --size 0 \" + file)\\n obj = file.replace(\\'.\\', \\'_\\').replace(\\'/\\', \\'_\\') + \"_obj\"\\n html_string = \\'\\'\\'\\n <!--MD_BEGIN_FILTER-->\\n <script type=text/javascript>\\n var entrance___OBJ__ = 0;\\n var errors___OBJ__ = 0;\\n function refresh__OBJ__()\\n {\\n entrance___OBJ__ -= 1;\\n var elem = document.getElementById(\"__OBJ__\");\\n if (elem) {\\n var xmlhttp=new XMLHttpRequest();\\n xmlhttp.onreadystatechange=function()\\n {\\n var elem = document.getElementById(\"__OBJ__\");\\n console.log(!!elem, xmlhttp.readyState, xmlhttp.status, entrance___OBJ__);\\n if (elem && xmlhttp.readyState==4) {\\n if (xmlhttp.status==200)\\n {\\n errors___OBJ__ = 0;\\n if (!entrance___OBJ__) {\\n elem.innerText = xmlhttp.responseText;\\n entrance___OBJ__ += 1;\\n console.log(\"req\");\\n window.setTimeout(\"refresh__OBJ__()\", 300); \\n }\\n return xmlhttp.responseText;\\n } else {\\n errors___OBJ__ += 1;\\n if (errors___OBJ__ < 10 && !entrance___OBJ__) {\\n entrance___OBJ__ += 1;\\n 
console.log(\"req\");\\n window.setTimeout(\"refresh__OBJ__()\", 300); \\n }\\n }\\n }\\n }\\n xmlhttp.open(\"GET\", \"__FILE__\", true);\\n xmlhttp.setRequestHeader(\"Cache-Control\", \"no-cache\");\\n xmlhttp.send(); \\n }\\n }\\n \\n if (!entrance___OBJ__) {\\n entrance___OBJ__ += 1;\\n refresh__OBJ__(); \\n }\\n </script>\\n \\n <font color=\"white\"> <tt>\\n <p id=\"__OBJ__\" style=\"font-size: 16px; border:3px #333333 solid; background: #333333; border-radius: 10px; padding: 10px; \"></p>\\n </tt> </font>\\n <!--MD_END_FILTER-->\\n <!--MD_FROM_FILE __FILE__ -->\\n \\'\\'\\'.replace(\"__OBJ__\", obj).replace(\"__FILE__\", file)\\n if return_html_string:\\n return html_string\\n display(HTML(html_string))\\n \\nBASH_POPEN_TMP_DIR = \"./bash_popen_tmp\"\\n \\ndef bash_popen_terminate_all():\\n for p in globals().get(\"bash_popen_list\", []):\\n print(\"Terminate pid=\" + str(p.pid), file=sys.stderr)\\n p.terminate()\\n globals()[\"bash_popen_list\"] = []\\n if os.path.exists(BASH_POPEN_TMP_DIR):\\n shutil.rmtree(BASH_POPEN_TMP_DIR)\\n\\nbash_popen_terminate_all() \\n\\ndef bash_popen(cmd):\\n if not os.path.exists(BASH_POPEN_TMP_DIR):\\n os.mkdir(BASH_POPEN_TMP_DIR)\\n h = os.path.join(BASH_POPEN_TMP_DIR, str(random.randint(0, 1e18)))\\n stdout_file = h + \".out.html\"\\n stderr_file = h + \".err.html\"\\n run_log_file = h + \".fin.html\"\\n \\n stdout = open(stdout_file, \"wb\")\\n stdout = open(stderr_file, \"wb\")\\n \\n html = \"\"\"\\n <table width=\"100%\">\\n <colgroup>\\n <col span=\"1\" style=\"width: 70px;\">\\n <col span=\"1\">\\n </colgroup> \\n <tbody>\\n <tr> <td><b>STDOUT</b></td> <td> {stdout} </td> </tr>\\n <tr> <td><b>STDERR</b></td> <td> {stderr} </td> </tr>\\n <tr> <td><b>RUN LOG</b></td> <td> {run_log} </td> </tr>\\n </tbody>\\n </table>\\n \"\"\".format(\\n stdout=show_file(stdout_file, return_html_string=True),\\n stderr=show_file(stderr_file, return_html_string=True),\\n run_log=show_file(run_log_file, return_html_string=True),\\n )\\n \\n cmd = \"\"\"\\n bash -c {cmd} &\\n pid=$!\\n echo \"Process started! pid=${{pid}}\" > {run_log_file}\\n wait ${{pid}}\\n echo \"Process finished! 
exit_code=$?\" >> {run_log_file}\\n \"\"\".format(cmd=shlex.quote(cmd), run_log_file=run_log_file)\\n # print(cmd)\\n display(HTML(html))\\n \\n p = Popen([\"bash\", \"-c\", cmd], stdin=PIPE, stdout=stdout, stderr=stdout)\\n \\n bash_popen_list.append(p)\\n return p\\n\\n\\n@register_line_magic\\ndef bash_async(line):\\n bash_popen(line)\\n \\n \\ndef show_log_file(file, return_html_string=False):\\n obj = file.replace(\\'.\\', \\'_\\').replace(\\'/\\', \\'_\\') + \"_obj\"\\n html_string = \\'\\'\\'\\n <!--MD_BEGIN_FILTER-->\\n <script type=text/javascript>\\n var entrance___OBJ__ = 0;\\n var errors___OBJ__ = 0;\\n function halt__OBJ__(elem, color)\\n {\\n elem.setAttribute(\"style\", \"font-size: 14px; background: \" + color + \"; padding: 10px; border: 3px; border-radius: 5px; color: white; \"); \\n }\\n function refresh__OBJ__()\\n {\\n entrance___OBJ__ -= 1;\\n if (entrance___OBJ__ < 0) {\\n entrance___OBJ__ = 0;\\n }\\n var elem = document.getElementById(\"__OBJ__\");\\n if (elem) {\\n var xmlhttp=new XMLHttpRequest();\\n xmlhttp.onreadystatechange=function()\\n {\\n var elem = document.getElementById(\"__OBJ__\");\\n console.log(!!elem, xmlhttp.readyState, xmlhttp.status, entrance___OBJ__);\\n if (elem && xmlhttp.readyState==4) {\\n if (xmlhttp.status==200)\\n {\\n errors___OBJ__ = 0;\\n if (!entrance___OBJ__) {\\n if (elem.innerHTML != xmlhttp.responseText) {\\n elem.innerHTML = xmlhttp.responseText;\\n }\\n if (elem.innerHTML.includes(\"Process finished.\")) {\\n halt__OBJ__(elem, \"#333333\");\\n } else {\\n entrance___OBJ__ += 1;\\n console.log(\"req\");\\n window.setTimeout(\"refresh__OBJ__()\", 300); \\n }\\n }\\n return xmlhttp.responseText;\\n } else {\\n errors___OBJ__ += 1;\\n if (!entrance___OBJ__) {\\n if (errors___OBJ__ < 6) {\\n entrance___OBJ__ += 1;\\n console.log(\"req\");\\n window.setTimeout(\"refresh__OBJ__()\", 300); \\n } else {\\n halt__OBJ__(elem, \"#994444\");\\n }\\n }\\n }\\n }\\n }\\n xmlhttp.open(\"GET\", \"__FILE__\", true);\\n xmlhttp.setRequestHeader(\"Cache-Control\", \"no-cache\");\\n xmlhttp.send(); \\n }\\n }\\n \\n if (!entrance___OBJ__) {\\n entrance___OBJ__ += 1;\\n refresh__OBJ__(); \\n }\\n </script>\\n\\n <p id=\"__OBJ__\" style=\"font-size: 14px; background: #000000; padding: 10px; border: 3px; border-radius: 5px; color: white; \">\\n </p>\\n \\n </font>\\n <!--MD_END_FILTER-->\\n <!--MD_FROM_FILE __FILE__.md -->\\n \\'\\'\\'.replace(\"__OBJ__\", obj).replace(\"__FILE__\", file)\\n if return_html_string:\\n return html_string\\n display(HTML(html_string))\\n\\n \\nclass TInteractiveLauncher:\\n tmp_path = \"./interactive_launcher_tmp\"\\n def __init__(self, cmd):\\n try:\\n os.mkdir(TInteractiveLauncher.tmp_path)\\n except:\\n pass\\n name = str(random.randint(0, 1e18))\\n self.inq_path = os.path.join(TInteractiveLauncher.tmp_path, name + \".inq\")\\n self.log_path = os.path.join(TInteractiveLauncher.tmp_path, name + \".log\")\\n \\n os.mkfifo(self.inq_path)\\n open(self.log_path, \\'w\\').close()\\n open(self.log_path + \".md\", \\'w\\').close()\\n\\n self.pid = os.fork()\\n if self.pid == -1:\\n print(\"Error\")\\n if self.pid == 0:\\n exe_cands = glob.glob(\"../tools/launcher.py\") + glob.glob(\"../../tools/launcher.py\")\\n assert(len(exe_cands) == 1)\\n assert(os.execvp(\"python3\", [\"python3\", exe_cands[0], \"-l\", self.log_path, \"-i\", self.inq_path, \"-c\", cmd]) == 0)\\n self.inq_f = open(self.inq_path, \"w\")\\n interactive_launcher_opened_set.add(self.pid)\\n show_log_file(self.log_path)\\n\\n def write(self, s):\\n s = 
s.encode()\\n assert len(s) == os.write(self.inq_f.fileno(), s)\\n \\n def get_pid(self):\\n n = 100\\n for i in range(n):\\n try:\\n return int(re.findall(r\"PID = (\\\\d+)\", open(self.log_path).readline())[0])\\n except:\\n if i + 1 == n:\\n raise\\n time.sleep(0.1)\\n \\n def input_queue_path(self):\\n return self.inq_path\\n \\n def close(self):\\n self.inq_f.close()\\n os.waitpid(self.pid, 0)\\n os.remove(self.inq_path)\\n # os.remove(self.log_path)\\n self.inq_path = None\\n self.log_path = None \\n interactive_launcher_opened_set.remove(self.pid)\\n self.pid = None\\n \\n @staticmethod\\n def terminate_all():\\n if \"interactive_launcher_opened_set\" not in globals():\\n globals()[\"interactive_launcher_opened_set\"] = set()\\n global interactive_launcher_opened_set\\n for pid in interactive_launcher_opened_set:\\n print(\"Terminate pid=\" + str(pid), file=sys.stderr)\\n os.kill(pid, signal.SIGKILL)\\n os.waitpid(pid, 0)\\n interactive_launcher_opened_set = set()\\n if os.path.exists(TInteractiveLauncher.tmp_path):\\n shutil.rmtree(TInteractiveLauncher.tmp_path)\\n \\nTInteractiveLauncher.terminate_all()\\n \\nyandex_metrica_allowed = bool(globals().get(\"yandex_metrica_allowed\", False))\\nif yandex_metrica_allowed:\\n display(HTML(\\'\\'\\'<!-- YANDEX_METRICA_BEGIN -->\\n <script type=\"text/javascript\" >\\n (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)};\\n m[i].l=1*new Date();k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)})\\n (window, document, \"script\", \"https://mc.yandex.ru/metrika/tag.js\", \"ym\");\\n\\n ym(59260609, \"init\", {\\n clickmap:true,\\n trackLinks:true,\\n accurateTrackBounce:true\\n });\\n </script>\\n <noscript><div><img src=\"https://mc.yandex.ru/watch/59260609\" style=\"position:absolute; left:-9999px;\" alt=\"\" /></div></noscript>\\n <!-- YANDEX_METRICA_END -->\\'\\'\\'))\\n\\ndef make_oneliner():\\n html_text = \\'(\"В этот ноутбук встроен код Яндекс Метрики для сбора статистики использований. Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False\" if yandex_metrica_allowed else \"\")\\'\\n html_text += \\' + \"<\"\"!-- MAGICS_SETUP_PRINTING_END -->\"\\'\\n return \\'\\'.join([\\n \\'# look at tools/set_up_magics.ipynb\\\\n\\',\\n \\'yandex_metrica_allowed = True ; get_ipython().run_cell(%s);\\' % repr(one_liner_str),\\n \\'display(HTML(%s))\\' % html_text,\\n \\' #\\'\\'MAGICS_SETUP_END\\'\\n ])\\n \\n\\n');display(HTML((\"В этот ноутбук встроен код Яндекс Метрики для сбора статистики использований. 
Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False\" if yandex_metrica_allowed else \"\") + \"<\"\"!-- MAGICS_SETUP_PRINTING_END -->\")) #MAGICS_SETUP_END", "_____no_output_____" ] ], [ [ "# Синхронизация потоков\n\n<br>\n<div style=\"text-align: right\"> Спасибо <a href=\"https://github.com/SyrnikRebirth\">Сове Глебу</a>, <a href=\"https://github.com/Disadvantaged\">Голяр Димитрису</a> и <a href=\"https://github.com/nikvas2000\">Николаю Васильеву</a> за участие в написании текста </div>\n<br>\n\n\nСегодня в программе:\n* <a href=\"#mutex\" style=\"color:#856024\">Мьютексы</a>\n <br> MUTEX ~ MUTual EXclusion\n* <a href=\"#spinlock\" style=\"color:#856024\">Spinlock'и и атомики</a>\n <br> [Атомики в С на cppreference](https://ru.cppreference.com/w/c/atomic)\n <br> <a href=\"#c_atomic_life\" style=\"color:#856024\">Atomic в C и как с этим жить </a> (раздел от <a href=\"https://github.com/nikvas2000\">Николая Васильева</a>)\n <br> <details> <summary>Про compare_exchange_weak vs compare_exchange_strong</summary> <p>\nhttps://stackoverflow.com/questions/4944771/stdatomic-compare-exchange-weak-vs-compare-exchange-strong\n<br>The weak compare-and-exchange operations may fail spuriously, that is, return false while leaving the contents of memory pointed to by expected before the operation is the same that same as that of the object and the same as that of expected after the operation. [ Note: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., loadlocked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. \n</p>\n</details>\n \n* <a href=\"#condvar\" style=\"color:#856024\">Condition variable (aka условные переменные)</a>\n* <a href=\"#condvar_queue\" style=\"color:#856024\">Пример thread-safe очереди</a>\n \n\n\n<a href=\"#hw\" style=\"color:#856024\">Комментарии к ДЗ</a>\n\n[Ридинг Яковлева](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/mutex-condvar-atomic)", "_____no_output_____" ], [ "# <a name=\"mutex\"></a> Mutex", "_____no_output_____" ] ], [ [ "%%cpp mutex.c\n%# Санитайзер отслеживает небезопасный доступ \n%# к одному и тому же участку в памяти из разных потоков\n%# (а так же другие небезопасные вещи). \n%# В таких задачах советую всегда использовать\n%run gcc -fsanitize=thread mutex.c -lpthread -o mutex.exe # вспоминаем про санитайзеры\n%run ./mutex.exe\n\n#define _GNU_SOURCE \n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/syscall.h>\n#include <sys/time.h>\n#include <pthread.h>\n#include <stdatomic.h>\n\nconst char* log_prefix(const char* func, int line) {\n struct timespec spec; clock_gettime(CLOCK_REALTIME, &spec); long long current_msec = spec.tv_sec * 1000L + spec.tv_nsec / 1000000;\n static _Atomic long long start_msec_storage = -1; long long start_msec = -1; if (atomic_compare_exchange_strong(&start_msec_storage, &start_msec, current_msec)) start_msec = current_msec;\n long long delta_msec = current_msec - start_msec;\n static __thread char prefix[100]; sprintf(prefix, \"%lld.%03lld %13s():%d [tid=%ld]\", delta_msec / 1000, delta_msec % 1000, func, line, syscall(__NR_gettid));\n return prefix;\n}\n#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, \"%s: \" fmt \"%s\", log_prefix(__FUNCTION__, __LINE__), __VA_ARGS__); }\n#define log_printf(...) 
log_printf_impl(__VA_ARGS__, \"\")\n\n// thread-aware assert\n#define ta_assert(stmt) if (stmt) {} else { log_printf(\"'\" #stmt \"' failed\"); exit(EXIT_FAILURE); }\n\n\ntypedef enum {\n VALID_STATE = 0,\n INVALID_STATE = 1\n} state_t;\n\n// Инициализируем мьютекс\npthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; // protects: state \nstate_t current_state = VALID_STATE;\n\nvoid thread_safe_func() {\n // all function is critical section, protected by mutex\n pthread_mutex_lock(&mutex); // try comment lock&unlock out and look at result\n ta_assert(current_state == VALID_STATE);\n current_state = INVALID_STATE; // do some work with state. \n sched_yield();\n current_state = VALID_STATE;\n pthread_mutex_unlock(&mutex);\n}\n\n// Возвращаемое значение потока (~код возврата процесса) -- любое машинное слово.\nstatic void* thread_func(void* arg) \n{\n int i = (char*)arg - (char*)NULL;\n log_printf(\" Thread %d started\\n\", i);\n for (int j = 0; j < 10000; ++j) {\n thread_safe_func();\n }\n log_printf(\" Thread %d finished\\n\", i);\n return NULL;\n}\n\nint main()\n{\n log_printf(\"Main func started\\n\");\n const int threads_count = 2;\n pthread_t threads[threads_count];\n for (int i = 0; i < threads_count; ++i) {\n log_printf(\"Creating thread %d\\n\", i);\n ta_assert(pthread_create(&threads[i], NULL, thread_func, (char*)NULL + i) == 0);\n }\n for (int i = 0; i < threads_count; ++i) {\n ta_assert(pthread_join(threads[i], NULL) == 0); \n log_printf(\"Thread %d joined\\n\", i);\n }\n log_printf(\"Main func finished\\n\");\n return 0;\n}", "_____no_output_____" ] ], [ [ "# <a name=\"spinlock\"></a> Spinlock\n\n\n[spinlock в стандартной библиотеке](https://linux.die.net/man/3/pthread_spin_init)", "_____no_output_____" ] ], [ [ "%%cpp spinlock.c\n%run gcc -fsanitize=thread -std=c11 spinlock.c -lpthread -o spinlock.exe\n%run ./spinlock.exe\n\n#define _GNU_SOURCE\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <sys/syscall.h>\n#include <sys/types.h>\n#include <sys/time.h>\n#include <pthread.h>\n#include <stdatomic.h> //! Этот заголовочный файл плохо гуглится\n\nconst char* log_prefix(const char* func, int line) {\n struct timespec spec; clock_gettime(CLOCK_REALTIME, &spec); long long current_msec = spec.tv_sec * 1000L + spec.tv_nsec / 1000000;\n static _Atomic long long start_msec_storage = -1; long long start_msec = -1; if (atomic_compare_exchange_strong(&start_msec_storage, &start_msec, current_msec)) start_msec = current_msec;\n long long delta_msec = current_msec - start_msec;\n static __thread char prefix[100]; sprintf(prefix, \"%lld.%03lld %13s():%d [tid=%ld]\", delta_msec / 1000, delta_msec % 1000, func, line, syscall(__NR_gettid));\n return prefix;\n}\n#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, \"%s: \" fmt \"%s\", log_prefix(__FUNCTION__, __LINE__), __VA_ARGS__); }\n#define log_printf(...) log_printf_impl(__VA_ARGS__, \"\")\n\n// thread-aware assert\n#define ta_assert(stmt) if (stmt) {} else { log_printf(\"'\" #stmt \"' failed\"); exit(EXIT_FAILURE); }\n\n\ntypedef enum {\n VALID_STATE = 0,\n INVALID_STATE = 1\n} state_t;\n\n_Atomic int lock = 0; // protects state\nstate_t current_state = VALID_STATE;\n\nvoid sl_lock(_Atomic int* lock) { \n int expected = 0;\n // weak отличается от strong тем, что может выдавать иногда ложный false. 
Но он быстрее работает.\n // atomic_compare_exchange_weak can change `expected`!\n while (!atomic_compare_exchange_weak(lock, &expected, 1)) {\n expected = 0;\n }\n}\n\nvoid sl_unlock(_Atomic int* lock) {\n atomic_fetch_sub(lock, 1);\n}\n\n// По сути та же функция, что и в предыдущем примере, но ипользуется spinlock вместо mutex\nvoid thread_safe_func() { \n // all function is critical section, protected by mutex\n sl_lock(&lock); // try comment lock&unlock out and look at result\n ta_assert(current_state == VALID_STATE);\n current_state = INVALID_STATE; // do some work with state. \n sched_yield(); // increase probability of fail of incorrect lock realisation\n current_state = VALID_STATE;\n sl_unlock(&lock);\n}\n\n// Возвращаемое значение потока (~код возврата процесса) -- любое машинное слово.\nstatic void* thread_func(void* arg) \n{\n int i = (char*)arg - (char*)NULL;\n log_printf(\" Thread %d started\\n\", i);\n for (int j = 0; j < 10000; ++j) {\n thread_safe_func();\n }\n log_printf(\" Thread %d finished\\n\", i);\n return NULL;\n}\n\nint main()\n{\n log_printf(\"Main func started\\n\");\n const int threads_count = 2;\n pthread_t threads[threads_count];\n for (int i = 0; i < threads_count; ++i) {\n log_printf(\"Creating thread %d\\n\", i);\n ta_assert(pthread_create(&threads[i], NULL, thread_func, (char*)NULL + i) == 0);\n }\n for (int i = 0; i < threads_count; ++i) {\n ta_assert(pthread_join(threads[i], NULL) == 0); \n log_printf(\"Thread %d joined\\n\", i);\n }\n log_printf(\"Main func finished\\n\");\n return 0;\n}", "_____no_output_____" ] ], [ [ "# <a name=\"condvar\"></a> Condition variable", "_____no_output_____" ] ], [ [ "%%cpp condvar.c\n%run gcc -fsanitize=thread condvar.c -lpthread -o condvar.exe\n%run ./condvar.exe > out.txt\n//%run cat out.txt\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/syscall.h>\n#include <sys/time.h>\n#include <pthread.h>\n#include <stdatomic.h>\n\nconst char* log_prefix(const char* func, int line) {\n struct timespec spec; clock_gettime(CLOCK_REALTIME, &spec); long long current_msec = spec.tv_sec * 1000L + spec.tv_nsec / 1000000;\n static _Atomic long long start_msec_storage = -1; long long start_msec = -1; if (atomic_compare_exchange_strong(&start_msec_storage, &start_msec, current_msec)) start_msec = current_msec;\n long long delta_msec = current_msec - start_msec;\n static __thread char prefix[100]; sprintf(prefix, \"%lld.%03lld %13s():%d [tid=%ld]\", delta_msec / 1000, delta_msec % 1000, func, line, syscall(__NR_gettid));\n return prefix;\n}\n#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, \"%s: \" fmt \"%s\", log_prefix(__FUNCTION__, __LINE__), __VA_ARGS__); }\n#define log_printf(...) 
log_printf_impl(__VA_ARGS__, \"\")\n\n// thread-aware assert\n#define ta_assert(stmt) if (stmt) {} else { log_printf(\"'\" #stmt \"' failed\"); exit(EXIT_FAILURE); }\n\n\n\ntypedef struct {\n // рекомендую порядок записи переменных:\n pthread_mutex_t mutex; // мьютекс\n pthread_cond_t condvar; // переменная условия (если нужна)\n \n int value;\n} promise_t;\n\nvoid promise_init(promise_t* promise) {\n pthread_mutex_init(&promise->mutex, NULL);\n pthread_cond_init(&promise->condvar, NULL);\n promise->value = -1;\n}\n\nvoid promise_set(promise_t* promise, int value) {\n pthread_mutex_lock(&promise->mutex); // try comment lock&unlock out and look at result\n promise->value = value;\n pthread_mutex_unlock(&promise->mutex);\n pthread_cond_signal(&promise->condvar); // notify if there was nothing and now will be elements\n}\n\nint promise_get(promise_t* promise) {\n pthread_mutex_lock(&promise->mutex); // try comment lock&unlock out and look at result\n while (promise->value == -1) {\n // Ждем какие-либо данные, если их нет, то спим.\n // идейно convar внутри себя разблокирует mutex, чтобы другой поток мог положить в стейт то, что мы ждем\n pthread_cond_wait(&promise->condvar, &promise->mutex);\n // после завершения wait мьютекс снова заблокирован\n }\n int value = promise->value;\n pthread_mutex_unlock(&promise->mutex);\n return value;\n}\n\npromise_t promise_1, promise_2;\n\n\nstatic void* thread_A_func(void* arg) {\n log_printf(\"Func A started\\n\");\n promise_set(&promise_1, 42);\n log_printf(\"Func A set promise_1 with 42\\n\");\n int value_2 = promise_get(&promise_2);\n log_printf(\"Func A get promise_2 value = %d\\n\", value_2);\n return NULL;\n}\n\nstatic void* thread_B_func(void* arg) {\n log_printf(\"Func B started\\n\");\n int value_1 = promise_get(&promise_1);\n log_printf(\"Func B get promise_1 value = %d\\n\", value_1);\n promise_set(&promise_2, value_1 * 100);\n log_printf(\"Func B set promise_2 with %d\\n\", value_1 * 100)\n return NULL;\n}\n\nint main()\n{\n promise_init(&promise_1);\n promise_init(&promise_2);\n \n log_printf(\"Main func started\\n\");\n \n pthread_t thread_A_id;\n log_printf(\"Creating thread A\\n\");\n ta_assert(pthread_create(&thread_A_id, NULL, thread_A_func, NULL) == 0);\n \n pthread_t thread_B_id;\n log_printf(\"Creating thread B\\n\");\n ta_assert(pthread_create(&thread_B_id, NULL, thread_B_func, NULL) == 0);\n \n ta_assert(pthread_join(thread_A_id, NULL) == 0); \n log_printf(\"Thread A joined\\n\");\n \n ta_assert(pthread_join(thread_B_id, NULL) == 0); \n log_printf(\"Thread B joined\\n\");\n \n log_printf(\"Main func finished\\n\");\n return 0;\n}", "_____no_output_____" ] ], [ [ "Способ достичь успеха без боли: все изменения данных делаем под mutex. 
Операции с condvar тоже делаем только под заблокированным mutex.", "_____no_output_____" ], [ "# <a name=\"condvar_queue\"></a> Пример thread-safe очереди", "_____no_output_____" ] ], [ [ "%%cpp condvar_queue.c\n%run gcc -fsanitize=thread condvar_queue.c -lpthread -o condvar_queue.exe\n%run (for i in $(seq 0 100000); do echo -n \"$i \" ; done) | ./condvar_queue.exe > out.txt\n//%run cat out.txt\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/syscall.h>\n#include <sys/time.h>\n#include <pthread.h>\n#include <stdatomic.h>\n\nconst char* log_prefix(const char* func, int line) {\n struct timespec spec; clock_gettime(CLOCK_REALTIME, &spec); long long current_msec = spec.tv_sec * 1000L + spec.tv_nsec / 1000000;\n static _Atomic long long start_msec_storage = -1; long long start_msec = -1; if (atomic_compare_exchange_strong(&start_msec_storage, &start_msec, current_msec)) start_msec = current_msec;\n long long delta_msec = current_msec - start_msec;\n static __thread char prefix[100]; sprintf(prefix, \"%lld.%03lld %13s():%d [tid=%ld]\", delta_msec / 1000, delta_msec % 1000, func, line, syscall(__NR_gettid));\n return prefix;\n}\n#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, \"%s: \" fmt \"%s\", log_prefix(__FUNCTION__, __LINE__), __VA_ARGS__); }\n#define log_printf(...) log_printf_impl(__VA_ARGS__, \"\")\n\n// thread-aware assert\n#define ta_assert(stmt) if (stmt) {} else { log_printf(\"'\" #stmt \"' failed\"); exit(EXIT_FAILURE); }\n\n#define queue_max_size 5\n\nstruct {\n // рекомендую порядок записи переменных:\n pthread_mutex_t mutex; // мьютекс\n pthread_cond_t condvar; // переменная условия (если нужна)\n \n // все переменные защищаемые мьютексом\n int data[queue_max_size];\n int begin; // [begin, end) \n int end;\n} queue;\n\nvoid queue_init() {\n pthread_mutex_init(&queue.mutex, NULL);\n pthread_cond_init(&queue.condvar, NULL);\n queue.begin = queue.end = 0;\n}\n\nvoid queue_push(int val) {\n pthread_mutex_lock(&queue.mutex); // try comment lock&unlock out and look at result\n while (queue.begin + queue_max_size == queue.end) {\n pthread_cond_wait(&queue.condvar, &queue.mutex); // mutex in unlocked inside this func\n }\n _Bool was_empty = (queue.begin == queue.end);\n queue.data[queue.end++ % queue_max_size] = val;\n pthread_mutex_unlock(&queue.mutex);\n \n if (was_empty) {\n pthread_cond_signal(&queue.condvar); // notify if there was nothing and now will be elements\n }\n}\n\nint queue_pop() {\n pthread_mutex_lock(&queue.mutex); // try comment lock&unlock out and look at result\n while (queue.begin == queue.end) {\n pthread_cond_wait(&queue.condvar, &queue.mutex); // mutex in unlocked inside this func\n }\n if (queue.end - queue.begin == queue_max_size) {\n // Не важно где внутри мьютекса посылать сигнал, так как другой поток не сможет зайти в критическую секцию, пока не завершится текущая\n pthread_cond_signal(&queue.condvar); // notify if buffer was full and now will have free space\n }\n int val = queue.data[queue.begin++ % queue_max_size];\n if (queue.begin >= queue_max_size) {\n queue.begin -= queue_max_size;\n queue.end -= queue_max_size;\n }\n pthread_mutex_unlock(&queue.mutex);\n return val;\n}\n\nstatic void* producer_func(void* arg) \n{\n int val;\n while (scanf(\"%d\", &val) > 0) {\n queue_push(val);\n //nanosleep(&(struct timespec) {.tv_nsec = 1000000}, NULL); // 1ms\n }\n queue_push(-1);\n return NULL;\n}\n\nstatic void* consumer_func(void* arg) \n{\n int val;\n while ((val = queue_pop()) >= 0) {\n 
printf(\"'%d', \", val);\n }\n return NULL;\n}\n\nint main()\n{\n queue_init();\n \n log_printf(\"Main func started\\n\");\n \n pthread_t producer_thread;\n log_printf(\"Creating producer thread\\n\");\n ta_assert(pthread_create(&producer_thread, NULL, producer_func, NULL) == 0);\n \n pthread_t consumer_thread;\n log_printf(\"Creating producer thread\\n\");\n ta_assert(pthread_create(&consumer_thread, NULL, consumer_func, NULL) == 0);\n \n ta_assert(pthread_join(producer_thread, NULL) == 0); \n log_printf(\"Producer thread joined\\n\");\n \n ta_assert(pthread_join(consumer_thread, NULL) == 0); \n log_printf(\"Consumer thread joined\\n\");\n \n log_printf(\"Main func finished\\n\");\n return 0;\n}", "_____no_output_____" ] ], [ [ "# <a name=\"c_atomic_life\"></a> Atomic в C и как с этим жить\n\n\nВ C++ атомарные переменные реализованы через `std::atomic` в силу объектной ориентированности языка. \nВ C же к объявлению переменной приписывается _Atomic или _Atomic(). Лучше использовать второй вариант (почему, будет ниже). Ситуация усложняется отсуствием документации. Про атомарные функции с переменными можно посмотреть в ридинге Яковлева.", "_____no_output_____" ], [ "## Пример с _Atomic", "_____no_output_____" ] ], [ [ "%%cpp atomic_example1.c\n%run gcc -fsanitize=thread atomic_example1.c -lpthread -o atomic_example1.exe\n%run ./atomic_example1.exe > out.txt\n%run cat out.txt\n\n#include <stdatomic.h>\n#include <stdint.h>\n#include <stdio.h>\n\n// _Atomic навешивается на `int`\n_Atomic int x;\n\nint main(int argc, char* argv[]) {\n atomic_store(&x, 1);\n printf(\"%d\\n\", atomic_load(&x));\n \n int i = 2;\n // изменение не пройдет, так как x = 1, а i = 2, i станет равным x\n atomic_compare_exchange_strong(&x, &i, 3);\n printf(\"%d\\n\", atomic_load(&x));\n\n // тут пройдет\n atomic_compare_exchange_strong(&x, &i, 3);\n printf(\"%d\\n\", atomic_load(&x));\n return 0;\n}", "_____no_output_____" ] ], [ [ "Казалось бы все хорошо, но давайте попробуем с указателями", "_____no_output_____" ] ], [ [ "%%cpp atomic_example2.c\n%run gcc -fsanitize=thread atomic_example2.c -lpthread -o atomic_example2.exe\n%run ./atomic_example2.exe > out.txt\n%run cat out.txt\n\n#include <stdatomic.h>\n#include <stdint.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n// ПЛОХОЙ КОД!!!\n_Atomic int* x;\n\nint main(int argc, char* argv[]) {\n int data[3] = {10, 20, 30};\n int* one = data + 0;\n int* two = data + 1;\n int* three = data + 2;\n \n atomic_store(&x, one);\n\n printf(\"%d\\n\", *atomic_load(&x));\n \n int* i = two;\n // изменение не пройдет, так как x = 1, а i = 2, i станет равным x\n atomic_compare_exchange_strong(&x, &i, three);\n printf(\"%d\\n\", *atomic_load(&x));\n\n i = one;\n // тут пройдет\n atomic_compare_exchange_strong(&x, &i, three);\n printf(\"%d\\n\", *atomic_load(&x));\n return 0;\n}", "_____no_output_____" ] ], [ [ "Получаем ад из warning/error от компилятора\n(все в зависимости от компилятора и платформы: `gcc 7.4.0 Ubuntu 18.04.1` - warning, `clang 11.0.0 macOS` - error).\n\nМожет появиться желание написать костыль, явно прикастовав типы:", "_____no_output_____" ] ], [ [ "%%cpp atomic_example3.c\n%run gcc -fsanitize=thread atomic_example3.c -lpthread -o atomic_example3.exe\n%run ./atomic_example3.exe > out.txt\n%run cat out.txt\n\n#include <stdatomic.h>\n#include <stdint.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n// ПЛОХОЙ КОД!!!\n_Atomic int* x;\n\nint main(int argc, char* argv[]) {\n int data[3] = {10, 20, 30};\n int* one = data + 0;\n int* two = data + 1;\n int* three = data + 2;\n \n 
atomic_store(&x, (_Atomic int*) one);\n\n printf(\"%d\\n\", *(int*)atomic_load(&x));\n\n int* i = two;\n // изменение не пройдет, так как x = 1, а i = 2, i станет равным x\n atomic_compare_exchange_strong(&x, (_Atomic int**) &i, (_Atomic int*) three);\n printf(\"%d\\n\", *(int*)atomic_load(&x));\n \n i = one;\n // тут пройдет\n atomic_compare_exchange_strong(&x, (_Atomic int**) &i, (_Atomic int*) three);\n i = (int*) atomic_load(&x);\n printf(\"%d\\n\", *(int*)atomic_load(&x));\n return 0;\n}", "_____no_output_____" ] ], [ [ "Теперь gcc перестает кидать warnings (в clang до сих пор error). Но код может превратиться в ад из кастов. \n\nНо! Этот код идейно полностью некорректен.\n\nПосмотрим на `_Atomic int* x;`\nВ данном случае это работает как `(_Atomic int)* x`, а не как `_Atomic (int*) x` что легко подумать!\n<br>То есть получается неатомарный указатель на атомарный `int`. Хотя задумывалось как атомарный указатель на неатомарный `int`.\n\nПоэтому лучше использовать `_Atomic (type)`.\n\nПри его использовании код становится вполне читаемым и что главное - корректным. Соответственно компилируется без проблем в gcc/clang.", "_____no_output_____" ], [ "## Как надо писать", "_____no_output_____" ] ], [ [ "%%cpp atomic_example4.c\n%run gcc -fsanitize=thread atomic_example4.c -lpthread -o atomic_example4.exe\n%run ./atomic_example4.exe > out.txt\n%run cat out.txt\n\n#include <stdatomic.h>\n#include <stdint.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n// Теперь именно атомарный указатель. Как и должно было быть.\n_Atomic (int*) x;\n\nint main(int argc, char* argv[]) {\n int data[3] = {10, 20, 30};\n int* one = data + 0;\n int* two = data + 1;\n int* three = data + 2;\n \n atomic_store(&x, one);\n printf(\"%d\\n\", *atomic_load(&x));\n \n int* i = two;\n // изменение не пройдет, так как x = 1, а i = 2, i станет равным x\n atomic_compare_exchange_strong(&x, &i, three);\n printf(\"%d\\n\", *atomic_load(&x));\n\n i = one;\n // тут пройдет\n atomic_compare_exchange_strong(&x, &i, three);\n printf(\"%d\\n\", *atomic_load(&x));\n return 0;\n}", "_____no_output_____" ] ], [ [ "## Более общая мысль\n\nВ общем-то тут нет ничего особого. Так же как и _Atomic ведет себя всем знакомый const.\n\nВсе же помнят, что `const int*` это неконстантный указатель на константный `int`? :)\n\nНо натолкнувшись на ошибку компиляции в одном из вышеприведенных примеров, два человека очень долго тупили. Поэтому они написали этот текст :) \n\nЧтобы вы, читающие, не тупили как мы. Удачи!", "_____no_output_____" ], [ "# <a name=\"hw\"></a> Комментарии к ДЗ\n\n* inf17-0: posix/threads/mutex\n <br>Много потоков, много мьютексов, циклический список.\n <br>Решения с одним мьютексом не принимаются (в таких решениях потоки не будут выполнять работу (сложение чисел) параллельно). На каждый элемент нужен отдельный мьютекс (на самом деле необязательно, разрешаю применять творческий подход).\n <br>При этом изменение трех чисел должно происходить атомарно с точки зрения гипотетического потока, который в любой момент, может взять лок на все мьютексы и прочитать состояние всех чисел.\n* inf17-1: posix/threads/condvar\n <br>Здесь необязательно реализовывать очередь, а тем более ее копировать, но принципе тот же.\n <br>Вспоминаем про аргументы передаваемые потоку.\n <br>Нельзя использовать `pipe` и `socketpair`.\n <br>В задаче есть две \"тяжелые\" операции: поиск простого числа и его вывод. Они должны параллелиться по типу конвеера.\n* inf17-2: posix/threads/atomic\n <br>Задачка на CAS", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7a9ff7b92fc5015097a7e41840d336ef13cebfa
84,032
ipynb
Jupyter Notebook
src/dataset_div/dataset_div_bpi_12_w.ipynb
avani17101/goal-oriented-next-best-activity-recomendation
0b151cf84b39ae3b49c1c1dcbc4f2a79bf96b5d2
[ "MIT" ]
null
null
null
src/dataset_div/dataset_div_bpi_12_w.ipynb
avani17101/goal-oriented-next-best-activity-recomendation
0b151cf84b39ae3b49c1c1dcbc4f2a79bf96b5d2
[ "MIT" ]
null
null
null
src/dataset_div/dataset_div_bpi_12_w.ipynb
avani17101/goal-oriented-next-best-activity-recomendation
0b151cf84b39ae3b49c1c1dcbc4f2a79bf96b5d2
[ "MIT" ]
null
null
null
32.295158
177
0.439571
[ [ [ "cd ..", "/home/avani.gupta/bpirl2\n" ], [ "pwd", "_____no_output_____" ], [ "from utils import calc_third_quartile, get_unique_act, get_compliant_cases \nimport numpy as np\nimport os\nimport pickle\nimport pandas as pd\nimport random\nfrom statistics import mean, median", "_____no_output_____" ], [ "df2 = pd.read_pickle('dataset/preprocessed/bpi_12_w_design_mat.pkl')", "_____no_output_____" ], [ "df2", "_____no_output_____" ], [ "# get process flow compliance cases only\ndf = get_compliant_cases(df2,dset=\"bpi_12_w\")", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "dat_group = df.groupby(\"CaseID\")\n\ntotal_iter = len(dat_group.ngroup())\ncase_duration_dic = {}\nfor name, gr in dat_group:\n case_duration_dic[name] = gr['duration_time'].sum()\n ", "_____no_output_____" ], [ "max(df['duration_time'])", "_____no_output_____" ], [ "case_duration_dic", "_____no_output_____" ] ], [ [ "reference for calulating quartile [here](http://web.mnstate.edu/peil/MDEV102/U4/S36/S363.html#:~:text=The%20third%20quartile%2C%20denoted%20by,25%25%20lie%20above%20Q3%20)", "_____no_output_____" ] ], [ [ "mean(case_duration_dic.values())", "_____no_output_____" ], [ "# quartile calculation\nimport statistics\ndef calc_third_quartile(lis):\n lis.sort()\n size = len(lis)\n lis_upper_half = lis[size//2:-1]\n third_quartile = statistics.median(lis_upper_half)\n return third_quartile\n \ncase_durations = list(case_duration_dic.values())\nthird_quartile = calc_third_quartile(case_durations)", "_____no_output_____" ], [ "third_quartile", "_____no_output_____" ] ], [ [ "### Filter dataset for RL model\n", "_____no_output_____" ] ], [ [ "cases_gs = []\ncases_gv = []\nfor k,v in case_duration_dic.items():\n if v <= third_quartile:\n cases_gs.append(k)\n else:\n cases_gv.append(k)", "_____no_output_____" ], [ "len(cases_gs), len(cases_gv)", "_____no_output_____" ], [ "tot = len(cases_gs)+ len(cases_gv)\npercent_gs_cases = len(cases_gs) / tot\nprint(percent_gs_cases)", "0.749962161344029\n" ], [ "cases_train = cases_gs\ncases_test = cases_gv\n", "_____no_output_____" ], [ "df.shape, len(cases_train), len(cases_test)", "_____no_output_____" ], [ "data_train = df.loc[df['CaseID'].isin(cases_train)]\ndata_test = df.loc[df['CaseID'].isin(cases_test)]", "_____no_output_____" ], [ "data_train", "_____no_output_____" ], [ "data_test", "_____no_output_____" ] ], [ [ "## Analysing unique events", "_____no_output_____" ] ], [ [ "a = get_unique_act(data_train)", "_____no_output_____" ], [ "len(a)", "_____no_output_____" ], [ "tot = get_unique_act(df)\nlen(tot)", "_____no_output_____" ], [ "lis = []\nfor act in tot:\n if act not in a:\n lis.append(act)\nlis ", "_____no_output_____" ], [ "for act in lis:\n df_sub = df[df[\"class\"] == act]\n caseid_lis = list(df_sub[\"CaseID\"])\n l = len(caseid_lis)\n caseid_sel = caseid_lis[:l//2]\n if len(caseid_sel) == 0:\n caseid_sel = caseid_lis\n \n r = df.loc[df['CaseID'].isin(caseid_sel)]\n data_train = data_train.append(r)\n", "_____no_output_____" ], [ "data_train", "_____no_output_____" ], [ "len(get_unique_act(data_train)), len(get_unique_act(data_test))", "_____no_output_____" ], [ "len(get_unique_act(df))", "_____no_output_____" ], [ "env_name = \"bpi_12_w\"\nname = env_name+'_d0'\npickle.dump(data_train, open(name+\"_train_RL.pkl\", \"wb\"))\npickle.dump(data_test, open(name+\"_test_RL.pkl\", \"wb\"))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7aa30c1615504d976aaec153c782e14c2ea0619
140,907
ipynb
Jupyter Notebook
intro-neural-networks/gradient-descent/GradientDescent.ipynb
Abhinav2604/deep-learning-v2-pytorch
4a9c6b1d6d313083b3ca3afd783ee972d738f7c7
[ "MIT" ]
null
null
null
intro-neural-networks/gradient-descent/GradientDescent.ipynb
Abhinav2604/deep-learning-v2-pytorch
4a9c6b1d6d313083b3ca3afd783ee972d738f7c7
[ "MIT" ]
null
null
null
intro-neural-networks/gradient-descent/GradientDescent.ipynb
Abhinav2604/deep-learning-v2-pytorch
4a9c6b1d6d313083b3ca3afd783ee972d738f7c7
[ "MIT" ]
null
null
null
443.103774
104,148
0.9361
[ [ [ "# Implementing the Gradient Descent Algorithm\n\nIn this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n#Some helper functions for plotting and drawing lines\n\ndef plot_points(X, y):\n admitted = X[np.argwhere(y==1)]\n rejected = X[np.argwhere(y==0)]\n plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')\n plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')\n\ndef display(m, b, color='g--'):\n plt.xlim(-0.05,1.05)\n plt.ylim(-0.05,1.05)\n x = np.arange(-10, 10, 0.1)\n plt.plot(x, m*x+b, color)", "_____no_output_____" ] ], [ [ "## Reading and plotting the data", "_____no_output_____" ] ], [ [ "data = pd.read_csv('data.csv', header=None)\nX = np.array(data[[0,1]])\ny = np.array(data[2])\nplot_points(X,y)\nplt.show()", "_____no_output_____" ] ], [ [ "## TODO: Implementing the basic functions\nHere is your turn to shine. Implement the following formulas, as explained in the text.\n- Sigmoid activation function\n\n$$\\sigma(x) = \\frac{1}{1+e^{-x}}$$\n\n- Output (prediction) formula\n\n$$\\hat{y} = \\sigma(w_1 x_1 + w_2 x_2 + b)$$\n\n- Error function\n\n$$Error(y, \\hat{y}) = - y \\log(\\hat{y}) - (1-y) \\log(1-\\hat{y})$$\n\n- The function that updates the weights\n\n$$ w_i \\longrightarrow w_i + \\alpha (y - \\hat{y}) x_i$$\n\n$$ b \\longrightarrow b + \\alpha (y - \\hat{y})$$", "_____no_output_____" ] ], [ [ "# Implement the following functions\n\n# Activation (sigmoid) function\ndef sigmoid(x):\n y=1/(1+np.exp(-x))\n return y\n \n\n# Output (prediction) formula\ndef output_formula(features, weights, bias):\n y_hat=sigmoid(np.dot(features,weights)+bias)\n return y_hat\n \n\n# Error (log-loss) formula\ndef error_formula(y, output):\n error=-y*np.log(output)-(1-y)*np.log(1-output)\n return error\n\n# Gradient descent step\ndef update_weights(x, y, weights, bias, learnrate):\n y_hat=output_formula(x,weights,bias)\n error=y-y_hat\n weights+=learnrate*error*x\n bias+=learnrate*error\n return weights,bias\n ", "_____no_output_____" ] ], [ [ "## Training function\nThis function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. 
It will also plot the data, and some of the boundary lines obtained as we run the algorithm.", "_____no_output_____" ] ], [ [ "np.random.seed(44)\n\nepochs = 100\nlearnrate = 0.01\n\ndef train(features, targets, epochs, learnrate, graph_lines=False):\n \n errors = []\n n_records, n_features = features.shape\n last_loss = None\n weights = np.random.normal(scale=1 / n_features**.5, size=n_features)\n bias = 0\n for e in range(epochs):\n del_w = np.zeros(weights.shape)\n for x, y in zip(features, targets):\n output = output_formula(x, weights, bias)\n error = error_formula(y, output)\n weights, bias = update_weights(x, y, weights, bias, learnrate)\n \n # Printing out the log-loss error on the training set\n out = output_formula(features, weights, bias)\n loss = np.mean(error_formula(targets, out))\n errors.append(loss)\n if e % (epochs / 10) == 0:\n print(\"\\n========== Epoch\", e,\"==========\")\n if last_loss and last_loss < loss:\n print(\"Train loss: \", loss, \" WARNING - Loss Increasing\")\n else:\n print(\"Train loss: \", loss)\n last_loss = loss\n predictions = out > 0.5\n accuracy = np.mean(predictions == targets)\n print(\"Accuracy: \", accuracy)\n if graph_lines and e % (epochs / 100) == 0:\n display(-weights[0]/weights[1], -bias/weights[1])\n \n\n # Plotting the solution boundary\n plt.title(\"Solution boundary\")\n display(-weights[0]/weights[1], -bias/weights[1], 'black')\n\n # Plotting the data\n plot_points(features, targets)\n plt.show()\n\n # Plotting the error\n plt.title(\"Error Plot\")\n plt.xlabel('Number of epochs')\n plt.ylabel('Error')\n plt.plot(errors)\n plt.show()", "_____no_output_____" ] ], [ [ "## Time to train the algorithm!\nWhen we run the function, we'll obtain the following:\n- 10 updates with the current training loss and accuracy\n- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.\n- A plot of the error function. Notice how it decreases as we go through more epochs.", "_____no_output_____" ] ], [ [ "train(X, y, epochs, learnrate, True)", "\n========== Epoch 0 ==========\nTrain loss: 0.7135845195381634\nAccuracy: 0.4\n\n========== Epoch 10 ==========\nTrain loss: 0.6225835210454962\nAccuracy: 0.59\n\n========== Epoch 20 ==========\nTrain loss: 0.5548744083669508\nAccuracy: 0.74\n\n========== Epoch 30 ==========\nTrain loss: 0.501606141872473\nAccuracy: 0.84\n\n========== Epoch 40 ==========\nTrain loss: 0.4593334641861401\nAccuracy: 0.86\n\n========== Epoch 50 ==========\nTrain loss: 0.42525543433469976\nAccuracy: 0.93\n\n========== Epoch 60 ==========\nTrain loss: 0.3973461571671399\nAccuracy: 0.93\n\n========== Epoch 70 ==========\nTrain loss: 0.3741469765239074\nAccuracy: 0.93\n\n========== Epoch 80 ==========\nTrain loss: 0.35459973368161973\nAccuracy: 0.94\n\n========== Epoch 90 ==========\nTrain loss: 0.3379273658879921\nAccuracy: 0.94\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7aa36ba250c035d33ae288646af0ea395d1101d
159,710
ipynb
Jupyter Notebook
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
53
2021-08-28T07:41:49.000Z
2022-03-09T02:20:17.000Z
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
142
2021-07-27T07:23:10.000Z
2021-08-25T14:57:24.000Z
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
38
2021-07-27T04:54:08.000Z
2021-08-23T02:27:20.000Z
190.130952
72,338
0.871624
[ [ [ "# ElasticNet with RobustScaler & Power Transformer", "_____no_output_____" ], [ "This Code template is for regression analysis using the ElasticNet regressor where rescaling method used is RobustScaler and feature transformation is done via Power Transformer.", "_____no_output_____" ], [ "### Required Packages", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport seaborn as se\nimport warnings\nimport matplotlib.pyplot as plt\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import ElasticNet\nfrom imblearn.over_sampling import RandomOverSampler\nfrom sklearn.preprocessing import RobustScaler, PowerTransformer\nfrom sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "### Initialization\nFilepath of CSV file", "_____no_output_____" ] ], [ [ "#filepath\nfile_path= \"\"", "_____no_output_____" ] ], [ [ "List of features which are required for model training.", "_____no_output_____" ] ], [ [ "#x_values\nfeatures=[]", "_____no_output_____" ] ], [ [ "Target feature for prediction.", "_____no_output_____" ] ], [ [ "#y_value\ntarget=''", "_____no_output_____" ] ], [ [ "### Data Fetching\n\nPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.\n\nWe will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.", "_____no_output_____" ] ], [ [ "df=pd.read_csv(file_path) #reading file\ndf.head()", "_____no_output_____" ] ], [ [ "### Feature Selections\n\nIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.\n\nWe will assign all the required input features to X and target/outcome to Y.", "_____no_output_____" ] ], [ [ "X=df[features]\nY=df[target]", "_____no_output_____" ] ], [ [ "### Data Preprocessing\n\nSince the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes.\n", "_____no_output_____" ] ], [ [ "def NullClearner(df):\n if(isinstance(df, pd.Series) and (df.dtype in [\"float64\",\"int64\"])):\n df.fillna(df.mean(),inplace=True)\n return df\n elif(isinstance(df, pd.Series)):\n df.fillna(df.mode()[0],inplace=True)\n return df\n else:return df\ndef EncodeX(df):\n return pd.get_dummies(df)", "_____no_output_____" ] ], [ [ "Calling preprocessing functions on the feature and target set.\n", "_____no_output_____" ] ], [ [ "x=X.columns.to_list()\nfor i in x:\n X[i]=NullClearner(X[i])\nX=EncodeX(X)\nY=NullClearner(Y)\nX.head()", "_____no_output_____" ] ], [ [ "#### Correlation Map\n\nIn order to check the correlation between the features, we will plot a correlation matrix. 
It is effective in summarizing a large amount of data where the goal is to see patterns.", "_____no_output_____" ] ], [ [ "f,ax = plt.subplots(figsize=(18, 18))\nmatrix = np.triu(X.corr())\nse.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)\nplt.show()", "_____no_output_____" ] ], [ [ "### Data Splitting\n\nThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.", "_____no_output_____" ] ], [ [ "x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)", "_____no_output_____" ] ], [ [ "### Model\n\nElastic Net first emerged as a result of critique on Lasso, whose variable selection can be too dependent on data and thus unstable. The solution is to combine the penalties of Ridge regression and Lasso to get the best of both worlds.\n\nFeatures of ElasticNet Regression-\n\nIt combines the L1 and L2 approaches.\nIt performs a more efficient regularization process.\nIt has two parameters to be set, λ and α.\n\n#### Model Tuning Parameters:\n**alpha: float, default=1.0** ->\nConstant that multiplies the penalty terms. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object.\n\n**l1_ratio: float, default=0.5** ->\nThe ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.\n\n**fit_intercept: bool, default=True** ->\nWhether the intercept should be estimated or not. If False, the data is assumed to be already centered.\n\n**normalize: bool, default=False** ->\nThis parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm.\n\n**precompute: bool or array-like of shape (n_features, n_features), default=False** ->\nWhether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always False to preserve sparsity.\n\n**max_iter: int, default=1000** ->\nThe maximum number of iterations.\n\n**copy_X: bool, default=True** ->\nIf True, X will be copied; else, it may be overwritten.\n\n**tol: float, default=1e-4** ->\nThe tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.\n\n**warm_start: bool, default=False** ->\nWhen set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution.\n\n**positive: bool, default=False** ->\nWhen set to True, forces the coefficients to be positive.\n\n**random_state: int, RandomState instance, default=None** ->\nThe seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls.\n\n**selection: {‘cyclic’, ‘random’}, default=’cyclic’** ->\nIf set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. 
This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.", "_____no_output_____" ], [ "### Robust Scaler\nStandardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results.\nThis Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).", "_____no_output_____" ], [ "### Power Transformer\n\nApply a power transform featurewise to make data more Gaussian-like.\n\nPower transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.", "_____no_output_____" ] ], [ [ "model=make_pipeline(RobustScaler(), PowerTransformer(), ElasticNet())\nmodel.fit(x_train,y_train)", "_____no_output_____" ] ], [ [ "#### Model Accuracy\n\nWe will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.\n\nscore: The score function returns the coefficient of determination R2 of the prediction.\n\n", "_____no_output_____" ] ], [ [ "print(\"Accuracy score {:.2f} %\\n\".format(model.score(x_test,y_test)*100))", "Accuracy score 74.53 %\n\n" ], [ "#prediction on testing set\nprediction=model.predict(x_test)", "_____no_output_____" ] ], [ [ "Model evolution\nr2_score: The r2_score function computes the percentage variablility explained by our model, either the fraction or the count of correct predictions.\n\nMAE: The mean abosolute error function calculates the amount of total error(absolute average distance between the real data and the predicted data) by our model.\n\nMSE: The mean squared error function squares the error(penalizes the model for large errors) by our model.", "_____no_output_____" ] ], [ [ "print('Mean Absolute Error:', mean_absolute_error(y_test, prediction)) \nprint('Mean Squared Error:', mean_squared_error(y_test, prediction)) \nprint('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))", "Mean Absolute Error: 4649.768547480406\nMean Squared Error: 38936010.9586435\nRoot Mean Squared Error: 6239.872671669151\n" ], [ "print(\"R-squared score : \",r2_score(y_test, prediction))", "R-squared score : 0.7453424175665325\n" ] ], [ [ "#### Prediction Plot\n\nFirst, we make use of a plot to plot the actual observations, with x_train on the x-axis and y_train on the y-axis.\nFor the regression line, we will use x_train on the x-axis and then the predictions of the x_train observations on the y-axis.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(14,10))\nplt.plot(range(20),y_test[0:20], color = \"green\")\nplt.plot(range(20),model.predict(x_test[0:20]), color = \"red\")\nplt.legend([\"Actual\",\"prediction\"]) \nplt.title(\"Predicted vs True Value\")\nplt.xlabel(\"Record number\")\nplt.ylabel(target)\nplt.show()", "_____no_output_____" ] ], [ [ "#### Creator: Ayush Gupta , Github: [Profile](https://github.com/guptayush179)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7aa46502b4bdfbeebe4ed7fe5fdca641943dd09
92,886
ipynb
Jupyter Notebook
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
79133dd540cac3975bc69f5508b18cbd72dd47dc
[ "Apache-2.0" ]
null
null
null
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
79133dd540cac3975bc69f5508b18cbd72dd47dc
[ "Apache-2.0" ]
null
null
null
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
79133dd540cac3975bc69f5508b18cbd72dd47dc
[ "Apache-2.0" ]
null
null
null
37.605668
1,506
0.593846
[ [ [ "# Pokemon", "_____no_output_____" ], [ "## Context", "_____no_output_____" ], [ "Pokémon (ポケモン , Pokemon) és un dels videojocs que Satoshi Tajiri va crear per a diverses plataformes, especialment la Game Boy, i que gràcies a la seva popularitat va aconseguir expandir-se a altres mitjans d'entreteniment, com ara sèries de televisió, jocs de cartes i roba, convertint-se, així, en una marca comercial reconeguda al mercat mundial. Fins al dia 1 de desembre de 2006 havien arribat a 175 milions d'exemplars venuts (inclosa la versió Pikachu de la consola Nintendo 64), arribant a ocupar el segon lloc de les nissagues de videojocs més venudes de Nintendo.\n\nLa saga Pokémon fou creada el 27 de febrer de 1996 al Japó. És desenvolupada per la companyia programadora de software japonesa Game Freak, amb els personatges creats per Satoshi Tajiri per a la companyia de joguines Creatures Inc., i alhora distribuïda i/o publicada per Nintendo. La missió dels protagonistes d'aquests jocs és capturar i entrenar els Pokémons, que actualment arriben a 806 tipus diferents. La possibilitat d'intercanviar-los amb altres usuaris va fer que la popularitat dels jocs de Pokémon augmentés i va provocar un èxit en les vendes de jocs de Pokémon, de televisió, de pel·lícules i marxandatges.\n\n![title](img/pokemon.jpg)", "_____no_output_____" ], [ "## [1] Introducció\n\nEn aquesta pràctica es volen analitzar les dades dels *Pokemons* per tal d'extreure informació característica que ens permeti amplicar el coneixement i entendre millor la relació que hi ha entre ells.\n\n\nPer això s'utilitzaràn dos datasets (obtinguts de la plataforma [Kaggle](https://www.kaggle.com/)) que es complementen i que tenen les dades necessàries per realitzar l'anàlisi que es vol dur a terme.\n\n### Dades\n\nEls *datasets* utilitzats són:\n* [Informació dels pokemons](https://www.kaggle.com/rounakbanik/pokemon)\n * ***pokemon.csv:*** Fitxer que conté les dades dels *Pokemons* amb els camps:\n * ***abilities***: Llista d'algunes de habilitats que pot adquirir. (Categòrica)\n * ***against_?***: Debilitat respecte a un tipus concret (against_fire, against_electric, etc). (Numèrica)\n * ***attack***: Punts d'atac. (Numèrica)\n * ***base_egg_steps***: Nombre de passos requerits per a que l'ou del *Pokemon* eclosioni. (Numèrica)\n * ***base_happiness***: Felicitat base. (Numèrica)\n * ***capture_rate***: Probabilitat de captura. (Numèrica)\n * ***classification***: Classificació del *Pokemon* segons la descripció de la *Pokedex* del joc Sol/Luna. (Categòrica)\n * ***defense***: Punts defensa. (Numèrica)\n * ***experience_growth***: Creixement d'experiència. (Numèrica)\n * ***height_m***: Alçada en metres. (Numèrica)\n * ***hp***: Punts de vida. (Numèrica)\n * ***japanese_name***: Nom original Japonès. (Categòrica)\n * ***name***: Nom del *Pokemon*. (Categòrica)\n * ***percentage_male***: Percentatge de mascles. (Numèrica)\n * ***pokedex_number***: Número de l'entrada en la *Pokedex*. (Numèrica)\n * ***sp_attack***: Atac especial. (Numèrica)\n * ***sp_defense***: Defensa especial. (Numèrica)\n * ***speed***: Velocitat. (Numèrica)\n * ***type1***: Tipus primari. (Categòrica)\n * ***type2***: Tipus secundari. (Categòrica)\n * ***weight_kg***: Pes en quilograms. (Numèrica)\n * ***generation***: Primera generació en que va apareixer el *Pokemon*. (Categòrica)\n * ***is_legendary***: Si és o no llegendari. 
(Categòrica)\n \n \n* [Informació de combats](https://www.kaggle.com/terminus7/pokemon-challenge)\n * ***combats.csv:*** Fitxer que conté informació sobre combats hipotètics\n * ***First_pokemon***: Identificador *Pokedex* del primer *Pokemon* del combat.\n * ***Second_pokemon***: Identificador *Pokedex* del segon *Pokemon* del combat.\n * ***Winner***: Identificador *Pokedex* del guanyador.\n\n### Què es vol aconseguir?\n\nAmb aquestes dades es vol donar resposta a la següents preguntes\n\n* Quants *Pokemons* hi ha a cada generació?\n* Quants són llegendaris i com es reparteixen entre les diferents generacions?\n* Quin dels llegendaris és el més fort i quin el més dèbil?\n* Com es distribueixen els tipus?\n* Quines combinacions de tipus (*type1* i *type2*) hi ha?\n* Com es distribueix el pes i quin és el Pokemon de menor i major pes (en Kg)?\n* Com es distribueix l'alçada i quin és el Pokemon de menor i major alçada (en m)?\n* Com es distribueix velocitat i quin és el Pokemon de menor i major velocitat?\n* Com es distribueix l'atac i defensa i quin és el Pokemon de menor i major atac i defensa?\n* Quin és el resultat de la comparació entre l'atac, l'atac especial, la defensa i la defensa especial base?\n* Es pot considerar que els Pokemon de tipus roca i foc tenen el mateix pes?\n\nAquestes preguntes es poden contestar analitzant les dades del dataset *Informació dels Pokemons* (*pokemon.csv*), però es vol anar un pas més enllà i desenvolupar un model predictiu que sigui capaç de preedir quin *Pokemon* guanyaria un combat. Per això s'afegeix el dataset *Informació de combats* (*combats.csv*). \n\nAmb el model construit, es simularà un torneig amb 16 *Pokemons* i s'intentarà adivinar quin de tots ells seria el guanyador.", "_____no_output_____" ], [ "---\n\n## [2] Integració i selecció\n\n### Imports\n\nEn aquesta pràctica s'utilitzaran les llibreries:\n\n* *pandas*: Per treballar amb *DataFrames* (internament usa *numpy*).\n* *matplotlib* i *seaborn*: Per fer els gràfics.\n* *missingno*: Per fer gràfics de valors mancants.\n* *scipy*: Per fer testos estadístics.\n* *scikit-learn*: Per construir els models predictius", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n#conda install -c conda-forge missingno\nimport missingno as msno\nimport scipy as sp\n\npath_folder = './datasets'", "_____no_output_____" ] ], [ [ "### Carregar les dades\n#### Pokemon_info dataset", "_____no_output_____" ] ], [ [ "pokemon_info_df = pd.read_csv(path_folder+'/pokemon.csv')\n\n#Dimensions del DF (files, columnes)\nprint(pokemon_info_df.shape)", "_____no_output_____" ] ], [ [ "Hi ha **42 variables** i **801 registres**.\n\nQuins són els diferents tipus de variables?", "_____no_output_____" ] ], [ [ "print(pokemon_info_df.dtypes.unique())", "_____no_output_____" ] ], [ [ "Hi ha variables de tipus: \n\n* ***O***: Categòrica.\n* ***float64***: Real.\n* ***int64***: Enter.\n\nDe quin tipus és cada variable?", "_____no_output_____" ] ], [ [ "#Variables\nprint(pokemon_info_df.dtypes)", "_____no_output_____" ] ], [ [ "Distribució del tipus de les variables.", "_____no_output_____" ] ], [ [ "pd.value_counts(pokemon_info_df.dtypes).plot.bar()", "_____no_output_____" ] ], [ [ "**Nota:** Com es pot veure, hi ha moltes variables de tipus ***float64*** i ***int64***, es probable que donat el domini d'aquestes variables, es pogués canviar el tipus a **float32** i **int32** per així reduir la quantitat de memòria utilitzada.\n\n#### Selecció 
de variables\n\nA partir de les preguntes plantejades en el primer apartat, per aquest *dataset* es seleccionen les variables:\n\n* name\n* pokedex_number\n* generation\n* type1\n* type2\n* is_legendary\n* attack\n* sp_attack\n* defense\n* sp_defense\n* speed\n* hp\n* height_m\n* wegiht_kg\n* against_?", "_____no_output_____" ] ], [ [ "pokemon_info_df = pokemon_info_df[[\"name\",\"pokedex_number\",\\\n \"generation\",\"type1\",\"type2\",\\\n \"is_legendary\",\"attack\",\"sp_attack\",\\\n \"defense\",\"sp_defense\",\"speed\",\\\n \"hp\",\"height_m\",\"weight_kg\",\\\n \"against_bug\",\"against_dark\",\"against_dragon\",\\\n \"against_electric\", \"against_fairy\", \"against_fight\",\\\n \"against_fire\", \"against_flying\", \"against_ghost\",\\\n \"against_grass\", \"against_ground\", \"against_ice\",\\\n \"against_normal\", \"against_poison\", \"against_psychic\",\\\n \"against_rock\", \"against_steel\", \"against_water\"]]", "_____no_output_____" ] ], [ [ "### *pokemon_battles dataset*", "_____no_output_____" ] ], [ [ "pokemon_battles_df = pd.read_csv(path_folder+'/combats.csv')\n\nprint(pokemon_battles_df.shape)", "_____no_output_____" ] ], [ [ "Hi ha **38,743 registres** i **3 variables.**\n\nDe quin tipus són?", "_____no_output_____" ] ], [ [ "print(pokemon_battles_df.dtypes.unique())", "_____no_output_____" ], [ "print(pokemon_battles_df.dtypes)", "_____no_output_____" ] ], [ [ "**Nota:** Totes les variables són enteres (*int64*).\n\n#### Selecció de variables\n\nEn aquest *dataset* són necessaries totes les variables, i per tant, no es fa cap selecció.", "_____no_output_____" ], [ "---\n\n## [3] Neteja de les dades\n\nUn cop es coneixen les variables de les quals es disposa per l'anàlisi i el seu tipus, és important explorar quines d'aquestes variables tenen valors mancants i si això fa que deixin de ser útils.", "_____no_output_____" ] ], [ [ "#Hi ha algún camp en tot el DF que tingui un valor mancant?\nprint(pokemon_info_df.isnull().values.any())", "_____no_output_____" ] ], [ [ "### Valors mancants\n\nQuins camps tenen valors mancants?", "_____no_output_____" ] ], [ [ "pokemon_info_mv_list = pokemon_info_df.columns[pokemon_info_df.isnull().any()].tolist()\nprint(pokemon_info_mv_list)", "_____no_output_____" ] ], [ [ "Les variables: **height_m**, **percentage_male**, **type2**, **weight_kg** tenen valors mancants, però quants registres estan afectats?", "_____no_output_____" ] ], [ [ "def missing_values(df, fields):\n n_rows = df.shape[0]\n for field in fields:\n n_missing_values = df[field].isnull().sum()\n print(\"%s: %d (%.3f)\" % (field, n_missing_values, n_missing_values/n_rows))", "_____no_output_____" ], [ "msno.bar(pokemon_info_df[pokemon_info_mv_list], color=\"#b2ff54\", labels=True)", "_____no_output_____" ] ], [ [ "Com es distribueixen els valors mancants en funció de l'ordre del *Pokemon* imputat per la *Pokedex*?", "_____no_output_____" ] ], [ [ "msno.matrix(pokemon_info_df[pokemon_info_mv_list])", "_____no_output_____" ] ], [ [ "La variable **height_m** té 20 registres sense valor (2.5%), **type2** 384 (48%) i **weight_kg** 20 (2.5%)", "_____no_output_____" ], [ "### Imputar els valors perduts\n\nPer tal d'imputar correctament els valors perduts, cal primer observar els altres valors per cada una d'aquestes variables. 
Així que anem a veure quins valors diferents hi ha per cada variable.\n\n**type2**", "_____no_output_____" ] ], [ [ "print(pokemon_info_df[pokemon_info_df['type2'].notnull()]['type2'].unique())", "_____no_output_____" ] ], [ [ "Com es pot veure, hi ha 18 tipus de Pokemon diferents en la variable **type2**. \n\n**Com que es tracta d'una variable arbitraria definida pel dissenyador del *Pokemon*, no té cap sentit imputar un valor en base a la similitud que té amb els altres *Pokemons*, i per això, es decideix assignar l'etiqueta arbitrària (*unknown*) per distingir valor mancants.**", "_____no_output_____" ] ], [ [ "pokemon_info_df['type2'].fillna('unknown', inplace=True)", "_____no_output_____" ] ], [ [ "**height_m**\n\nCom que només hi ha un 20 registres sense valor per aquesta variable i el nombre de registres és molt superior a 50, es poden descartar. Per fer-ho assignem el valor 0, i així es remarca que la dada no existeix perquè no té sentit un *Pokemon* que no tingui alçada.\n\n**Nota:** En cas que el nombre de registres fos inferior a 50, es podria implementar una solució basada en crear un **model de regressió lineal simple** on la **variable a preedir** fos l'**alçada** i la **variable predictora** el **pes**. Aquesta predicció es podria fer a agrupant els *Pokemons* pel seu tipus i només per aquells on el factor de correlació de *Pearson* fos superior a 0,7 o inferior a -0,7.", "_____no_output_____" ] ], [ [ "pokemon_info_df['height_m'].fillna(np.int(0), inplace=True)", "_____no_output_____" ] ], [ [ "**weight_kg**\n\nIgual que amb la variable **height_m**\n", "_____no_output_____" ] ], [ [ "pokemon_info_df['weight_kg'].fillna(np.int(0), inplace=True)", "_____no_output_____" ] ], [ [ "Ara es pot comprovar que no hi ha cap valor *na* en tot el *dataset*", "_____no_output_____" ] ], [ [ "print(pokemon_info_df.columns[pokemon_info_df.isnull().any()].tolist() == [])", "_____no_output_____" ] ], [ [ "### Dades extremes\n\nLes dades extremes o *outliers* són aquelles que estàn fora del rang que es pot considerar normal per una variable numèrica. 
Hi ha diferents maneres de detectar les dades extremes, un dels més comuns és considerar com a tal a totes aquelles dades inferiors a *Q1* - 1.5 * *RIQ* o superior a *Q3* + 1.5 * *RIQ*.\n\nL'anàlisi de dades extremes es farà sobre les variables: *attack*, *sp_attack*, *defense*, *sp_defense*, *speed*, *hp*, *height_m* i *weight_kg*", "_____no_output_____" ] ], [ [ "def print_min_max(var):\n data = pokemon_info_df[var]\n data = sorted(data)\n q1, q2, q3 = np.percentile(data, [25,50,75])\n iqr = q3 - q1\n lower_bound = q1 - (1.5 * iqr)\n upper_bound = q3 + (1.5 * iqr)\n data_pd = pokemon_info_df[var]\n outliers = data_pd[(data_pd < lower_bound) | (data_pd > upper_bound)]\n print(\"{} - mínim: {}, mediana: {}, màxim: {}, number of outliers: {}\".format(var, min(pokemon_info_df[var])\\\n ,q2\\\n , max(pokemon_info_df[var])\\\n , len(outliers)))\n \nprint_min_max(\"attack\")\nprint_min_max(\"sp_attack\")\nprint_min_max(\"defense\")\nprint_min_max(\"sp_defense\")\nprint_min_max(\"speed\")\nprint_min_max(\"hp\")\nprint_min_max(\"weight_kg\")\nprint_min_max(\"height_m\")", "_____no_output_____" ] ], [ [ "Una manera de representar aquesta informació és a través de diagrames de caixa o *boxplots*", "_____no_output_____" ] ], [ [ "plt.subplots(figsize=(15,10))\nsns.boxplot(data=pokemon_info_df[['attack', 'sp_attack', 'defense', 'sp_defense', 'speed', \\\n 'hp', 'weight_kg', 'height_m']], orient='v')", "_____no_output_____" ] ], [ [ "De les variables analitzades, totes tenen relativament poques dades atípiques i les que en tenen no són molt pronunciats a excepció de la variable *weight_kg*, com que aquesta variable no s'usarà en la construcció del model predictiu, s'ha decidit assumir el risc de treballar amb les dades extremes i no eliminar-les del conjunt.\n\n### Guardar les dades preprocessades\n\nUn cop finalitzada l'etapa de integració, filtrat i nateja de dades, es guarda en un fitxer intermig anomenat *pokemon_clean_data.csv*", "_____no_output_____" ] ], [ [ "pokemon_info_df.to_csv(path_folder+'/pokemon_clean_data.csv')", "_____no_output_____" ] ], [ [ "## [4, 5]. 
Anàlisi descriptiu", "_____no_output_____" ], [ "### Generacions\n\nQuantes generacions hi ha?", "_____no_output_____" ] ], [ [ "print(\"Hi ha %d generacions de Pokemons\" %(pokemon_info_df[\"generation\"].nunique()))", "_____no_output_____" ] ], [ [ "#### Distribució dels *Pokemons* en base a la generació\n\nCom es distribueixen els Pokemons en base a la primera generació en que van apareixre?", "_____no_output_____" ] ], [ [ "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n\n#Diagrama de barres\nsns.countplot(x=\"generation\", data=pokemon_info_df, ax=ax1)\n\n#Diagrama de sectors\nsector_diagram = pd.value_counts(pokemon_info_df.generation)\nsector_diagram.plot.pie(startangle=90, autopct='%1.1f%%', shadow=False, \n explode=(0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05), ax=ax2)\nplt.axis(\"equal\")", "_____no_output_____" ] ], [ [ "Quines són les tres generacions on van apareixer més Pokemons?", "_____no_output_____" ] ], [ [ "print(\"5na generació -> %d Pokemons\"%(len(pokemon_info_df[pokemon_info_df[\"generation\"] == 5])))\nprint(\"1ra generació -> %d Pokemons\"%(len(pokemon_info_df[pokemon_info_df[\"generation\"] == 1])))\nprint(\"3era generació -> %d Pokemons\"%(len(pokemon_info_df[pokemon_info_df[\"generation\"] == 3])))", "_____no_output_____" ] ], [ [ "La generació amb més Pokemons és la **5na** amb **156 (19,5%)**, seguidament de la **1era** generació amb **151 (18,9%)** i finalment la **3era** generació amb **135 Pokemons (16,9%)**. Entre aquestes tres generacions hi ha el **55,3%** del total de *Pokemons*.", "_____no_output_____" ], [ "### Pokemons llegendaris\n\nHi ha *Pokemons* que despunten per sobre de la resta degut a les seves característiques especials. Sovint estan relacionats amb llegendes del passat i per això se'ls coneix com a llegendàris. Què podem dir al respecte d'aquests *Pokemons*?\n\n#### Quants Pokemons llegendaris hi ha?", "_____no_output_____" ] ], [ [ "print(\"Nombre total de Pokemons llegendaris: {}\".format(len(pokemon_info_df[pokemon_info_df[\"is_legendary\"] == True])))", "_____no_output_____" ] ], [ [ "En total hi ha **70 Pokemons llegendaris.**\n\n#### Distribució dels *Pokemons* llegendaris\n\nEn quines edicions apareixen aquests Pokemons?", "_____no_output_____" ] ], [ [ "pokemon_legendary_df = pokemon_info_df[pokemon_info_df[\"is_legendary\"] == True]\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n\n#Diagrama de barres\nsns.countplot(x=\"generation\", data=pokemon_legendary_df, ax=ax1)\n\n#Diagrama de sectors\nsector_diagram = pd.value_counts(pokemon_legendary_df.generation)\nsector_diagram.plot.pie(startangle=90, autopct='%1.1f%%', shadow=False, \n explode=(0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05), ax=ax2)\nplt.axis(\"equal\")", "_____no_output_____" ], [ "print(\"7na generació -> %d Pokemons\"%(len(pokemon_legendary_df[pokemon_legendary_df[\"generation\"] == 7])))\nprint(\"4rta generació -> %d Pokemons\"%(len(pokemon_legendary_df[pokemon_legendary_df[\"generation\"] == 4])))\nprint(\"5na generació -> %d Pokemons\"%(len(pokemon_legendary_df[pokemon_legendary_df[\"generation\"] == 5])))", "_____no_output_____" ] ], [ [ "La **7na generació** té **17 Pokemons llegendaris (24,3%)**, la **4rta** en té **13 (18,6%)** i la **5na 13**. 
Entre **aquestes tres generacions** hi ha un **61,5% de Pokemons llegendaris**.\n\n#### Tipus dels *Pokemons* llegendaris\n\nQuin són els tipus (*type1* i *type2*) que predominen en els *Pokemons* llegendaris?", "_____no_output_____" ] ], [ [ "def plot_by_type(dataFrame, title):\n plt.subplots(figsize=(15, 13))\n\n sns.heatmap(\n dataFrame[dataFrame[\"type2\"] != \"unknown\"].groupby([\"type1\", \"type2\"]).size().unstack(),\n cmap=\"Blues\",\n linewidths=1,\n annot=True\n )\n\n plt.xticks(rotation=35)\n plt.title(title)\n plt.show()", "_____no_output_____" ], [ "plot_by_type(pokemon_legendary_df, \"Pokemons llegendaris per Tipus\")", "_____no_output_____" ] ], [ [ "Els tipus **psíquic/fantasma**, **foc/volador**, **elèctric/volador**, **insecte/lluita** i **drac/psíquic** són els tipus amb més *Pokemons* llegendaris, tots ells amb 2 exemplars.\n\n#### *Pokemon* llegendari més fort\n\nQuin és el Pokemon llegendari amb més atac (attack), defensa (defense), vida (hp) i velocitat (velocity) mitjana?", "_____no_output_____" ] ], [ [ "legendary_with_more_attack = max(pokemon_legendary_df['attack'])\nlegendary_with_less_attack = min(pokemon_legendary_df['attack'])\n\nlegendary_with_more_defense = max(pokemon_legendary_df['defense'])\nlegendary_with_less_defense = min(pokemon_legendary_df['defense'])\n\nlegendary_with_more_hp = max(pokemon_legendary_df['hp'])\nlegendary_with_less_hp = min(pokemon_legendary_df['hp'])\n\nlegendary_with_more_speed = max(pokemon_legendary_df['speed'])\nlegendary_with_less_speed = min(pokemon_legendary_df['speed'])\n\n#Afegim el camp strong amb el comput en base al atac, defensa, vida i velocitat normalitzada.\npokemon_legendary_df[\"strong\"] = ((pokemon_legendary_df['attack'] - legendary_with_less_attack)/(legendary_with_more_attack-legendary_with_less_attack) + \n (pokemon_legendary_df['defense'] - legendary_with_less_defense)/(legendary_with_more_defense-legendary_with_less_defense) +\n (pokemon_legendary_df['hp'] - legendary_with_less_hp)/(legendary_with_more_hp-legendary_with_less_hp) +\n (pokemon_legendary_df['speed'] - legendary_with_less_speed)/(legendary_with_more_speed-legendary_with_less_speed))", "_____no_output_____" ], [ "print(pokemon_legendary_df[\"strong\"])", "_____no_output_____" ], [ "pokemon_legendary_df[pokemon_legendary_df[\"strong\"] == max(pokemon_legendary_df[\"strong\"])][[\"name\", \"strong\"]]", "_____no_output_____" ], [ "pokemon_legendary_df[pokemon_legendary_df[\"strong\"] == min(pokemon_legendary_df[\"strong\"])][[\"name\", \"strong\"]]", "_____no_output_____" ] ], [ [ "En base al càlcul realitzat, podem considerar que el *Pokemon* llegendari més fort és **Groudon** amb una ponderació de: 2,44 punts i el més dèbil és **Cosmog** amb una ponderació de 0 punts.", "_____no_output_____" ], [ "### *Type1* i *type2*\n\nCada *Pokemon* és d'un tipus concret **type1** o és una combinació de **type1** i **type2**, per aquest motiu, alguns d'ells no tenen **type2** (com s'ha vist en l'apartat anterior).\n\n### *Pokemons* d'un únic tipus i de doble tipus.", "_____no_output_____" ] ], [ [ "single_type_pokemons = []\ndual_type_pokemons = []\n\nfor i in pokemon_info_df.index:\n if(pokemon_info_df.type2[i] != \"unknown\"):\n single_type_pokemons.append(pokemon_info_df.name[i])\n else:\n dual_type_pokemons.append(pokemon_info_df.name[i])\n \nprint(\"Nombre de Pokemons amb un únic tipus %d: \" % len(single_type_pokemons))\nprint(\"Nombre de Pokemons amb dos tipus %d: \" % len(dual_type_pokemons))", "_____no_output_____" ] ], [ [ "Hi ha **417** d'un únic 
tipus (**52,1%**) i **384** amb doble tipus (**47,9%**), això es representa en el següent diagrama de sectors.", "_____no_output_____" ] ], [ [ "data= [len(single_type_pokemons), len(dual_type_pokemons)]\ncolors= [\"#ced1ff\",\"#76bfd4\"]\n\nplt.pie(data, labels=[\"Tipus únic\",\"Doble tipus\"], \n startangle=90, explode=(0, 0.15), \n shadow=True, colors=colors, autopct='%1.1f%%')\n\nplt.axis(\"equal\")\n\nplt.title(\"Tipus únic vs Doble tipus\")\n\nplt.tight_layout()\n\nplt.show()", "_____no_output_____" ] ], [ [ "### Distribució en base al tipus\n\nEn els següents diagrames de barres es mostra la distribució per **type1** i per **type2**", "_____no_output_____" ] ], [ [ "def plot_distribution(data, col, xlabel, ylabel, title):\n types = pd.value_counts(data[col])\n\n fig, ax = plt.subplots()\n fig.set_size_inches(15,7)\n sns.set_style(\"whitegrid\")\n\n ax = sns.barplot(x=types.index, y=types, data=data)\n ax.set_xticklabels(ax.get_xticklabels(), rotation=75, fontsize=12)\n ax.set(xlabel=xlabel, ylabel=ylabel)\n ax.set_title(title)", "_____no_output_____" ], [ "plot_distribution(pokemon_info_df, \"type1\", \"Tipus únic\", \"Nombre\", \n \"Distribució dels Pokemons amb un únic tipus\")", "_____no_output_____" ], [ "plot_distribution(pokemon_info_df, \"type2\", \"Doble tipus\", \"Nombre\", \n \"Distribució dels Pokemons amb un únic tipus\")", "_____no_output_____" ] ], [ [ "### Combinació de tipus\n\nAra volem saber quina combinació de tipus **type1** i **type2** hi ha entre tots els Pokemons.", "_____no_output_____" ] ], [ [ "plt.subplots(figsize=(15, 13))\n\nsns.heatmap(\n pokemon_info_df[pokemon_info_df[\"type2\"] != \"unknown\"].groupby([\"type1\", \"type2\"]).size().unstack(),\n cmap=\"Blues\",\n linewidths=1,\n annot=True\n)\n\nplt.xticks(rotation=35)\nplt.show()", "_____no_output_____" ] ], [ [ "Com es pot veure, la combinació de tipus més comuna és **normal/volador** amb **26 Pokemons** seguida per la combinació **planta/verí** i **insecte/volador** amb **14** i **13 Pokemons** respectivament.\n\n**Nota:** En aquest mapa de calor s'han filtrat tots aquells Pokemons sense segon tipus.", "_____no_output_____" ], [ "### Pes i alçada\n\nLa variable **height_m** conté l'alçada en metres, mentre que la variable **weight_kg** conté el pes en Kilgorams. Així que, quins són els *Pokemons* amb major i menor alçada? 
I els de major i menor pes?", "_____no_output_____" ] ], [ [ "tallest_m = max(pokemon_info_df['height_m'])\nshortest_m = tallest_m\nfor i in pokemon_info_df.index:\n if pokemon_info_df.height_m[i] > 0 and pokemon_info_df.height_m[i] < shortest_m:\n shortest_m = pokemon_info_df.height_m[i]\n\ntallest_pokemon = pokemon_info_df[pokemon_info_df['height_m'] == tallest_m]\nshortest_pokemon = pokemon_info_df[pokemon_info_df['height_m'] == shortest_m]\n\nprint(\"Els Pokemons més alts són:\")\nfor i in tallest_pokemon.index:\n print(\"\\t%s amb %.2f metres\" % (tallest_pokemon.name[i], tallest_pokemon.height_m[i]))\n\nprint(\"\\nEls Pokemons més petits són:\")\nfor i in shortest_pokemon.index:\n print(\"\\t%s amb %.2f metres\" % (shortest_pokemon.name[i], shortest_pokemon.height_m[i]))", "_____no_output_____" ], [ "max_weight = max(pokemon_info_df['weight_kg'])\nlight_kg = max_weight\nfor i in pokemon_info_df.index:\n if pokemon_info_df.weight_kg[i] > 0 and pokemon_info_df.weight_kg[i] < light_kg:\n light_kg = pokemon_info_df.weight_kg[i]\n\nheviest_pokemon = pokemon_info_df[pokemon_info_df['weight_kg'] == max_weight]\nlightest_pokemon = pokemon_info_df[pokemon_info_df['weight_kg'] == light_kg]\n\nprint(\"Els Pokemons amb més pes són:\")\nfor i in heviest_pokemon.index:\n print(\"\\t%s amb %.2f kilograms\" % (heviest_pokemon.name[i], heviest_pokemon.weight_kg[i]))\n\nprint(\"\\nEls Pokemons amb menys pes són:\")\nfor i in lightest_pokemon.index:\n print(\"\\t%s amb %.2f kilograms\" % (lightest_pokemon.name[i], lightest_pokemon.weight_kg[i]))", "_____no_output_____" ] ], [ [ "#### Distribució de l'alçada i del pes\n\nAra es vol veure quina és la distribució de l'alçada i pes dels Pokemons, per això es pot utilitzar histogrames i diagrames de caixa.", "_____no_output_____" ] ], [ [ "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,5))\n\nsns.distplot(pokemon_info_df['height_m'], color='g', axlabel=\"Alçada (m)\", ax=ax1)\nsns.distplot(pokemon_info_df['weight_kg'], color='y', axlabel=\"Pes (kg)\", ax=ax2)", "_____no_output_____" ], [ "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n\nsns.boxplot(x=pokemon_info_df[\"height_m\"], color=\"g\", orient=\"v\", ax=ax1)\nsns.boxplot(x=pokemon_info_df[\"weight_kg\"], color=\"y\", orient=\"v\", ax=ax2)", "_____no_output_____" ] ], [ [ "Tots aquells Pokemons amb una alçada inferior a Com es pot veure, hi ha Pokemons molt dispersos a la resta, es con", "_____no_output_____" ], [ "### Velocitat\n\nQuins són els *Pokemons* més ràpids i quins els més lents?", "_____no_output_____" ] ], [ [ "fast_value = max(pokemon_info_df['speed'])\nslow_value = min(pokemon_info_df[pokemon_info_df['speed'] != 0]['speed'])\n\nfastest_pokemon = pokemon_info_df[pokemon_info_df['speed'] == max(pokemon_info_df['speed'])]\nslowest_pokemon = pokemon_info_df[pokemon_info_df['speed'] == slow_value]\n\nprint(\"Els Pokemons més ràpids són:\")\nfor i in fastest_pokemon.index:\n print(\"\\t%s amb una velocitat de %.f punts\" %(fastest_pokemon.name[i], fastest_pokemon.speed[i]))\n\nprint(\"Els Pokemons més lents són:\")\nfor i in slowest_pokemon.index:\n print(\"\\t%s amb una velocitat de %.f punts\" %(slowest_pokemon.name[i], slowest_pokemon.speed[i]))", "_____no_output_____" ] ], [ [ "#### Distribució de la velocitat", "_____no_output_____" ] ], [ [ "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\nsns.distplot(pokemon_info_df['speed'], color=\"orange\", ax=ax1)\nsns.boxplot(pokemon_info_df['speed'], color=\"orange\", orient=\"v\", ax=ax2)", "_____no_output_____" ] ], [ [ "### 
Atac i defensa\n\nEn els següents gràfics es comparen: l'atac i l'atac especial base, la defensa i la defensa especial base.", "_____no_output_____" ] ], [ [ "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n\nsns.distplot(pokemon_info_df['attack'], color=\"#B8F0FC\", hist=False, ax=ax1, label=\"Attack\")\nsns.distplot(pokemon_info_df[\"sp_attack\"], color=\"#52BAD0\", hist=False, ax=ax1, label=\"S. Attack\")\nax1.title.set_text(\"Attack vs Special Attack\")\n\nax2.title.set_text(\"Defense vs Special defense\")\nsns.distplot(pokemon_info_df['defense'], color=\"#C6FFBF\", hist=False, ax=ax2, label=\"Defense\")\nsns.distplot(pokemon_info_df[\"sp_defense\"], color=\"#61D052\", hist=False, ax=ax2, label=\"S. Defense\")", "_____no_output_____" ] ], [ [ "En els següents gràfics es comparen: l'atac i la defensa base, l'atac especial i la defensa especial base.", "_____no_output_____" ] ], [ [ "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n\nax1.title.set_text(\"Attack vs Defense\")\nsns.distplot(pokemon_info_df['attack'], color=\"#B8F0FC\", hist=False, ax=ax1, label=\"Attack\")\nsns.distplot(pokemon_info_df[\"defense\"], color=\"#52BAD0\", hist=False, ax=ax1, label=\"Defense\")\n\nax2.title.set_text(\"Special Attack vs Special defense\")\nsns.distplot(pokemon_info_df['sp_attack'], color=\"#C6FFBF\", hist=False, ax=ax2, label=\"Special Attack\")\nsns.distplot(pokemon_info_df[\"sp_defense\"], color=\"#61D052\", hist=False, ax=ax2, label=\"Special Defense\")", "_____no_output_____" ] ], [ [ "## [4]. Distribució de les variables\n\nEn aquest apartat s'estudiarà la distribució que segueixen algunes de les variables i s'aplicaran contrastos de hipòtesi amb la finalitat d'extreure conclusions en base al tipus dels Pokemons.\n\nS'ha decidit estudiar les variables *atac*, *hp*, *defensa* i *velocitat*. \n\n### Normalitat en la distribució\n\nS'aplica un test de normalitat *shapiro-wilks* per veure si segueixen una [distribució normal](https://en.wikipedia.org/wiki/Normal_distribution). Aquest test planteja el següent contrast de hipòtesis:\n\n$H_{0}: X$ és normal\n\n$H_{1}: X$ no és normal", "_____no_output_____" ] ], [ [ "sp.stats.shapiro(pokemon_info_df['attack'].to_numpy())", "_____no_output_____" ], [ "sp.stats.shapiro(pokemon_info_df['hp'].to_numpy())", "_____no_output_____" ], [ "sp.stats.shapiro(pokemon_info_df['defense'].to_numpy())", "_____no_output_____" ], [ "sp.stats.shapiro(pokemon_info_df['speed'].to_numpy())", "_____no_output_____" ], [ "sp.stats.shapiro(pokemon_info_df['height_m'].to_numpy())", "_____no_output_____" ], [ "sp.stats.shapiro(pokemon_info_df['weight_kg'].to_numpy())", "_____no_output_____" ] ], [ [ "Els testos per les variables *attack*, *hp*, *defense*, *speed*, *height_m* i *weight_kg* han obtingut un *p-value* inferior al nivell de significació ($\\alpha$ = 0.05), i per tant hi ha evidències estadístqiues suficients per rebutjar la hipòtesi nul·la i acceptar que no segueixen una distribució normal.\n\n### Homocedasticitat\n\n**Pes** en els Pokemons de tipus **roca** i **foc**\n\nAra es vol saber si hi ha diferència en la variancia (heterocedasticitat) o no (homocedasticitat) per la variable *weight_kg* en base a si el seu primer tipus és roca (*rock*) o foc (*fire*). 
Per això s'aplica un test de **Fligner-Killeen** (s'aplica aquest test perquè no és paramètric i com s'ha vist anteriorment, les dades no han superat el test de normalitat) on el contrast és el següent:\n\n$H_{0}$: La variància entre $X_{0}$ i $X_{1}$ és homogenea.\n\n$H_{1}$: La variància entre $X_{0}$ i $X_{1}$ és heterogenea.\n\nEl contrast es fa amb un nivell de significació de:\n\n$\\alpha = 0.05$", "_____no_output_____" ] ], [ [ "rock_pokemons_array = pokemon_info_df[(pokemon_info_df['type1'] == 'rock') \\\n & (pokemon_info_df['weight_kg'] != 0)]['weight_kg'].to_numpy()\nfire_pokemons_array = pokemon_info_df[(pokemon_info_df['type1'] == 'fire') \\\n & (pokemon_info_df['weight_kg'] != 0)]['weight_kg'].to_numpy()\nsp.stats.fligner(rock_pokemons_array, fire_pokemons_array)", "_____no_output_____" ] ], [ [ "Com que s'ha obtingut un ***p-value*** de **0,044** (0,044 < $\\alpha$), **hi ha suficients evidències estadístiques per rebutjar la hipótesi nul·la**, i per tant, s'accepta amb un **nivell de confiança del 95%** que **hi ha diferències entre les variancies dels Pokemons de tipus roca i els de tipus foc.**\n\n\n### Contrast - Pes dels Pokemons de tipus roca i foc\n\nAra es vol contestar a la pregunta: \n\nEs pot considerar que els Pokemons de tipus roca i foc tenen la mateixa mitja de pes? Per això es pot aplicar un **t-test** on el contrast d'hipòtesis és:\n\n$H_0:$ $\\mu_{1}-\\mu_{2} = 0$ (la mitja de **weight_kg** és igual pels Pokemons de tipus roca i foc)\n\n$H_1:$ $\\mu_{1}-\\mu_{2} \\ne 0$ (la mitja de **weight_kg** no és igual pels Pokemons de tipus roca i foc)\n\nOn:\n\n$\\alpha = 0.05$\n\n**Nota:** Tot i que la variable **weight_kg** no segueix una distribució normal, com que la mida de les dades és considerablement superior a 30, es pot assumir normalitat pel teorema del límit central.", "_____no_output_____" ] ], [ [ "sp.stats.ttest_ind(a = rock_pokemons_array, b = fire_pokemons_array)", "_____no_output_____" ] ], [ [ "Com que s'ha obtingut un ***p-value*** de **0,137**, **no hi ha evidències estadístiques suficients per rebutjar la hipòtesi nul·la**, i per tant **es pot considerar que la mitja de pes entre els Pokemons de tipus roca i de tipus foc és el mateix.**\n\n\n## [4, 5] Anàlisi predictiu\n\nEn aquest punt **es dona per finalitzat l'anàlisi descriptiu i es passa a l'anàlisi predictiu** amb l'objectiu de crear un model que permeti **adivinar quin guanyaria un combat entre dos *Pokemons***. El problema que es vol resoldre **és un problema de classificació amb dades etiquetades** (model supervisat), i per això, es crearan diversos models simples on es tindrà en compte l'accuracy com a únic paràmetre de bondat del model.\n\nPer mesurar l'*accuracy* s'aplicarà la tècnica de *k-fold cross validation* amb un valor de 10 per la *k*\n\n### Pokemon_battles_df\n\nEl *dataset* analitzat fins el moment no conté la informació relacionada als combats, i per això, es complementaran les dades amb el *dataset pokemon_battles_df* amb els següents camps:\n\n* ***First_pokemon***: Índex de la *Pokedex* pel primer contrincant.\n* ***Second_pokemon***: Índex de la *Pokedex* pel segon contrincant.\n* ***Winner***: Índex de la *Pokedex* del guanyador.", "_____no_output_____" ] ], [ [ "pokemon_battles_df", "_____no_output_____" ] ], [ [ "El primer que cal fer es relacionar el *dataset* que conté la informació dels Pokemons (*pokemon_info_df*) amb el *dataset* dels combats (*pokemon_battles_df*). 
\n\nPer això apliquem dos *joins*, el primer que relaciona aquests dos datasets per obtenir les dades del primer Pokemon i el segon *join* on es tornen a relacionar aquest dos *datasets*, però aquesta vegada per obtenir la informació del segón *Pokemon* implicat.", "_____no_output_____" ] ], [ [ "pokemon_battles_info_df = pokemon_battles_df.merge(pokemon_info_df, \\\n left_on='First_pokemon', \\\n right_on='pokedex_number' \\\n ).merge(pokemon_info_df, \\\n left_on='Second_pokemon', \\\n right_on='pokedex_number' \\\n )[['First_pokemon', 'Second_pokemon', \\\n 'Winner', 'name_x', 'attack_x', \\\n 'sp_attack_x', 'defense_x', 'sp_defense_x', \\\n 'hp_x', 'speed_x', 'type1_x','is_legendary_x', \\\n 'name_y', 'attack_y', \\\n 'sp_attack_y', 'defense_y', 'sp_defense_y', \\\n 'hp_y', 'speed_y', 'type1_y','is_legendary_y']]", "_____no_output_____" ] ], [ [ "El *dataset* resultant conté per nom *field_x* el resultat del primer join i *field_y* pel resultat del segon join. Apliquem un *rename* perquè els camps *field_x* començin per *First_pokemon* i els camps *field_y* per *Second_pokemon*", "_____no_output_____" ] ], [ [ "pokemon_battles_info_df.rename(columns={'name_x': 'First_pokemon_name', 'attack_x': 'First_pokemon_attack', \\\n 'sp_attack_x': 'First_pokemon_sp_attack', 'defense_x': 'First_pokemon_defense', \\\n 'sp_defense_x': 'First_pokemon_sp_defense', 'hp_x': 'First_pokemon_hp', \\\n 'speed_x': 'First_pokemon_speed', 'type1_x': 'First_pokemon_type1', \\\n 'is_legendary_x': 'First_pokemon_is_legendary', 'name_y': 'Second_pokemon_name', \\\n 'attack_y': 'Second_pokemon_attack', 'sp_attack_y': 'Second_pokemon_sp_attack', \\\n 'defense_y': 'Second_pokemon_defense', 'sp_defense_y': 'Second_pokemon_sp_defense', \\\n 'hp_y': 'Second_pokemon_hp', 'speed_y': 'Second_pokemon_speed', \\\n 'type1_y': 'Second_pokemon_type1', 'is_legendary_y': 'Second_pokemon_is_legendary'}, \\\n inplace=True)", "_____no_output_____" ] ], [ [ "### Camps *diff_?*\n\nPer construir el model predictiu cal calcular els camps amb les diferències entre les propietats implicades. Aquestes s'anomenaran *Diff_?*. Per exemple, la diferència d'atac seria: \n\n*Diff_attack* = *First_pokemon_attack* - *Second_pokemon_attack*", "_____no_output_____" ] ], [ [ "pokemon_battles_info_df['Diff_attack'] = pokemon_battles_info_df['First_pokemon_attack'] - pokemon_battles_info_df['Second_pokemon_attack']\n\npokemon_battles_info_df['Diff_sp_attack'] = pokemon_battles_info_df['First_pokemon_sp_attack'] - pokemon_battles_info_df['Second_pokemon_sp_attack']\n\npokemon_battles_info_df['Diff_defense'] = pokemon_battles_info_df['First_pokemon_defense'] - pokemon_battles_info_df['Second_pokemon_defense']\n\npokemon_battles_info_df['Diff_sp_defense'] = pokemon_battles_info_df['First_pokemon_sp_defense'] - pokemon_battles_info_df['Second_pokemon_sp_defense']\n\npokemon_battles_info_df['Diff_hp'] = pokemon_battles_info_df['First_pokemon_hp'] - pokemon_battles_info_df['Second_pokemon_hp']\n\npokemon_battles_info_df['Diff_speed'] = pokemon_battles_info_df['First_pokemon_speed'] - pokemon_battles_info_df['Second_pokemon_speed']", "_____no_output_____" ] ], [ [ "### Camp *winner_result*\nCom que l'objectiu d'aquest model predictiu és fer una classificació on el resultat sigui 0 si guanya el primer *Pokemon* o 1 en cas contrari. 
Afegim el camp ***Winner_result*** amb aquest càlcul.", "_____no_output_____" ] ], [ [ "pokemon_battles_info_df['Winner_result'] = np.where(\\\n pokemon_battles_info_df['First_pokemon'] == \\\n pokemon_battles_info_df['Winner'], 0, 1)", "_____no_output_____" ] ], [ [ "### Seleccionar els camps del model\n\nAra creem el *dataset* ***pokemon_battles_pred_df*** amb els camps que s'usaran com a predictors, que són: \n\n* ***Diff_attack***\n* ***Diff_sp_attack***\n* ***Diff_defense***\n* ***Diff_sp_defense***\n* ***Diff_hp***\n* ***Diff_speed***\n* ***First_pokemon_is_legendary***\n* ***Second_pokemon_is_legendary***\n\nI el *dataset* ***pokemon_battles_res_df*** amb el camp resultat que és *Winner_result*", "_____no_output_____" ] ], [ [ "pokemon_battles_pred = pokemon_battles_info_df[['Diff_attack', 'Diff_sp_attack', \\\n 'Diff_defense', 'Diff_sp_defense', \\\n 'Diff_hp', 'Diff_speed', \\\n 'First_pokemon_is_legendary',\n 'Second_pokemon_is_legendary']].values\n\npokemon_battles_res = pokemon_battles_info_df['Winner_result'].values", "_____no_output_____" ] ], [ [ "### Escalar les dades\n\nSi els rangs de valors per les variables utilitzades en el model és considerablment diferent, poden causar distorsions en els resultats obtinguts. Per mostrar la seva distribució es pot utilitzar un *boxplot*.", "_____no_output_____" ] ], [ [ "plt.subplots(figsize=(15,10))\nsns.boxplot(data=pokemon_battles_pred[:,0:6], orient='v')", "_____no_output_____" ] ], [ [ "Com es pot observar hi ha diferència entre el rang de les dades, per això es pot aplicar un escalat robust.", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import RobustScaler\nrs = RobustScaler()\n\nrs.fit(pokemon_battles_pred)\n\npokemon_battles_pred = rs.transform(pokemon_battles_pred)", "_____no_output_____" ], [ "plt.subplots(figsize=(15,10))\nsns.boxplot(data=pokemon_battles_pred[:,0:6], orient='v')", "_____no_output_____" ] ], [ [ "### Separar les dades en *dades d'entrenament* i *dades de prova*\n\nCom que és un **model supervisat**, cal separar les dades en dades d'entrenament i dades de prova. 
El model utilitzarà les dades d'entrenament per aprendre (fase d'entrenament) i les dades de prova per comprovar si el que ha aprés és o no correcte (fase de test).\n\nCom que **hi ha una quantitat relativament alta de registres** (38,743), s'ha decidit utilitzar un **80% de les dades per a l'entrenament** (30,994 registres) i **un 20% pel test** (7,749 registres).", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\n#S'ha decidit assignar el valor 23 a la llavor per així obtenir sempre el mateix resultat.\npokemon_battle_pred_train, pokemon_battle_pred_test, \\\npokemon_battle_res_train, pokemon_battle_res_test = train_test_split(\\\n pokemon_battles_pred, \\\n pokemon_battles_res, \\\n test_size=0.2, random_state = 23)", "_____no_output_____" ] ], [ [ "### Crear el model de regressió logística", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\nclassifier = LogisticRegression(random_state = 0)\nclassifier.fit(pokemon_battle_pred_train, pokemon_battle_res_train)", "_____no_output_____" ], [ "pokemon_battle_results = classifier.predict(pokemon_battle_pred_test)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\ncm = confusion_matrix(pokemon_battle_res_test, pokemon_battle_results)", "_____no_output_____" ], [ "print(cm)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\naccuracies = cross_val_score(estimator = classifier, X = pokemon_battles_pred, y = pokemon_battles_res, cv = 10, scoring='accuracy')\nprint('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))", "_____no_output_____" ] ], [ [ "**Accuracy:** 87,97%", "_____no_output_____" ], [ "### K nearest Neighbours (*Knn*)", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import KNeighborsClassifier\nknn_classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)\nknn_classifier.fit(X=pokemon_battle_pred_train, y=pokemon_battle_res_train)", "_____no_output_____" ], [ "knn_pokemon_battle_results = knn_classifier.predict(pokemon_battle_pred_test)", "_____no_output_____" ], [ "knn_cm = confusion_matrix(pokemon_battle_res_test, knn_pokemon_battle_results)", "_____no_output_____" ], [ "print(knn_cm)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\naccuracies = cross_val_score(estimator = knn_classifier, X = pokemon_battles_pred, y = pokemon_battles_res, cv = 10, scoring='accuracy')\nprint('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))", "_____no_output_____" ] ], [ [ "**Accuracy:** 87,58%", "_____no_output_____" ], [ "### Support Vector Machine - SVM", "_____no_output_____" ] ], [ [ "from sklearn.svm import SVC\nsvm_classifier = SVC(kernel='rbf', random_state=0)", "_____no_output_____" ], [ "svm_classifier = svm_classifier.fit(X=pokemon_battle_pred_train, y=pokemon_battle_res_train)", "_____no_output_____" ], [ "svm_pokemon_battle_results = svm_classifier.predict(X=pokemon_battle_pred_test)\nsvm_cm = confusion_matrix(pokemon_battle_res_test, svm_pokemon_battle_results)", "_____no_output_____" ], [ "print(svm_cm)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\naccuracies = cross_val_score(estimator = svm_classifier, X = pokemon_battles_pred, y = pokemon_battles_res, cv = 10, scoring='accuracy')\nprint('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))", "_____no_output_____" ] ], [ [ "**Accuracy:** 90,92%", "_____no_output_____" ], [ "### 
Classificació per xarxa bayesiana (*Naive bayes*)", "_____no_output_____" ] ], [ [ "from sklearn.naive_bayes import GaussianNB\nnb_classifier = GaussianNB()", "_____no_output_____" ], [ "nb_classifier = nb_classifier.fit(X=pokemon_battle_pred_train, y=pokemon_battle_res_train)\nnb_pokemon_battle_results = nb_classifier.predict(X=pokemon_battle_pred_test)", "_____no_output_____" ], [ "nb_cm = confusion_matrix(pokemon_battle_res_test, nb_pokemon_battle_results)\nprint(nb_cm)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\naccuracies = cross_val_score(estimator = nb_classifier, X = pokemon_battles_pred, y = pokemon_battles_res, cv = 10, scoring='accuracy')\nprint('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))", "_____no_output_____" ] ], [ [ "**Accuracy:** 79,95%", "_____no_output_____", "### Random Forest Classifier (RFC)", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestClassifier", "_____no_output_____" ], [ "rfc_classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)\nrfc_classifier = rfc_classifier.fit(X=pokemon_battle_pred_train, y=pokemon_battle_res_train)", "_____no_output_____" ], [ "rfc_pokemon_battle_results = rfc_classifier.predict(X=pokemon_battle_pred_test)\nrfc_cm = confusion_matrix(pokemon_battle_res_test, rfc_pokemon_battle_results)", "_____no_output_____" ], [ "print(rfc_cm)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\naccuracies = cross_val_score(estimator = rfc_classifier, X = pokemon_battles_pred, y = pokemon_battles_res, cv = 10, scoring='accuracy')\nprint('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))", "_____no_output_____" ] ], [ [ "**Accuracy:** 92,25%", "_____no_output_____", "### Millor model\n\nEl model que ha obtingut un millor *accuracy* ha estat el *Random Forest Classifier* amb un encert del 92,25%.\n\n### Millorar el model (afegir el tipus dels *Pokemons*)\n\nCom s'ha mostrat en apartats anteriors, **cada *Pokemon* té un tipus base** i pot tenir un segon tipus. Evidentment, aquestes propietats **influeixen a l'hora de determinar el guanyador en un combat**, per exemple, un Pokemon d'aigua més dèbil (menys atac, defensa, vida, etc.) pot guanyar amb més facilitat a un Pokemon de foc que tingui més elevades les característiques, que a un Pokemon de planta.\n\nPer això, anem a calcular una nova propietat que determini l'eficàcia en base al tipus de Pokemon. 
Aquesta propietat vindrà definida en funció del primer i segon tipus del Pokemon (*type1* i *type2*) i la seva debilitat en vers als altres tipus (*against_?*).\n\nD'aquesta manera, si comparem els Pokemons *Pikachu* (elèctric/elèctric) i *Onix* (roca/terra), té avantatge l'Ònix perquè no té debilitat en vers a l'electricitat (*against_electric = 0*) i en canvi, en Pikachu té debilitat per la roca (*against_rock = 1*) i per la terra (*against_ground = 2*).\n\nPer obtenir un valor numèric, s'aplica la formula:\n\n$f(p1, p2) = g(p1, p2) - g(p2, p1)$\n\nOn:\n\n* $g(p1, p2) = dbt1(p1, p2)*ft1 + dbt2(p1, p2)*ft2$\n* $dbt1(p1, p2)$ = Debilitat del *Pokemon* p2 en vers al primer tipus del *Pokemon* p1.\n* $dbt2(p1, p2)$ = Debilitat del *Pokemon* p2 en vers el segon tipus del *Pokemon* p1.\n* $ft1$ = Factor arbitrari per ponderar el tipus 1\n* $ft2$ = Factor arbitrari per ponderar el tipus 2\n\nD'aquesta manera, per l'exemple de l'*Onix* vs *Pikachu* donat:\n\n* *Onix*: *type1* = rock, *type2* = ground, *against_electric* = 0\n* *Pikachu*: *type1* = eletric, *type2* = eletric, *against_rock* = 1, *against_ground*= 2\n* $ft1$ = 1\n* $ft2$ = 0.3\n\ntenim:\n\n$f(Onix, Pikachu) = (1*1 + 2*0.3) - (0*1 + 0*0.3) = 1.6$\n\nCom era d'esperar, degut que els *Pokemons* de tipus roca i terra tenen avantatge davant dels *Pokemons* de tipus elèctric, s'ha obtingut un valor positiu.", "_____no_output_____" ] ], [ [ "def effectivity_against(pokemon1, pokemon2, effectivity_type1, effectivity_type2):\n type1 = pokemon1['type1'].iloc[0]\n type2 = pokemon1['type2'].iloc[0]\n against_type1 = pokemon2['against_'+type1].iloc[0]\n if type2 == 'unknown':\n return against_type1 * effectivity_type1\n else:\n against_type2 = pokemon2['against_'+type2].iloc[0]\n return (against_type1 * effectivity_type1) + (against_type2 * effectivity_type2)", "_____no_output_____" ], [ "def balance_effectivity_against(pokemon1, pokemon2, effectivity_type1 = 1, effectivity_type2 = 0.3):\n return effectivity_against(pokemon1, pokemon2, \\\n effectivity_type1, effectivity_type2) - effectivity_against(pokemon2, pokemon1, \\\n effectivity_type1, effectivity_type2)", "_____no_output_____" ], [ "def balance_effectivity_against_by_pokedex_number(pokemon_number1, pokemon_number2, \\\n effectivity_type1 = 1, effectivity_type2 = 0.3):\n pokemon1 = pokemon_info_df[pokemon_info_df['pokedex_number'] == pokemon_number1]\n pokemon2 = pokemon_info_df[pokemon_info_df['pokedex_number'] == pokemon_number2]\n return balance_effectivity_against(pokemon1, pokemon2, effectivity_type1, effectivity_type2)", "_____no_output_____" ] ], [ [ "Ara cal afegir la propietat *balance_effectivity* al *dataframe pokemon_battles_info_df*", "_____no_output_____" ] ], [ [ "pokemon_battles_info_df['balance_effectivity'] = [\\\n balance_effectivity_against_by_pokedex_number(\\\n row['First_pokemon'], \\\n row['Second_pokemon']) \\\n for index, row in pokemon_battles_df.iterrows()\\\n ]", "_____no_output_____" ] ], [ [ "S'afegeix la columna *balance_effectivity* al *dataframe pokemon_battles_pred*", "_____no_output_____" ] ], [ [ "pokemon_battles_improved_pred = pokemon_battles_info_df[['Diff_attack', 'Diff_sp_attack', \\\n 'Diff_defense', 'Diff_sp_defense', \\\n 'Diff_hp', 'Diff_speed', \\\n 'First_pokemon_is_legendary',\n 'Second_pokemon_is_legendary',\n 'balance_effectivity']].values", "_____no_output_____" ] ], [ [ "Distribució de les variables", "_____no_output_____" ] ], [ [ "plt.subplots(figsize=(15,10))\nsns.boxplot(data=pokemon_battles_improved_pred[:,[0,1,2,3,4,5,8]], 
orient='v')", "_____no_output_____" ] ], [ [ "Es normalitzen altre vegada les variables numèriques.", "_____no_output_____" ] ], [ [ "rs = RobustScaler()\n\nrs.fit(pokemon_battles_improved_pred)\n\npokemon_battles_improved_pred = rs.transform(pokemon_battles_improved_pred)\n\nplt.subplots(figsize=(15,10))\nsns.boxplot(data=pokemon_battles_improved_pred[:,[0,1,2,3,4,5,8]], orient='v')", "_____no_output_____" ] ], [ [ "Un cop escalades, tornem a separar-les en un conjunt d'entrenament i un de prova.", "_____no_output_____" ] ], [ [ "pokemon_battles_improved_pred_train, pokemon_battles_improved_pred_test, \\\npokemon_battles_improved_res_train, pokemon_battles_improved_res_test = train_test_split(\\\n pokemon_battles_improved_pred, \\\n pokemon_battles_res, \\\n test_size=0.2, random_state = 23)", "_____no_output_____" ] ], [ [ "#### *Random forest* millorat\n\nCalculat l'atribut *balance_effectivity* que té en compte el tipus dels Pokemons involucrats en el combat, tornem a crear el model basat en *random forest* (ja que és amb el que hem obtingut un major *accuracy*) per veure si millorem els resultats.", "_____no_output_____" ] ], [ [ "improved_rfc_classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)\nimproved_rfc_classifier = improved_rfc_classifier.fit(\\\n X=pokemon_battles_improved_pred_train, \\\n y=pokemon_battles_improved_res_train)", "_____no_output_____" ], [ "improved_rfc_pokemon_battle_results = improved_rfc_classifier.predict(X=pokemon_battles_improved_pred_test)\nimproved_rfc_cm = confusion_matrix(pokemon_battles_improved_res_test, improved_rfc_pokemon_battle_results)", "_____no_output_____" ], [ "print(improved_rfc_cm)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\naccuracies = cross_val_score(estimator = improved_rfc_classifier, X = pokemon_battles_improved_pred, y = pokemon_battles_res, cv = 10, scoring='accuracy')\nprint('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))", "_____no_output_____" ] ], [ [ "**Accuracy:** 92,56%\n\n**Nota:** Afegint la variable *balance_effectivity* augmenta la complexitat del model i millora l'accuracy només en un 0,31%.", "_____no_output_____" ], [ "### [Corba ROC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)\n\nLa corba característica pel model obtingut és:", "_____no_output_____" ] ], [ [ "from sklearn.metrics import roc_curve, auc\n\nfpr, tpr, _ = roc_curve(y_true=pokemon_battles_improved_res_test , y_score=improved_rfc_pokemon_battle_results)\nauc = auc(fpr, tpr)", "_____no_output_____" ], [ "plt.subplots(figsize=(15, 8))\nplt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % auc)\nplt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate (FPR)')\nplt.ylabel('True Positive Rate (TPR)')\nplt.title('ROC - Classificació de combats')\nplt.legend(loc=\"lower right\")\nplt.show()", "_____no_output_____" ] ], [ [ "## Torneig *Pokemon*\n\nPer comprovar l'efectivitat del model de predicció creat s'ha decidit realitzar un Torneig *Pokemon*, on hi participen **16 *Pokemons***, **8** dels quals **són llegendaris**. El Torneig consta de **8 combats** dividits en **4 fases**. 
", "_____no_output_____" ] ], [ [ "# Construeix les dades del combat que enfronta el pokemon1 contra el pokemon2, \n#les dades retornades ja estan normalitzades.\ndef build_fight(name_pokemon1, name_pokemon2):\n pokemon1 = pokemon_info_df[pokemon_info_df['name'] == name_pokemon1].iloc[0]\n pokemon2 = pokemon_info_df[pokemon_info_df['name'] == name_pokemon2].iloc[0]\n return rs.transform(pd.DataFrame.from_dict({'Diff_attack': [pokemon1['attack']-pokemon2['attack']],\\\n 'Diff_sp_attack': [pokemon1['sp_attack']-pokemon2['sp_attack']],\\\n 'Diff_defense': [pokemon1['defense']-pokemon2['defense']],\\\n 'Diff_sp_defense': [pokemon1['sp_defense']-pokemon2['sp_defense']],\\\n 'Diff_hp': [pokemon1['hp']-pokemon2['hp']],\\\n 'Diff_speed': [pokemon1['speed']-pokemon2['speed']],\\\n 'First_pokemon_is_legendary': [pokemon1['is_legendary']],\\\n 'Second_pokemon_is_legendary': [pokemon2['is_legendary']],\\\n 'balance_effectivity': [balance_effectivity_against_by_pokedex_number(\\\n pokemon1['pokedex_number'], pokemon2['pokedex_number'])]}))", "_____no_output_____" ], [ "# Realitza una lluita que enfronta el Pokemon1 contra el Pokemon2 i \n# fa la predicció del guanyador amb el classifier\ndef fight(classifier, name_pokemon1, name_pokemon2):\n pokemon_fight = build_fight(name_pokemon1, name_pokemon2)\n \n #Make the prediction\n result = classifier.predict_proba(X=pokemon_fight)\n \n if result[0][0] > 0.5:\n print('The winner is: {} with a probability of: {}%'.format(name_pokemon1, (result[0][0]*100)))\n else:\n print('The winner is: {} with a probability of: {}%'.format(name_pokemon2, (result[0][1]*100)))", "_____no_output_____" ] ], [ [ "### Round 1\n\n![title](img/torneig/round__1.jpg)", "_____no_output_____" ] ], [ [ "fight1 = fight(classifier=improved_rfc_classifier, name_pokemon1='Snorlax', name_pokemon2='Ninetales')", "_____no_output_____" ], [ "fight2 = fight(classifier=improved_rfc_classifier, name_pokemon1='Gengar', name_pokemon2='Altaria')", "_____no_output_____" ], [ "fight3 = fight(classifier=improved_rfc_classifier, name_pokemon1='Raikou', name_pokemon2='Mew')", "_____no_output_____" ], [ "fight4 = fight(classifier=improved_rfc_classifier, name_pokemon1='Articuno', name_pokemon2='Kommo-o')", "_____no_output_____" ], [ "fight5 = fight(classifier=improved_rfc_classifier, name_pokemon1='Swampert', name_pokemon2='Solgaleo')", "_____no_output_____" ], [ "fight6 = fight(classifier=improved_rfc_classifier, name_pokemon1='Nidoking', name_pokemon2='Rayquaza')", "_____no_output_____" ], [ "fight7 = fight(classifier=improved_rfc_classifier, name_pokemon1='Mewtwo', name_pokemon2='Celebi')", "_____no_output_____" ], [ "fight8 = fight(classifier=improved_rfc_classifier, name_pokemon1='Arceus', name_pokemon2='Milotic')", "_____no_output_____" ] ], [ [ "### Round 2\n![title](img/torneig/round_2.jpg)", "_____no_output_____" ] ], [ [ "fight9 = fight(classifier=improved_rfc_classifier, name_pokemon1='Snorlax', name_pokemon2='Raikou')", "_____no_output_____" ], [ "fight10 = fight(classifier=improved_rfc_classifier, name_pokemon1='Altaria', name_pokemon2='Kommo-o')", "_____no_output_____" ], [ "fight11 = fight(classifier=improved_rfc_classifier, name_pokemon1='Swampert', name_pokemon2='Mewtwo')", "_____no_output_____" ], [ "fight12 = fight(classifier=improved_rfc_classifier, name_pokemon1='Rayquaza', name_pokemon2='Arceus')", "_____no_output_____" ] ], [ [ "### Round 3\n![title](img/torneig/round_3.jpg)", "_____no_output_____" ] ], [ [ "fight9 = fight(classifier=improved_rfc_classifier, name_pokemon1='Snorlax', 
name_pokemon2='Mewtwo')", "_____no_output_____" ], [ "fight10 = fight(classifier=improved_rfc_classifier, name_pokemon1='Kommo-o', name_pokemon2='Arceus')", "_____no_output_____" ] ], [ [ "### Round 4\n![title](img/torneig/round_4.jpg)", "_____no_output_____" ] ], [ [ "fight10 = fight(classifier=improved_rfc_classifier, name_pokemon1='Mewtwo', name_pokemon2='Arceus')", "_____no_output_____" ] ], [ [ "### Resultat del torneig\n\n![title](img/torneig/final_.jpg)\n\n### [Mewtwo](https://www.wikidex.net/wiki/Mewtwo#Biolog.C3.ADa)\n\nAquest *Pokemon* es un dels primers creats per la ciència i es la conseqüència d'una producció genèticament realçada de Mew, donant com a resultat un *Pokemon* molt intel·ligent, de fet molt més intel·ligent que els humans. L'objectiu de la seva creació és crear el *Pokemon* més fort del món. \n\n\nLes seves habilitats psíquiques li permeten volar a través de levitació, comunicar-se telepàticament, bloquejar les habilitats especials d'altres *Pokemons*, hipnotitzar altres éssers, entre moltes altres. \n\n\n![title](img/torneig/winner.png)", "_____no_output_____" ], [ "## 6. Conclusions\n\nL'anàlisi realitzat ha començat plantejant un conjunt de preguntes que es volien respondre. Seguidament, s'han descrit les dades de treball juntament amb el tipus de les variables. \n\nLlavors s'han buscat les variables amb valors mancants per veure si la falta de valor podria suposar un problema. Degut a la naturalesa d'aquestes variables i a la quantitat relativament grant de dades amb les que es treballava, s'ha decidit assignar un valor fora de rang a les variables afectades (*unknown* a *type2* i 0 a *weight_kg* i *heigh_m*).\n\n\nUn cop feta la integració i nateja s'ha fet un anàlisi descriptiu on:\n\n* S'ha parlat de la distribució dels *Pokemons* en base a la generació en que van apareixre per primera vegada.\n* S'han analitzat diferents factors dels *Pokemons* llegendaris\n* S'ha vist la distribució de tipus\n* S'han comparat els *Pokemons* amb doble tipus \n* S'ha analitzat el pes, alçada, velocitat, atac i defensa\n* S'ha contrastat comprovat la normalitat en la distribució de les variables: atac, punts de vida, defensa, velocitat, alçada i pes.\n* S'ha comprobat la variancia del pes entre els *Pokemons* de tipus foc i roca.\n* A partir de les variables: *Diff_attack*, *Diff_sp_attack*, *Diff_defense*, *Diff_sp_defense*, *Diff_hp*, *Diff_speed*, *First_pokemon_is_legendary*, *Second_pokemon_is_legendary* i *balance_effectivity* s'han creat diferents models predictius:\n * Regressió logística.\n * *KNN*\n * *SVM*\n * Classificació per *Naive bayes*\n * Random Forest Classifier\n* A partir del millor model de classificació, s'han seleccionat 16 *Pokemons* per fer un torneig basat en 4 fases i on el sistema ha determinat que el guanyador seria el *Pokemon* ***Mewtwo***\n\n**Amb això s'ha pogut respondre a TOTES les preguntes!**\n\n## Recursos utilitzats\n\n* Trevor Hastie, Robert Tibshirani, Daniela Witten, Gareth James [2013] Introduction to Statistical Learning \n* Mireia Calvo González, Diego Oswaldo Pérez Trenard i Laia Subirats Maté - Introducció a la nateja i anàlisi de dades.\n* [5 Ways to Detect Outliers/Anomalies That Every Data Scientist Should Know (Python Code)](https://towardsdatascience.com/5-ways-to-detect-outliers-that-every-data-scientist-should-know-python-code-70a54335a623)\n* [ROC Curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)\n* [Escalar les 
dades](https://towardsdatascience.com/scale-standardize-or-normalize-with-scikit-learn-6ccc7d176a02)\n\n# Contribucions\n\n|Contribució|Firma|\n|----------|-------------|\n|Investigació prèvia |OGA|\n|Redacció de les respostes|OGA|\n|Desenvolupament codi|OGA|", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7aa74f388a0f6a5521b6c6f09e8fc6cd403df75
121,069
ipynb
Jupyter Notebook
Tutorial 4/Tutorial_4_Plotting.ipynb
drkndl/IITB-Astro-Tutorials
bf38155444355157029e08bd11c921cc88799434
[ "MIT" ]
null
null
null
Tutorial 4/Tutorial_4_Plotting.ipynb
drkndl/IITB-Astro-Tutorials
bf38155444355157029e08bd11c921cc88799434
[ "MIT" ]
null
null
null
Tutorial 4/Tutorial_4_Plotting.ipynb
drkndl/IITB-Astro-Tutorials
bf38155444355157029e08bd11c921cc88799434
[ "MIT" ]
null
null
null
605.345
53,042
0.939307
[ [ [ "<a href=\"https://colab.research.google.com/github/Drishika-Nadella/Krittika-IITB-Assignments/blob/master/Tutorial_4_Plotting.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nplt.plot([2,4,6,8,10])\nplt.show()\n\n#plotting with lists\n%matplotlib inline\nplt.plot([2,5,9,7],[2,1,3,5], color='green', marker='o') #number of x points should be equal to number of y points\nplt.xlabel('Bombs')\nplt.ylabel('People')\nplt.xlim(-5,20)\nplt.ylim(0,20)\nplt.show()\n\n#plotting with numpy arrays\n%matplotlib inline\n#arr=np.arange(5,10,0.5)\n#plt.plot(arr, arr**2, '1:c', label=\"y=$x^2$\") #using LaTeX for better formatting\n#plt.plot(arr, arr, 'or', label=\"y=x\")\n#plt.legend()\n\n#plotting on a log scale\nplt.loglog(arr, arr, 'or', label=\"y=x\")\nplt.loglog(arr, arr**2, '1:c', label=\"y=$x^2$\")\nplt.loglog(arr, arr**3, 'b--', label=\"y=$x^3$\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.legend()\nplt.show()\n\n#using only one axis as log scale\nt=np.arange(-5,20,0.5)\nplt.plot(t, np.exp(t))\nplt.yscale('log')\nplt.show()\n\n#bar graphs\nnames=['a','b','c', 'd']\nvalues=np.random.randint(0,25,4)\nplt.bar(names,values,color='r')\nplt.xlabel(\"Names\")\nplt.ylabel(\"Values\")\nplt.title(\"Bar Graph\")\nplt.show()\n\n#histograms\nvalue=np.random.randn(5000)\nplt.hist(value)\nplt.show()\n\n#scatterplots\nx_values = np.random.randn(1000)\ny_values_1 = np.sin(np.pi*x_values) + 0.25*np.random.randn(1000)\ny_values_2 = np.cos(np.pi*x_values) + 0.25*np.random.randn(1000)\n\nplt.figure(figsize=(8,6)) \nplt.scatter(x_values,y_values_1,s=10,color='darkorange',label='Sine')\nplt.scatter(x_values,y_values_2,s=1,color='indigo',label='Cosine') \nplt.xlabel('x')\nplt.ylabel('y')\nplt.title('Scatter Plot')\nplt.legend()\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]